This repository contains the implementation and experimental code for the Master's thesis "Thermal Physical Learning in Sparse Networks" by Thomas Pluck, exploring whether physical learning systems can be understood through generalized Ising dynamics with finite-time field relaxation.
Recent physical learning systems have demonstrated that networks can learn computational tasks through local adaptation without central control, but the relationship between different physical learning approaches remains unclear. This work explores whether such systems can be understood through generalized Ising dynamics with finite-time field relaxation. We model spin expectations as beta-distributed random variables over finite time horizons, capturing uncertainty in non-equilibrium sampling while maintaining analytical tractability. Rather than requiring equilibrium spin correlations, we develop a field-discrepancy learning rule that operates on local voltage differences, similar to adaptation mechanisms observed in physical resistor networks. Experimental validation using logical routing tasks on sparse lattices demonstrates that the framework can reproduce learning behavior with only nearest-neighbor connectivity.
- Theoretical Framework: Connects discrete Ising dynamics with continuous physical learning through beta-distributed spin expectations and finite field relaxation times
- Field-Discrepancy Learning: A learning algorithm that operates on local voltage differences without requiring equilibrium states
- Sparse Network Learning: Experimental demonstration that sparse 2D lattice connectivity can support nonlinear learning tasks
```
├── beta_arch.py               # Beta-distributed Ising model implementation
├── clln_arch.py               # Classical Ising node-edge architecture
├── data.py                    # XOR training data definitions
├── debug_tools.py             # Visualization utilities for debugging
├── parallel_experiments.py    # Single experiment runner
├── sweep.py                   # Parameter sweep experiment framework
├── visualize_sweep_results.py # Analysis and visualization of sweep results
└── README.md                  # This file
```
The core innovation is modeling spin expectations as beta-distributed random variables:
```
ŝᵢᵗ⁺¹ ~ 2 · Beta(τ σ(βhᵢᵗ), τ (1 − σ(βhᵢᵗ))) − 1
```

where:

- τ is the finite relaxation-time parameter
- β is the inverse temperature
- hᵢᵗ is the local magnetic field at time t
- σ(x) = (1 + tanh(x))/2 is the sigmoid function
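As a minimal sketch of this sampling step (a hypothetical NumPy helper, not code from `beta_arch.py`):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(x):
    """Sigmoid in tanh form: sigma(x) = (1 + tanh(x)) / 2."""
    return 0.5 * (1.0 + np.tanh(x))

def sample_spin_expectation(h, beta=1.0, tau=10.0):
    """Draw s ~ 2 * Beta(tau * sigma(beta*h), tau * (1 - sigma(beta*h))) - 1.

    The mean of this draw is tanh(beta * h), the equilibrium magnetization;
    small tau (short relaxation) gives noisy, high-variance estimates,
    while large tau concentrates the sample near equilibrium.
    """
    p = sigma(beta * h)
    return 2.0 * rng.beta(tau * p, tau * (1.0 - p)) - 1.0

print(sample_spin_expectation(0.3, tau=5.0))    # noisy estimate
print(sample_spin_expectation(0.3, tau=100.0))  # close to tanh(0.3) ≈ 0.29
```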
The learning rule updates connection weights based on squared field differences between free and clamped network states:
```
ΔJᵢⱼ ∝ [Δκᶠʳᵉᵉ]² − [Δκᶜˡᵃᵐᵖᵉᵈ]²
```

This approach (see the sketch after this list):
- Operates on local voltage differences
- Doesn't require equilibrium states
- Mimics adaptation mechanisms in physical resistor networks
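A minimal sketch of this update, reading Δκ as the local field (voltage) drop across an edge; the variable names and the antisymmetry bookkeeping are illustrative, not the thesis implementation:

```python
import numpy as np

def field_discrepancy_update(J, h_free, h_clamped, edges, alpha=1e-4):
    """Move each coupling J_ij in proportion to the difference of squared
    local field drops between the free and the clamped (nudged) phase."""
    for i, j in edges:
        dk_free = h_free[i] - h_free[j]           # field drop, free phase
        dk_clamped = h_clamped[i] - h_clamped[j]  # field drop, clamped phase
        J[i, j] += alpha * (dk_free**2 - dk_clamped**2)
        J[j, i] = -J[i, j]  # preserve antisymmetric couplings
    return J
```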
The framework is validated on XOR logical routing using a 4×4 sparse lattice (see the encoding sketch after this list):
- Input Encoding: Corner nodes clamped to ±1 values (bipolar encoding: 0→-1, 1→+1)
- Output Reading: Computed as difference between designated output nodes
- Sparse Connectivity: Only nearest-neighbor connections in 2D grid
- Learning Protocol: Contrastive learning comparing free vs clamped network responses
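For concreteness, a hedged sketch of the encoding and readout; the specific corner and output node indices are illustrative assumptions, not necessarily the exact choices in `data.py`:

```python
# Bipolar XOR truth table (0 -> -1, 1 -> +1)
XOR_DATA = [((-1, -1), -1),
            ((-1, +1), +1),
            ((+1, -1), +1),
            ((+1, +1), -1)]

N = 4  # 4x4 lattice, nodes indexed row-major

def clamp_inputs(state, a, b):
    """Clamp two corner nodes to the bipolar input values."""
    state = state.copy()
    state[0] = a        # top-left corner
    state[N - 1] = b    # top-right corner
    return state

def read_output(state, out_plus=N * N - N, out_minus=N * N - 1):
    """Output is the difference between two designated nodes (here the
    bottom corners); its sign gives the predicted class."""
    return state[out_plus] - state[out_minus]
```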
Comprehensive parameter sweeps cover the following grid (enumerated in the sketch after this list):
- Relaxation times (τ): 5, 10, 20, 50, 100
- Learning rates (α): 10⁻⁵, 10⁻⁴, 10⁻³
- Nudging strengths (η): 0.1, 0.25, 0.5, 1.0
- Runs per configuration: 10 independent trials
- Training trials: 500 per run
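Enumerating this grid is straightforward; a sketch (the dictionary layout is illustrative, and `sweep.py` may organize configurations differently):

```python
from itertools import product

TAUS = [5, 10, 20, 50, 100]
ALPHAS = [1e-5, 1e-4, 1e-3]
ETAS = [0.1, 0.25, 0.5, 1.0]

# 5 * 3 * 4 = 60 configurations, each repeated for 10 runs of 500 trials
configs = [{"tau": t, "alpha": a, "eta": e}
           for t, a, e in product(TAUS, ALPHAS, ETAS)]
print(len(configs))  # 60
```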
```bash
python parallel_experiments.py
```

This runs a single experiment configuration and generates:
- Training accuracy plots
- Final network state visualization
- Connection weight visualizations
```bash
python sweep.py
```

Runs comprehensive parameter sweeps across all configurations with:
- Parallel execution across multiple CPU cores
- Automatic checkpointing and resume capability
- Real-time progress monitoring
- Error handling and analytics
Results are saved in the `parameter_sweep_results/` directory; a sketch of the checkpoint-and-resume pattern follows.
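A minimal illustration of that pattern, assuming one JSON result file per configuration; `run_config` and `cfg_key` are hypothetical placeholders, not `sweep.py`'s actual API:

```python
import json
import multiprocessing as mp
from pathlib import Path

RESULTS_DIR = Path("parameter_sweep_results")

def cfg_key(cfg):
    """Stable filename stem for one configuration."""
    return f"tau{cfg['tau']}_alpha{cfg['alpha']}_eta{cfg['eta']}"

def run_config(cfg):
    """Placeholder: train 10 runs for this configuration and return stats."""
    return {"final_accuracy": None}

def sweep(configs):
    RESULTS_DIR.mkdir(exist_ok=True)
    done = {p.stem for p in RESULTS_DIR.glob("*.json")}   # resume support
    todo = [c for c in configs if cfg_key(c) not in done]
    with mp.Pool() as pool:                               # parallel execution
        for cfg, result in zip(todo, pool.imap(run_config, todo)):
            path = RESULTS_DIR / f"{cfg_key(cfg)}.json"   # per-config checkpoint
            path.write_text(json.dumps({"config": cfg, "result": result}))
```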
```bash
python visualize_sweep_results.py
```

Generates comprehensive analysis (see the heatmap sketch after this list), including:
- Parameter performance heatmaps
- Convergence analysis
- Statistical comparisons
- Best/worst configuration analysis
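As an illustration of the heatmap step (the `results` layout is an assumption, not the script's actual data format):

```python
import numpy as np
import matplotlib.pyplot as plt

def parameter_heatmap(results, taus, alphas):
    """Mean accuracy on a tau x alpha grid, averaged over eta and runs;
    results[(tau, alpha)] is assumed to hold a list of accuracies."""
    grid = np.array([[np.mean(results[(t, a)]) for a in alphas] for t in taus])
    fig, ax = plt.subplots()
    im = ax.imshow(grid, origin="lower", aspect="auto")
    ax.set_xticks(range(len(alphas)), labels=[f"{a:g}" for a in alphas])
    ax.set_yticks(range(len(taus)), labels=[str(t) for t in taus])
    ax.set_xlabel("learning rate α")
    ax.set_ylabel("relaxation time τ")
    fig.colorbar(im, ax=ax, label="mean accuracy")
    return fig
```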
The experimental validation demonstrates:
- Learning Capability: Networks can learn XOR classification without explicit parameter programming
- Sparse Connectivity: Nearest-neighbor connections sufficient for nonlinear learning tasks
- Parameter Sensitivity: Performance varies significantly across different τ, α, and η values
- Robustness: Framework maintains reasonable performance across multiple operating regimes
```bash
git clone https://github.com/ThomasPluck/TPLISN.git
cd TPLISN
pip install -r requirements.txt
```

The system models physical learning networks as stochastic spin systems where:
- Node States: Evolve according to local magnetic fields with finite relaxation times
- Edge States: Beta-distributed spin expectations capture uncertainty in finite-time sampling
- Weight Updates: Based on field discrepancies rather than spin correlations
- Sparse Topology: 2D lattice with antisymmetric couplings preserving current conservation (construction sketched after this list)
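To make the sparse topology concrete, a sketch of nearest-neighbour edge construction and antisymmetric coupling initialization (function names and the initialization scale are illustrative):

```python
import numpy as np

def lattice_edges(n):
    """Nearest-neighbour edges of an n x n grid, nodes indexed row-major."""
    edges = []
    for r in range(n):
        for c in range(n):
            i = r * n + c
            if c + 1 < n:
                edges.append((i, i + 1))   # horizontal neighbour
            if r + 1 < n:
                edges.append((i, i + n))   # vertical neighbour
    return edges

def init_couplings(n, scale=0.1, seed=0):
    """Antisymmetric coupling matrix (J_ij = -J_ji) on the sparse lattice;
    all non-neighbour couplings stay exactly zero."""
    rng = np.random.default_rng(seed)
    J = np.zeros((n * n, n * n))
    for i, j in lattice_edges(n):
        J[i, j] = scale * rng.standard_normal()
        J[j, i] = -J[i, j]
    return J

print(len(lattice_edges(4)))  # a 4x4 lattice has 24 nearest-neighbour edges
```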
| Framework | Control | Computation | Adaptation |
|---|---|---|---|
| Traditional | Explicit | Deterministic | Programmed |
| Probabilistic | Explicit | Equilibrium | Optimization |
| This Work | Implicit | Fluctuation | Self-Organization |
The framework opens several research directions:
- Scalability Studies: Larger network sizes and more complex tasks
- Hardware Implementation: Physics-based ASIC design implications
- Alternative Topologies: Beyond sparse 2D lattices
- Task Diversity: More complex computational primitives
- Energy Efficiency: Comparison with digital approaches
If you use this work, please cite:
```bibtex
@mastersthesis{pluck2025thermal,
  title={Thermal Physical Learning in Sparse Networks},
  author={Pluck, Thomas},
  year={2025},
  school={Maynooth University},
  type={Master's Thesis}
}
```

This work is provided for academic and research purposes. Please see the full thesis for detailed licensing information.
Grateful acknowledgment to:
- Gerry Lacey and Majid Sorouri for research guidance
- John Dooley, Barak Pearlmutter, and Zachary Belateche for technical support
- Kerem Camsari, Aida Todri-Sanial, Todd Hylton, and Philippe Talatchian for field insights
- Joana and family for support throughout this work
For questions about this research or collaboration opportunities, please contact Thomas Pluck through the GitHub repository issues or Maynooth University.