Draft

Changes from all commits · 52 commits
4bc3118
(i) renamed hamiltonian_neural_network.jl to standard_hamiltonian_neu…
benedict-96 Jul 13, 2025
7992f2b
Continued implementing PGHNNs.
benedict-96 Jul 14, 2025
fab9862
Changed name to remove conflicts with GHNNs.
benedict-96 Jul 14, 2025
fd6cf94
Made sure the case where QPT consists of vectors of different types a…
benedict-96 Jul 15, 2025
845f20c
Finished implementation of generalized hamiltonian neural networks (w…
benedict-96 Jul 15, 2025
760ba7e
Added forcing/dissipation layers.
benedict-96 Jul 25, 2025
ffaa11c
Finished implementing the parametric data loader including a test.
benedict-96 Jul 29, 2025
0bea4d7
Added specific methods designed for dealing with PGHNNs.
benedict-96 Jul 30, 2025
3a07584
Fixed compilation problems with training pghnns.
benedict-96 Jul 30, 2025
990d2a4
Added missing docstrings.
benedict-96 Jul 31, 2025
0e27eec
Removed ambiguity by pulling the definition of the pullback inside th…
benedict-96 Aug 4, 2025
3cc7ef2
Tried to improve performance by allocating closures explicitly.
benedict-96 Aug 4, 2025
521fd17
Added script for training a TimeDependentHarmonicOscillator.
benedict-96 Aug 4, 2025
fe26451
Changed some parameters (including batch size).
benedict-96 Aug 4, 2025
179ca6d
Made some changes to parameters.
benedict-96 Aug 8, 2025
3adf8a4
using relu now.
benedict-96 Aug 10, 2025
d519326
One does not have to specify an AbstractNeuralNetworks.Activation to …
benedict-96 Aug 10, 2025
2bded0b
Some changes that prohibited learning with a relu activation before. …
benedict-96 Aug 10, 2025
42cd54f
Implemented forced Hamiltonian systems (with forcing in the p coordin…
benedict-96 Aug 11, 2025
3a2f729
Added script for ForcedHamiltonianSystem.
benedict-96 Aug 11, 2025
27b944e
Renamed Last to ReturnParameters.
benedict-96 Aug 11, 2025
a737f6b
Changed output dimension to correct number.
benedict-96 Aug 13, 2025
911158d
Added test for single layer symbolic pullback for PGHNN.
benedict-96 Aug 15, 2025
45c2ed5
Fixed symbolic pullback for pghnn (improvements still needed?).
benedict-96 Aug 15, 2025
f454fd8
Using symbolic pullback now in the script.
benedict-96 Aug 15, 2025
96dfe9f
Changed constant determining training data.
benedict-96 Aug 17, 2025
ee9c89b
Implemented a forced version of PGHNNs.
benedict-96 Aug 18, 2025
2c2f4ca
Adjusted the previous script to use forced PGHNNs.
benedict-96 Aug 18, 2025
db492d4
Implemented parameterlength.
benedict-96 Aug 20, 2025
2f950c8
Implemented parameterlength for the forcing layers.
benedict-96 Aug 21, 2025
9bc1f15
Implemented a width parameter for the ResNet.
benedict-96 Aug 22, 2025
0134c87
Fixed problem that emerged when using matrices or tensors as inputs (…
benedict-96 Aug 22, 2025
9d0f27d
Added parametric resnet layers for comparison to PGHNNs.
benedict-96 Sep 21, 2025
fce7554
Added lines for special case when input is a tensor.
benedict-96 Sep 21, 2025
07bc2f6
Added script for parametric ResNet.
benedict-96 Sep 21, 2025
b146899
Fixed a ResNet test with an additional constructor.
benedict-96 Sep 21, 2025
5cc6a36
Added end.
benedict-96 Sep 21, 2025
b7aff8b
Adjusted script so that forcing can be specified.
benedict-96 Oct 4, 2025
268c1ef
Introduced forcing type that can be specified.
benedict-96 Oct 4, 2025
17228ed
Fixed typo.
benedict-96 Oct 4, 2025
353353c
Added Marsden West-reference.
benedict-96 Oct 7, 2025
5c65470
Removed comment that is outdated now.
benedict-96 Oct 7, 2025
8873c19
Fixed forcing for Q terms.
benedict-96 Oct 7, 2025
20805b4
Adjusted the SympNet/GHNN integrators to the Lagrange-d'Alembert inte…
benedict-96 Oct 7, 2025
01d64c6
Fixed equation.
benedict-96 Oct 7, 2025
d5924eb
Fixed ForcedSympNets. Problem was due to not specifying NullParameter…
benedict-96 Oct 20, 2025
81a991a
Added qp-forcing (i.e. forcing that depends on both q and p).
benedict-96 Oct 20, 2025
c563faa
Fixed reference.
benedict-96 Oct 27, 2025
5b5185e
Merge branch 'main' into parametrized-hamiltonian-general-neural-netw…
benedict-96 Nov 6, 2025
b0dfdd2
Added another constant N_FORCING_LAYERS that determines the number of…
benedict-96 Nov 6, 2025
cb6d9e8
Merge branch 'main' into parametrized-hamiltonian-general-neural-netw…
benedict-96 Dec 11, 2025
7b10ae2
Added script to determine parameters in forcing layers for :QP and :P.
benedict-96 Dec 11, 2025
3 changes: 3 additions & 0 deletions Project.toml
@@ -13,13 +13,15 @@ Distances = "b4f34e82-e78d-54a5-968a-f98e89d6e8f7"
ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
GeometricBase = "9a0b12b7-583b-4f04-aa1f-d8551b6addc9"
GeometricEquations = "c85262ba-a08a-430a-b926-d29770767bf2"
GeometricProblems = "18cb22b4-ad41-5c80-9c5f-710df63fbdc9"
GeometricSolutions = "7843afe4-64f4-4df4-9231-049495c56661"
HDF5 = "f67ccb44-e63f-5c2f-98bd-6dc0ccc4ba2f"
InteractiveUtils = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
KernelAbstractions = "63c18a36-062a-441e-b654-da1e3ab1ce7c"
LazyArrays = "5078a376-72f3-5289-bfd5-ec5146d43c02"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
ParameterHandling = "2412ca09-6db7-441c-8e3a-88d5709968c5"
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
ProgressMeter = "92933f4c-e287-5a05-a399-4b506db050ca"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
@@ -48,6 +50,7 @@ HDF5 = "0.16, 0.17"
KernelAbstractions = "0.9"
LazyArrays = "=2.3.2"
NNlib = "0.8, 0.9"
ParameterHandling = "0.5.0"
ProgressMeter = "1"
SafeTestsets = "0.1"
StatsBase = "0.33, 0.34"
15 changes: 13 additions & 2 deletions docs/src/GeometricMachineLearning.bib
@@ -751,9 +751,10 @@ @article{bon2024optimal

@article{kingma2014adam,
title={Adam: a method for stochastic optimization},
author={Kingma, DP},
author={Kingma, Diederik P. and Ba, Jimmy Lei},
journal={arXiv preprint arXiv:1412.6980},
year={2014}
year={2014},
note={Published as a conference paper at ICLR 2015}
}

@article{toda1967vibration,
@@ -928,6 +929,16 @@ @article{ge1988lie
publisher={Elsevier}
}

@article{marsden2001discrete,
title={Discrete mechanics and variational integrators},
author={Marsden, Jerrold E and West, Matthew},
journal={Acta numerica},
volume={10},
pages={357--514},
year={2001},
publisher={Cambridge University Press}
}

@article{otto2023learning,
title={Learning nonlinear projections for reduced-order modeling of dynamical systems using constrained autoencoders},
author={Otto, Samuel E and Macchio, Gregory R and Rowley, Clarence W},
8 changes: 7 additions & 1 deletion docs/src/architectures/hamiltonian_neural_network.md
@@ -42,12 +42,18 @@ Here the derivatives (i.e. vector field data) ``\dot{q}_i^{(t)}`` and ``\dot{p}_i^{(t)}`` …
## Library Functions

```@docs
GeometricMachineLearning.hamiltonian_vector_field(::HamiltonianArchitecture)
GeometricMachineLearning.hamiltonian_vector_field(::StandardHamiltonianArchitecture)
GeometricMachineLearning.HamiltonianArchitecture
GeometricMachineLearning.StandardHamiltonianArchitecture
GeometricMachineLearning.HNNLoss
GeometricMachineLearning.symbolic_hamiltonian_vector_field(::GeometricMachineLearning.SymbolicNeuralNetwork)
GeometricMachineLearning.SymbolicPullback(::HamiltonianArchitecture)
GeometricMachineLearning.SymbolicEnergy
GeometricMachineLearning.SymbolicPotentialEnergy
GeometricMachineLearning.SymbolicKineticEnergy
GeometricMachineLearning.build_gradient
GeometricMachineLearning.GeneralizedHamiltonianArchitecture
GeometricMachineLearning.ForcingLayer
GeometricMachineLearning.ParametricDataLoader
GeometricMachineLearning._processing
```
@@ -0,0 +1,161 @@
using HDF5
using GeometricMachineLearning
using GeometricMachineLearning: QPT, QPT2, Activation, ParametricLoss, SymbolicNeuralNetwork, SymbolicPullback
using CairoMakie
using NNlib: relu

# PARAMETERS
omega = 1.0 # natural frequency of the harmonic oscillator
Omega = 3.5 # frequency of the external sinusoidal forcing
F = .9 # amplitude of the external sinusoidal forcing
ni_dim = 10 # number of initial conditions per dimension (so ni_dim^2 total)
T = 2π * 20 # final time
nt = 1000 # number of time steps
dt = T/nt # time step

# Generating the initial condition array
IC = vec( [(q=q0, p=p0) for q0 in range(-1, 1, ni_dim), p0 in range(-1, 1, ni_dim)] )

# Generating the solution array
ni = ni_dim^2
q = zeros(Float64, ni, nt+1)
p = zeros(Float64, ni, nt+1)
t = collect(dt * range(0, nt, step=1))

"""
Turn a vector of numbers into a vector of `NamedTuple`s to be used by `ParametricDataLoader`.
"""
function turn_parameters_into_correct_format(t::AbstractVector, IC::AbstractVector{<:NamedTuple})
vec_of_params = NamedTuple[]
for time_step ∈ t
time_step == t[end] || push!(vec_of_params, (t = time_step, ))
end
vcat((vec_of_params for _ in axes(IC, 1))...)
end
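# For example (editor's sketch with hypothetical values): three time points and
# two initial conditions give one parameter tuple per training sample, with the
# final time point dropped for each trajectory:
# julia> turn_parameters_into_correct_format([0.0, 0.5, 1.0], [(q=0.0, p=0.0), (q=1.0, p=1.0)])
# 4-element Vector{NamedTuple}:
#  (t = 0.0,)
#  (t = 0.5,)
#  (t = 0.0,)
#  (t = 0.5,)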

for i in 1:nt+1
for j in 1:ni
q[j,i] = ( IC[j].p - Omega*F/(omega^2-Omega^2) )/omega * sin(omega*t[i]) + IC[j].q*cos(omega*t[i]) + F/(omega^2-Omega^2)*sin(Omega*t[i])
# p = q̇, so the IC[j].q term carries a single factor of omega
p[j,i] = -omega*IC[j].q*sin(omega*t[i]) + ( IC[j].p - Omega*F/(omega^2-Omega^2) )*cos(omega*t[i]) + Omega*F/(omega^2-Omega^2)*cos(Omega*t[i])
# q[j,i] = ( IC[j].p - Omega*F/(omega^2-Omega^2) )/ omega *exp(-omega*t[i]) - IC[j].q*exp(-omega*t[i]) + F/(omega^2-Omega^2)*exp(-Omega*t[i])
# p[j,i] = -omega^2*IC[j].q*exp(-omega*t[i]) + ( IC[j].p + Omega*F/(omega^2-Omega^2) )*exp(-omega*t[i]) - Omega*F/(omega^2-Omega^2)*exp(-Omega*t[i])
end
end
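# Editor's sketch, assuming p = q̇ for this system: verify the analytic solution
# against a centered finite difference of q at an interior time index.
let j = 1, i = 500
dq_dt = (q[j, i+1] - q[j, i-1]) / (2dt)
@assert isapprox(p[j, i], dq_dt; atol = 0.05) # finite-difference error is O(dt^2)
end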

@doc raw"""
Turn a `NamedTuple` of ``(q,p)`` data into two tensors of the correct format.

This is the tricky part: the structure of the input arrays needs to conform to the structure of the parameters.

Here the data are rearranged into arrays of size ``(1, 2, n_\mathrm{ic} \cdot (n_t - 1))``, where ``n_t`` is the length of the time-step vector ``[t_0, t_1, \ldots, t_f]`` and ``n_\mathrm{ic}`` is the number of initial conditions; each slice along the third axis stores one pair of consecutive states.

If we deal with different initial conditions as well, we still put everything into the third (parameter) axis.

# Example

```jldoctest
using GeometricMachineLearning

q = [1. 2. 3.; 4. 5. 6.]
p = [1.5 2.5 3.5; 4.5 5.5 6.5]
qp = (q = q, p = p)
turn_q_p_data_into_correct_format(qp)

# output

(q = [1.0 2.0;;; 2.0 3.0;;; 4.0 5.0;;; 5.0 6.0], p = [1.5 2.5;;; 2.5 3.5;;; 4.5 5.5;;; 5.5 6.5])
```
"""
function turn_q_p_data_into_correct_format(qp::QPT2{T, 2}) where {T}
number_of_time_steps = size(qp.q, 2) - 1 # not counting t₀
number_of_initial_conditions = size(qp.q, 1)
q_array = zeros(T, 1, 2, number_of_time_steps * number_of_initial_conditions)
p_array = zeros(T, 1, 2, number_of_time_steps * number_of_initial_conditions)
for initial_condition_index ∈ 0:(number_of_initial_conditions - 1)
for time_index ∈ 1:number_of_time_steps
q_array[:, 1, initial_condition_index * number_of_time_steps + time_index] .= qp.q[initial_condition_index + 1, time_index]
q_array[:, 2, initial_condition_index * number_of_time_steps + time_index] .= qp.q[initial_condition_index + 1, time_index + 1]
p_array[:, 1, initial_condition_index * number_of_time_steps + time_index] .= qp.p[initial_condition_index + 1, time_index]
p_array[:, 2, initial_condition_index * number_of_time_steps + time_index] .= qp.p[initial_condition_index + 1, time_index + 1]
end
end
(q = q_array, p = p_array)
end

# SAVING TO FILE

# h5 = h5open(path, "w")
# write(h5, "q", q)
# write(h5, "p", p)
# write(h5, "t", t)
#
# attrs(h5)["ni"] = ni
# attrs(h5)["nt"] = nt
# attrs(h5)["dt"] = dt
#
# close(h5)

"""
Build a `ParametricDataLoader` for the time-dependent harmonic oscillator; time enters as a single additional parameter (the third axis of the data arrays).
"""
function load_time_dependent_harmonic_oscillator_with_parametric_data_loader(qp::QPT{T}, t::AbstractVector{T}, IC::AbstractVector) where {T}
qp_reformatted = turn_q_p_data_into_correct_format(qp)
t_reformatted = turn_parameters_into_correct_format(t, IC)
ParametricDataLoader(qp_reformatted, t_reformatted)
end

# This sets up the data loader
dl = load_time_dependent_harmonic_oscillator_with_parametric_data_loader((q = q, p = p), t, IC)

# This sets up the neural network
width::Int = 1
nhidden::Int = 1
n_integrators::Int = 2
# sigmoid_linear_unit(x::T) where {T<:Number} = x / (T(1) + exp(-x))
arch1 = ForcedGeneralizedHamiltonianArchitecture(2; activation = tanh, width = width, nhidden = nhidden, n_integrators = n_integrators, parameters = turn_parameters_into_correct_format(t, IC)[1], forcing_type = :P)
arch2 = ForcedGeneralizedHamiltonianArchitecture(2; activation = tanh, width = width, nhidden = nhidden, n_integrators = n_integrators, parameters = turn_parameters_into_correct_format(t, IC)[1], forcing_type = :Q)
arch3 = ForcedGeneralizedHamiltonianArchitecture(2; activation = tanh, width = 2width, nhidden = nhidden, n_integrators = n_integrators, parameters = turn_parameters_into_correct_format(t, IC)[1], forcing_type = :QP)
nn1 = NeuralNetwork(arch1)
nn2 = NeuralNetwork(arch2)
nn3 = NeuralNetwork(arch3)

# This is where training starts
batch_size = 128
n_epochs = 200
batch = Batch(batch_size)
o1 = Optimizer(AdamOptimizer(), nn1)
o2 = Optimizer(AdamOptimizer(), nn2)
o3 = Optimizer(AdamOptimizer(), nn3)
loss = ParametricLoss()
# each architecture needs its own symbolic pullback (they differ in forcing type and width)
_pb1 = SymbolicPullback(nn1, loss, turn_parameters_into_correct_format(t, IC)[1]);
_pb2 = SymbolicPullback(nn2, loss, turn_parameters_into_correct_format(t, IC)[1]);
_pb3 = SymbolicPullback(nn3, loss, turn_parameters_into_correct_format(t, IC)[1]);

function train_network()
loss1 = o1(nn1, dl, batch, n_epochs, loss, _pb1)
loss2 = o2(nn2, dl, batch, n_epochs, loss, _pb2)
loss3 = o3(nn3, dl, batch, n_epochs, loss, _pb3)
(loss1, loss2, loss3)
end

loss_arrays = train_network()

trajectory_number = 20

# Testing the network
initial_conditions = (q = q[trajectory_number, 1], p = p[trajectory_number, 1])
n_steps = nt
trajectory = (q = zeros(1, n_steps), p = zeros(1, n_steps))
trajectory.q[:, 1] .= initial_conditions.q
trajectory.p[:, 1] .= initial_conditions.p
# note that we have to supply the parameters as a named tuple as well here:
for t_step ∈ 0:(n_steps-2)
qp_temporary = nn3.model((q = [trajectory.q[1, t_step+1]], p = [trajectory.p[1, t_step+1]]), (t = t[t_step+1],), nn3.params)
trajectory.q[:, t_step+2] .= qp_temporary.q
trajectory.p[:, t_step+2] .= qp_temporary.p
end

fig = Figure()
ax = Axis(fig[1,1])
lines!(ax, trajectory.q[1,:]; label="nn")
lines!(ax, q[trajectory_number,:]; label="analytic")
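# Editor's addition: draw the legend and return the figure so the plot is
# displayed when the script is run interactively (standard CairoMakie calls).
axislegend(ax)
fig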
150 changes: 150 additions & 0 deletions scripts/TimeDependentHarmonicOscillatorParametricResnet.jl
@@ -0,0 +1,150 @@
using HDF5
using GeometricMachineLearning
using GeometricMachineLearning: QPT, QPT2, Activation, ParametricLoss, SymbolicNeuralNetwork, SymbolicPullback
using CairoMakie
using NNlib: relu

# PARAMETERS
omega = 1.0 # natural frequency of the harmonic oscillator
Omega = 3.5 # frequency of the external sinusoidal forcing
F = .0 # amplitude of the external sinusoidal forcing (zero here, i.e. unforced; the PGHNN script uses .9)
ni_dim = 10 # number of initial conditions per dimension (so ni_dim^2 total)
T = 2π * 5 # final time
nt = 1000 # number of time steps
dt = T/nt # time step

# Generating the initial condition array
IC = vec( [(q=q0, p=p0) for q0 in range(-1, 1, ni_dim), p0 in range(-1, 1, ni_dim)] )

# Generating the solution array
ni = ni_dim^2
q = zeros(Float64, ni, nt+1)
p = zeros(Float64, ni, nt+1)
t = collect(dt * range(0, nt, step=1))

"""
Turn a vector of numbers into a vector of `NamedTuple`s to be used by `ParametricDataLoader`.
"""
function turn_parameters_into_correct_format(t::AbstractVector, IC::AbstractVector{<:NamedTuple})
vec_of_params = NamedTuple[]
for time_step ∈ t
time_step == t[end] || push!(vec_of_params, (t = time_step, ))
end
vcat((vec_of_params for _ in axes(IC, 1))...)
end

for i in 1:nt+1
for j in 1:ni
q[j,i] = ( IC[j].p - Omega*F/(omega^2-Omega^2) )/omega * sin(omega*t[i]) + IC[j].q*cos(omega*t[i]) + F/(omega^2-Omega^2)*sin(Omega*t[i])
# p = q̇, so the IC[j].q term carries a single factor of omega
p[j,i] = -omega*IC[j].q*sin(omega*t[i]) + ( IC[j].p - Omega*F/(omega^2-Omega^2) )*cos(omega*t[i]) + Omega*F/(omega^2-Omega^2)*cos(Omega*t[i])
# q[j,i] = ( IC[j].p - Omega*F/(omega^2-Omega^2) )/ omega *exp(-omega*t[i]) - IC[j].q*exp(-omega*t[i]) + F/(omega^2-Omega^2)*exp(-Omega*t[i])
# p[j,i] = -omega^2*IC[j].q*exp(-omega*t[i]) + ( IC[j].p + Omega*F/(omega^2-Omega^2) )*exp(-omega*t[i]) - Omega*F/(omega^2-Omega^2)*exp(-Omega*t[i])
end
end

@doc raw"""
Turn a `NamedTuple` of ``(q,p)`` data into two tensors of the correct format.

This is the tricky part: the structure of the input arrays needs to conform to the structure of the parameters.

Here the data are rearranged into arrays of size ``(1, 2, n_\mathrm{ic} \cdot (n_t - 1))``, where ``n_t`` is the length of the time-step vector ``[t_0, t_1, \ldots, t_f]`` and ``n_\mathrm{ic}`` is the number of initial conditions; each slice along the third axis stores one pair of consecutive states.

If we deal with different initial conditions as well, we still put everything into the third (parameter) axis.

# Example

```jldoctest
using GeometricMachineLearning

q = [1. 2. 3.; 4. 5. 6.]
p = [1.5 2.5 3.5; 4.5 5.5 6.5]
qp = (q = q, p = p)
turn_q_p_data_into_correct_format(qp)

# output

(q = [1.0 2.0;;; 2.0 3.0;;; 4.0 5.0;;; 5.0 6.0], p = [1.5 2.5;;; 2.5 3.5;;; 4.5 5.5;;; 5.5 6.5])
```
"""
function turn_q_p_data_into_correct_format(qp::QPT2{T, 2}) where {T}
number_of_time_steps = size(qp.q, 2) - 1 # not counting t₀
number_of_initial_conditions = size(qp.q, 1)
q_array = zeros(T, 1, 2, number_of_time_steps * number_of_initial_conditions)
p_array = zeros(T, 1, 2, number_of_time_steps * number_of_initial_conditions)
for initial_condition_index ∈ 0:(number_of_initial_conditions - 1)
for time_index ∈ 1:number_of_time_steps
q_array[:, 1, initial_condition_index * number_of_time_steps + time_index] .= qp.q[initial_condition_index + 1, time_index]
q_array[:, 2, initial_condition_index * number_of_time_steps + time_index] .= qp.q[initial_condition_index + 1, time_index + 1]
p_array[:, 1, initial_condition_index * number_of_time_steps + time_index] .= qp.p[initial_condition_index + 1, time_index]
p_array[:, 2, initial_condition_index * number_of_time_steps + time_index] .= qp.p[initial_condition_index + 1, time_index + 1]
end
end
(q = q_array, p = p_array)
end

# SAVING TO FILE

# h5 = h5open(path, "w")
# write(h5, "q", q)
# write(h5, "p", p)
# write(h5, "t", t)
#
# attrs(h5)["ni"] = ni
# attrs(h5)["nt"] = nt
# attrs(h5)["dt"] = dt
#
# close(h5)

"""
Build a `ParametricDataLoader` for the time-dependent harmonic oscillator; time enters as a single additional parameter (the third axis of the data arrays).
"""
function load_time_dependent_harmonic_oscillator_with_parametric_data_loader(qp::QPT{T}, t::AbstractVector{T}, IC::AbstractVector) where {T}
qp_reformatted = turn_q_p_data_into_correct_format(qp)
t_reformatted = turn_parameters_into_correct_format(t, IC)
ParametricDataLoader(qp_reformatted, t_reformatted)
end

# This sets up the data loader
dl = load_time_dependent_harmonic_oscillator_with_parametric_data_loader((q = q, p = p), t, IC)

# This sets up the neural network
width::Int = 2
n_blocks::Int = 1
n_integrators::Int = 1
# sigmoid_linear_unit(x::T) where {T<:Number} = x / (T(1) + exp(-x))
arch = ResNet(2, n_blocks=n_blocks, width=width; activation=tanh, parameters=turn_parameters_into_correct_format(t, IC)[1])
nn = NeuralNetwork(arch)

# This is where training starts
batch_size = 128
n_epochs = 200
batch = Batch(batch_size)
o = Optimizer(AdamOptimizer(), nn)
loss = ParametricLoss()

function train_network()
o(nn, dl, batch, n_epochs, loss)
end

loss_array = train_network()

trajectory_number = 20

# Testing the network
initial_conditions = (q = q[trajectory_number, 1], p = p[trajectory_number, 1])
n_steps = nt
trajectory = (q = zeros(1, n_steps), p = zeros(1, n_steps))
trajectory.q[:, 1] .= initial_conditions.q
trajectory.p[:, 1] .= initial_conditions.p
# note that we have to supply the parameters as a named tuple as well here:
for t_step ∈ 0:(n_steps-2)
qp_temporary = nn.model((q = [trajectory.q[1, t_step+1]], p = [trajectory.p[1, t_step+1]]), (t = t[t_step+1],), nn.params)
trajectory.q[:, t_step+2] .= qp_temporary.q
trajectory.p[:, t_step+2] .= qp_temporary.p
end

fig = Figure()
ax = Axis(fig[1,1])
lines!(ax, trajectory.q[1,:]; label="nn")
lines!(ax, q[trajectory_number,:]; label="analytic")
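# As above: draw the legend and return the figure (standard CairoMakie calls).
axislegend(ax)
fig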