
Conversation

@andeplane
Owner

Add the ability to continue a simulation after it finishes to allow users to run additional timesteps.



This commit introduces the ability to continue a finished simulation. It includes state management for the 'finished' status and a new 'continueSimulation' action.

Co-authored-by: andershaf <andershaf@gmail.com>
@cursor

cursor bot commented Nov 30, 2025

Cursor Agent can help with this pull request. Just @cursor in comments and I'll start working on changes in this branch.
Learn more about Cursor Agents

@andeplane marked this pull request as ready for review November 30, 2025 10:08
@andeplane
Owner Author

/gemini review

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request successfully implements the continuous simulation loop feature by adding a 'continue' button and the corresponding logic in the store. The changes are generally well-structured, but I've identified some key areas for improvement. There is significant code duplication in src/store/simulation.ts between the run and continueSimulation thunks, which should be refactored. The newly added tests in src/store/simulation.test.ts are not actually exercising the action implementations and need to be revised. I've also pointed out some smaller issues, such as the use of @ts-ignore and a magic number. Addressing these points will improve the code's maintainability and testability.

Comment on lines +7 to +76
describe('finished state', () => {
  it('should initialize with finished as false', () => {
    // Arrange
    const store = createStore<Pick<SimulationModel, 'finished' | 'setFinished'>>({
      finished: false,
      setFinished: action((state, value: boolean) => {
        state.finished = value;
      }),
    });

    // Act
    const state = store.getState();

    // Assert
    expect(state.finished).toBe(false);
  });

  it('should set finished state to true', () => {
    // Arrange
    const store = createStore<Pick<SimulationModel, 'finished' | 'setFinished'>>({
      finished: false,
      setFinished: action((state, value: boolean) => {
        state.finished = value;
      }),
    });

    // Act
    store.getActions().setFinished(true);

    // Assert
    expect(store.getState().finished).toBe(true);
  });

  it('should set finished state to false', () => {
    // Arrange
    const store = createStore<Pick<SimulationModel, 'finished' | 'setFinished'>>({
      finished: true,
      setFinished: action((state, value: boolean) => {
        state.finished = value;
      }),
    });

    // Act
    store.getActions().setFinished(false);

    // Assert
    expect(store.getState().finished).toBe(false);
  });

  it('should reset finished state on reset action', () => {
    // Arrange
    const store = createStore<Pick<SimulationModel, 'finished' | 'files' | 'lammps' | 'reset'>>({
      finished: true,
      files: ['test.in'],
      lammps: undefined,
      reset: action((state) => {
        state.files = [];
        state.lammps = undefined;
        state.finished = false;
      }),
    });

    // Act
    store.getActions().reset(undefined);

    // Assert
    expect(store.getState().finished).toBe(false);
    expect(store.getState().files).toEqual([]);
  });
});
Contributor


high

These tests are not testing the actual implementation of the setFinished and reset actions from simulationModel. Instead, they re-implement the actions within the createStore call for each test. This means you are testing a mock implementation, not your production code.

To fix this, you should import the simulationModel and use it to create the store for your tests. This will ensure you are testing the real actions.

Here's an example of how you could restructure the tests:

import { createStore, Store } from 'easy-peasy';
import { simulationModel, SimulationModel } from './simulation';

describe('SimulationModel', () => {
  let store: Store<SimulationModel>;

  beforeEach(() => {
    // Create a fresh store for each test with the actual model
    store = createStore(simulationModel);
  });

  it('should set finished state to true', () => {
    // Arrange: initial state is false
    expect(store.getState().finished).toBe(false);

    // Act
    store.getActions().setFinished(true);

    // Assert
    expect(store.getState().finished).toBe(true);
  });

  // ... other tests
});

Comment on lines +435 to +534
continueSimulation: thunk(
  async (actions, timesteps: number | undefined, { getStoreState, getStoreActions }) => {
    // @ts-ignore
    const simulation = getStoreState().simulation.simulation as Simulation;
    if (!simulation) {
      return;
    }
    // @ts-ignore
    const lammps = getStoreState().simulation.lammps as LammpsWeb;
    if (!lammps || lammps.getIsRunning()) {
      return;
    }

    const allActions = getStoreActions() as Actions<StoreModel>;

    actions.setShowConsole(false);
    actions.setFinished(false);

    lammps.start();
    actions.setRunning(true);
    track("Simulation.Continue", {
      simulationId: simulation?.id,
      timesteps: timesteps,
      ...getEmbeddingParams()
    });
    time_event("Simulation.Stop");

    let errorMessage: string | undefined = undefined;
    const startTime = performance.now();

    try {
      // Use the suggested command format: run with pre no post no
      // This continues the simulation without re-initializing
      const runCommand = timesteps ? `run ${timesteps} pre no post no` : `run 1000 pre no post no`;
      lammps.runCommand(runCommand);
    } catch (exception: any) {
      console.log("Got exception: ", exception);
      errorMessage = lammps.getExceptionMessage(exception);
      console.log("Got error running LAMMPS: ", errorMessage);
    }

    if (!errorMessage) {
      errorMessage = lammps.getErrorMessage();
    }

    // @ts-ignore
    const computes = getStoreState().simulationStatus.computes as Compute[];

    const endTime = performance.now();
    const duration = (endTime - startTime) / 1000; // seconds
    const metricsData = {
      timesteps: lammps.getTimesteps(),
      timestepsPerSecond: (lammps.getTimesteps() / duration).toFixed(3),
      numAtoms: lammps.getNumAtoms(),
      computes: Object.keys(computes),
    };
    actions.setPaused(false);

    if (errorMessage) {
      if (errorMessage.includes("Atomify::canceled")) {
        allActions.processing.runPostTimestep(true);
        // Simulation got canceled.
        actions.setRunning(false);
        actions.setShowConsole(true);
        actions.setFinished(false);
        track("Simulation.Stop", {
          simulationId: simulation?.id,
          stopReason: "canceled",
          ...metricsData,
        });
      } else {
        notification.error({
          message: errorMessage,
          duration: 5,
        });
        actions.setRunning(false);
        actions.setShowConsole(true);
        actions.setFinished(false);
        track("Simulation.Stop", {
          simulationId: simulation?.id,
          stopReason: "failed",
          errorMessage,
          ...metricsData,
        });
      }
    } else {
      allActions.processing.runPostTimestep(true);
      actions.setRunning(false);
      actions.setShowConsole(true);
      actions.setFinished(true);
      track("Simulation.Stop", {
        simulationId: simulation?.id,
        stopReason: "completed",
        ...metricsData,
      });
    }
    actions.syncFilesJupyterLite();
    allActions.simulationStatus.setLastCommand(undefined);
  },
),
Contributor


high

There is a large block of code for handling simulation completion, cancellation, and failure (lines 480-532) that is nearly identical to the logic in the run thunk (lines 381-433). This code duplication makes the code harder to maintain, as any changes would need to be applied in two places.

I recommend refactoring this common logic into a separate helper function. This function could take parameters like the error message and metrics data, and then handle setting the state and tracking events accordingly. This would significantly improve code clarity and maintainability.
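One possible shape for such a helper, sketched with illustrative names (`handleSimulationFinished`, `FinishCallbacks`, and the callback parameters are hypothetical, not identifiers from the codebase). The three branches mirror the canceled/failed/completed handling quoted above:

```typescript
// Hedged sketch: shared finish-handling that both the `run` and
// `continueSimulation` thunks could call. All names are illustrative.
type StopReason = "completed" | "canceled" | "failed";

interface FinishCallbacks {
  setRunning: (v: boolean) => void;
  setShowConsole: (v: boolean) => void;
  setFinished: (v: boolean) => void;
  // Optional hooks so the helper stays decoupled from the store shape.
  runPostTimestep?: (v: boolean) => void;
  notifyError?: (message: string) => void;
  trackStop: (reason: StopReason, errorMessage?: string) => void;
}

function handleSimulationFinished(
  errorMessage: string | undefined,
  cb: FinishCallbacks,
): StopReason {
  if (errorMessage) {
    if (errorMessage.includes("Atomify::canceled")) {
      // User cancellation: not an error, but the run did not complete.
      cb.runPostTimestep?.(true);
      cb.setRunning(false);
      cb.setShowConsole(true);
      cb.setFinished(false);
      cb.trackStop("canceled");
      return "canceled";
    }
    // Genuine failure: surface the message and record it.
    cb.notifyError?.(errorMessage);
    cb.setRunning(false);
    cb.setShowConsole(true);
    cb.setFinished(false);
    cb.trackStop("failed", errorMessage);
    return "failed";
  }
  // Clean completion.
  cb.runPostTimestep?.(true);
  cb.setRunning(false);
  cb.setShowConsole(true);
  cb.setFinished(true);
  cb.trackStop("completed");
  return "completed";
}
```

Each thunk would then build its `metricsData` as it does today and pass the callbacks bound to its own `actions`, so the branching logic lives in exactly one place.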

Comment on lines +78 to +101
describe('continueSimulation types', () => {
  it('should accept undefined timesteps parameter', () => {
    // This test verifies that the type signature allows undefined
    // The actual thunk implementation is tested through integration tests

    // Arrange
    type ContinueSimulationAction = (timesteps: number | undefined) => void;

    // Act & Assert - This should compile without errors
    const mockAction: ContinueSimulationAction = (timesteps) => {
      if (timesteps === undefined) {
        // Default behavior
      } else {
        // Custom timesteps
      }
    };

    // Verify both call signatures work
    mockAction(1000);
    mockAction(undefined);

    expect(mockAction).toBeDefined();
  });
});
Contributor


medium

This test case only verifies that the type signature of a mock action allows undefined; it doesn't test any runtime behavior of the actual continueSimulation thunk. The comment inside even states that the implementation is tested via integration tests.

While type checking is useful, a unit test should ideally verify some logic. This test as it stands provides little value and could be misleading. I'd recommend either:

  1. Removing this test if it's adequately covered by integration tests.
  2. Turning it into a proper unit test for the thunk. This would involve mocking dependencies (like lammps) and asserting that the correct methods are called with the expected arguments (e.g., lammps.runCommand is called with 'run 1000 pre no post no' when no timesteps are provided).
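For option 2, a minimal sketch of what asserting on the command could look like. `buildRunCommand` and `FakeLammps` are hypothetical stand-ins (the real logic is inlined in the thunk and `LammpsWeb` has a larger surface); in a real test you would inject the fake into the store before dispatching the thunk:

```typescript
// Hypothetical stand-in mirroring the command-building logic inside continueSimulation.
function buildRunCommand(timesteps: number | undefined): string {
  return timesteps ? `run ${timesteps} pre no post no` : `run 1000 pre no post no`;
}

// A minimal fake that records commands instead of running LAMMPS.
class FakeLammps {
  commands: string[] = [];
  runCommand(cmd: string): void {
    this.commands.push(cmd);
  }
}

const lammps = new FakeLammps();
lammps.runCommand(buildRunCommand(undefined)); // default path
lammps.runCommand(buildRunCommand(500));       // explicit timesteps
// lammps.commands is now:
// ["run 1000 pre no post no", "run 500 pre no post no"]
```

A test built this way exercises the branch the current type-only test merely describes.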

Comment on lines +437 to +443
// @ts-ignore
const simulation = getStoreState().simulation.simulation as Simulation;
if (!simulation) {
return;
}
// @ts-ignore
const lammps = getStoreState().simulation.lammps as LammpsWeb;
Contributor


medium

The use of // @ts-ignore and type casting with as here (and on line 481) should be avoided. It suppresses potential type errors and indicates that the store's types might not be correctly inferred. easy-peasy provides utilities to create a fully typed store. Please ensure the store is set up so that getStoreState() and getStoreActions() return correctly typed objects, which will eliminate the need for these comments and casts.
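To illustrate the payoff with simplified local types (these interfaces are stand-ins, not the project's real definitions; in the actual code the typing would come from declaring the thunk with easy-peasy's store-model generic so getStoreState() is already typed):

```typescript
// Simplified stand-ins for the store's state slices; illustrative only.
interface LammpsLike {
  getIsRunning(): boolean;
}
interface SimulationSlice {
  simulation?: { id: string };
  lammps?: LammpsLike;
}
interface StoreState {
  simulation: SimulationSlice;
}

// With a typed getStoreState, the early-return guards narrow the optional
// fields themselves: no `as` casts and no @ts-ignore are required.
function readSimulation(getStoreState: () => StoreState) {
  const { simulation, lammps } = getStoreState().simulation;
  if (!simulation || !lammps || lammps.getIsRunning()) {
    return undefined;
  }
  return { simulation, lammps };
}

const state: StoreState = {
  simulation: {
    simulation: { id: "sim-1" },
    lammps: { getIsRunning: () => false },
  },
};
const result = readSimulation(() => state);
// result?.simulation.id === "sim-1"
```

The same narrowing applies inside the thunk once the store model type reaches getStoreState().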

try {
  // Use the suggested command format: run with pre no post no
  // This continues the simulation without re-initializing
  const runCommand = timesteps ? `run ${timesteps} pre no post no` : `run 1000 pre no post no`;
Contributor


medium

The number 1000 is used as a default value for timesteps. This is a 'magic number' and should be extracted into a named constant at the top of the file (e.g., const DEFAULT_CONTINUE_TIMESTEPS = 1000;) to improve readability and maintainability.

Suggested change:
-const runCommand = timesteps ? `run ${timesteps} pre no post no` : `run 1000 pre no post no`;
+const runCommand = `run ${timesteps ?? 1000} pre no post no`;
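Combining the named-constant recommendation with the `??` form gives a sketch like the following (`buildContinueCommand` is a hypothetical helper name, and note one behavioural nuance of the change):

```typescript
// Named default as suggested. Note: `??` (unlike the original ternary)
// honours an explicit 0, which the truthiness check would replace with 1000.
const DEFAULT_CONTINUE_TIMESTEPS = 1000;

function buildContinueCommand(timesteps?: number): string {
  return `run ${timesteps ?? DEFAULT_CONTINUE_TIMESTEPS} pre no post no`;
}
```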
