Automated regression tests #187

@jl-wynen

Description

We need a way to test whether our workflows still produce the accepted 'correct' results after we make changes, e.g., in scipp/esssans#135 and scipp/esssans#143. However, some changes should change the result, such as adding a new correction or tuning a parameter. In Mantid, the accepted results are written to file and loaded to compare them against results from a new version of the code. This requires extra infrastructure to store and provide the files, plus extra work to keep them up to date. Here is a potential alternative.

Have a test script that performs the following procedure on each PR (a rough sketch follows the list):

  1. Check out main.
  2. Run tests with a specific mark and save results of those tests to a folder results_main.
  3. Check out the head of the PR branch.
  4. Run tests with the same mark and save results to results_branch.
  5. For each file that exists in both results_main and results_branch, load the file and compare with sc.testing.assert_identical and sc.testing.assert_allclose.
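
To make the procedure concrete, here is a minimal sketch of such a driver. It assumes pytest, a hypothetical `regression` mark, and a hypothetical `RESULTS_DIR` environment variable that the tests read to decide where to write their output; the git handling is simplified and would need to be adapted for CI:

```python
import os
import subprocess
from pathlib import Path

import scipp as sc


def run_marked_tests(ref: str, out_dir: Path) -> None:
    # Check out the given git ref and run only the marked tests,
    # telling them where to save their results.
    subprocess.run(['git', 'checkout', ref], check=True)
    out_dir.mkdir(exist_ok=True)
    subprocess.run(
        ['pytest', '-m', 'regression'],
        env={**os.environ, 'RESULTS_DIR': str(out_dir)},
        check=True,
    )


def compare_results(main_dir: Path, branch_dir: Path) -> None:
    # Compare every file that exists in both result folders.
    for main_file in main_dir.iterdir():
        branch_file = branch_dir / main_file.name
        if not branch_file.exists():
            continue
        expected = sc.io.load_hdf5(main_file)
        actual = sc.io.load_hdf5(branch_file)
        # Use sc.testing.assert_allclose instead where bitwise
        # identity is too strict.
        sc.testing.assert_identical(actual, expected)


run_marked_tests('main', Path('results_main'))
run_marked_tests('my-pr-branch', Path('results_branch'))  # hypothetical ref
compare_results(Path('results_main'), Path('results_branch'))
```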

The tests run this way can still contain ordinary assertions, e.g., checking that the result has the expected shape, but their main purpose is to write data (see the sketch below). That data can be any scipp object, e.g., the result of running a workflow.
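
For illustration, a marked test might look like the following sketch; `run_sans_workflow`, the `regression` mark, and the `RESULTS_DIR` variable are hypothetical placeholders:

```python
import os
from pathlib import Path

import pytest

from my_package import run_sans_workflow  # hypothetical workflow entry point


@pytest.mark.regression
def test_sans_workflow_result(tmp_path):
    result = run_sans_workflow()
    # Ordinary assertions are still fine, e.g. on the expected shape.
    assert result.dims == ('Q',)
    # The main purpose: persist the result for comparison between runs.
    out_dir = Path(os.environ.get('RESULTS_DIR', tmp_path))
    result.save_hdf5(out_dir / 'sans_workflow.h5')
```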

This procedure would perform regression tests against main, which we assume contains the accepted 'correct' code, without requiring result files to be stored in a public location.

What do you think? Does this make sense?
