
New command: dwirecon #2915

Merged
Lestropie merged 20 commits into dev from dwirecon
Sep 30, 2025
Conversation


@Lestropie Lestropie commented May 23, 2024

Creating a draft PR to facilitate external tracking.

This work was done in collaboration with @tclose and @arkiev for creation of a DWI pre-processing pipeline in Pydra. We've ported much of dwifslpreproc over to that environment. But one of the final components, the explicit weighted recombination of DWI volume pairs with identical diffusion sensitisation but opposing phase encoding direction, would have been exceptionally difficult to port to that environment.

Moreover, I am commonly utilising an acquisition strategy that is currently not explicitly handled by dwifslpreproc. I utilise a subset of the capabilities of the scattered-slice acquisition work: I take the full gradient table, split it into two individually well-distributed subsets, and acquire one subset with one phase encoding direction and the other with the opposite phase encoding direction, such that after pre-processing and concatenation the aggregated data are well distributed. However, in regions of strong susceptibility distortion, half of the images will be blurred. Ideally this would be handled either by something like what the SHARD recon does, or by a weighted diffusion model fit. What I've done here instead, as an incremental improvement consistent with the current workflow, is to introduce an additional capability into the pre-processing, appearing in the same place as the explicit volume pair combination. Where a volume is expected to be erroneously smooth due to EPI distortion, the reconstructed image intensity is heuristically weighted with the intensity predicted from other volume groups that are differently affected by distortion. This allows slightly better DWI model fits in regions of strong susceptibility distortion. Unlike the SHARD recon, this is done on a per-shell basis. This would have been very difficult to manage as Python code within dwifslpreproc.

Therefore here I'm proposing a new C++ command dwirecon, which can perform different operations involving reconstruction of DWI data depending on what is required for any given processing pipeline.

The implemented combine_pairs operation duplicates exactly what is currently done in dwifslpreproc. The combine_predicted operation seems to be doing a sensible job in terms of utilising predicted data in regions of strong susceptibility distortions but leaving data elsewhere unmodified; I'll present some examples here when I get the chance to come back to it.
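To make the distinction between the two operations concrete, here is a hypothetical Python/numpy sketch; this is not the actual C++ implementation, and the use of a clipped field Jacobian determinant as the per-voxel weight is an assumption for illustration only:

```python
import numpy as np

def combine_pairs(vol_a, vol_b, jacdet_a, jacdet_b):
    """Recombine two volumes with opposing phase encoding.

    Hypothetical sketch: weights derive from the field's Jacobian
    determinant, so a volume that is compressed (jacdet < 1) in a given
    voxel contributes less to the combined intensity there.
    """
    w_a = np.clip(jacdet_a, 0.0, None)
    w_b = np.clip(jacdet_b, 0.0, None)
    total = w_a + w_b
    total[total == 0.0] = 1.0  # avoid division by zero in fully masked voxels
    return (w_a * vol_a + w_b * vol_b) / total

def combine_predicted(empirical, predicted, jacdet):
    """Blend empirical intensities with model predictions where the
    empirical data are compressed by susceptibility distortion."""
    w = np.clip(jacdet, 0.0, 1.0)  # weight given to the empirical measurement
    return w * empirical + (1.0 - w) * predicted
```

In regions without distortion (jacdet close to 1) the empirical data pass through unmodified, which matches the stated goal of leaving data elsewhere untouched.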


  • Integrate use of dwirecon into dwifslpreproc.
    dwirecon using the "combine_pairs" option supersedes a reasonably large block of code in dwifslpreproc, which can be erased and replaced with a single command execution.

  • Implement -rpe_split option to dwifslpreproc.
    This will allow manual user specification that within the input DWI series, the first half of the volumes have the phase encoding direction as specified at the command-line, and the second half have the opposite phase encoding direction; but unlike -rpe_all, where this results in explicit volume recombination, here it would instead trigger the "combine_predicted" operation of dwirecon.

  • Implement -rpe_header detection of when the combine_predicted operation should be performed.
    This will require some form of heuristic that looks at the different volume groups, their gradient tables, and the gradient table of the full concatenated set.
    I will require this functionality to be in place myself given that:

    • We are intentionally acquiring two phase encoding directions, with the number of volumes in the two sets differing by one, such that the natural user interface of -rpe_split will not apply.
    • I want to be able to process these data using either my existing or my upcoming pre-processing pipeline, both of which are based on automatically determining the appropriate pre-processing steps to apply given the input data.
  • Implement -volume_pairs option for combine_pairs operation.
    Given that it is expected that this command will be executed within dwifslpreproc after eddy, it will be taking the motion-corrected gradient table. This introduces ambiguity into the determination of matching pairs of volumes with equivalent diffusion sensitisation but opposing phase encoding directions (assuming you want to do explicit matching rather than assuming equal order). Instead, dwifslpreproc or some other processing pipeline might determine upstream that this is the type of acquisition that has occurred, store that volume index matching in a file, and provide that file as input to dwirecon.
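A hypothetical sketch of the kind of matching heuristic this bullet describes (the function name, b-value tolerance, and greedy strategy are illustrative, not the command's actual logic):

```python
import numpy as np

def match_volume_pairs(bvecs, bvals, pe_group, bval_tol=50.0):
    """Greedily match volume pairs with equivalent diffusion
    sensitisation but opposing phase encoding.

    After eddy rotates the gradient table, directions no longer match
    exactly; so each volume in PE group 0 is paired with the
    as-yet-unmatched volume in PE group 1 that has a compatible b-value
    and the closest direction (largest absolute dot product).
    """
    group0 = [i for i, g in enumerate(pe_group) if g == 0]
    group1 = [i for i, g in enumerate(pe_group) if g == 1]
    pairs = []
    unmatched = set(group1)
    for i in group0:
        candidates = [j for j in unmatched if abs(bvals[i] - bvals[j]) <= bval_tol]
        if not candidates:
            continue  # no opposing-PE volume with a compatible b-value
        best = max(candidates, key=lambda j: abs(np.dot(bvecs[i], bvecs[j])))
        pairs.append((i, best))
        unmatched.discard(best)
    return pairs
```

Writing the resulting index pairs to a file upstream, as the bullet suggests, avoids re-deriving this matching from an already-rotated gradient table.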

  • Clean up excessive code comments from development process.

  • (optional) Implement "leave_one_out" operation.
    For each DWI volume, generate an A->SH->A transform using all other volumes in that shell, and write the predictions to the output image. This will do something akin to the "Patch2Self" algorithm, just exclusively on a per-voxel basis rather than a sliding window.
    Edit: Deferred to ENH dwirecon: leave-one-out prediction #3184.
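For reference, the A->SH->A leave-one-out idea can be sketched with a generic linear design matrix; in MRtrix3 this matrix would come from Math::SH::init_transform, but this Python/numpy version is purely illustrative:

```python
import numpy as np

def leave_one_out_predict(A, data):
    """Leave-one-out prediction through a linear (e.g. amplitude->SH)
    model.

    A is the (n_volumes x n_coefficients) design matrix for one shell;
    for each volume, the model is fitted to all other volumes and then
    evaluated at the left-out row.
    """
    n = A.shape[0]
    predicted = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i                 # exclude the current volume
        coefs, *_ = np.linalg.lstsq(A[keep], data[keep], rcond=None)
        predicted[i] = A[i] @ coefs              # evaluate at the left-out direction
    return predicted
```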

  • (optional) Implement proximity weighting for A->SH transforms
    When generating a predicted intensity for a given direction, something more similar to eddy's GP predictor could be achieved by doing a weighted least squares fit, where the weights are determined by eg. cosine distance to the direction to be predicted. This means that there would be a separate transform for every direction to be predicted, so would be more computationally expensive; but I'm nevertheless interested to see what kind of impact this might have.
    Edit: Deferred to Proximity-weighted SH fit #3185.
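The weighted least-squares idea can be sketched as follows; this hypothetical Python/numpy version solves the weighted normal equations, analogous in spirit to Math::wls, with the weighting exponent beta as an illustrative free parameter:

```python
import numpy as np

def proximity_weighted_fit(A, dirs, data, target_dir, beta=2.0):
    """Proximity-weighted least-squares fit: one transform per target
    direction.

    Each measurement is weighted by a power of the absolute cosine of
    the angle between its direction and the direction to be predicted,
    then the weighted normal equations (A^T W A) x = A^T W b are solved.
    """
    w = np.abs(dirs @ target_dir) ** beta        # cosine-proximity weights
    AtW = A.T * w                                # A^T W, with W diagonal
    return np.linalg.solve(AtW @ A, AtW @ data)
```

Because the weights change with target_dir, a separate solve is needed for every direction to be predicted, which is the extra computational cost the bullet refers to.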

  • Discuss command name.
    While dwirecon I think makes the most sense given the underlying operations, it conflicts with an existing command in external project https://github.com/dchristiaens/shard-recon.
    Longer term, I could see the operation of the SHARD dwirecon command being incorporated into this dwirecon command as one operation among many. However in the absence of such a merge, the command name conflict exists. Open to thoughts on how to best resolve this.


@dchristiaens (Member)

The term "reconstruction" is so widely used in medical imaging that I suppose a potential naming conflict for dwirecon was only a matter of time. That said, I would have preferred to merge SHARD-recon into the main repo before other commands with this name are added.

Moreover, the overarching purpose you describe could, I think, be achieved directly with dwirecon (SHARD edition). You would simply need to provide the field map and the PE table, and set the appropriate slice thickness for the split acquisition you describe. It is perfectly possible to run this single-shell, if you want. From what I understand, the only part that is not supported in dwirecon (SHARD edition) is the leave-one-out approach you describe, but this is not yet implemented in this PR either. Did you compare both approaches?

Longer term, I could see the operation of the SHARD dwirecon command being incorporated into this dwirecon command as one operation among many.

May I suggest to do it the other way around? The additional functionality to deal with subject motion, the slice profile, outlier weights, etc., bring about a lot more complexity.

@Lestropie (Member Author)

dwi2dwi? 🙃

the overarching purpose you describe could, I think, be achieved directly with dwirecon (SHARD edition).

Hmmm, potentially. The SHARD recon behaviour in the -rpe_all context would maybe be closer to eddy's least-squares reconstruction than to the very simple weighted averaging currently in place. For the split PE design, it would certainly be more principled than what I'm doing here. But how the outcomes may differ I don't know; I haven't tried it. For the Pydra pipeline we'd like to integrate SHARD pre-processing as an alternative workflow option in the future (I was going to email you about that separately once we had a minimal working product). Here I'm currently only trying to facilitate utilisation of the current pipeline in that environment, and am conscious of potential roadblocks given external time pressures.

What would be the prospect of integration of a subset of the SHARD capabilities into 3.1: enough to facilitate the functionalities required here, but not the entire pre-processing pipeline?

One of the tasks on my to-do list is to acquire single-session data using all of the different phase encoding strategies that can be handled by dwifslpreproc, to use as exemplar workshop data. Perhaps in the process of generating that data I can finally try getting my hands dirty with SHARD, not only as a potential substitute for dwifslpreproc but potentially also just for estimation of the final DWI data post-eddy.

May I suggest to do it the other way around?

My thinking was that if these were separate "operations" within the same command, akin to something like mrfilter, then the order of implementation / distribution wouldn't matter; there may just be some operations that have many command-line options only applicable to that operation. Whereas if I'm following your train of thought, there would be no need for such distinct "operations"; they would all fall under the banner of "given these input DWI data, and other input parameters, give me output DWI data".


- dwirecon combine_pairs: New options -pairs_in, -pairs_out for manual specification of which volume indices constitute pairs to be aggregated together.
- Fixes to automatic determination of which volumes should be paired.
- dwifslpreproc: Make use of new command dwirecon.
- dwifslpreproc: New option -topup_field to augment option -topup_files: while the FSL eddy command may take as input the field coefficients "image" and movement parameters file, the downstream volume recombination requires a field offset image.
- New tests for dwirecon combine_predicted -lmax option.
  This pre-processing configuration corresponds to the strategy where a diffusion gradient table is split in half, with one half acquired using one phase encoding direction and the other half acquired using the opposite phase encoding direction. This invokes dwirecon combine_predicted, which, in areas of signal compression in the empirical image data due to susceptibility distortions, partially influences the output image intensities with predictions generated from other volumes.
- dwirecon: New options -weights, -predicted; these are mostly for debugging purposes.
- dwirecon: Tweaks suggested by clang-tidy.
- dwirecon: Change default lmax for prediction generation if condition number is very poor, which can happen if the phase encoding groups happen to land exactly on the number of measurements necessary to fit some lmax.
- dwifslpreproc: Change the phase encoding design to an Enum, which prevents the manifestation of bugs due to differences in string capitalisation.
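The condition-number issue mentioned in the commit above arises from a simple counting constraint: an even-order SH fit of order lmax has (lmax+1)(lmax+2)/2 coefficients, so it needs at least that many measurements. A minimal sketch of how a pipeline might cap lmax accordingly (illustrative, not the command's actual rule):

```python
def sh_coefficient_count(lmax):
    """Number of even-order spherical harmonic coefficients up to lmax."""
    return (lmax + 1) * (lmax + 2) // 2

def max_supported_lmax(n_measurements):
    """Largest even lmax whose SH fit is not underdetermined.

    If a phase encoding group lands exactly on the number of
    measurements needed for some lmax, the fit has zero degrees of
    freedom and the condition number can be very poor; a pipeline might
    therefore additionally step lmax down in that case.
    """
    lmax = 0
    while sh_coefficient_count(lmax + 2) <= n_measurements:
        lmax += 2
    return lmax
```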

Lestropie commented Sep 25, 2025

@dchristiaens: Given the push to get this into 3.1, and to a lesser extent the desire to keep the dwifslpreproc behaviour reasonably unchanged, I'd like to proceed with this. When I resolve #2994 against the latest dev, the algorithm defined there as dwirecon can augment the dwirecon command as a third named operation.

Edit: And if demonstrably superior, dwifslpreproc could be modified to use that dwirecon function rather than combine_pairs / combine_predicted.

@Lestropie Lestropie marked this pull request as ready for review September 29, 2025 02:41

@Lestropie (Member Author)

  • Update test data repository branches to reflect merge.


github-actions bot left a comment:

clang-tidy made some suggestions

There were too many comments to post at once. Showing the first 25 out of 31. Check the log or trigger a new build to see more.

assigned[col] = true;
// Ensure that there are no two unmatched volumes
// that are closer than any two matched volumes
dp_matrix(row, col) = -1.0;


warning: narrowing conversion from 'size_t' (aka 'unsigned long') to signed type 'Index' (aka 'long') is implementation-defined [bugprone-narrowing-conversions]

          dp_matrix(row, col) = -1.0;
                         ^

// Ensure that there are no two unmatched volumes
// that are closer than any two matched volumes
dp_matrix(row, col) = -1.0;
dp_matrix(col, row) = -1.0;


warning: 1st argument 'col' (passed to 'row') looks like it might be swapped with the 2nd, 'row' (passed to 'col') [readability-suspicious-call-argument]

          dp_matrix(col, row) = -1.0;
          ^
Additional context

_deps/eigen3-src/Eigen/src/Core/DenseCoeffsBase.h:363: in the call to 'operator()', declared here

    operator()(Index row, Index col)
    ^

// Ensure that there are no two unmatched volumes
// that are closer than any two matched volumes
dp_matrix(row, col) = -1.0;
dp_matrix(col, row) = -1.0;


warning: narrowing conversion from 'size_t' (aka 'unsigned long') to signed type 'Index' (aka 'long') is implementation-defined [bugprone-narrowing-conversions]

          dp_matrix(col, row) = -1.0;
                    ^

first_volume.index(3) = volume_pairs[out_volume].first;
second_volume.index(3) = volume_pairs[out_volume].second;
for (auto l = Loop(dwi_out, 0, 3)(dwi_out, first_volume, second_volume); l; ++l)
dwi_out.value() = 0.5f * (first_volume.value() + second_volume.value());


warning: floating point literal has suffix 'f', which is not uppercase [readability-uppercase-literal-suffix]

Suggested change
dwi_out.value() = 0.5f * (first_volume.value() + second_volume.value());
dwi_out.value() = 0.5F * (first_volume.value() + second_volume.value());

default_type empirical_weight = std::max(0.0, std::min(1.0, default_type(jacdet.value())));
if (empirical_weight == 1.0) {
for (const auto volume : target_volumes) {
dwi_in.index(3) = volume;


warning: narrowing conversion from 'unsigned long' to signed type 'ssize_t' (aka 'long') is implementation-defined [bugprone-narrowing-conversions]

              dwi_in.index(3) = volume;
                                ^

for (size_t source_index = 0; source_index != source_volumes.size(); ++source_index) {
source_weights[source_index] = std::max(0.0, jacdets[pe_indices[source_volumes[source_index]]]);
dwi_in.index(3) = source_volumes[source_index];
source_data[source_index] = dwi_in.value();


warning: narrowing conversion from 'size_t' (aka 'unsigned long') to signed type 'Index' (aka 'long') is implementation-defined [bugprone-narrowing-conversions]

              source_data[source_index] = dwi_in.value();
                          ^

source_data[source_index] = dwi_in.value();
}
// Build the transformation from data in all other phase encoding groups to SH
source2SH = Math::wls(Math::SH::init_transform(source_dirset, lmax), source_weights);


warning: narrowing conversion from 'size_t' (aka 'unsigned long') to signed type 'int' is implementation-defined [bugprone-narrowing-conversions]

            source2SH = Math::wls(Math::SH::init_transform(source_dirset, lmax), source_weights);
                                                                          ^

predicted_data.noalias() = source2target * source_data;
// Write these to the output image
for (size_t target_index = 0; target_index != target_volumes.size(); ++target_index) {
dwi_in.index(3) = dwi_out.index(3) = target_volumes[target_index];


warning: narrowing conversion from 'value_type' (aka 'unsigned long') to signed type 'ssize_t' (aka 'long') is implementation-defined [bugprone-narrowing-conversions]

              dwi_in.index(3) = dwi_out.index(3) = target_volumes[target_index];
                                                   ^

for (size_t target_index = 0; target_index != target_volumes.size(); ++target_index) {
dwi_in.index(3) = dwi_out.index(3) = target_volumes[target_index];
dwi_out.value() =
(empirical_weight * dwi_in.value()) + ((1.0 - empirical_weight) * predicted_data[target_index]);


warning: narrowing conversion from 'double' to 'value_type' (aka 'float') [bugprone-narrowing-conversions]

                  (empirical_weight * dwi_in.value()) + ((1.0 - empirical_weight) * predicted_data[target_index]);
                  ^

for (size_t target_index = 0; target_index != target_volumes.size(); ++target_index) {
dwi_in.index(3) = dwi_out.index(3) = target_volumes[target_index];
dwi_out.value() =
(empirical_weight * dwi_in.value()) + ((1.0 - empirical_weight) * predicted_data[target_index]);


warning: narrowing conversion from 'size_t' (aka 'unsigned long') to signed type 'Index' (aka 'long') is implementation-defined [bugprone-narrowing-conversions]

                  (empirical_weight * dwi_in.value()) + ((1.0 - empirical_weight) * predicted_data[target_index]);
                                                                                                   ^

@Lestropie Lestropie merged commit 0717ba4 into dev Sep 30, 2025
6 checks passed
@Lestropie Lestropie deleted the dwirecon branch September 30, 2025 00:50