First version where this operation executes to completion and yields an identical result to the predecessor Python code in `dwifslpreproc`.
The term "reconstruction" is so widely used in medical imaging that I suppose a potential naming conflict was to be expected. Moreover, the overarching purpose you describe could, I think, be achieved directly with …
May I suggest doing it the other way around? The additional functionality to deal with subject motion, the slice profile, outlier weights, etc., brings about a lot more complexity.
Hmmm, potentially. The SHARD recon behaviour in the … What would be the prospect of integrating a subset of the SHARD capabilities into 3.1: enough to facilitate the functionalities required here, but not the entire pre-processing pipeline? One of the tasks on my to-do list is to acquire single-session data using all of the different phase encoding strategies that can be handled by `dwifslpreproc`.
My thinking was that if these were separate "operations" within the same command, akin to something like …
Force-pushed from c2ba661 to c4e275a.
Conflicts: cpp/cmd/dwirecon.cpp
Applies exclusively to the "combine_predicted" operation.
Modifies the stringency of matching of reversed phase encoding volume pairs based on corresponding diffusion sensitisation directions
- `dwirecon combine_pairs`: New options `-pairs_in`, `-pairs_out` for manual specification of which volume indices constitute pairs to be aggregated together.
- Fixes to automatic determination of which volumes should be paired.
- `dwifslpreproc`: Make use of new command `dwirecon`.
- `dwifslpreproc`: New option `-topup_field` to augment option `-topup_files`: while the FSL `eddy` command may take as input the field coefficients "image" and movement parameters file, the downstream volume recombination requires a field offset image.
- New tests for the `dwirecon combine_predicted` `-lmax` option.
This pre-processing configuration corresponds to the strategy where a diffusion gradient table is split in half, with one half acquired using one phase encoding direction and the other half acquired using the opposite phase encoding direction. This invokes `dwirecon combine_predicted`, which, in areas of signal compression in the empirical image data due to susceptibility distortions, partially influences the output image intensities with predictions generated from other volumes.

- `dwirecon`: New options `-weights`, `-predicted`; these are mostly for debugging purposes.
- `dwirecon`: Tweaks suggested by clang-tidy.
- `dwirecon`: Change default lmax for prediction generation if the condition number is very poor, which can happen if the phase encoding groups happen to land exactly on the number of measurements necessary to fit some lmax.
- `dwifslpreproc`: Change the phase encoding design to an Enum, which prevents the manifestation of bugs due to differences in string capitalisation.
@dchristiaens: Given the push to get this into a 3.1, and to a lesser extent the desire to keep the …

Edit: And if demonstrably superior, …
Necessary to pass CI checks on macOS.
clang-tidy review comment:

```cpp
assigned[col] = true;
// Ensure that there are no two unmatched volumes
// that are closer than any two matched volumes
dp_matrix(row, col) = -1.0;
```

warning: narrowing conversion from 'size_t' (aka 'unsigned long') to signed type 'Index' (aka 'long') is implementation-defined [bugprone-narrowing-conversions]

Flagged line: `dp_matrix(row, col) = -1.0;`
clang-tidy review comment:

```cpp
// Ensure that there are no two unmatched volumes
// that are closer than any two matched volumes
dp_matrix(row, col) = -1.0;
dp_matrix(col, row) = -1.0;
```

warning: 1st argument 'col' (passed to 'row') looks like it might be swapped with the 2nd, 'row' (passed to 'col') [readability-suspicious-call-argument]

Flagged line: `dp_matrix(col, row) = -1.0;`

Additional context: `_deps/eigen3-src/Eigen/src/Core/DenseCoeffsBase.h:363`, in the call to `operator()`, declared here: `operator()(Index row, Index col)`
clang-tidy review comment:

```cpp
// Ensure that there are no two unmatched volumes
// that are closer than any two matched volumes
dp_matrix(row, col) = -1.0;
dp_matrix(col, row) = -1.0;
```

warning: narrowing conversion from 'size_t' (aka 'unsigned long') to signed type 'Index' (aka 'long') is implementation-defined [bugprone-narrowing-conversions]

Flagged line: `dp_matrix(col, row) = -1.0;`
clang-tidy review comment:

```cpp
first_volume.index(3) = volume_pairs[out_volume].first;
second_volume.index(3) = volume_pairs[out_volume].second;
for (auto l = Loop(dwi_out, 0, 3)(dwi_out, first_volume, second_volume); l; ++l)
  dwi_out.value() = 0.5f * (first_volume.value() + second_volume.value());
```

warning: floating point literal has suffix 'f', which is not uppercase [readability-uppercase-literal-suffix]

Suggested change:

```diff
-  dwi_out.value() = 0.5f * (first_volume.value() + second_volume.value());
+  dwi_out.value() = 0.5F * (first_volume.value() + second_volume.value());
```
clang-tidy review comment:

```cpp
default_type empirical_weight = std::max(0.0, std::min(1.0, default_type(jacdet.value())));
if (empirical_weight == 1.0) {
  for (const auto volume : target_volumes) {
    dwi_in.index(3) = volume;
```

warning: narrowing conversion from 'unsigned long' to signed type 'ssize_t' (aka 'long') is implementation-defined [bugprone-narrowing-conversions]

Flagged line: `dwi_in.index(3) = volume;`
clang-tidy review comment:

```cpp
for (size_t source_index = 0; source_index != source_volumes.size(); ++source_index) {
  source_weights[source_index] = std::max(0.0, jacdets[pe_indices[source_volumes[source_index]]]);
  dwi_in.index(3) = source_volumes[source_index];
  source_data[source_index] = dwi_in.value();
```

warning: narrowing conversion from 'size_t' (aka 'unsigned long') to signed type 'Index' (aka 'long') is implementation-defined [bugprone-narrowing-conversions]

Flagged line: `source_data[source_index] = dwi_in.value();`
clang-tidy review comment:

```cpp
  source_data[source_index] = dwi_in.value();
}
// Build the transformation from data in all other phase encoding groups to SH
source2SH = Math::wls(Math::SH::init_transform(source_dirset, lmax), source_weights);
```

warning: narrowing conversion from 'size_t' (aka 'unsigned long') to signed type 'int' is implementation-defined [bugprone-narrowing-conversions]

Flagged line: `source2SH = Math::wls(Math::SH::init_transform(source_dirset, lmax), source_weights);`
clang-tidy review comment:

```cpp
predicted_data.noalias() = source2target * source_data;
// Write these to the output image
for (size_t target_index = 0; target_index != target_volumes.size(); ++target_index) {
  dwi_in.index(3) = dwi_out.index(3) = target_volumes[target_index];
```

warning: narrowing conversion from 'value_type' (aka 'unsigned long') to signed type 'ssize_t' (aka 'long') is implementation-defined [bugprone-narrowing-conversions]

Flagged line: `dwi_in.index(3) = dwi_out.index(3) = target_volumes[target_index];`
clang-tidy review comment:

```cpp
for (size_t target_index = 0; target_index != target_volumes.size(); ++target_index) {
  dwi_in.index(3) = dwi_out.index(3) = target_volumes[target_index];
  dwi_out.value() =
      (empirical_weight * dwi_in.value()) + ((1.0 - empirical_weight) * predicted_data[target_index]);
```

warning: narrowing conversion from 'double' to 'value_type' (aka 'float') [bugprone-narrowing-conversions]

Flagged line: `(empirical_weight * dwi_in.value()) + ((1.0 - empirical_weight) * predicted_data[target_index]);`
clang-tidy review comment:

```cpp
for (size_t target_index = 0; target_index != target_volumes.size(); ++target_index) {
  dwi_in.index(3) = dwi_out.index(3) = target_volumes[target_index];
  dwi_out.value() =
      (empirical_weight * dwi_in.value()) + ((1.0 - empirical_weight) * predicted_data[target_index]);
```

warning: narrowing conversion from 'size_t' (aka 'unsigned long') to signed type 'Index' (aka 'long') is implementation-defined [bugprone-narrowing-conversions]

Flagged line: `(empirical_weight * dwi_in.value()) + ((1.0 - empirical_weight) * predicted_data[target_index]);`
Creating a draft PR to facilitate external tracking.
This work was done in collaboration with @tclose and @arkiev for creation of a DWI pre-processing pipeline in Pydra. We've ported much of `dwifslpreproc` over to that environment. But one of the final components, being the explicit weighted recombination of DWI volume pairs with identical diffusion sensitisation but opposing phase encoding direction, would have been exceptionally difficult to port to that environment.

Moreover, I am commonly utilising an acquisition strategy that is currently not explicitly handled by `dwifslpreproc`. I utilise a subset of the capabilities of the scattered-slice acquisition work, where I take the full gradient table, split it into two individually well-distributed subsets, and acquire one subset with one phase encoding direction and the other with the opposite phase encoding direction, such that after pre-processing and concatenation the aggregated data are well distributed. However, in regions of strong susceptibility distortion, half of the images will be blurred. Ideally, this would be handled either by something like what the SHARD recon does, or with a weighted diffusion model fit. What I've done here instead, as an incremental improvement that's consistent with the current workflow, is to introduce an additional capability into the pre-processing, which will appear in the same place as the explicit volume pair combination. Here, where a volume is expected to be erroneously smooth due to EPI distortion, the reconstructed image intensity will be heuristically weighted with the intensity predicted from other volume groups that have different effects of distortions. This allows slightly better results of DWI model fitting in regions of strong susceptibility distortions. Unlike the SHARD recon, this is done on a per-shell basis. This would have been very difficult to manage as Python code within `dwifslpreproc`.

Therefore, I'm here proposing a new C++ command `dwirecon`, which can perform different operations involving reconstruction of DWI data depending on what is required for any given processing pipeline.

The implemented `combine_pairs` operation duplicates exactly what is currently done in `dwifslpreproc`. The `combine_predicted` operation seems to be doing a sensible job in terms of utilising predicted data in regions of strong susceptibility distortions but leaving data elsewhere unmodified; I'll present some examples here when I get the chance to come back to it.

To do:

- Integrate use of `dwirecon` into `dwifslpreproc`. `dwirecon` using the `combine_pairs` option supersedes a reasonably large block of code in `dwifslpreproc`, which can be erased and replaced with a single command execution.
- Implement `-rpe_split` option to `dwifslpreproc`. This will allow manual user specification that, within the input DWI series, the first half of the volumes have the phase encoding direction as specified at the command-line and the second half have the opposite phase encoding direction; but unlike `-rpe_all`, where this results in explicit volume recombination, here it would instead trigger the `combine_predicted` operation of `dwirecon`.
- Implement `-rpe_header` detection that the `combine_predicted` operation should be performed. This will require some form of heuristic that looks at the different volume groups, their gradient tables, and the gradient table of the full concatenated set.
  I will require this functionality to be in place myself, given that `-rpe_split` will not apply.
- Implement `-volume_pairs` option for the `combine_pairs` operation. Given that it is expected that this command will be executed within `dwifslpreproc` after `eddy`, it will be taking the motion-corrected gradient table. This introduces ambiguity into the determination of matching pairs of volumes with equivalent diffusion sensitisation but opposing phase encoding directions (assuming you want to do explicit matching rather than assuming equal order). Instead, `dwifslpreproc` or some other processing pipeline might determine upstream that this is the type of acquisition that has occurred, store that volume index matching in a file, and provide that file as input to `dwirecon`.
- Clean up excessive code comments from the development process.
- (optional) Implement `leave_one_out` operation. For each DWI volume, generate an A->SH->A transform using all other volumes in that shell, and write the predictions to the output image. This would do something akin to the "Patch2Self" algorithm, just exclusively on a per-voxel basis rather than a sliding window.
Edit: Deferred to ENH dwirecon: leave-one-out prediction #3184.
- (optional) Implement proximity weighting for A->SH transforms. When generating a predicted intensity for a given direction, something more similar to `eddy`'s GP predictor could be achieved by doing a weighted least squares fit, where the weights are determined by e.g. cosine distance to the direction to be predicted. This means that there would be a separate transform for every direction to be predicted, so it would be more computationally expensive; but I'm nevertheless interested to see what kind of impact this might have.
  Edit: Deferred to Proximity-weighted SH fit #3185.
- Discuss command name. While `dwirecon` I think makes the most sense given the underlying operations, it conflicts with an existing command in the external project https://github.com/dchristiaens/shard-recon. Longer term, I could see the operation of the SHARD `dwirecon` command being incorporated into this `dwirecon` command as one operation among many. However, in the absence of such a merge, the command name conflict exists. Open to thoughts on how best to resolve this.