1d demo #655
base: master
Conversation
This script simulates and visualizes pressure fields in a one-dimensional heterogeneous medium using k-Wave.
* Introduces `create_storage_variables`, which builds the sensor data storage structure in `kWaveSimulation` that is passed to the simulator
* Updates `kWaveSimulation` to use `create_storage_variables` and adds a setting for nonlinearity
* Fixes shape handling in `ksource`
* Improves 1D simulation logic in `kspaceFirstOrder1D`
* Addresses minor issues in the filters and signals utilities
* Updates dependencies to include `tqdm`
Walkthrough

The changes add a complete 1D k-space first-order simulation feature to kwave: a new 1D solver with optional GPU support, storage variable management (triangulation/preallocation), sensor data extraction utilities, integration into kWaveSimulation, example scripts/notebook, and related utilities/tests and dependency updates across multiple modules.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant kWaveSimulation
    participant create_storage_variables
    participant kspace_first_order_1D
    participant extract_sensor_data
    participant Backend as GPU/CPU Backend
    User->>kWaveSimulation: Initialize grid, medium, source, sensor, options
    kWaveSimulation->>kWaveSimulation: Validate inputs & prepare state
    kWaveSimulation->>create_storage_variables: Request storage setup & triangulation
    create_storage_variables-->>kWaveSimulation: Return flags, record, sensor_data, num_time_points
    kWaveSimulation->>kspace_first_order_1D: Start 1D time-stepping simulation
    kspace_first_order_1D->>Backend: Select NumPy or CuPy backend
    loop per time step
        kspace_first_order_1D->>kspace_first_order_1D: Compute gradients, apply sources, update fields
        kspace_first_order_1D->>extract_sensor_data: Extract/store sensor outputs
        extract_sensor_data-->>kspace_first_order_1D: Return updated sensor_data
    end
    kspace_first_order_1D-->>User: Return sensor_data and results
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)
Codecov Report ❌

```
@@            Coverage Diff            @@
##           master     #655     +/-  ##
==========================================
- Coverage   73.95%   67.79%   -6.17%
==========================================
  Files          50       52       +2
  Lines        7000     7725     +725
  Branches     1338     1567     +229
==========================================
+ Hits         5177     5237      +60
- Misses       1280     1895     +615
- Partials      543      593      +50
```

Flags with carried forward coverage won't be shown.
Actionable comments posted: 17
🤖 Fix all issues with AI agents
In @examples/ivp_1D_simulation/ivp_1D_simulation.ipynb:
- Around line 29-31: The notebook sentence "Now import dependencies, and party
of library which are required." contains a typo; update the cell text to read
"Now import dependencies, and parts of the library that are required." so it
uses "parts" and "that" for correct grammar and clarity; locate and edit the
markdown/source string in the cell where this sentence appears in
ivp_1D_simulation.ipynb.
- Around line 8-10: Update the markdown cell string "First install the pacakage
using the latests version." to correct the typos: change "pacakage" to "package"
and "latests" to "latest" so the sentence reads "First install the package using
the latest version."
- Around line 131-134: The inline comment for the mask array is incorrect: the
values in mask are in meters but the comment says [mm]; update the comment or
units to match the data. Locate the mask creation (variables named mask and
sensor.mask) and either change the comment from "[mm]" to "[m]" or convert the
numeric values to millimeters if you intend to keep "[mm]"; ensure sensor.mask
remains assigned to the corrected mask.
In @examples/ivp_1D_simulation/ivp_1D_simulation.py:
- Around line 62-65: The inline comment for the mask values is wrong: the array
mask = np.array([-10e-3, 10e-3]) is in meters, not millimeters; update the
comment to reflect the correct unit (e.g., change "[mm]" to "[m]" next to the
mask definition) so the documentation matches the values assigned to mask and
sensor.mask.
In @kwave/kspaceFirstOrder1D.py:
- Around line 505-506: The conditional uses the wrong attribute name: replace
the incorrect reference to options.source_ux with k_sim.source_ux; specifically
in the block that checks source ux (the lines with if (k_sim.source_ux is not
False and t_index < xp.shape(source.ux)[1]): if options.source_ux >= t_index:)
change the second condition to if k_sim.source_ux >= t_index so it references
the Simulation (k_sim) attribute instead of the nonexistent options.source_ux.
- Around line 77-84: The function dotdict_to_cpu currently calls isinstance(val,
cp.ndarray) which fails when cp is None; update dotdict_to_cpu to first guard
that cp is not None (or that cp has attribute ndarray) before using isinstance
with cp.ndarray, e.g. check "if cp is not None and isinstance(val, cp.ndarray):"
and otherwise skip the CuPy check (or fall back to checking numpy.ndarray) so no
TypeError occurs when CuPy isn't installed; keep the verbose print and setattr
behavior intact for the CuPy branch.
In @kwave/kWaveSimulation_helper/create_storage_variables.py:
- Around line 52-57: The barycentric coordinates are being computed using the
triangle vertex coordinates instead of the query interpolation_points; update
the bc calculation in create_storage_variables.py to subtract
tri.transform[simplex_indices, 2] from the interpolation_points for each simplex
and then multiply by tri.transform[simplex_indices, :2] (i.e., use
interpolation_points in the dot operand rather than tri.points[simplex_indices,
:]) so bc is computed from the query points; keep simplex_indices and the
returned tri.points[simplex_indices, :] as the vertices.
- Around line 29-33: The barycentric calculation uses the simplex vertices
instead of the query points; change the computation of bc to subtract
tri.transform[indices, 2] from interpolation_points (not tri.points[indices, :])
and multiply by tri.transform[indices, :2] accordingly so the matrix shapes
align (i.e., use interpolation_points - tri.transform[indices, 2] with the same
transpose arrangement used before). Update the expression that builds bc (which
currently references tri.points[indices, :]) to use interpolation_points so bc
contains barycentric coords for the query points.
- Around line 544-549: The code contains an unreachable and syntactically
incorrect block: the "if kgrid.dim == 1:" branch inside an else that only runs
when kgrid.dim != 1 should be removed, and the reshape call itself is wrong;
remove the entire unreachable if-block (including the sensor_x = np.reshape(...)
line) or, if the intent was to handle the kgrid.dim == 1 case, move that logic
out of the else and change the call to np.reshape(mask, (-1, 1)) so the first
argument is the array and the second is the shape; update the code around
create_storage_variables to either delete the unreachable branch or correctly
locate and fix the reshape using np.reshape(mask, (-1, 1)) where sensor_x is
set.
- Around line 85-86: The early return when flags.time_rev is not None returns
only flags causing unpack errors; update the branch in create_storage_variables
to return the same 4-tuple shape as the other path by constructing/initializing
record, sensor_data, and num_recorded_time_points (or appropriate placeholders)
and return (flags, record, sensor_data, num_recorded_time_points) instead of
just flags so callers can always unpack consistently.
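The two barycentric fixes above follow the standard affine-transform formulation that scipy's `Delaunay.transform` implements. A minimal NumPy sketch (triangle vertices and query point are illustrative, not taken from the PR) showing why the query point, and not a vertex, must enter the transform:

```python
import numpy as np

# Hypothetical triangle vertices and a query point inside the triangle.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
q = np.array([0.25, 0.5])

# Affine map analogous to tri.transform: invert [r0 - r2, r1 - r2],
# then apply it to (query_point - r2) -- NOT to a vertex, which is the
# bug the review flags.
T = np.linalg.inv(np.column_stack([verts[0] - verts[2], verts[1] - verts[2]]))
b = T @ (q - verts[2])
bc = np.append(b, 1.0 - b.sum())  # barycentric coords of the query point

# The coords reproduce the query point as a weighted sum of the vertices.
assert np.allclose(bc @ verts, q)

# Feeding a vertex instead of q yields that vertex's own coords (1, 0, 0),
# not the query point's -- interpolation would then just echo nodal values.
b_wrong = T @ (verts[0] - verts[2])
bc_wrong = np.append(b_wrong, 1.0 - b_wrong.sum())
assert not np.allclose(bc_wrong, bc)
```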
In @kwave/kWaveSimulation_helper/extract_sensor_data.py:
- Around line 300-304: The logic for accumulating minimum corner pressures is
inverted: in the else branch where you currently call
np.maximum(sensor_data[cuboid_index].p_min, result), replace that with
np.minimum so the running p_min actually tracks the smallest values; keep the
file_index == 0 initialization (sensor_data[cuboid_index].p_min = result) and
only use np.minimum in the subsequent accumulation when flags.record_p_min is
true.
- Around line 537-547: The 3D branch inside the else (when file_index > 0)
updates ux_min and uy_min but omits updating uz_min; fix by adding a line
mirroring the ux/uy updates to set sensor_data.uz_min =
np.minimum(sensor_data.uz_min, np.sum(uz_sgz[record.tri] * record.bc, axis=1))
in the dim == 3 block (use the same record.tri and record.bc pattern and the
uz_sgz array).
- Around line 435-452: The np.interp calls for 1D Cartesian sensor masks use the
wrong argument order; change each np.interp(record.grid_x, p, record.sensor_x)
(and similar instances) to np.interp(record.sensor_x, record.grid_x, p) so that
x=query points is record.sensor_x, xp=data coordinates is record.grid_x, and
fp=data values is p; apply the same swap for all occurrences referenced (e.g.,
those setting sensor_data.p_max, sensor_data.p_min, and other 1D interpolations)
so subsequent np.minimum/np.maximum operations use correctly interpolated
arrays.
In @kwave/kWaveSimulation.py:
- Around line 210-214: The time_rev method has an infinite recursion because it
returns self.time_rev at the end; change that terminal return to a concrete
boolean (likely False) so the method returns False when no sensor or when sensor
is NotATransducer. Update the method time_rev to return False instead of
self.time_rev and keep the conditional branch that checks self.sensor,
isinstance(NotATransducer), options.simulation_type.is_elastic_simulation(), and
sensor.time_reversal_boundary_data to decide the True case.
In @tests/matlab_test_data_collectors/python_testers/getWin_test.py:
- Line 34: The logging call incorrectly passes multiple values to logging.log
without a format string; replace it with a properly formatted call such as
logging.info("N=%s type=%s param=%s rotation=%s symmetric=%s square=%s", N,
type_, param, rotation, symmetric, square) or use logging.log(logging.INFO,
"N=%s type=%s param=%s rotation=%s symmetric=%s square=%s", N, type_, param,
rotation, symmetric, square) so the variables (N, type_, param, rotation,
symmetric, square) are supplied as arguments to a single format string.
- Around line 36-44: The conditional uses N.all() > 1 which compares a boolean
to 1; change it to (N > 1).all() so the intent "all elements > 1" is correctly
checked for array inputs (locate the conditional around N, get_win call in
getWin_test.py). Also address the changed test coverage: decide whether inputs
with N <= 1 should still exercise get_win or instead be asserted to
raise/skip—either remove the conditional to run the test for those cases or add
explicit asserts/skips for N <= 1 so test behavior is intentional and
documented.
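Both getWin_test fixes can be seen in a short standalone snippet (the window parameters are illustrative placeholders, not values from the test suite):

```python
import logging
import numpy as np

# Lazy %-formatting: logging applies the args to one format string.
N, type_ = np.array([2, 3]), "Tukey"
rec = logging.LogRecord("getWin_test", logging.INFO, "getWin_test.py", 34,
                        "N=%s type=%s", (N, type_), None)
assert rec.getMessage() == "N=[2 3] type=Tukey"

# Element-wise check: compare first, then reduce.
# N.all() collapses to True (which equals 1), so "N.all() > 1" is False
# even though every element exceeds 1; "(N > 1).all()" expresses the intent.
assert bool(N.all() > 1) is False
assert bool((N > 1).all()) is True
```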
🧹 Nitpick comments (5)
kwave/ksource.py (1)
79-80: Consider squeezing both sides of the comparison for consistency.

The validation now compares `p0.shape` against `np.squeeze(kgrid.k).shape`, which removes singleton dimensions from the grid. For consistent comparison, consider also squeezing `p0` to ensure that singleton dimensions are handled uniformly on both sides.

♻️ Proposed enhancement

```diff
-        if self.p0.shape != np.squeeze(kgrid.k).shape:
+        if np.squeeze(self.p0).shape != np.squeeze(kgrid.k).shape:
             # throw an error if p0 is not the correct size
             raise ValueError("source.p0 must be the same size as the computational grid.")
```

examples/ivp_1D_simulation/ivp_1D_simulation.py (1)
75-76: Consider defaulting to CPU for broader compatibility.

The example hardcodes `is_gpu_simulation=True`. While the solver falls back to NumPy if CuPy is unavailable, defaulting to CPU (`is_gpu_simulation=False`) or detecting GPU availability would make this example more accessible to users without GPU support.

kwave/kWaveSimulation.py (1)
528-600: Consider consolidating duplicate storage variable creation logic.

The elastic and non-elastic branches (lines 531-563 and 564-600) are nearly identical, differing only in how `sensor_x` is computed. This could be refactored to reduce duplication.

♻️ Suggested consolidation

```python
# Compute sensor_x based on code type
if not self.blank_sensor:
    if is_elastic_code:
        sensor_x = self.sensor.mask[0, :]
    elif k_dim == 1:
        sensor_x = self.sensor_x
    else:
        sensor_x = self.sensor.mask[0, :]
else:
    sensor_x = None

record_old = copy.deepcopy(self.record)
values = dotdict({
    "sensor_x": sensor_x,
    "sensor_mask_index": self.sensor_mask_index,
    "record": record_old,
    "sensor_data_buffer_size": self.s_source_pos_index
})

if self.record.u_split_field:
    self.record_u_split_field = self.record.u_split_field

flags = dotdict({
    "use_sensor": self.use_sensor,
    "blank_sensor": self.blank_sensor,
    # ... rest of flags
})

flags, self.record, self.sensor_data, self.num_recorded_time_points = create_storage_variables(
    self.kgrid, self.sensor, opt, values, flags, self.record)
```

kwave/kspaceFirstOrder1D.py (2)
3-4: Remove duplicate import.

`SimulationExecutionOptions` is imported twice (lines 3 and 23).

♻️ Remove duplicate

```diff
 from typing import Union
-from kwave.options.simulation_execution_options import SimulationExecutionOptions
-
 try:
     import cupy as cp
 except ImportError:
```

Also applies to: 23-24
474-474: Prefer `range()` over `xp.arange()` for loop iteration.

Using `xp.arange()` creates an array (potentially on GPU with CuPy), which may cause unexpected behavior when iterating. Use Python's built-in `range()` for the loop index.

♻️ Proposed fix

```diff
 # start time loop
-for t_index in tqdm(xp.arange(index_start, index_end, index_step, dtype=int)):
+for t_index in tqdm(range(index_start, index_end, index_step)):
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- examples/ivp_1D_simulation/ivp_1D_simulation.ipynb
- examples/ivp_1D_simulation/ivp_1D_simulation.py
- kwave/kWaveSimulation.py
- kwave/kWaveSimulation_helper/__init__.py
- kwave/kWaveSimulation_helper/create_absorption_variables.py
- kwave/kWaveSimulation_helper/create_storage_variables.py
- kwave/kWaveSimulation_helper/extract_sensor_data.py
- kwave/ksource.py
- kwave/kspaceFirstOrder1D.py
- kwave/utils/filters.py
- kwave/utils/signals.py
- pyproject.toml
- tests/matlab_test_data_collectors/python_testers/getWin_test.py
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
📚 Learning: 2025-09-11T16:12:04.876Z
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
Applied to files:
- examples/ivp_1D_simulation/ivp_1D_simulation.py
- kwave/kspaceFirstOrder1D.py
- examples/ivp_1D_simulation/ivp_1D_simulation.ipynb
🧬 Code graph analysis (7)
tests/matlab_test_data_collectors/python_testers/getWin_test.py (2)
kwave/utils/signals.py (1)
get_win (58-324)
tests/matlab_test_data_collectors/python_testers/utils/record_reader.py (1)
expected_value_of (15-20)
kwave/kWaveSimulation_helper/extract_sensor_data.py (2)
kwave/data.py (1)
numpy (107-108)
kwave/kWaveSimulation.py (1)
compute_directivity (226-238)
examples/ivp_1D_simulation/ivp_1D_simulation.py (7)
kwave/data.py (2)
numpy (107-108), Vector (7-49)
kwave/kgrid.py (5)
kWaveGrid (14-701), Nx (157-161), dx (178-182), x_size (353-357), makeTime (455-495)
kwave/kmedium.py (1)
kWaveMedium (11-232)
kwave/ksensor.py (1)
kSensor (9-81)
kwave/kspaceFirstOrder1D.py (1)
kspace_first_order_1D (87-662)
kwave/options/simulation_execution_options.py (1)
SimulationExecutionOptions (13-308)
kwave/options/simulation_options.py (1)
SimulationOptions (46-348)
kwave/kWaveSimulation_helper/create_absorption_variables.py (1)
kwave/kWaveSimulation.py (1)
equation_of_state(156-167)
kwave/utils/signals.py (1)
kwave/kgrid.py (1)
size(297-301)
kwave/kWaveSimulation_helper/__init__.py (2)
kwave/kWaveSimulation_helper/create_storage_variables.py (1)
create_storage_variables (69-114)
kwave/kWaveSimulation_helper/extract_sensor_data.py (1)
extract_sensor_data (8-714)
kwave/kWaveSimulation.py (7)
kwave/kWaveSimulation_helper/create_storage_variables.py (1)
create_storage_variables (69-114)
kwave/recorder.py (1)
Recorder (9-143)
kwave/kmedium.py (1)
is_nonlinear (107-114)
kwave/ksensor.py (2)
mask (38-42), mask (45-46)
kwave/ktransducer.py (1)
mask (305-311)
kwave/utils/dotdictionary.py (1)
dotdict (4-57)
kwave/options/simulation_options.py (1)
is_axisymmetric (41-42)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
- GitHub Check: test (windows-latest, 3.12)
- GitHub Check: test (ubuntu-latest, 3.13)
- GitHub Check: test (ubuntu-latest, 3.11)
- GitHub Check: test (windows-latest, 3.10)
- GitHub Check: test (macos-latest, 3.13)
- GitHub Check: test (ubuntu-latest, 3.12)
- GitHub Check: test (windows-latest, 3.13)
- GitHub Check: test (ubuntu-latest, 3.10)
- GitHub Check: test (macos-latest, 3.11)
- GitHub Check: test (windows-latest, 3.11)
- GitHub Check: test_install (macos-latest, 3.12, all)
- GitHub Check: test_install (macos-latest, 3.12, docs)
- GitHub Check: test_install (macos-latest, 3.11, test)
- GitHub Check: test_install (windows-latest, 3.13, docs)
- GitHub Check: test_install (windows-latest, 3.12, dev)
- GitHub Check: test_install (windows-latest, 3.10, examples)
🔇 Additional comments (13)
kwave/utils/filters.py (1)
591-593: LGTM! Shape compatibility fix for 1D arrays.

The added check correctly handles the case where the window is 2D but the input array is 1D by squeezing the window to match the input shape. This ensures proper broadcasting during the FFT-based smoothing operation.
tests/matlab_test_data_collectors/python_testers/getWin_test.py (1)
16-16: LGTM! Logging statement correctly formatted.

The logging statement properly uses format string syntax to log the iteration index.
kwave/kWaveSimulation_helper/create_absorption_variables.py (1)
17-18: Confirmed: Callers safely handle None return values for lossless equation of state.

The single caller (`kWaveSimulation.py`, line 1461) assigns the four return values to instance attributes that are already initialized to `None`. The absorption variables (`absorb_nabla1`, `absorb_nabla2`, `absorb_tau`, `absorb_eta`) are used only within equation-of-state-specific code paths (e.g., in `kspaceFirstOrder1D.py`), so returning `None` for the lossless case is safe since those code paths won't execute for lossless media.

pyproject.toml (1)
36-36: The tqdm version 4.67.1 is valid and secure.

Version 4.67.1 is the latest release on PyPI and not yanked. While a prior vulnerability (CVE-2024-34062) affected tqdm, it was fixed in v4.66.3, so this version is not affected.
kwave/kWaveSimulation_helper/__init__.py (1)
1-13: LGTM!

The new imports for `create_storage_variables` and `extract_sensor_data` follow the existing pattern, and `__all__` is correctly updated to expose these new utilities.

kwave/utils/signals.py (3)
59-59: LGTM!

The extended type hint to accept `Tuple[int]` aligns with the updated handling logic below.

145-154: LGTM!

Using `np.asarray(N).size` handles scalar and array-like inputs uniformly, and the explicit `int(N)` cast ensures type consistency for downstream 1D window creation.

310-310: LGTM!

Consistent use of `np.asarray(N).size` in the window enlargement branch.

kwave/kWaveSimulation_helper/extract_sensor_data.py (1)
1-6: LGTM on CuPy import fallback.

The try/except pattern for optional CuPy import with a `cp = None` fallback is appropriate.

kwave/kWaveSimulation.py (2)
147-154: LGTM!

The new `is_nonlinear` property correctly delegates to the medium's nonlinearity check.

166-167: Good catch fixing the typo.

"loseless" → "lossless" is a valid spelling correction.

kwave/kspaceFirstOrder1D.py (1)
kwave/kspaceFirstOrder1D.py (1)
87-92: LGTM on function signature.The function follows the established pattern with separate
simulation_optionsandexecution_optionsparameters. Based on learnings, this aligns with k-Wave Python conventions.kwave/kWaveSimulation_helper/create_storage_variables.py (1)
516-521: Unused buffer variable and unusual `del` on local parameter.

`sensor_data_buffer` is created but never used (suppressed with noqa). Additionally, `del sensor_data_buffer_size` removes the local parameter, which is an unusual pattern. This appears to be an incomplete implementation.

Consider whether the buffer should be stored in `sensor_data` or returned, and whether the `del` statement should instead set a flag or attribute to False.
```
"source": [
 "First install the pacakage using the latests version."
]
```
Fix typographical errors.
The markdown text contains typos: "pacakage" → "package" and "latests" → "latest".
```
"source": [
 "Now import dependencies, and party of library which are required."
]
```
Fix typographical error.
The text "party of library which are required" should likely read "parts of the library that are required".
```
"# this hack is needed to ensure that the sensor is in [1,2] dimensions\n",
"mask = np.array([-10e-3, 10e-3]) # [mm]\n",
"mask = mask[:, np.newaxis].T\n",
"sensor.mask = mask"
```
Same unit comment issue as companion script.
The comment indicates [mm] but the values are in meters. See the corresponding fix in ivp_1D_simulation.py.
```python
# this hack is needed to ensure that the sensor is in [1,2] dimensions
mask = np.array([-10e-3, 10e-3])  # [mm]
mask = mask[:, np.newaxis].T
sensor.mask = mask
```
Fix incorrect unit comment.
The comment states [mm] but the values are in meters (-10e-3 and 10e-3 correspond to ±10 mm = ±0.01 m).
📝 Suggested fix
```diff
 # create a Cartesian sensor mask recording the pressure
 sensor = kSensor()
 sensor.record = ["p"]
 # this hack is needed to ensure that the sensor is in [1,2] dimensions
-mask = np.array([-10e-3, 10e-3])  # [mm]
+mask = np.array([-10e-3, 10e-3])  # [m]
 mask = mask[:, np.newaxis].T
 sensor.mask = mask
```
sensor.mask = mask 📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # this hack is needed to ensure that the sensor is in [1,2] dimensions | |
| mask = np.array([-10e-3, 10e-3]) # [mm] | |
| mask = mask[:, np.newaxis].T | |
| sensor.mask = mask | |
| # this hack is needed to ensure that the sensor is in [1,2] dimensions | |
| mask = np.array([-10e-3, 10e-3]) # [m] | |
| mask = mask[:, np.newaxis].T | |
| sensor.mask = mask |
```python
def dotdict_to_cpu(obj, verbose: bool = False):
    """Convert all numpy arrays in the given dotdict to numpy arrays."""
    for name, val in obj.items():
        if isinstance(val, cp.ndarray):
            if verbose:
                print(f"Converting {name} to numpy array")
            setattr(obj, name, val.get())
    return obj
```
Guard against cp being None before checking cp.ndarray.
When CuPy is not installed, cp is None, so evaluating cp.ndarray raises AttributeError: 'NoneType' object has no attribute 'ndarray' before isinstance() is even called.
🐛 Proposed fix

```diff
 def dotdict_to_cpu(obj, verbose: bool = False):
     """Convert all numpy arrays in the given dotdict to numpy arrays."""
+    if cp is None:
+        return obj
     for name, val in obj.items():
         if isinstance(val, cp.ndarray):
             if verbose:
                 print(f"Converting {name} to numpy array")
             setattr(obj, name, val.get())
     return obj
```
return obj🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 77 - 84, The function
dotdict_to_cpu currently calls isinstance(val, cp.ndarray) which fails when cp
is None; update dotdict_to_cpu to first guard that cp is not None (or that cp
has attribute ndarray) before using isinstance with cp.ndarray, e.g. check "if
cp is not None and isinstance(val, cp.ndarray):" and otherwise skip the CuPy
check (or fall back to checking numpy.ndarray) so no TypeError occurs when CuPy
isn't installed; keep the verbose print and setattr behavior intact for the CuPy
branch.
```python
if dim == 1:
    if file_index == 0:
        sensor_data.p_max = np.interp(record.grid_x, p, record.sensor_x)
    else:
        sensor_data.p_max = np.maximum(sensor_data.p_max, np.interp(record.grid_x, p, record.sensor_x))
else:
    if file_index == 0:
        sensor_data.p_max = np.sum(p[record.tri] * record.bc, axis=1)
    else:
        sensor_data.p_max = np.maximum(sensor_data.p_max, np.sum(p[record.tri] * record.bc, axis=1))

# store the minimum acoustic pressure
if flags.record_p_min:
    if dim == 1:
        if file_index == 0:
            sensor_data.p_min = np.interp(record.grid_x, p, record.sensor_x)
        else:
            sensor_data.p_min = np.minimum(sensor_data.p_min, np.interp(record.grid_x, p, record.sensor_x))
```
Incorrect np.interp argument order for 1D Cartesian sensor mask.
The np.interp signature is np.interp(x, xp, fp) where x = query points, xp = data x-coordinates (monotonic), fp = data values. The current calls appear to have inverted arguments, e.g., np.interp(record.grid_x, p, record.sensor_x) should likely be np.interp(record.sensor_x, record.grid_x, p).
This pattern repeats in multiple locations (lines 437, 439, 450, 452, 462, 470, 484, 499, 511, 526, 539, 552).
🐛 Example fix for record_p_max 1D case
```diff
 if flags.record_p_max:
     if dim == 1:
         if file_index == 0:
-            sensor_data.p_max = np.interp(record.grid_x, p, record.sensor_x)
+            sensor_data.p_max = np.interp(record.sensor_x, record.grid_x, np.real(p))
         else:
-            sensor_data.p_max = np.maximum(sensor_data.p_max, np.interp(record.grid_x, p, record.sensor_x))
+            sensor_data.p_max = np.maximum(sensor_data.p_max, np.interp(record.sensor_x, record.grid_x, np.real(p)))
```

Committable suggestion skipped: line range outside the PR's diff.
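A quick standalone check of the `np.interp(x, xp, fp)` argument order (the array values are illustrative placeholders for grid coordinates, pressures, and sensor positions):

```python
import numpy as np

grid_x = np.array([0.0, 1.0, 2.0])   # data coordinates (xp), monotonic
p = np.array([0.0, 10.0, 20.0])      # data values (fp) at those coordinates
sensor_x = np.array([0.5, 1.5])      # query points (x)

# Correct order: query points first, then coordinates, then values.
assert np.allclose(np.interp(sensor_x, grid_x, p), [5.0, 15.0])

# The swapped order passes pressure values as xp and sensor positions as fp;
# here it even raises, because xp and fp no longer have matching lengths.
try:
    np.interp(grid_x, p, sensor_x)
    err = None
except ValueError as exc:
    err = exc
assert isinstance(err, ValueError)
```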
```python
else:
    if (dim == 1):
        sensor_data.ux_min = np.minimum(sensor_data.ux_min, np.interp(record.grid_x, ux_sgx, record.sensor_x))
    elif (dim == 2):
        sensor_data.ux_min = np.minimum(sensor_data.ux_min, np.sum(ux_sgx[record.tri] * record.bc, axis=1))
        sensor_data.uy_min = np.minimum(sensor_data.uy_min, np.sum(uy_sgy[record.tri] * record.bc, axis=1))
    elif (dim == 3):
        sensor_data.ux_min = np.minimum(sensor_data.ux_min, np.sum(ux_sgx[record.tri] * record.bc, axis=1))
        sensor_data.uy_min = np.minimum(sensor_data.uy_min, np.sum(uy_sgy[record.tri] * record.bc, axis=1))
    else:
        raise RuntimeError("Wrong dimensions")
```
Incomplete 3D uz_min handling when file_index > 0.
The 3D case for record_u_min is missing the uz_min update when file_index > 0. The branch ends at uy_min without including uz_min.
🐛 Proposed fix
```diff
 else:
     if (dim == 1):
         sensor_data.ux_min = np.minimum(sensor_data.ux_min, np.interp(record.grid_x, ux_sgx, record.sensor_x))
     elif (dim == 2):
         sensor_data.ux_min = np.minimum(sensor_data.ux_min, np.sum(ux_sgx[record.tri] * record.bc, axis=1))
         sensor_data.uy_min = np.minimum(sensor_data.uy_min, np.sum(uy_sgy[record.tri] * record.bc, axis=1))
     elif (dim == 3):
         sensor_data.ux_min = np.minimum(sensor_data.ux_min, np.sum(ux_sgx[record.tri] * record.bc, axis=1))
         sensor_data.uy_min = np.minimum(sensor_data.uy_min, np.sum(uy_sgy[record.tri] * record.bc, axis=1))
+        sensor_data.uz_min = np.minimum(sensor_data.uz_min, np.sum(uz_sgz[record.tri] * record.bc, axis=1))
     else:
         raise RuntimeError("Wrong dimensions")
```
raise RuntimeError("Wrong dimensions")🤖 Prompt for AI Agents
In @kwave/kWaveSimulation_helper/extract_sensor_data.py around lines 537 - 547,
The 3D branch inside the else (when file_index > 0) updates ux_min and uy_min
but omits updating uz_min; fix by adding a line mirroring the ux/uy updates to
set sensor_data.uz_min = np.minimum(sensor_data.uz_min,
np.sum(uz_sgz[record.tri] * record.bc, axis=1)) in the dim == 3 block (use the
same record.tri and record.bc pattern and the uz_sgz array).
def time_rev(self) -> bool:
    if self.sensor and not isinstance(self.sensor, NotATransducer):
        return not self.options.simulation_type.is_elastic_simulation() and self.sensor.time_reversal_boundary_data is not None
    return self.userarg_time_rev
    return self.time_rev
Critical: Infinite recursion in time_rev property.
The property returns self.time_rev on line 213, which will recursively call itself, causing a RecursionError. Based on the commented-out code paths, this should likely return False.
🐛 Proposed fix
 @property
 @deprecated(version="0.4.1", reason="Use TimeReversal class instead")
 def time_rev(self) -> bool:
     if self.sensor and not isinstance(self.sensor, NotATransducer):
         return not self.options.simulation_type.is_elastic_simulation() and self.sensor.time_reversal_boundary_data is not None
-    return self.time_rev
+    return False

🤖 Prompt for AI Agents
In @kwave/kWaveSimulation.py around lines 210 - 214, The time_rev method has an
infinite recursion because it returns self.time_rev at the end; change that
terminal return to a concrete boolean (likely False) so the method returns False
when no sensor or when sensor is NotATransducer. Update the method time_rev to
return False instead of self.time_rev and keep the conditional branch that
checks self.sensor, isinstance(NotATransducer),
options.simulation_type.is_elastic_simulation(), and
sensor.time_reversal_boundary_data to decide the True case.
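The failure mode is easy to reproduce in isolation (class and property names here are illustrative):

```python
class Demo:
    @property
    def flag(self):
        # the getter returns the property itself, so every access re-enters the getter
        return self.flag


try:
    Demo().flag
except RecursionError:
    print("RecursionError: property getter called itself")
```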
N = np.squeeze(N)

# logging.log(logging.INFO, N, type_, param, rotation, symmetric, square, win)
logging.log(logging.INFO, N, type_, param, rotation, symmetric, square)
Fix incorrect logging.log usage.
The logging statement is incorrectly formatted. logging.log expects (level, msg, *args), but multiple values are passed as positional arguments without a format string, which will cause incorrect behavior.
🐛 Proposed fix
- logging.log(logging.INFO, N, type_, param, rotation, symmetric, square)
+ logging.log(logging.INFO, "N=%s, type_=%s, param=%s, rotation=%s, symmetric=%s, square=%s", N, type_, param, rotation, symmetric, square)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| logging.log(logging.INFO, N, type_, param, rotation, symmetric, square) | |
| logging.log(logging.INFO, "N=%s, type_=%s, param=%s, rotation=%s, symmetric=%s, square=%s", N, type_, param, rotation, symmetric, square) |
🤖 Prompt for AI Agents
In @tests/matlab_test_data_collectors/python_testers/getWin_test.py at line 34,
The logging call incorrectly passes multiple values to logging.log without a
format string; replace it with a properly formatted call such as
logging.info("N=%s type=%s param=%s rotation=%s symmetric=%s square=%s", N,
type_, param, rotation, symmetric, square) or use logging.log(logging.INFO,
"N=%s type=%s param=%s rotation=%s symmetric=%s square=%s", N, type_, param,
rotation, symmetric, square) so the variables (N, type_, param, rotation,
symmetric, square) are supplied as arguments to a single format string.
if (np.isscalar(N) and N > 1) or (isinstance(N, np.ndarray) and N.all() > 1):
    win_py, cg_py = get_win(N, type_, param=param, rotation=rotation, symmetric=symmetric, square=square)

    cg = reader.expected_value_of("cg")
    win = reader.expected_value_of("win")
    win_py = np.squeeze(win_py)
    assert np.shape(win_py) == np.shape(win)
    assert np.allclose(win_py, win, equal_nan=True)
    assert np.allclose(cg_py, cg, equal_nan=True)
Fix conditional logic bug and clarify test behavior change.
There are two issues with this code:
-
Logic bug: Line 36 has
N.all() > 1, which first evaluatesN.all()(returns a boolean), then compares that boolean with 1. This is incorrect. It should be(N > 1).all()to check if all elements are greater than 1. -
Test behavior change: The conditional wrapping now skips test execution for certain inputs (when N ≤ 1). This changes test coverage and could hide bugs. Verify this is intentional and consider whether these cases should raise an error or be tested differently.
🐛 Proposed fix for the logic bug
- if (np.isscalar(N) and N > 1) or (isinstance(N, np.ndarray) and N.all() > 1):
+ if (np.isscalar(N) and N > 1) or (isinstance(N, np.ndarray) and (N > 1).all()):
    win_py, cg_py = get_win(N, type_, param=param, rotation=rotation, symmetric=symmetric, square=square)
🤖 Prompt for AI Agents
In @tests/matlab_test_data_collectors/python_testers/getWin_test.py around lines
36 - 44, The conditional uses N.all() > 1 which compares a boolean to 1; change
it to (N > 1).all() so the intent "all elements > 1" is correctly checked for
array inputs (locate the conditional around N, get_win call in getWin_test.py).
Also address the changed test coverage: decide whether inputs with N <= 1 should
still exercise get_win or instead be asserted to raise/skip—either remove the
conditional to run the test for those cases or add explicit asserts/skips for N
<= 1 so test behavior is intentional and documented.
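The difference is easy to demonstrate with illustrative values: `N.all()` collapses the array to a single boolean first, and `True > 1` is always `False`.

```python
import numpy as np

N = np.array([2, 3, 4])   # every element is > 1
print(N.all() > 1)        # → False: N.all() is the bool True, and True > 1 is False
print((N > 1).all())      # → True: compare elementwise first, then reduce
```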
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In @kwave/kspaceFirstOrder1D.py:
- Around line 375-386: The hasattr check for the s_source_pos_index block is
wrong: it tests for 's_source_sig_index' but then accesses
k_sim.s_source_pos_index, so when s_source_pos_index exists and
s_source_sig_index does not the conversion is skipped; change the guard to check
hasattr(k_sim, 's_source_pos_index') and ensure the block sets
k_sim.s_source_pos_index = np.squeeze(k_sim.s_source_pos_index) - int(1) (mirror
the pattern used for u_source_pos_index and p_source_pos_index) so the 1→0
indexing conversion is applied correctly.
- Around line 354-363: When options.use_kspace is True, source_kappa may remain
undefined if the complex conditional on k_sim.source.p_mode/u_mode is false;
always initialize source_kappa in that branch (e.g., default to 1.0 or to the
same spectral taper as kappa) so later uses (lines referencing source_kappa
around the additive source handling) cannot raise UnboundLocalError. Locate the
block with symbols options.use_kspace, kappa, source_kappa, c_ref, kgrid.k,
kgrid.dt, and k_sim.source.* and set source_kappa unconditionally inside the
options.use_kspace branch (then override it only when the additive-source
condition is true).
- Around line 561-579: The bug is that source_mat = source_mat.fill(0) assigns
None (since ndarray.fill mutates in-place), so replace that assignment with an
in-place call or reinitialize the array; e.g. call source_mat.fill(0) without
assignment or use source_mat = xp.zeros_like(source_mat) before populating it;
ensure this change is made in the 'additive' case where source_mat,
k_sim.p_source_pos_index, and k_sim.source.p[k_sim.p_source_sig_index, t_index]
are used so subsequent FFT/ifft with source_kappa and xp will operate on a valid
array.
- Around line 653-664: sensor.record_start_index is 1-based while t_index is
0-based, causing an off-by-one skip; convert record_start_index to 0-based
before comparing and computing file_index. Replace uses of
sensor.record_start_index in the condition and file_index calculation (the block
gating on k_sim.use_sensor and t_index, and the file_index = t_index -
sensor.record_start_index) so they use a zero-based value (e.g.,
sensor.record_start_index - 1) or compute a local start_index0 =
sensor.record_start_index - 1 and use t_index >= start_index0 and file_index =
t_index - start_index0 when calling extract_sensor_data.
- Around line 504-510: The GPU path fails because scipy.fft is being called on
CuPy arrays and some medium.absorb_nabla1/2 arrays remain NumPy; pick a single
FFT backend at init (alias it as xp.fft or fft_backend) and replace direct
scipy.fft.ifft/fftn/fft calls (e.g., the ifft used in the ux_sgx update that
references ddx_k, ddx_k_shift_pos, kappa, p_k) with the selected backend's fft
functions so they operate on xp arrays, and ensure cp_fft import is removed or
used consistently; additionally, when using_gpu=True convert
medium.absorb_nabla1 and medium.absorb_nabla2 (and any similar medium arrays) to
xp arrays so multiplications with rho0 and duxdx use the same array type.
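The `ndarray.fill` pitfall flagged above is worth seeing in isolation: `fill()` mutates the array in place and returns `None`, so assigning its result discards the array.

```python
import numpy as np

source_mat = np.ones(3)
result = source_mat.fill(0)  # fill() zeroes the array in place and returns None

print(result)      # → None
print(source_mat)  # → [0. 0. 0.]
```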
🧹 Nitpick comments (2)
kwave/kspaceFirstOrder1D.py (2)
1-31: Remove duplicate import and avoid split CuPy import patterns.
`SimulationExecutionOptions` is imported twice (Line 3 and Line 25). Also, you import `cupy` at module import time and dynamically in `get_array_module`, which makes GPU enablement harder to reason about.

Proposed cleanup

-from kwave.options.simulation_execution_options import SimulationExecutionOptions
-
 try:
     import cupy as cp
     from cupy import fft as cp_fft
 except ImportError:
     cp = None
     cp_fft = None
@@
 from kwave.options.simulation_execution_options import SimulationExecutionOptions
 from kwave.options.simulation_options import SimulationOptions
34-57: Make `get_array_module()` deterministic and consistent with the `cp` global.

Right now `get_array_module()` re-imports `cupy` and returns a local `cp`, while other helpers (`to_gpu`, `dotdict_to_cpu`) use the module-global `cp`. It's easy to end up with `using_gpu=True` but helpers referencing a different `cp` symbol.

One way to align behavior

 def get_array_module(verbose: bool = False):
@@
-    # Try to import cupy and check for a GPU
+    # If module import already failed, don't try to use it
+    if cp is None:
+        return np, False
+
+    # Check for a visible GPU
     try:
-        import cupy as cp
-        # this returns the number of visible GPUs
-        ngpus = cp.cuda.runtime.getDeviceCount()
+        ngpus = cp.cuda.runtime.getDeviceCount()
         if ngpus > 0:
             return cp, True
     except Exception:
         pass
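Put together, a self-contained sketch of the deterministic helper suggested above (it consults the module-global `cp` once and falls back to NumPy when CuPy is missing or no device is visible):

```python
import numpy as np

try:
    import cupy as cp
except ImportError:
    cp = None  # module-global: stays None when CuPy is unavailable


def get_array_module(verbose: bool = False):
    """Return (array_module, using_gpu), consulting the module-global cp."""
    if cp is None:
        return np, False
    try:
        if cp.cuda.runtime.getDeviceCount() > 0:
            if verbose:
                print("Using cupy (GPU)")
            return cp, True
    except Exception:
        pass  # driver/runtime errors fall back to the CPU path
    return np, False


xp, using_gpu = get_array_module()
```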
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- examples/ivp_1D_simulation/README.md
- kwave/kspaceFirstOrder1D.py
✅ Files skipped from review due to trivial changes (1)
- examples/ivp_1D_simulation/README.md
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
📚 Learning: 2025-09-11T16:12:04.876Z
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
Applied to files:
kwave/kspaceFirstOrder1D.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
- GitHub Check: test (ubuntu-latest, 3.12)
- GitHub Check: test (ubuntu-latest, 3.13)
- GitHub Check: test (macos-latest, 3.11)
- GitHub Check: test (macos-latest, 3.12)
- GitHub Check: test (macos-latest, 3.13)
- GitHub Check: test (windows-latest, 3.13)
- GitHub Check: test (windows-latest, 3.11)
- GitHub Check: test (windows-latest, 3.12)
- GitHub Check: test (ubuntu-latest, 3.10)
- GitHub Check: test (ubuntu-latest, 3.11)
- GitHub Check: test (windows-latest, 3.10)
- GitHub Check: test_install (windows-latest, 3.10, dev)
🔇 Additional comments (3)
kwave/kspaceFirstOrder1D.py (3)
89-95: API shape matches repo conventions (separate options objects). Signature uses `simulation_options: SimulationOptions` and `execution_options: SimulationExecutionOptions` (not a single options dict), consistent with existing k-Wave Python patterns. Based on learnings, this is the right direction.
490-504: No issue. The repository explicitly requires Python 3.10+ as declared in `pyproject.toml` (`requires-python = ">=3.10"`), so the `match`/`case` syntax is fully supported.
511-514: Verify the condition `if options.source_ux >= t_index:` at line 513. The comparison direction appears backwards for a "start at index" gate. Typically, time gates check `t_index >= start_index` or `start_index <= t_index`. If `options.source_ux` represents a start index, the condition should be `t_index >= options.source_ux`. Additionally, verify the type and semantics of `options.source_ux` to ensure the `>=` comparison is appropriate.
def to_gpu(obj, verbose: bool = False):
    """Convert all numpy arrays in the given object to cupy arrays."""
    for name, val in vars(obj).items():
        if isinstance(val, np.ndarray):
            if verbose:
                print(f"Converting {name} to cupy array")
            setattr(obj, name, cp.asarray(val))
    return obj


def dotdict_to_gpu(obj, verbose: bool = False):
    """Convert all numpy arrays in the given dotdict to cupy arrays."""
    for name, val in obj.items():
        if isinstance(val, np.ndarray):
            if verbose:
                print(f"Converting {name} to cupy array")
            setattr(obj, name, cp.asarray(val))
    return obj


def dotdict_to_cpu(obj, verbose: bool = False):
    """Convert all numpy arrays in the given dotdict to numpy arrays."""
    for name, val in obj.items():
        if isinstance(val, cp.ndarray):
            if verbose:
                print(f"Converting {name} to numpy array")
            setattr(obj, name, val.get())
    return obj
Guard against cp is None, and set dotdict items via keys (not setattr).
- `dotdict_to_cpu()` will crash if called when `cp is None` (`isinstance(val, cp.ndarray)`).
- `dotdict_to_gpu()`/`dotdict_to_cpu()` iterate `obj.items()` but then use `setattr(obj, ...)` instead of `obj[name] = ...` (works only if `dotdict.__setattr__` mirrors keys).
Safer conversions
def dotdict_to_gpu(obj, verbose: bool = False):
"""Convert all numpy arrays in the given dotdict to cupy arrays."""
+ if cp is None:
+ raise RuntimeError("CuPy is not available, cannot move dotdict to GPU")
for name, val in obj.items():
if isinstance(val, np.ndarray):
if verbose:
print(f"Converting {name} to cupy array")
- setattr(obj, name, cp.asarray(val))
+ obj[name] = cp.asarray(val)
return obj
@@
def dotdict_to_cpu(obj, verbose: bool = False):
"""Convert all numpy arrays in the given dotdict to numpy arrays."""
+ if cp is None:
+ return obj
for name, val in obj.items():
if isinstance(val, cp.ndarray):
if verbose:
print(f"Converting {name} to numpy array")
- setattr(obj, name, val.get())
+            obj[name] = val.get()
     return obj

if options.use_kspace:
    kappa = scipy.fft.ifftshift(sinc(c_ref * kgrid.k * kgrid.dt / 2.0))
    kappa = np.squeeze(kappa)
    if (hasattr(options, 'source_p') and hasattr(k_sim.source, 'p_mode')) and (k_sim.source.p_mode == 'additive') or \
       (hasattr(options, 'source_ux') and hasattr(k_sim.source, 'u_mode')) and (k_sim.source.u_mode == 'additive'):
        source_kappa = scipy.fft.ifftshift(xp.cos(c_ref * kgrid.k * kgrid.dt / 2.0))
else:
    kappa = 1.0
    source_kappa = 1.0
Initialize source_kappa unconditionally when use_kspace is on.
If options.use_kspace is True and the if (...) condition doesn’t trigger, source_kappa is never defined but is later used in additive source modes (Line 523 / Line 572). That’s an UnboundLocalError risk.
Proposed fix
 if options.use_kspace:
     kappa = scipy.fft.ifftshift(sinc(c_ref * kgrid.k * kgrid.dt / 2.0))
     kappa = np.squeeze(kappa)
+    source_kappa = 1.0
     if (hasattr(options, 'source_p') and hasattr(k_sim.source, 'p_mode')) and (k_sim.source.p_mode == 'additive') or \
        (hasattr(options, 'source_ux') and hasattr(k_sim.source, 'u_mode')) and (k_sim.source.u_mode == 'additive'):
         source_kappa = scipy.fft.ifftshift(xp.cos(c_ref * kgrid.k * kgrid.dt / 2.0))
 else:
     kappa = 1.0
     source_kappa = 1.0

🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 354 - 363, When options.use_kspace
is True, source_kappa may remain undefined if the complex conditional on
k_sim.source.p_mode/u_mode is false; always initialize source_kappa in that
branch (e.g., default to 1.0 or to the same spectral taper as kappa) so later
uses (lines referencing source_kappa around the additive source handling) cannot
raise UnboundLocalError. Locate the block with symbols options.use_kspace,
kappa, source_kappa, c_ref, kgrid.k, kgrid.dt, and k_sim.source.* and set
source_kappa unconditionally inside the options.use_kspace branch (then override
it only when the additive-source condition is true).
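Reduced to its essentials, the bug is the familiar conditionally-bound-local pattern (function and flag names here are illustrative):

```python
def spectral_factors(use_kspace: bool, additive_source: bool):
    if use_kspace:
        kappa = 0.5
        if additive_source:
            source_kappa = 0.25  # only bound on this sub-branch
    else:
        kappa = 1.0
        source_kappa = 1.0
    # raises UnboundLocalError when use_kspace=True and additive_source=False
    return kappa, source_kappa


try:
    spectral_factors(use_kspace=True, additive_source=False)
except UnboundLocalError:
    print("UnboundLocalError: source_kappa referenced before assignment")
```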
if hasattr(k_sim, 's_source_sig_index') and k_sim.s_source_pos_index is not None:
    k_sim.s_source_pos_index = np.squeeze(k_sim.s_source_pos_index) - int(1)

if hasattr(k_sim, 'u_source_pos_index') and k_sim.u_source_pos_index is not None:
    k_sim.u_source_pos_index = np.squeeze(k_sim.u_source_pos_index) - int(1)

if hasattr(k_sim, 'p_source_pos_index') and k_sim.p_source_pos_index is not None:
    k_sim.p_source_pos_index = np.squeeze(k_sim.p_source_pos_index) - int(1)

if hasattr(k_sim, 's_source_sig_index') and k_sim.s_source_sig_index is not None:
    k_sim.s_source_sig_index = np.squeeze(k_sim.s_source_sig_index) - int(1)
Fix the hasattr typo guarding s_source_pos_index.
This block checks hasattr(k_sim, 's_source_sig_index') but then reads k_sim.s_source_pos_index (Line 375-377). If s_source_pos_index exists but s_source_sig_index doesn’t, you’ll skip the intended 1→0 indexing conversion.
Proposed fix
- if hasattr(k_sim, 's_source_sig_index') and k_sim.s_source_pos_index is not None:
+ if hasattr(k_sim, 's_source_pos_index') and k_sim.s_source_pos_index is not None:
      k_sim.s_source_pos_index = np.squeeze(k_sim.s_source_pos_index) - int(1)
🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 375 - 386, The hasattr check for
the s_source_pos_index block is wrong: it tests for 's_source_sig_index' but
then accesses k_sim.s_source_pos_index, so when s_source_pos_index exists and
s_source_sig_index does not the conversion is skipped; change the guard to check
hasattr(k_sim, 's_source_pos_index') and ensure the block sets
k_sim.s_source_pos_index = np.squeeze(k_sim.s_source_pos_index) - int(1) (mirror
the pattern used for u_source_pos_index and p_source_pos_index) so the 1→0
indexing conversion is applied correctly.
else:
    # calculate gradient using the k-space method on a non-uniform grid
    # via the mapped pseudospectral method
    ux_sgx = pml_x_sgx * (pml_x_sgx * ux_sgx -
                          dt * rho0_sgx_inv * k_sim.kgrid.dxudxn_sgx * xp.real(scipy.fft.ifft(ddx_k * ddx_k_shift_pos * kappa * p_k)))
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, let's check the file structure and size
wc -l kwave/kspaceFirstOrder1D.py
Repository: waltsims/k-wave-python
Length of output: 97
🏁 Script executed:
#!/bin/bash
# Check imports and FFT-related code at the top of the file
head -100 kwave/kspaceFirstOrder1D.py | grep -n "import\|fft"
Length of output: 1151
🏁 Script executed:
#!/bin/bash
# Check the actual lines mentioned in the review (504-510, 588-593, 608-610)
echo "=== Lines 504-510 ==="
sed -n '504,510p' kwave/kspaceFirstOrder1D.py
echo -e "\n=== Lines 588-593 ==="
sed -n '588,593p' kwave/kspaceFirstOrder1D.py
echo -e "\n=== Lines 608-610 ==="
sed -n '608,610p' kwave/kspaceFirstOrder1D.py
Length of output: 1312
🏁 Script executed:
#!/bin/bash
# Check if cp_fft is imported
grep -n "cp_fft\|import.*fft" kwave/kspaceFirstOrder1D.py | head -20Repository: waltsims/k-wave-python
Length of output: 143
🏁 Script executed:
#!/bin/bash
# Look for the function definition and GPU-related code
sed -n '1,100p' kwave/kspaceFirstOrder1D.py | grep -A 50 "def kspace_first_order"
Length of output: 551
🏁 Script executed:
#!/bin/bash
# Find the GPU initialization logic and xp setup
sed -n '50,150p' kwave/kspaceFirstOrder1D.py | grep -A 30 "using_gpu\|get_array_module"
Length of output: 48
🏁 Script executed:
#!/bin/bash
# Look for get_array_module function or xp assignment
grep -n "using_gpu\|get_array_module\|xp.*=" kwave/kspaceFirstOrder1D.py | head -30Repository: waltsims/k-wave-python
Length of output: 2178
🏁 Script executed:
#!/bin/bash
# Look at broader context around the GPU setup
sed -n '200,250p' kwave/kspaceFirstOrder1D.py
Length of output: 333
🏁 Script executed:
#!/bin/bash
# Check what arrays are being used in the problematic FFT calls
# First, let's see the context around line 508
sed -n '490,520p' kwave/kspaceFirstOrder1D.py
Length of output: 1869
🏁 Script executed:
#!/bin/bash
# Look at how arrays like p_k, ddx_k, etc. are created
grep -n "p_k\s*=\|ddx_k\s*=\|rho0\s*=" kwave/kspaceFirstOrder1D.py | head -20Repository: waltsims/k-wave-python
Length of output: 466
🏁 Script executed:
#!/bin/bash
# Look for all scipy.fft usage in the file
grep -n "scipy\.fft" kwave/kspaceFirstOrder1D.pyRepository: waltsims/k-wave-python
Length of output: 1222
🏁 Script executed:
#!/bin/bash
# Also check xp.fft usage to see the pattern
grep -n "xp\.fft" kwave/kspaceFirstOrder1D.py
Length of output: 905
🏁 Script executed:
#!/bin/bash
# Check if there's any handling for medium.absorb_* arrays with xp casting
sed -n '580,620p' kwave/kspaceFirstOrder1D.py
Length of output: 2321
🏁 Script executed:
#!/bin/bash
# Check the get_array_module function implementation
sed -n '34,50p' kwave/kspaceFirstOrder1D.py
Length of output: 659
🏁 Script executed:
#!/bin/bash
# Let's verify if medium.absorb_* arrays are created as numpy or xp arrays
grep -n "absorb_nabla\|absorb_tau\|absorb_eta" kwave/kspaceFirstOrder1D.py | head -20Repository: waltsims/k-wave-python
Length of output: 797
🏁 Script executed:
#!/bin/bash
# Check if these absorb attributes come from the medium object and how they're handled
grep -n "medium\\.absorb" kwave/kspaceFirstOrder1D.pyRepository: waltsims/k-wave-python
Length of output: 797
🏁 Script executed:
#!/bin/bash
# Check the kWaveMedium class to understand where absorb_* arrays come from
find . -name "*.py" -path "*/kmedium*" -o -name "*medium*" | head -5
Length of output: 266
🏁 Script executed:
#!/bin/bash
# Find where absorb_nabla etc. are defined/set
fd "kmedium" -t fRepository: waltsims/k-wave-python
Length of output: 105
🏁 Script executed:
#!/bin/bash
# Check if absorb_nabla1, absorb_nabla2 etc. are created in kmedium.py
grep -n "absorb_nabla\|absorb_tau\|absorb_eta" kwave/kmedium.py | head -20Repository: waltsims/k-wave-python
Length of output: 48
🏁 Script executed:
#!/bin/bash
# Let's verify the actual issue: check if scipy.fft can work on CuPy arrays
# First, let's understand the problem by testing the assertion
python3 << 'EOF'
# Check if scipy.fft works with CuPy arrays (it won't)
try:
import scipy.fft
print("scipy.fft imported successfully")
print("scipy.fft.ifft works on numpy arrays")
print("scipy.fft.ifft will NOT work on CuPy arrays - it requires NumPy arrays")
except Exception as e:
print(f"Error: {e}")
EOF
Length of output: 152
🏁 Script executed:
#!/bin/bash
# Now let's check what happens at the problematic lines
# Line 508: scipy.fft.ifft is called on the result of:
# ddx_k * ddx_k_shift_pos * kappa * p_k
# All of these are xp arrays (CuPy when using_gpu=True)
# Check if ddx_k, ddx_k_shift_pos, kappa, p_k are converted to xp
sed -n '437,455p' kwave/kspaceFirstOrder1D.py
Length of output: 863
🏁 Script executed:
#!/bin/bash
# Look at line 343 onwards - where are these arrays created initially?
sed -n '343,360p' kwave/kspaceFirstOrder1D.py
Length of output: 1044
🏁 Script executed:
#!/bin/bash
# Check the kmedium.py file for where absorb_nabla is created
head -100 kwave/kmedium.py | grep -A 5 "absorb"
Length of output: 357
🏁 Script executed:
#!/bin/bash
# Search more broadly in kmedium.py
grep -n "absorb_nabla\|absorb_tau\|absorb_eta" kwave/kmedium.py | head -30Repository: waltsims/k-wave-python
Length of output: 48
🏁 Script executed:
#!/bin/bash
# Let's check if medium.absorb_nabla* arrays come from NumPy
# by looking at how medium is populated with these attributes
rg "absorb_nabla1\|absorb_nabla2" --type py -A 2 -B 2 | head -50Repository: waltsims/k-wave-python
Length of output: 48
🏁 Script executed:
#!/bin/bash
# Search for where absorb_nabla is assigned in the codebase
rg "absorb_nabla\s*=" --type py | head -20
Length of output: 134
🏁 Script executed:
#!/bin/bash
# Search for absorb_nabla assignment with escaped quotes
grep -r "absorb_nabla.*=" kwave/ --include="*.py" | head -20Repository: waltsims/k-wave-python
Length of output: 1500
🏁 Script executed:
#!/bin/bash
# Let's check the simulation file that processes the medium
rg "absorb_nabla" --type py -B 3 -A 3 | head -80Repository: waltsims/k-wave-python
Length of output: 4054
🏁 Script executed:
#!/bin/bash
# Check if there are any helper files that process the medium
find kwave -name "*helper*" -o -name "*process*" | grep -i medium
Length of output: 48
🏁 Script executed:
#!/bin/bash
# Look for files that might process medium attributes for GPU
rg "medium\\.absorb" kwave/ --type py -B 2 -A 2 | head -60Repository: waltsims/k-wave-python
Length of output: 3142
🏁 Script executed:
#!/bin/bash
# Let's check the create_absorption_variables to see what dtype these arrays are
cat kwave/kWaveSimulation_helper/create_absorption_variables.py | head -100
Length of output: 3610
🏁 Script executed:
#!/bin/bash
# Verify that lines 534, 549, etc. correctly use xp.fft
echo "=== Correct usage with xp.fft ==="
sed -n '534p' kwave/kspaceFirstOrder1D.py
sed -n '549p' kwave/kspaceFirstOrder1D.py
sed -n '488p' kwave/kspaceFirstOrder1D.py
echo ""
echo "=== Incorrect usage with scipy.fft ==="
sed -n '508p' kwave/kspaceFirstOrder1D.py
sed -n '591p' kwave/kspaceFirstOrder1D.py
Repository: waltsims/k-wave-python
Length of output: 808
🏁 Script executed:
#!/bin/bash
# Check if there are tests for GPU mode to understand the severity
find . -name "*test*" -type f | xargs grep -l "using_gpu\|gpu" | head -5
Repository: waltsims/k-wave-python
Length of output: 234
🏁 Script executed:
#!/bin/bash
# Verify the critical issue: what happens when scipy.fft receives CuPy arrays
python3 << 'EOF'
# Document the actual incompatibility
print("Critical Issue Summary:")
print("=" * 60)
print("1. scipy.fft.ifft/ifftn/fftn do NOT support CuPy arrays")
print("2. When using_gpu=True, arrays like p_k, ddx_k, etc. are CuPy")
print("3. scipy.fft operations on CuPy arrays will raise TypeError")
print("")
print("Lines with scipy.fft on CuPy arrays:")
print(" - Line 508: scipy.fft.ifft(ddx_k * ...) where ddx_k is CuPy")
print(" - Lines 591-592: scipy.fft.fftn/ifftn on CuPy arrays")
print(" - Lines 608-609: scipy.fft.fftn/ifftn on CuPy arrays")
print("")
print("Correct pattern (used elsewhere):")
print(" - Lines 488, 534, 549: xp.fft.ifft/ifftn/fftn (works for both NumPy and CuPy)")
EOF
Repository: waltsims/k-wave-python
Length of output: 653
GPU path is broken: scipy.fft cannot operate on CuPy arrays.
The code calls scipy.fft.ifft/ifftn/fftn on CuPy arrays in GPU mode, which will raise at runtime (lines 508, 591–592, 608–609). Additionally, medium.absorb_nabla1/2 arrays remain NumPy after setup, causing errors when multiplied with CuPy arrays like rho0 and duxdx. Meanwhile, cp_fft is imported but never used, and the correct pattern (xp.fft) is already used elsewhere (lines 488, 534, 549).
Select an FFT backend once during initialization and use it consistently:
Suggested approach
import scipy.fft
@@
def kspace_first_order_1D(...):
@@
if execution_options.is_gpu_simulation:
xp, using_gpu = get_array_module()
print(f"Using {'cupy (GPU)' if using_gpu else 'numpy (CPU)'}")
else:
xp = np
using_gpu = False
+
+ # Pick an FFT backend that matches the array module
+ fft = cp_fft if using_gpu else scipy.fft
@@
- ux_sgx = pml_x_sgx * (pml_x_sgx * ux_sgx -
- dt * rho0_sgx_inv * k_sim.kgrid.dxudxn_sgx * xp.real(scipy.fft.ifft(ddx_k * ddx_k_shift_pos * kappa * p_k)) )
+ ux_sgx = pml_x_sgx * (pml_x_sgx * ux_sgx -
+ dt * rho0_sgx_inv * k_sim.kgrid.dxudxn_sgx * xp.real(fft.ifft(ddx_k * ddx_k_shift_pos * kappa * p_k)) )
@@
- p = xp.squeeze(c0**2 * (rhox
- + medium.absorb_tau * xp.real(scipy.fft.ifftn(medium.absorb_nabla1 * scipy.fft.fftn(rho0 * duxdx, axes=(0,)), axes=(0,) ))
- - medium.absorb_eta * xp.real(scipy.fft.ifftn(medium.absorb_nabla2 * scipy.fft.fftn(rhox, axes=(0,)), axes=(0,))) ) )
+ p = xp.squeeze(c0**2 * (rhox
+ + medium.absorb_tau * xp.real(fft.ifftn(medium.absorb_nabla1 * fft.fftn(rho0 * duxdx, axes=(0,)), axes=(0,)))
- medium.absorb_eta * xp.real(fft.ifftn(medium.absorb_nabla2 * fft.fftn(rhox, axes=(0,)), axes=(0,))) ) )
Also convert medium.absorb_nabla1/2 to xp arrays when using_gpu=True.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 504 - 510, The GPU path fails
because scipy.fft is being called on CuPy arrays and some medium.absorb_nabla1/2
arrays remain NumPy; pick a single FFT backend at init (alias it as xp.fft or
fft_backend) and replace direct scipy.fft.ifft/fftn/fft calls (e.g., the ifft
used in the ux_sgx update that references ddx_k, ddx_k_shift_pos, kappa, p_k)
with the selected backend's fft functions so they operate on xp arrays, and
ensure cp_fft import is removed or used consistently; additionally, when
using_gpu=True convert medium.absorb_nabla1 and medium.absorb_nabla2 (and any
similar medium arrays) to xp arrays so multiplications with rho0 and duxdx use
the same array type.
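The suggested pattern (select one FFT module at initialization, then call it unconditionally) can be sketched outside k-Wave as follows; `pick_fft_backend` is a hypothetical helper name, and the `cupyx.scipy.fft` import path is an assumption about the CuPy install:

```python
import numpy as np
import scipy.fft


def pick_fft_backend(use_gpu: bool):
    """Return an FFT module matching the array library in use (sketch)."""
    if use_gpu:
        # Assumption: CuPy exposes a scipy.fft-compatible module here.
        import cupyx.scipy.fft as cp_fft
        return cp_fft
    return scipy.fft


# CPU path: the chosen backend round-trips a NumPy array.
fft = pick_fft_backend(use_gpu=False)
x = np.arange(8, dtype=float)
assert np.allclose(np.real(fft.ifft(fft.fft(x))), x)
```

Every `scipy.fft.*` call in the time loop would then go through the single `fft` alias, so the CPU and GPU paths share one code path.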
# add in the pre-scaled pressure source term as a mass source
if (k_sim.source_p is not False and t_index < xp.shape(k_sim.source.p)[1]):
    match k_sim.source.p_mode:
        case 'dirichlet':
            # enforce source values as a dirichlet boundary condition
            rhox[k_sim.p_source_pos_index] = k_sim.source.p[k_sim.p_source_sig_index, t_index]
        case 'additive':
            # extract the source values into a matrix
            source_mat = source_mat.fill(0)
            source_mat[k_sim.p_source_pos_index] = k_sim.source.p[k_sim.p_source_sig_index, t_index]
            # apply the k-space correction
            source_mat = xp.real(xp.fft.ifft(source_kappa * xp.fft.fft(source_mat)))
            # add the source values to the existing field values
            # including the k-space correction
            rhox = rhox + source_mat
        case 'additive-no-correction':
            # add the source values to the existing field values
            rhox[k_sim.p_source_pos_index] = rhox[k_sim.p_source_pos_index] + source.p[k_sim.p_source_sig_index, t_index]
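The 'additive' branch above can be illustrated with a NumPy-only toy (all sizes, positions, and values here are invented, and `source_kappa` is set to the identity so the k-space correction is a no-op):

```python
import numpy as np

nx = 8
rhox = np.zeros(nx)
source_kappa = np.ones(nx)        # correction operator (identity in this toy case)
pos_index = np.array([2, 3])      # grid points receiving the source
samples = np.array([0.5, 0.25])   # source values at this time step

source_mat = np.zeros(nx)         # zeroed scratch array
source_mat[pos_index] = samples
# apply the k-space correction in the spatial-frequency domain
source_mat = np.real(np.fft.ifft(source_kappa * np.fft.fft(source_mat)))
# add the corrected source values to the existing field
rhox = rhox + source_mat
```

With the identity operator, the FFT round-trip leaves the scattered samples unchanged, so the two source points simply accumulate into `rhox`.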
Bug: source_mat = source_mat.fill(0) sets source_mat to None.
In the pressure-source additive path (Line 569-571), ndarray.fill returns None. This will crash on the next line when indexing.
Proposed fix
case 'additive':
# extract the source values into a matrix
- source_mat = source_mat.fill(0)
+ source_mat.fill(0)
source_mat[k_sim.p_source_pos_index] = k_sim.source.p[k_sim.p_source_sig_index, t_index]🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 561 - 579, The bug is that
source_mat = source_mat.fill(0) assigns None (since ndarray.fill mutates
in-place), so replace that assignment with an in-place call or reinitialize the
array; e.g. call source_mat.fill(0) without assignment or use source_mat =
xp.zeros_like(source_mat) before populating it; ensure this change is made in
the 'additive' case where source_mat, k_sim.p_source_pos_index, and
k_sim.source.p[k_sim.p_source_sig_index, t_index] are used so subsequent
FFT/ifft with source_kappa and xp will operate on a valid array.
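The failure mode is easy to reproduce in isolation: `ndarray.fill` zeroes the array in place and returns `None`, so assigning its result throws the array away:

```python
import numpy as np

source_mat = np.ones(4)
result = source_mat.fill(0)   # mutates source_mat in place...
assert result is None         # ...and returns None
assert np.all(source_mat == 0)

# So `source_mat = source_mat.fill(0)` rebinds source_mat to None, and the
# following `source_mat[...] = ...` raises a TypeError.
```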
# extract required sensor data from the pressure and particle velocity
# fields if the number of time steps elapsed is greater than
# sensor.record_start_index (defaults to 1)
if k_sim.use_sensor and (t_index >= sensor.record_start_index):

    # update index for data storage
    file_index: int = t_index - sensor.record_start_index

    # run sub-function to extract the required data from the acoustic variables
    sensor_data = extract_sensor_data(kdim, xp, sensor_data, file_index, sensor_mask_index,
                                      extract_options, record, p, ux_sgx)
🧩 Analysis chain
🏁 Script executed:
rg -n --hidden -S "record_start_index" kwave | head -n 200
Repository: waltsims/k-wave-python
Length of output: 1988
🏁 Script executed:
rg -n -B 5 "t_index >= sensor.record_start_index" kwave/kspaceFirstOrder1D.py
Repository: waltsims/k-wave-python
Length of output: 368
🏁 Script executed:
sed -n '600,680p' kwave/kspaceFirstOrder1D.py
Repository: waltsims/k-wave-python
Length of output: 3645
🏁 Script executed:
rg -n "for.*t_index" kwave/kspaceFirstOrder1D.py
Repository: waltsims/k-wave-python
Length of output: 137
🏁 Script executed:
rg -n -B 3 "index_start" kwave/kspaceFirstOrder1D.py | head -n 30
Repository: waltsims/k-wave-python
Length of output: 370
🏁 Script executed:
rg -n -A 10 "record_start_index" kwave/ksensor.py | head -n 50
Repository: waltsims/k-wave-python
Length of output: 1317
Convert sensor.record_start_index to 0-based before using it in the comparison and calculation.
The record_start_index is 1-based (defaults to 1, validated as >= 1 per MATLAB convention), but t_index is 0-based (starts at 0 from range(0, ...)). The condition t_index >= sensor.record_start_index causes an off-by-one shift: when record_start_index=1, recording skips at t_index=0 and starts at t_index=1, losing the first time step. Use t_index >= sensor.record_start_index - 1 or convert record_start_index to 0-based at the start of the loop.
🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 653 - 664,
sensor.record_start_index is 1-based while t_index is 0-based, causing an
off-by-one skip; convert record_start_index to 0-based before comparing and
computing file_index. Replace uses of sensor.record_start_index in the condition
and file_index calculation (the block gating on k_sim.use_sensor and t_index,
and the file_index = t_index - sensor.record_start_index) so they use a
zero-based value (e.g., sensor.record_start_index - 1) or compute a local
start_index0 = sensor.record_start_index - 1 and use t_index >= start_index0 and
file_index = t_index - start_index0 when calling extract_sensor_data.
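A toy loop makes the shift concrete (the values are illustrative only): with the 1-based default of 1, the unconverted comparison drops `t_index == 0`, while the 0-based form records every step:

```python
record_start_index = 1  # 1-based, MATLAB-style default

# current (unconverted) gating vs. the proposed 0-based gating
unconverted = [t for t in range(5) if t >= record_start_index]
converted = [t for t in range(5) if t >= record_start_index - 1]

assert unconverted == [1, 2, 3, 4]    # first time step lost
assert converted == [0, 1, 2, 3, 4]   # all steps recorded
```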
This adds some functionality to include a basic one-dimensional simulation example in NumPy/CuPy.
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores