Conversation

@djps djps commented Jan 9, 2026

This adds functionality to include a basic one-dimensional simulation example in NumPy/CuPy.

It

  • provides a one-dimensional example that runs without writing/reading HDF files.
  • adds tqdm as a requirement.
  • fixes a misspelling of "lossless".
  • extends the filters and signals utilities to partially handle one-dimensional grids.
  • extends `kWaveSimulation` to handle one-dimensional inputs.
  • includes `create_absorption_variables` and `create_storage_variables`, which can run on the GPU via CuPy in the case specified by the example.

Summary by CodeRabbit

  • New Features

    • Added 1D acoustic wave simulation examples (Jupyter notebook + script)
    • Added a 1D k‑space first‑order solver with optional GPU acceleration
    • Exposed an is_nonlinear property on the simulation class
  • Bug Fixes

    • Fixed spelling ("lossless")
    • Improved initial‑pressure grid size validation and filter/window shape handling
  • Documentation

    • Added README and Colab badge for the 1D example
  • Chores

    • Added progress tracking dependency (tqdm)


djps added 22 commits December 4, 2025 16:23
This script simulates and visualizes pressure fields in a one-dimensional heterogeneous medium using k-Wave.
* Introduces `create_storage_variables` to define the sensor-data storage structure in `kWaveSimulation`, which is passed to the simulator
* updates `kWaveSimulation` to use `create_storage_variables` and adds a setting for nonlinearity
* fixes shape handling in `ksource`
* improves 1D simulation logic in `kspaceFirstOrder1D`
* addresses minor issues in the filters and signals utilities.

Also updates dependencies to include tqdm.
@djps djps added the enhancement New feature or request label Jan 9, 2026
coderabbitai bot commented Jan 9, 2026

Walkthrough

The changes add a complete 1D k-space first-order simulation feature to kwave: a new 1D solver with optional GPU support, storage variable management (triangulation/preallocation), sensor data extraction utilities, integration into kWaveSimulation, example scripts and a notebook, related utility and test updates, and dependency changes across multiple modules.

Changes

Cohort / File(s) Summary
Example Demonstrations
examples/ivp_1D_simulation/ivp_1D_simulation.ipynb, examples/ivp_1D_simulation/ivp_1D_simulation.py, examples/ivp_1D_simulation/README.md
New 1D simulation examples and README: grid/medium/source/sensor setup, GPU-enabled execution options, and visualization.
1D Solver Implementation
kwave/kspaceFirstOrder1D.py
New 1D k-space first-order solver with NumPy/CuPy backend selection, GPU helpers, time-stepping loop, source integration, PML/operators, and sensor extraction hooks.
Storage Management Infrastructure
kwave/kWaveSimulation_helper/create_storage_variables.py
New comprehensive storage variable builder: triangulation utilities (2D/3D), OutputSensor class, shift operators, wavenumber vectors, sensor buffers, cuboid-corner support, and allocation logic for 1D/2D/3D.
Sensor Data Extraction Module
kwave/kWaveSimulation_helper/extract_sensor_data.py
New extraction logic handling binary/cartesian masks, interpolation/triangulation, grid staggering conversions, multi-dim support, stats aggregation, and optional CuPy acceleration.
Core Integration and API Extensions
kwave/kWaveSimulation.py
Added is_nonlinear property, integrated storage-variable creation into input checking, adjusted sensor reshaping/logging and several time-reversal related code paths, plus minor text correction.
Helper Module Updates
kwave/kWaveSimulation_helper/__init__.py, kwave/kWaveSimulation_helper/create_absorption_variables.py
Exported create_storage_variables and extract_sensor_data; added graceful handling for equation_of_state == "lossless" in absorption variables (returns placeholders).
Utilities and Input Validation
kwave/utils/filters.py, kwave/utils/signals.py, kwave/ksource.py
Window smoothing shape alignment for 1D inputs, expanded get_win() input type handling and size checks, and p0 size validation using squeezed kgrid dimensions for degenerate grids.
Tests and Configuration
tests/matlab_test_data_collectors/python_testers/getWin_test.py, pyproject.toml
Enabled/adjusted logging and conditional test invocation for get_win; added dependency tqdm==4.67.1.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant kWaveSimulation
    participant create_storage_variables
    participant kspace_first_order_1D
    participant extract_sensor_data
    participant Backend as GPU/CPU Backend

    User->>kWaveSimulation: Initialize grid, medium, source, sensor, options
    kWaveSimulation->>kWaveSimulation: Validate inputs & prepare state
    kWaveSimulation->>create_storage_variables: Request storage setup & triangulation
    create_storage_variables-->>kWaveSimulation: Return flags, record, sensor_data, num_time_points
    kWaveSimulation->>kspace_first_order_1D: Start 1D time-stepping simulation
    kspace_first_order_1D->>Backend: Select NumPy or CuPy backend
    loop per time step
        kspace_first_order_1D->>kspace_first_order_1D: Compute gradients, apply sources, update fields
        kspace_first_order_1D->>extract_sensor_data: Extract/store sensor outputs
        extract_sensor_data-->>kspace_first_order_1D: Return updated sensor_data
    end
    kspace_first_order_1D-->>User: Return sensor_data and results

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Poem

🐰 I hopped through grids of one dim, bright and small,
I shaped a p0 and listened for every call.
Triangles, sensors, GPU wings in flight,
Together we tracked each pressure and light.
Hop—simulation done—what a lovely sight!

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 inconclusive)
Check name Status Explanation Resolution
Title check ❓ Inconclusive The title '1d demo' is vague and generic, using non-descriptive terms that don't convey meaningful information about the substantial changes in this pull request. Use a more descriptive title that highlights the main change, such as 'Add 1D k-Wave simulation example with GPU support' or 'Add 1D acoustic wave simulation example and storage variable helpers'.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Docstring Coverage ✅ Passed Docstring coverage is 84.85% which is sufficient. The required threshold is 80.00%.



codecov bot commented Jan 9, 2026

Codecov Report

❌ Patch coverage is 17.63122% with 612 lines in your changes missing coverage. Please review.
✅ Project coverage is 67.79%. Comparing base (6ec3d6d) to head (924457a).

Files with missing lines Patch % Lines
...wave/kWaveSimulation_helper/extract_sensor_data.py 1.53% 384 Missing ⚠️
...kWaveSimulation_helper/create_storage_variables.py 35.14% 170 Missing and 33 partials ⚠️
kwave/kWaveSimulation.py 35.71% 15 Missing and 3 partials ⚠️
...veSimulation_helper/create_absorption_variables.py 0.00% 2 Missing ⚠️
kwave/utils/filters.py 0.00% 1 Missing and 1 partial ⚠️
kwave/utils/signals.py 50.00% 0 Missing and 2 partials ⚠️
kwave/ksource.py 0.00% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #655      +/-   ##
==========================================
- Coverage   73.95%   67.79%   -6.17%     
==========================================
  Files          50       52       +2     
  Lines        7000     7725     +725     
  Branches     1338     1567     +229     
==========================================
+ Hits         5177     5237      +60     
- Misses       1280     1895     +615     
- Partials      543      593      +50     
Flag Coverage Δ
3.10 67.79% <17.63%> (-6.17%) ⬇️
3.11 67.79% <17.63%> (-6.17%) ⬇️
3.12 67.79% <17.63%> (-6.17%) ⬇️
3.13 67.79% <17.63%> (-6.17%) ⬇️
macos-latest 67.76% <17.63%> (-6.17%) ⬇️
ubuntu-latest 67.76% <17.63%> (-6.17%) ⬇️
windows-latest 67.77% <17.63%> (-6.17%) ⬇️

Flags with carried forward coverage won't be shown.
@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 17

🤖 Fix all issues with AI agents
In @examples/ivp_1D_simulation/ivp_1D_simulation.ipynb:
- Around line 29-31: The notebook sentence "Now import dependencies, and party
of library which are required." contains a typo; update the cell text to read
"Now import dependencies, and parts of the library that are required." so it
uses "parts" and "that" for correct grammar and clarity; locate and edit the
markdown/source string in the cell where this sentence appears in
ivp_1D_simulation.ipynb.
- Around line 8-10: Update the markdown cell string "First install the pacakage
using the latests version." to correct the typos: change "pacakage" to "package"
and "latests" to "latest" so the sentence reads "First install the package using
the latest version."
- Around line 131-134: The inline comment for the mask array is incorrect: the
values in mask are in meters but the comment says [mm]; update the comment or
units to match the data. Locate the mask creation (variables named mask and
sensor.mask) and either change the comment from "[mm]" to "[m]" or convert the
numeric values to millimeters if you intend to keep "[mm]"; ensure sensor.mask
remains assigned to the corrected mask.

In @examples/ivp_1D_simulation/ivp_1D_simulation.py:
- Around line 62-65: The inline comment for the mask values is wrong: the array
mask = np.array([-10e-3, 10e-3]) is in meters, not millimeters; update the
comment to reflect the correct unit (e.g., change "[mm]" to "[m]" next to the
mask definition) so the documentation matches the values assigned to mask and
sensor.mask.

In @kwave/kspaceFirstOrder1D.py:
- Around line 505-506: The conditional uses the wrong attribute name: replace
the incorrect reference to options.source_ux with k_sim.source_ux; specifically
in the block that checks source ux (the lines with if (k_sim.source_ux is not
False and t_index < xp.shape(source.ux)[1]): if options.source_ux >= t_index:)
change the second condition to if k_sim.source_ux >= t_index so it references
the Simulation (k_sim) attribute instead of the nonexistent options.source_ux.
- Around line 77-84: The function dotdict_to_cpu currently calls isinstance(val,
cp.ndarray) which fails when cp is None; update dotdict_to_cpu to first guard
that cp is not None (or that cp has attribute ndarray) before using isinstance
with cp.ndarray, e.g. check "if cp is not None and isinstance(val, cp.ndarray):"
and otherwise skip the CuPy check (or fall back to checking numpy.ndarray) so no
TypeError occurs when CuPy isn't installed; keep the verbose print and setattr
behavior intact for the CuPy branch.

In @kwave/kWaveSimulation_helper/create_storage_variables.py:
- Around line 52-57: The barycentric coordinates are being computed using the
triangle vertex coordinates instead of the query interpolation_points; update
the bc calculation in create_storage_variables.py to subtract
tri.transform[simplex_indices, 2] from the interpolation_points for each simplex
and then multiply by tri.transform[simplex_indices, :2] (i.e., use
interpolation_points in the dot operand rather than tri.points[simplex_indices,
:]) so bc is computed from the query points; keep simplex_indices and the
returned tri.points[simplex_indices, :] as the vertices.
- Around line 29-33: The barycentric calculation uses the simplex vertices
instead of the query points; change the computation of bc to subtract
tri.transform[indices, 2] from interpolation_points (not tri.points[indices, :])
and multiply by tri.transform[indices, :2] accordingly so the matrix shapes
align (i.e., use interpolation_points - tri.transform[indices, 2] with the same
transpose arrangement used before). Update the expression that builds bc (which
currently references tri.points[indices, :]) to use interpolation_points so bc
contains barycentric coords for the query points.
- Around line 544-549: The code contains an unreachable and syntactically
incorrect block: the "if kgrid.dim == 1:" branch inside an else that only runs
when kgrid.dim != 1 should be removed, and the reshape call itself is wrong;
remove the entire unreachable if-block (including the sensor_x = np.reshape(...)
line) or, if the intent was to handle the kgrid.dim == 1 case, move that logic
out of the else and change the call to np.reshape(mask, (-1, 1)) so the first
argument is the array and the second is the shape; update the code around
create_storage_variables to either delete the unreachable branch or correctly
locate and fix the reshape using np.reshape(mask, (-1, 1)) where sensor_x is
set.
- Around line 85-86: The early return when flags.time_rev is not None returns
only flags causing unpack errors; update the branch in create_storage_variables
to return the same 4-tuple shape as the other path by constructing/initializing
record, sensor_data, and num_recorded_time_points (or appropriate placeholders)
and return (flags, record, sensor_data, num_recorded_time_points) instead of
just flags so callers can always unpack consistently.
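The corrected barycentric computation described above can be sketched with SciPy's `Delaunay` (a standalone illustration under assumed array names, not the project's actual code):

```python
import numpy as np
from scipy.spatial import Delaunay

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tri = Delaunay(points)

interpolation_points = np.array([[0.25, 0.25]])   # query points
simplex_indices = tri.find_simplex(interpolation_points)

# tri.transform[s, :2] is the affine map, tri.transform[s, 2] its offset;
# the barycentric coordinates must come from the QUERY points, not the vertices
T = tri.transform[simplex_indices]
b = np.einsum("nij,nj->ni", T[:, :2, :], interpolation_points - T[:, 2, :])
bc = np.c_[b, 1.0 - b.sum(axis=1)]                # full barycentric coordinates

# a linear function f(x, y) = x + y is reproduced exactly by this interpolation
vertex_values = points.sum(axis=1)
interp = (vertex_values[tri.simplices[simplex_indices]] * bc).sum(axis=1)
```

With the vertex coordinates substituted for the query points, `bc` collapses to vertex barycentrics and the interpolated values are wrong everywhere except at the nodes.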

In @kwave/kWaveSimulation_helper/extract_sensor_data.py:
- Around line 300-304: The logic for accumulating minimum corner pressures is
inverted: in the else branch where you currently call
np.maximum(sensor_data[cuboid_index].p_min, result), replace that with
np.minimum so the running p_min actually tracks the smallest values; keep the
file_index == 0 initialization (sensor_data[cuboid_index].p_min = result) and
only use np.minimum in the subsequent accumulation when flags.record_p_min is
true.
- Around line 537-547: The 3D branch inside the else (when file_index > 0)
updates ux_min and uy_min but omits updating uz_min; fix by adding a line
mirroring the ux/uy updates to set sensor_data.uz_min =
np.minimum(sensor_data.uz_min, np.sum(uz_sgz[record.tri] * record.bc, axis=1))
in the dim == 3 block (use the same record.tri and record.bc pattern and the
uz_sgz array).
- Around line 435-452: The np.interp calls for 1D Cartesian sensor masks use the
wrong argument order; change each np.interp(record.grid_x, p, record.sensor_x)
(and similar instances) to np.interp(record.sensor_x, record.grid_x, p) so that
x=query points is record.sensor_x, xp=data coordinates is record.grid_x, and
fp=data values is p; apply the same swap for all occurrences referenced (e.g.,
those setting sensor_data.p_max, sensor_data.p_min, and other 1D interpolations)
so subsequent np.minimum/np.maximum operations use correctly interpolated
arrays.
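For reference, `np.interp` takes `(x, xp, fp)` in that order — query points first, then data coordinates, then data values (hypothetical arrays mirroring the names in the comment above):

```python
import numpy as np

grid_x = np.linspace(0.0, 1.0, 5)       # data coordinates (like record.grid_x)
p = grid_x ** 2                          # data values at those coordinates
sensor_x = np.array([0.1, 0.6])          # query points (like record.sensor_x)

vals = np.interp(sensor_x, grid_x, p)    # correct: x=query, xp=coords, fp=values
```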

In @kwave/kWaveSimulation.py:
- Around line 210-214: The time_rev method has an infinite recursion because it
returns self.time_rev at the end; change that terminal return to a concrete
boolean (likely False) so the method returns False when no sensor or when sensor
is NotATransducer. Update the method time_rev to return False instead of
self.time_rev and keep the conditional branch that checks self.sensor,
isinstance(NotATransducer), options.simulation_type.is_elastic_simulation(), and
sensor.time_reversal_boundary_data to decide the True case.
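A minimal sketch of the recursion bug and its fix, using a hypothetical class rather than the real kWaveSimulation:

```python
class Sim:
    """Minimal sketch: a property must not return itself."""

    def __init__(self, sensor=None):
        self.sensor = sensor

    @property
    def time_rev(self):
        # returning `self.time_rev` here would re-enter this property and
        # recurse until RecursionError; end with a concrete boolean instead
        if self.sensor is not None and getattr(self.sensor, "time_reversal_boundary_data", None) is not None:
            return True
        return False
```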

In @tests/matlab_test_data_collectors/python_testers/getWin_test.py:
- Line 34: The logging call incorrectly passes multiple values to logging.log
without a format string; replace it with a properly formatted call such as
logging.info("N=%s type=%s param=%s rotation=%s symmetric=%s square=%s", N,
type_, param, rotation, symmetric, square) or use logging.log(logging.INFO,
"N=%s type=%s param=%s rotation=%s symmetric=%s square=%s", N, type_, param,
rotation, symmetric, square) so the variables (N, type_, param, rotation,
symmetric, square) are supplied as arguments to a single format string.
- Around line 36-44: The conditional uses N.all() > 1 which compares a boolean
to 1; change it to (N > 1).all() so the intent "all elements > 1" is correctly
checked for array inputs (locate the conditional around N, get_win call in
getWin_test.py). Also address the changed test coverage: decide whether inputs
with N <= 1 should still exercise get_win or instead be asserted to
raise/skip—either remove the conditional to run the test for those cases or add
explicit asserts/skips for N <= 1 so test behavior is intentional and
documented.
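The difference is easy to verify with NumPy: `.all()` reduces to a boolean before the comparison, so `N.all() > 1` can never express "every element exceeds 1":

```python
import numpy as np

N = np.array([4, 8])
assert (N > 1).all()           # intended check: every element > 1
assert not (N.all() > 1)       # .all() gives True, and True > 1 is False

N = np.array([1, 8])
assert not (N > 1).all()       # correctly rejects an N containing 1
```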
🧹 Nitpick comments (5)
kwave/ksource.py (1)

79-80: Consider squeezing both sides of the comparison for consistency.

The validation now compares p0.shape against np.squeeze(kgrid.k).shape, which removes singleton dimensions from the grid. For consistent comparison, consider also squeezing p0 to ensure that singleton dimensions are handled uniformly on both sides.

♻️ Proposed enhancement
-        if self.p0.shape != np.squeeze(kgrid.k).shape:
-            
+        if np.squeeze(self.p0).shape != np.squeeze(kgrid.k).shape:
             # throw an error if p0 is not the correct size
             raise ValueError("source.p0 must be the same size as the computational grid.")
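A quick NumPy illustration of why squeezing only one side can still reject equivalent shapes (array names here are illustrative):

```python
import numpy as np

p0 = np.zeros((128, 1))          # column vector from a degenerate 1D grid
k = np.zeros((1, 128))           # kgrid.k-like array with a singleton dim

# squeezing only kgrid.k still mismatches: (128, 1) vs (128,)
assert p0.shape != np.squeeze(k).shape
# squeezing both sides compares the meaningful extents: (128,) == (128,)
assert np.squeeze(p0).shape == np.squeeze(k).shape
```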
examples/ivp_1D_simulation/ivp_1D_simulation.py (1)

75-76: Consider defaulting to CPU for broader compatibility.

The example hardcodes is_gpu_simulation=True. While the solver falls back to NumPy if CuPy is unavailable, defaulting to CPU (is_gpu_simulation=False) or detecting GPU availability would make this example more accessible to users without GPU support.
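One hedged way to detect availability (assuming CuPy's standard runtime API; any failure falls back to CPU):

```python
try:
    import cupy as cp
    # at least one CUDA device present -> use the GPU path
    is_gpu_simulation = cp.cuda.runtime.getDeviceCount() > 0
except Exception:
    # CuPy missing or no usable CUDA runtime: run on NumPy
    is_gpu_simulation = False
```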

kwave/kWaveSimulation.py (1)

528-600: Consider consolidating duplicate storage variable creation logic.

The elastic and non-elastic branches (lines 531-563 and 564-600) are nearly identical, differing only in how sensor_x is computed. This could be refactored to reduce duplication.

♻️ Suggested consolidation
# Compute sensor_x based on code type
if not self.blank_sensor:
    if is_elastic_code:
        sensor_x = self.sensor.mask[0, :]
    elif k_dim == 1:
        sensor_x = self.sensor_x
    else:
        sensor_x = self.sensor.mask[0, :]
else:
    sensor_x = None

record_old = copy.deepcopy(self.record)
values = dotdict({
    "sensor_x": sensor_x,
    "sensor_mask_index": self.sensor_mask_index,
    "record": record_old,
    "sensor_data_buffer_size": self.s_source_pos_index
})

if self.record.u_split_field:
    self.record_u_split_field = self.record.u_split_field

flags = dotdict({
    "use_sensor": self.use_sensor,
    "blank_sensor": self.blank_sensor,
    # ... rest of flags
})

flags, self.record, self.sensor_data, self.num_recorded_time_points = create_storage_variables(
    self.kgrid, self.sensor, opt, values, flags, self.record)
kwave/kspaceFirstOrder1D.py (2)

3-4: Remove duplicate import.

SimulationExecutionOptions is imported twice (lines 3 and 23).

♻️ Remove duplicate
 from typing import Union
 
-from kwave.options.simulation_execution_options import SimulationExecutionOptions
-
 try: 
     import cupy as cp
 except ImportError: 

Also applies to: 23-24


474-474: Prefer range() over xp.arange() for loop iteration.

Using xp.arange() creates an array (potentially on GPU with CuPy), which may cause unexpected behavior when iterating. Use Python's built-in range() for the loop index.

♻️ Proposed fix
     # start time loop
-    for t_index in tqdm(xp.arange(index_start, index_end, index_step, dtype=int)):
+    for t_index in tqdm(range(index_start, index_end, index_step)):
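The distinction is visible even on CPU: iterating `np.arange` yields NumPy scalars (with CuPy it would yield device scalars, forcing a host sync per step), while `range` yields plain ints:

```python
import numpy as np

from_arange = list(np.arange(0, 3))
from_range = list(range(0, 3))

assert all(isinstance(i, np.integer) for i in from_arange)
assert all(isinstance(i, int) for i in from_range)
assert [int(i) for i in from_arange] == from_range   # same indices either way
```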
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6ec3d6d and cdff8f9.

📒 Files selected for processing (13)
  • examples/ivp_1D_simulation/ivp_1D_simulation.ipynb
  • examples/ivp_1D_simulation/ivp_1D_simulation.py
  • kwave/kWaveSimulation.py
  • kwave/kWaveSimulation_helper/__init__.py
  • kwave/kWaveSimulation_helper/create_absorption_variables.py
  • kwave/kWaveSimulation_helper/create_storage_variables.py
  • kwave/kWaveSimulation_helper/extract_sensor_data.py
  • kwave/ksource.py
  • kwave/kspaceFirstOrder1D.py
  • kwave/utils/filters.py
  • kwave/utils/signals.py
  • pyproject.toml
  • tests/matlab_test_data_collectors/python_testers/getWin_test.py
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
📚 Learning: 2025-09-11T16:12:04.876Z
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.

Applied to files:

  • examples/ivp_1D_simulation/ivp_1D_simulation.py
  • kwave/kspaceFirstOrder1D.py
  • examples/ivp_1D_simulation/ivp_1D_simulation.ipynb
🧬 Code graph analysis (7)
tests/matlab_test_data_collectors/python_testers/getWin_test.py (2)
kwave/utils/signals.py (1)
  • get_win (58-324)
tests/matlab_test_data_collectors/python_testers/utils/record_reader.py (1)
  • expected_value_of (15-20)
kwave/kWaveSimulation_helper/extract_sensor_data.py (2)
kwave/data.py (1)
  • numpy (107-108)
kwave/kWaveSimulation.py (1)
  • compute_directivity (226-238)
examples/ivp_1D_simulation/ivp_1D_simulation.py (7)
kwave/data.py (2)
  • numpy (107-108)
  • Vector (7-49)
kwave/kgrid.py (5)
  • kWaveGrid (14-701)
  • Nx (157-161)
  • dx (178-182)
  • x_size (353-357)
  • makeTime (455-495)
kwave/kmedium.py (1)
  • kWaveMedium (11-232)
kwave/ksensor.py (1)
  • kSensor (9-81)
kwave/kspaceFirstOrder1D.py (1)
  • kspace_first_order_1D (87-662)
kwave/options/simulation_execution_options.py (1)
  • SimulationExecutionOptions (13-308)
kwave/options/simulation_options.py (1)
  • SimulationOptions (46-348)
kwave/kWaveSimulation_helper/create_absorption_variables.py (1)
kwave/kWaveSimulation.py (1)
  • equation_of_state (156-167)
kwave/utils/signals.py (1)
kwave/kgrid.py (1)
  • size (297-301)
kwave/kWaveSimulation_helper/__init__.py (2)
kwave/kWaveSimulation_helper/create_storage_variables.py (1)
  • create_storage_variables (69-114)
kwave/kWaveSimulation_helper/extract_sensor_data.py (1)
  • extract_sensor_data (8-714)
kwave/kWaveSimulation.py (7)
kwave/kWaveSimulation_helper/create_storage_variables.py (1)
  • create_storage_variables (69-114)
kwave/recorder.py (1)
  • Recorder (9-143)
kwave/kmedium.py (1)
  • is_nonlinear (107-114)
kwave/ksensor.py (2)
  • mask (38-42)
  • mask (45-46)
kwave/ktransducer.py (1)
  • mask (305-311)
kwave/utils/dotdictionary.py (1)
  • dotdict (4-57)
kwave/options/simulation_options.py (1)
  • is_axisymmetric (41-42)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
  • GitHub Check: test (windows-latest, 3.12)
  • GitHub Check: test (ubuntu-latest, 3.13)
  • GitHub Check: test (ubuntu-latest, 3.11)
  • GitHub Check: test (windows-latest, 3.10)
  • GitHub Check: test (macos-latest, 3.13)
  • GitHub Check: test (ubuntu-latest, 3.12)
  • GitHub Check: test (windows-latest, 3.13)
  • GitHub Check: test (ubuntu-latest, 3.10)
  • GitHub Check: test (macos-latest, 3.11)
  • GitHub Check: test (windows-latest, 3.11)
  • GitHub Check: test_install (macos-latest, 3.12, all)
  • GitHub Check: test_install (macos-latest, 3.12, docs)
  • GitHub Check: test_install (macos-latest, 3.11, test)
  • GitHub Check: test_install (windows-latest, 3.13, docs)
  • GitHub Check: test_install (windows-latest, 3.12, dev)
  • GitHub Check: test_install (windows-latest, 3.10, examples)
🔇 Additional comments (13)
kwave/utils/filters.py (1)

591-593: LGTM! Shape compatibility fix for 1D arrays.

The added check correctly handles the case where the window is 2D but the input array is 1D by squeezing the window to match the input shape. This ensures proper broadcasting during the FFT-based smoothing operation.
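A minimal sketch of the shape mismatch being handled (hypothetical arrays):

```python
import numpy as np

win = np.hanning(8)[np.newaxis, :]   # 2D window, shape (1, 8)
x = np.ones(8)                        # 1D input, shape (8,)

assert (win * x).shape == (1, 8)               # broadcasting bumps the rank
assert (np.squeeze(win) * x).shape == (8,)     # squeezed window matches input
```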

tests/matlab_test_data_collectors/python_testers/getWin_test.py (1)

16-16: LGTM! Logging statement correctly formatted.

The logging statement properly uses format string syntax to log the iteration index.

kwave/kWaveSimulation_helper/create_absorption_variables.py (1)

17-18: Confirmed: Callers safely handle None return values for lossless equation of state.

The single caller (kWaveSimulation.py, line 1461) assigns the four return values to instance attributes that are already initialized to None. The absorption variables (absorb_nabla1, absorb_nabla2, absorb_tau, absorb_eta) are used only within equation-of-state-specific code paths (e.g., in kspaceFirstOrder1D.py), so returning None for the lossless case is safe since those code paths won't execute for lossless media.

pyproject.toml (1)

36-36: The tqdm version 4.67.1 is valid and secure.

Version 4.67.1 is the latest release on PyPI and not yanked. While a prior vulnerability (CVE-2024-34062) affected tqdm, it was fixed in v4.66.3, so this version is not affected.

kwave/kWaveSimulation_helper/__init__.py (1)

1-13: LGTM!

The new imports for create_storage_variables and extract_sensor_data follow the existing pattern, and __all__ is correctly updated to expose these new utilities.

kwave/utils/signals.py (3)

59-59: LGTM!

The extended type hint to accept Tuple[int] aligns with the updated handling logic below.


145-154: LGTM!

Using np.asarray(N).size handles scalar and array-like inputs uniformly, and the explicit int(N) cast ensures type consistency for downstream 1D window creation.


310-310: LGTM!

Consistent use of np.asarray(N).size in the window enlargement branch.
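`np.asarray(N).size` does normalize scalar, tuple, and array inputs to a single dimensionality check:

```python
import numpy as np

assert np.asarray(64).size == 1                       # scalar -> 1D window
assert np.asarray((32, 32)).size == 2                 # tuple -> 2D window
assert np.asarray(np.array([16, 16, 16])).size == 3   # array -> 3D window
```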

kwave/kWaveSimulation_helper/extract_sensor_data.py (1)

1-6: LGTM on CuPy import fallback.

The try/except pattern for optional CuPy import with cp = None fallback is appropriate.
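Combined with the `cp is not None` guard flagged elsewhere in this review, the pattern looks like this (a sketch; `to_cpu` is a hypothetical helper, not the module's API):

```python
import numpy as np

try:
    import cupy as cp
except ImportError:
    cp = None   # CuPy is optional; everything below must tolerate None

def to_cpu(val):
    # check cp first so isinstance() is never called with None
    if cp is not None and isinstance(val, cp.ndarray):
        return val.get()   # device -> host copy
    return val
```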

kwave/kWaveSimulation.py (2)

147-154: LGTM!

The new is_nonlinear property correctly delegates to the medium's nonlinearity check.


166-167: Good catch fixing the typo.

"loseless" → "lossless" is a valid spelling correction.

kwave/kspaceFirstOrder1D.py (1)

87-92: LGTM on function signature.

The function follows the established pattern with separate simulation_options and execution_options parameters. Based on learnings, this aligns with k-Wave Python conventions.

kwave/kWaveSimulation_helper/create_storage_variables.py (1)

516-521: Unused buffer variable and unusual del on local parameter.

sensor_data_buffer is created but never used (suppressed with noqa). Additionally, del sensor_data_buffer_size removes the local parameter which is an unusual pattern. This appears to be incomplete implementation.

Consider whether the buffer should be stored in sensor_data or returned, and whether the del statement should instead set a flag or attribute to False.

Comment on lines +8 to +10
"source": [
"First install the pacakage using the latests version."
]

⚠️ Potential issue | 🟡 Minor

Fix typographical errors.

The markdown text contains typos: "pacakage" → "package" and "latests" → "latest".

🤖 Prompt for AI Agents
In @examples/ivp_1D_simulation/ivp_1D_simulation.ipynb around lines 8 - 10,
Update the markdown cell string "First install the pacakage using the latests
version." to correct the typos: change "pacakage" to "package" and "latests" to
"latest" so the sentence reads "First install the package using the latest
version."

Comment on lines +29 to +31
"source": [
"Now import dependencies, and party of library which are required."
]

⚠️ Potential issue | 🟡 Minor

Fix typographical error.

The text "party of library which are required" should likely read "parts of the library that are required".

🤖 Prompt for AI Agents
In @examples/ivp_1D_simulation/ivp_1D_simulation.ipynb around lines 29 - 31, The
notebook sentence "Now import dependencies, and party of library which are
required." contains a typo; update the cell text to read "Now import
dependencies, and parts of the library that are required." so it uses "parts"
and "that" for correct grammar and clarity; locate and edit the markdown/source
string in the cell where this sentence appears in ivp_1D_simulation.ipynb.

Comment on lines +131 to +134
"# this hack is needed to ensure that the sensor is in [1,2] dimensions\n",
"mask = np.array([-10e-3, 10e-3]) # [mm]\n",
"mask = mask[:, np.newaxis].T\n",
"sensor.mask = mask"

⚠️ Potential issue | 🟡 Minor

Same unit comment issue as companion script.

The comment indicates [mm] but the values are in meters. See the corresponding fix in ivp_1D_simulation.py.

🤖 Prompt for AI Agents
In @examples/ivp_1D_simulation/ivp_1D_simulation.ipynb around lines 131 - 134,
The inline comment for the mask array is incorrect: the values in mask are in
meters but the comment says [mm]; update the comment or units to match the data.
Locate the mask creation (variables named mask and sensor.mask) and either
change the comment from "[mm]" to "[m]" or convert the numeric values to
millimeters if you intend to keep "[mm]"; ensure sensor.mask remains assigned to
the corrected mask.

Comment on lines +62 to +65
# this hack is needed to ensure that the sensor is in [1,2] dimensions
mask = np.array([-10e-3, 10e-3]) # [mm]
mask = mask[:, np.newaxis].T
sensor.mask = mask

⚠️ Potential issue | 🟡 Minor

Fix incorrect unit comment.

The comment states [mm] but the values are in meters (-10e-3 and 10e-3 correspond to ±10 mm = ±0.01 m).

📝 Suggested fix
 # create a Cartesian sensor mask recording the pressure
 sensor = kSensor()
 sensor.record = ["p"]

 # this hack is needed to ensure that the sensor is in [1,2] dimensions
-mask = np.array([-10e-3, 10e-3]) # [mm]
+mask = np.array([-10e-3, 10e-3])  # [m]
 mask = mask[:, np.newaxis].T
 sensor.mask = mask 
🤖 Prompt for AI Agents
In @examples/ivp_1D_simulation/ivp_1D_simulation.py around lines 62 - 65, The
inline comment for the mask values is wrong: the array mask = np.array([-10e-3,
10e-3]) is in meters, not millimeters; update the comment to reflect the correct
unit (e.g., change "[mm]" to "[m]" next to the mask definition) so the
documentation matches the values assigned to mask and sensor.mask.

Comment on lines +77 to +84
def dotdict_to_cpu(obj, verbose: bool = False):
"""Convert all numpy arrays in the given dotdict to numpy arrays."""
for name, val in obj.items():
if isinstance(val, cp.ndarray):
if verbose:
print(f"Converting {name} to numpy array")
setattr(obj, name, val.get())
return obj

⚠️ Potential issue | 🔴 Critical

Guard against cp being None before checking cp.ndarray.

When CuPy is not installed, cp is None, so evaluating cp.ndarray inside isinstance(val, cp.ndarray) raises AttributeError: 'NoneType' object has no attribute 'ndarray'.

🐛 Proposed fix
 def dotdict_to_cpu(obj, verbose: bool = False):
     """Convert all numpy arrays in the given dotdict to numpy arrays."""
+    if cp is None:
+        return obj
     for name, val in obj.items():
         if isinstance(val, cp.ndarray):
             if verbose:
                 print(f"Converting {name} to numpy array")
             setattr(obj, name, val.get())
     return obj
🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 77 - 84, The function
dotdict_to_cpu currently calls isinstance(val, cp.ndarray) which fails when cp
is None; update dotdict_to_cpu to first guard that cp is not None (or that cp
has attribute ndarray) before using isinstance with cp.ndarray, e.g. check "if
cp is not None and isinstance(val, cp.ndarray):" and otherwise skip the CuPy
check (or fall back to checking numpy.ndarray) so no TypeError occurs when CuPy
isn't installed; keep the verbose print and setattr behavior intact for the CuPy
branch.
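
To see the failure mode concretely, here is a minimal standalone sketch; the `cp = None` assignment stands in for the `import cupy as cp` having failed:

```python
import numpy as np

cp = None  # stands in for the CuPy import failing

val = np.zeros(3)

# Unguarded: evaluating cp.ndarray on None blows up before
# isinstance even runs.
try:
    isinstance(val, cp.ndarray)
    error = None
except AttributeError as exc:
    error = type(exc).__name__

# Guarded: the `cp is not None` check short-circuits safely.
is_gpu_array = cp is not None and isinstance(val, cp.ndarray)
print(error, is_gpu_array)  # AttributeError False
```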

Comment on lines +435 to +452
if dim == 1:
if file_index == 0:
sensor_data.p_max = np.interp(record.grid_x, p, record.sensor_x)
else:
sensor_data.p_max = np.maximum(sensor_data.p_max, np.interp(record.grid_x, p, record.sensor_x))
else:
if file_index == 0:
sensor_data.p_max = np.sum(p[record.tri] * record.bc, axis=1)
else:
sensor_data.p_max = np.maximum(sensor_data.p_max, np.sum(p[record.tri] * record.bc, axis=1))

# store the minimum acoustic pressure
if flags.record_p_min:
if dim == 1:
if file_index == 0:
sensor_data.p_min = np.interp(record.grid_x, p, record.sensor_x)
else:
sensor_data.p_min = np.minimum(sensor_data.p_min, np.interp(record.grid_x, p, record.sensor_x))

⚠️ Potential issue | 🔴 Critical

Incorrect np.interp argument order for 1D Cartesian sensor mask.

The np.interp signature is np.interp(x, xp, fp) where x = query points, xp = data x-coordinates (monotonic), fp = data values. The current calls appear to have inverted arguments, e.g., np.interp(record.grid_x, p, record.sensor_x) should likely be np.interp(record.sensor_x, record.grid_x, p).

This pattern repeats in multiple locations (lines 437, 439, 450, 452, 462, 470, 484, 499, 511, 526, 539, 552).

🐛 Example fix for record_p_max 1D case
         if flags.record_p_max:
             if dim == 1:
                 if file_index == 0:
-                    sensor_data.p_max = np.interp(record.grid_x, p, record.sensor_x)
+                    sensor_data.p_max = np.interp(record.sensor_x, record.grid_x, np.real(p))
                 else:
-                    sensor_data.p_max = np.maximum(sensor_data.p_max, np.interp(record.grid_x, p, record.sensor_x))
+                    sensor_data.p_max = np.maximum(sensor_data.p_max, np.interp(record.sensor_x, record.grid_x, np.real(p)))

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In @kwave/kWaveSimulation_helper/extract_sensor_data.py around lines 435 - 452,
The np.interp calls for 1D Cartesian sensor masks use the wrong argument order;
change each np.interp(record.grid_x, p, record.sensor_x) (and similar instances)
to np.interp(record.sensor_x, record.grid_x, p) so that x=query points is
record.sensor_x, xp=data coordinates is record.grid_x, and fp=data values is p;
apply the same swap for all occurrences referenced (e.g., those setting
sensor_data.p_max, sensor_data.p_min, and other 1D interpolations) so subsequent
np.minimum/np.maximum operations use correctly interpolated arrays.
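
As a quick sanity check on the argument order, a self-contained sketch with made-up values (`grid_x`, `p`, and `sensor_x` here are illustrative stand-ins, not the simulation's actual arrays):

```python
import numpy as np

grid_x = np.array([0.0, 1.0, 2.0, 3.0])    # xp: monotonic data coordinates
p = np.array([0.0, 10.0, 20.0, 30.0])      # fp: field values at grid_x
sensor_x = np.array([0.5, 2.5])            # x: off-grid query points

# np.interp(x, xp, fp): query points first, then coordinates, then values
vals = np.interp(sensor_x, grid_x, p)
print(vals)  # [ 5. 25.]
```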

Comment on lines +537 to +547
else:
if (dim == 1):
sensor_data.ux_min = np.minimum(sensor_data.ux_min, np.interp(record.grid_x, ux_sgx, record.sensor_x))
elif (dim == 2):
sensor_data.ux_min = np.minimum(sensor_data.ux_min, np.sum(ux_sgx[record.tri] * record.bc, axis=1))
sensor_data.uy_min = np.minimum(sensor_data.uy_min, np.sum(uy_sgy[record.tri] * record.bc, axis=1))
elif (dim == 3):
sensor_data.ux_min = np.minimum(sensor_data.ux_min, np.sum(ux_sgx[record.tri] * record.bc, axis=1))
sensor_data.uy_min = np.minimum(sensor_data.uy_min, np.sum(uy_sgy[record.tri] * record.bc, axis=1))
else:
raise RuntimeError("Wrong dimensions")

⚠️ Potential issue | 🔴 Critical

Incomplete 3D uz_min handling when file_index > 0.

The 3D case for record_u_min is missing the uz_min update when file_index > 0. The branch ends at uy_min without including uz_min.

🐛 Proposed fix
             else:
                 if (dim == 1):
                     sensor_data.ux_min = np.minimum(sensor_data.ux_min, np.interp(record.grid_x, ux_sgx, record.sensor_x))
                 elif (dim == 2):
                     sensor_data.ux_min = np.minimum(sensor_data.ux_min, np.sum(ux_sgx[record.tri] * record.bc, axis=1))
                     sensor_data.uy_min = np.minimum(sensor_data.uy_min, np.sum(uy_sgy[record.tri] * record.bc, axis=1))
                 elif (dim == 3):
                     sensor_data.ux_min = np.minimum(sensor_data.ux_min, np.sum(ux_sgx[record.tri] * record.bc, axis=1))
                     sensor_data.uy_min = np.minimum(sensor_data.uy_min, np.sum(uy_sgy[record.tri] * record.bc, axis=1))
+                    sensor_data.uz_min = np.minimum(sensor_data.uz_min, np.sum(uz_sgz[record.tri] * record.bc, axis=1))
                 else:
                     raise RuntimeError("Wrong dimensions")
🤖 Prompt for AI Agents
In @kwave/kWaveSimulation_helper/extract_sensor_data.py around lines 537 - 547,
The 3D branch inside the else (when file_index > 0) updates ux_min and uy_min
but omits updating uz_min; fix by adding a line mirroring the ux/uy updates to
set sensor_data.uz_min = np.minimum(sensor_data.uz_min,
np.sum(uz_sgz[record.tri] * record.bc, axis=1)) in the dim == 3 block (use the
same record.tri and record.bc pattern and the uz_sgz array).

Comment on lines 210 to 214
def time_rev(self) -> bool:
if self.sensor and not isinstance(self.sensor, NotATransducer):
return not self.options.simulation_type.is_elastic_simulation() and self.sensor.time_reversal_boundary_data is not None
return self.userarg_time_rev
return self.time_rev


⚠️ Potential issue | 🔴 Critical

Critical: Infinite recursion in time_rev property.

The property returns self.time_rev on line 213, which will recursively call itself, causing a RecursionError. Based on the commented-out code paths, this should likely return False.

🐛 Proposed fix
     @property
     @deprecated(version="0.4.1", reason="Use TimeReversal class instead")
     def time_rev(self) -> bool:
         if self.sensor and not isinstance(self.sensor, NotATransducer):
             return not self.options.simulation_type.is_elastic_simulation() and self.sensor.time_reversal_boundary_data is not None
-        return self.time_rev
+        return False
🤖 Prompt for AI Agents
In @kwave/kWaveSimulation.py around lines 210 - 214, The time_rev method has an
infinite recursion because it returns self.time_rev at the end; change that
terminal return to a concrete boolean (likely False) so the method returns False
when no sensor or when sensor is NotATransducer. Update the method time_rev to
return False instead of self.time_rev and keep the conditional branch that
checks self.sensor, isinstance(NotATransducer),
options.simulation_type.is_elastic_simulation(), and
sensor.time_reversal_boundary_data to decide the True case.
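
The recursion is easy to reproduce in isolation; this toy class (unrelated to kWaveSimulation) shows why the property must terminate in a concrete value:

```python
class Broken:
    @property
    def time_rev(self):
        # Reading self.time_rev re-enters this same property forever.
        return self.time_rev


class Fixed:
    @property
    def time_rev(self):
        # Terminate with a concrete boolean instead.
        return False


try:
    Broken().time_rev
    recursed = False
except RecursionError:
    recursed = True

print(recursed, Fixed().time_rev)  # True False
```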

N = np.squeeze(N)

# logging.log(logging.INFO, N, type_, param, rotation, symmetric, square, win)
logging.log(logging.INFO, N, type_, param, rotation, symmetric, square)

⚠️ Potential issue | 🔴 Critical

Fix incorrect logging.log usage.

The logging statement is incorrectly formatted. logging.log expects (level, msg, *args): here N is treated as the message and the remaining values as %-style format arguments, so the values are never rendered into the log output (and formatting can raise at emit time).

🐛 Proposed fix
-    logging.log(logging.INFO, N, type_, param, rotation, symmetric, square)
+    logging.log(logging.INFO, "N=%s, type_=%s, param=%s, rotation=%s, symmetric=%s, square=%s", N, type_, param, rotation, symmetric, square)
📝 Committable suggestion


Suggested change
logging.log(logging.INFO, N, type_, param, rotation, symmetric, square)
logging.log(logging.INFO, "N=%s, type_=%s, param=%s, rotation=%s, symmetric=%s, square=%s", N, type_, param, rotation, symmetric, square)
🤖 Prompt for AI Agents
In @tests/matlab_test_data_collectors/python_testers/getWin_test.py at line 34,
The logging call incorrectly passes multiple values to logging.log without a
format string; replace it with a properly formatted call such as
logging.info("N=%s type=%s param=%s rotation=%s symmetric=%s square=%s", N,
type_, param, rotation, symmetric, square) or use logging.log(logging.INFO,
"N=%s type=%s param=%s rotation=%s symmetric=%s square=%s", N, type_, param,
rotation, symmetric, square) so the variables (N, type_, param, rotation,
symmetric, square) are supplied as arguments to a single format string.
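
Because logging defers %-formatting, the format string must come first and the values follow as arguments. A minimal sketch using LogRecord.getMessage() to show the rendered message without wiring up a handler:

```python
import logging

N, type_ = 8, "Hanning"

# Correct deferred formatting: msg is a %-style template, args fill it in.
record = logging.LogRecord(
    name="getWin_test", level=logging.INFO, pathname="", lineno=0,
    msg="N=%s, type_=%s", args=(N, type_), exc_info=None,
)
message = record.getMessage()
print(message)  # N=8, type_=Hanning
```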

Comment on lines +36 to +44
if (np.isscalar(N) and N > 1) or (isinstance(N, np.ndarray) and N.all() > 1):
win_py, cg_py = get_win(N, type_, param=param, rotation=rotation, symmetric=symmetric, square=square)

cg = reader.expected_value_of("cg")
win = reader.expected_value_of("win")
win_py = np.squeeze(win_py)
assert np.shape(win_py) == np.shape(win)
assert np.allclose(win_py, win, equal_nan=True)
assert np.allclose(cg_py, cg, equal_nan=True)
cg = reader.expected_value_of("cg")
win = reader.expected_value_of("win")
win_py = np.squeeze(win_py)
assert np.shape(win_py) == np.shape(win)
assert np.allclose(win_py, win, equal_nan=True)
assert np.allclose(cg_py, cg, equal_nan=True)

⚠️ Potential issue | 🔴 Critical

Fix conditional logic bug and clarify test behavior change.

There are two issues with this code:

  1. Logic bug: Line 36 has N.all() > 1, which first evaluates N.all() (returns a boolean), then compares that boolean with 1. This is incorrect. It should be (N > 1).all() to check if all elements are greater than 1.

  2. Test behavior change: The conditional wrapping now skips test execution for certain inputs (when N ≤ 1). This changes test coverage and could hide bugs. Verify this is intentional and consider whether these cases should raise an error or be tested differently.

🐛 Proposed fix for the logic bug
-    if (np.isscalar(N) and N > 1) or (isinstance(N, np.ndarray) and N.all() > 1):
+    if (np.isscalar(N) and N > 1) or (isinstance(N, np.ndarray) and (N > 1).all()):
         win_py, cg_py = get_win(N, type_, param=param, rotation=rotation, symmetric=symmetric, square=square)
📝 Committable suggestion


Suggested change
if (np.isscalar(N) and N > 1) or (isinstance(N, np.ndarray) and N.all() > 1):
win_py, cg_py = get_win(N, type_, param=param, rotation=rotation, symmetric=symmetric, square=square)
cg = reader.expected_value_of("cg")
win = reader.expected_value_of("win")
win_py = np.squeeze(win_py)
assert np.shape(win_py) == np.shape(win)
assert np.allclose(win_py, win, equal_nan=True)
assert np.allclose(cg_py, cg, equal_nan=True)
cg = reader.expected_value_of("cg")
win = reader.expected_value_of("win")
win_py = np.squeeze(win_py)
assert np.shape(win_py) == np.shape(win)
assert np.allclose(win_py, win, equal_nan=True)
assert np.allclose(cg_py, cg, equal_nan=True)
if (np.isscalar(N) and N > 1) or (isinstance(N, np.ndarray) and (N > 1).all()):
win_py, cg_py = get_win(N, type_, param=param, rotation=rotation, symmetric=symmetric, square=square)
cg = reader.expected_value_of("cg")
win = reader.expected_value_of("win")
win_py = np.squeeze(win_py)
assert np.shape(win_py) == np.shape(win)
assert np.allclose(win_py, win, equal_nan=True)
assert np.allclose(cg_py, cg, equal_nan=True)
🤖 Prompt for AI Agents
In @tests/matlab_test_data_collectors/python_testers/getWin_test.py around lines
36 - 44, The conditional uses N.all() > 1 which compares a boolean to 1; change
it to (N > 1).all() so the intent "all elements > 1" is correctly checked for
array inputs (locate the conditional around N, get_win call in getWin_test.py).
Also address the changed test coverage: decide whether inputs with N <= 1 should
still exercise get_win or instead be asserted to raise/skip—either remove the
conditional to run the test for those cases or add explicit asserts/skips for N
<= 1 so test behavior is intentional and documented.
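
The operator-precedence trap is worth seeing on concrete values; even an array whose every element is greater than 1 fails the original check:

```python
import numpy as np

N = np.array([2, 3, 4])       # every element > 1

wrong = N.all() > 1           # N.all() -> True, then True > 1 -> False
right = (N > 1).all()         # element-wise comparison first, then reduce

print(bool(wrong), bool(right))  # False True
```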


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🤖 Fix all issues with AI agents
In @kwave/kspaceFirstOrder1D.py:
- Around line 375-386: The hasattr check for the s_source_pos_index block is
wrong: it tests for 's_source_sig_index' but then accesses
k_sim.s_source_pos_index, so when s_source_pos_index exists and
s_source_sig_index does not the conversion is skipped; change the guard to check
hasattr(k_sim, 's_source_pos_index') and ensure the block sets
k_sim.s_source_pos_index = np.squeeze(k_sim.s_source_pos_index) - int(1) (mirror
the pattern used for u_source_pos_index and p_source_pos_index) so the 1→0
indexing conversion is applied correctly.
- Around line 354-363: When options.use_kspace is True, source_kappa may remain
undefined if the complex conditional on k_sim.source.p_mode/u_mode is false;
always initialize source_kappa in that branch (e.g., default to 1.0 or to the
same spectral taper as kappa) so later uses (lines referencing source_kappa
around the additive source handling) cannot raise UnboundLocalError. Locate the
block with symbols options.use_kspace, kappa, source_kappa, c_ref, kgrid.k,
kgrid.dt, and k_sim.source.* and set source_kappa unconditionally inside the
options.use_kspace branch (then override it only when the additive-source
condition is true).
- Around line 561-579: The bug is that source_mat = source_mat.fill(0) assigns
None (since ndarray.fill mutates in-place), so replace that assignment with an
in-place call or reinitialize the array; e.g. call source_mat.fill(0) without
assignment or use source_mat = xp.zeros_like(source_mat) before populating it;
ensure this change is made in the 'additive' case where source_mat,
k_sim.p_source_pos_index, and k_sim.source.p[k_sim.p_source_sig_index, t_index]
are used so subsequent FFT/ifft with source_kappa and xp will operate on a valid
array.
- Around line 653-664: sensor.record_start_index is 1-based while t_index is
0-based, causing an off-by-one skip; convert record_start_index to 0-based
before comparing and computing file_index. Replace uses of
sensor.record_start_index in the condition and file_index calculation (the block
gating on k_sim.use_sensor and t_index, and the file_index = t_index -
sensor.record_start_index) so they use a zero-based value (e.g.,
sensor.record_start_index - 1) or compute a local start_index0 =
sensor.record_start_index - 1 and use t_index >= start_index0 and file_index =
t_index - start_index0 when calling extract_sensor_data.
- Around line 504-510: The GPU path fails because scipy.fft is being called on
CuPy arrays and some medium.absorb_nabla1/2 arrays remain NumPy; pick a single
FFT backend at init (alias it as xp.fft or fft_backend) and replace direct
scipy.fft.ifft/fftn/fft calls (e.g., the ifft used in the ux_sgx update that
references ddx_k, ddx_k_shift_pos, kappa, p_k) with the selected backend's fft
functions so they operate on xp arrays, and ensure cp_fft import is removed or
used consistently; additionally, when using_gpu=True convert
medium.absorb_nabla1 and medium.absorb_nabla2 (and any similar medium arrays) to
xp arrays so multiplications with rho0 and duxdx use the same array type.
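
One of the items above, source_mat = source_mat.fill(0), is a classic NumPy gotcha: ndarray.fill() mutates in place and returns None. A minimal reproduction (source_mat here is just an illustrative array):

```python
import numpy as np

source_mat = np.ones(4)

result = source_mat.fill(0)   # mutates source_mat in place, returns None
print(result, source_mat)     # None [0. 0. 0. 0.]

# Safe alternatives: call fill() without rebinding, or reallocate:
source_mat = np.zeros_like(source_mat)
```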
🧹 Nitpick comments (2)
kwave/kspaceFirstOrder1D.py (2)

1-31: Remove duplicate import and avoid split CuPy import patterns.

SimulationExecutionOptions is imported twice (Line 3 and Line 25). Also, you import cupy at module import time and dynamically in get_array_module, which makes GPU enablement harder to reason about.

Proposed cleanup
-from kwave.options.simulation_execution_options import SimulationExecutionOptions
-
 try: 
     import cupy as cp
     from cupy import fft as cp_fft
 except ImportError: 
     cp = None
     cp_fft = None
@@
-from kwave.options.simulation_execution_options import SimulationExecutionOptions
 from kwave.options.simulation_options import SimulationOptions

34-57: Make get_array_module() deterministic and consistent with cp global.

Right now get_array_module() re-imports cupy and returns a local cp, while other helpers (to_gpu, dotdict_to_cpu) use the module-global cp. It’s easy to end up with using_gpu=True but helpers referencing a different cp symbol.

One way to align behavior
 def get_array_module(verbose: bool = False):
@@
-    # Try to import cupy and check for a GPU
+    # If module import already failed, don't try to use it
+    if cp is None:
+        return np, False
+
+    # Check for a visible GPU
     try:
-        import cupy as cp
-        # this returns the number of visible GPUs
-        ngpus = cp.cuda.runtime.getDeviceCount()
+        ngpus = cp.cuda.runtime.getDeviceCount()
         if ngpus > 0:
             return cp, True
     except Exception:
         pass
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cdff8f9 and 924457a.

📒 Files selected for processing (2)
  • examples/ivp_1D_simulation/README.md
  • kwave/kspaceFirstOrder1D.py
✅ Files skipped from review due to trivial changes (1)
  • examples/ivp_1D_simulation/README.md
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
📚 Learning: 2025-09-11T16:12:04.876Z
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.

Applied to files:

  • kwave/kspaceFirstOrder1D.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: test (ubuntu-latest, 3.12)
  • GitHub Check: test (ubuntu-latest, 3.13)
  • GitHub Check: test (macos-latest, 3.11)
  • GitHub Check: test (macos-latest, 3.12)
  • GitHub Check: test (macos-latest, 3.13)
  • GitHub Check: test (windows-latest, 3.13)
  • GitHub Check: test (windows-latest, 3.11)
  • GitHub Check: test (windows-latest, 3.12)
  • GitHub Check: test (ubuntu-latest, 3.10)
  • GitHub Check: test (ubuntu-latest, 3.11)
  • GitHub Check: test (windows-latest, 3.10)
  • GitHub Check: test_install (windows-latest, 3.10, dev)
🔇 Additional comments (3)
kwave/kspaceFirstOrder1D.py (3)

89-95: API shape matches repo conventions (separate options objects).

Signature uses simulation_options: SimulationOptions and execution_options: SimulationExecutionOptions (not a single options dict), consistent with existing k-Wave Python patterns. Based on learnings, this is the right direction.


490-504: No issue. The repository explicitly requires Python 3.10+ as declared in pyproject.toml (requires-python = ">=3.10"), so the match/case syntax is fully supported.


511-514: Verify the condition if options.source_ux >= t_index: at line 513.

The comparison direction appears backwards for a "start at index" gate. Typically, time gates check t_index >= start_index or start_index <= t_index. If options.source_ux represents a start index, the condition should be t_index >= options.source_ux. Additionally, verify the type and semantics of options.source_ux to ensure the >= comparison is appropriate.

Comment on lines +59 to +87
def to_gpu(obj, verbose: bool = False):
"""Convert all numpy arrays in the given object to cupy arrays."""
for name, val in vars(obj).items():
if isinstance(val, np.ndarray):
if verbose:
print(f"Converting {name} to cupy array")
setattr(obj, name, cp.asarray(val))
return obj


def dotdict_to_gpu(obj, verbose: bool = False):
"""Convert all numpy arrays in the given dotdict to cupy arrays."""
for name, val in obj.items():
if isinstance(val, np.ndarray):
if verbose:
print(f"Converting {name} to cupy array")
setattr(obj, name, cp.asarray(val))
return obj


def dotdict_to_cpu(obj, verbose: bool = False):
"""Convert all numpy arrays in the given dotdict to numpy arrays."""
for name, val in obj.items():
if isinstance(val, cp.ndarray):
if verbose:
print(f"Converting {name} to numpy array")
setattr(obj, name, val.get())
return obj


⚠️ Potential issue | 🟠 Major

Guard against cp is None, and set dotdict items via keys (not setattr).

  • dotdict_to_cpu() will crash if called when cp is None (isinstance(val, cp.ndarray)).
  • dotdict_to_gpu()/dotdict_to_cpu() iterate obj.items() but then use setattr(obj, ...) instead of obj[name] = ... (works only if dotdict.__setattr__ mirrors keys).
Safer conversions
 def dotdict_to_gpu(obj, verbose: bool = False):
     """Convert all numpy arrays in the given dotdict to cupy arrays."""
+    if cp is None:
+        raise RuntimeError("CuPy is not available, cannot move dotdict to GPU")
     for name, val in obj.items():
         if isinstance(val, np.ndarray):
             if verbose:
                 print(f"Converting {name} to cupy array")
-            setattr(obj, name, cp.asarray(val))
+            obj[name] = cp.asarray(val)
     return obj
@@
 def dotdict_to_cpu(obj, verbose: bool = False):
     """Convert all numpy arrays in the given dotdict to numpy arrays."""
+    if cp is None:
+        return obj
     for name, val in obj.items():
         if isinstance(val, cp.ndarray):
             if verbose:
                 print(f"Converting {name} to numpy array")
-            setattr(obj, name, val.get())
+            obj[name] = val.get()
     return obj
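
Whether setattr is safe depends on how the project's dotdict is implemented; a minimal hypothetical dict subclass like the one below (attribute reads fall back to keys, but no __setattr__ mirroring) shows how setattr can silently diverge from the mapping:

```python
class DotDict(dict):
    """Hypothetical minimal dotdict: attribute *reads* fall back to keys,
    but attribute *writes* are not mirrored into the mapping."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError as exc:
            raise AttributeError(name) from exc


d = DotDict(p=1)
setattr(d, "p", 2)        # lands in d.__dict__, not in the dict itself
print(d["p"], d.p)        # 1 2  -- mapping and attribute now disagree
d["p"] = 3                # item assignment updates the mapping itself
print(d["p"])             # 3
```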

Comment on lines +354 to +363
if options.use_kspace:
kappa = scipy.fft.ifftshift(sinc(c_ref * kgrid.k * kgrid.dt / 2.0))
kappa = np.squeeze(kappa)
if (hasattr(options, 'source_p') and hasattr(k_sim.source, 'p_mode')) and (k_sim.source.p_mode == 'additive') or \
(hasattr(options, 'source_ux') and hasattr(k_sim.source, 'u_mode')) and (k_sim.source.u_mode == 'additive'):
source_kappa = scipy.fft.ifftshift(xp.cos (c_ref * kgrid.k * kgrid.dt / 2.0))
else:
kappa = 1.0
source_kappa = 1.0


⚠️ Potential issue | 🔴 Critical

Initialize source_kappa unconditionally when use_kspace is on.

If options.use_kspace is True and the if (...) condition doesn’t trigger, source_kappa is never defined but is later used in additive source modes (Line 523 / Line 572). That’s an UnboundLocalError risk.

Proposed fix
 if options.use_kspace:
     kappa        = scipy.fft.ifftshift(sinc(c_ref * kgrid.k * kgrid.dt / 2.0))
     kappa        = np.squeeze(kappa)
+    source_kappa = 1.0
     if (hasattr(options, 'source_p') and hasattr(k_sim.source, 'p_mode')) and (k_sim.source.p_mode == 'additive') or \
        (hasattr(options, 'source_ux') and hasattr(k_sim.source, 'u_mode')) and (k_sim.source.u_mode == 'additive'):
         source_kappa = scipy.fft.ifftshift(xp.cos (c_ref * kgrid.k * kgrid.dt / 2.0))
 else:
     kappa        = 1.0
     source_kappa = 1.0
🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 354 - 363, When options.use_kspace
is True, source_kappa may remain undefined if the complex conditional on
k_sim.source.p_mode/u_mode is false; always initialize source_kappa in that
branch (e.g., default to 1.0 or to the same spectral taper as kappa) so later
uses (lines referencing source_kappa around the additive source handling) cannot
raise UnboundLocalError. Locate the block with symbols options.use_kspace,
kappa, source_kappa, c_ref, kgrid.k, kgrid.dt, and k_sim.source.* and set
source_kappa unconditionally inside the options.use_kspace branch (then override
it only when the additive-source condition is true).
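
The failure pattern generalizes: any variable assigned only inside a nested conditional can surface later as an UnboundLocalError. A schematic sketch of the fix (the numeric values are placeholders, not the real spectral factors):

```python
def spectral_factors(use_kspace: bool, additive_source: bool):
    if use_kspace:
        kappa = 0.5
        source_kappa = 1.0        # default set unconditionally
        if additive_source:
            source_kappa = 0.25   # override only for additive sources
    else:
        kappa = 1.0
        source_kappa = 1.0
    return kappa, source_kappa


print(spectral_factors(True, False))   # (0.5, 1.0)
print(spectral_factors(True, True))    # (0.5, 0.25)
```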

Comment on lines +375 to +386
if hasattr(k_sim, 's_source_sig_index') and k_sim.s_source_pos_index is not None:
k_sim.s_source_pos_index = np.squeeze(k_sim.s_source_pos_index) - int(1)

if hasattr(k_sim, 'u_source_pos_index') and k_sim.u_source_pos_index is not None:
k_sim.u_source_pos_index = np.squeeze(k_sim.u_source_pos_index) - int(1)

if hasattr(k_sim, 'p_source_pos_index') and k_sim.p_source_pos_index is not None:
k_sim.p_source_pos_index = np.squeeze(k_sim.p_source_pos_index) - int(1)

if hasattr(k_sim, 's_source_sig_index') and k_sim.s_source_sig_index is not None:
k_sim.s_source_sig_index = np.squeeze(k_sim.s_source_sig_index) - int(1)


⚠️ Potential issue | 🟠 Major

Fix the hasattr typo guarding s_source_pos_index.

This block checks hasattr(k_sim, 's_source_sig_index') but then reads k_sim.s_source_pos_index (Line 375-377). If s_source_pos_index exists but s_source_sig_index doesn’t, you’ll skip the intended 1→0 indexing conversion.

Proposed fix
-    if hasattr(k_sim, 's_source_sig_index') and k_sim.s_source_pos_index is not None:
+    if hasattr(k_sim, 's_source_pos_index') and k_sim.s_source_pos_index is not None:
         k_sim.s_source_pos_index = np.squeeze(k_sim.s_source_pos_index) - int(1)
📝 Committable suggestion


Suggested change
if hasattr(k_sim, 's_source_sig_index') and k_sim.s_source_pos_index is not None:
k_sim.s_source_pos_index = np.squeeze(k_sim.s_source_pos_index) - int(1)
if hasattr(k_sim, 'u_source_pos_index') and k_sim.u_source_pos_index is not None:
k_sim.u_source_pos_index = np.squeeze(k_sim.u_source_pos_index) - int(1)
if hasattr(k_sim, 'p_source_pos_index') and k_sim.p_source_pos_index is not None:
k_sim.p_source_pos_index = np.squeeze(k_sim.p_source_pos_index) - int(1)
if hasattr(k_sim, 's_source_sig_index') and k_sim.s_source_sig_index is not None:
k_sim.s_source_sig_index = np.squeeze(k_sim.s_source_sig_index) - int(1)
if hasattr(k_sim, 's_source_pos_index') and k_sim.s_source_pos_index is not None:
k_sim.s_source_pos_index = np.squeeze(k_sim.s_source_pos_index) - int(1)
if hasattr(k_sim, 'u_source_pos_index') and k_sim.u_source_pos_index is not None:
k_sim.u_source_pos_index = np.squeeze(k_sim.u_source_pos_index) - int(1)
if hasattr(k_sim, 'p_source_pos_index') and k_sim.p_source_pos_index is not None:
k_sim.p_source_pos_index = np.squeeze(k_sim.p_source_pos_index) - int(1)
if hasattr(k_sim, 's_source_sig_index') and k_sim.s_source_sig_index is not None:
k_sim.s_source_sig_index = np.squeeze(k_sim.s_source_sig_index) - int(1)
🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 375 - 386, The hasattr check for
the s_source_pos_index block is wrong: it tests for 's_source_sig_index' but
then accesses k_sim.s_source_pos_index, so when s_source_pos_index exists and
s_source_sig_index does not the conversion is skipped; change the guard to check
hasattr(k_sim, 's_source_pos_index') and ensure the block sets
k_sim.s_source_pos_index = np.squeeze(k_sim.s_source_pos_index) - int(1) (mirror
the pattern used for u_source_pos_index and p_source_pos_index) so the 1→0
indexing conversion is applied correctly.

Comment on lines +504 to +510
else:
# calculate gradient using the k-space method on a non-uniform grid
# via the mapped pseudospectral method
ux_sgx = pml_x_sgx * (pml_x_sgx * ux_sgx -
dt * rho0_sgx_inv * k_sim.kgrid.dxudxn_sgx * xp.real(scipy.fft.ifft(ddx_k * ddx_k_shift_pos * kappa * p_k)) )



⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, let's check the file structure and size
wc -l kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 97


🏁 Script executed:

#!/bin/bash
# Check imports and FFT-related code at the top of the file
head -100 kwave/kspaceFirstOrder1D.py | grep -n "import\|fft"

Repository: waltsims/k-wave-python

Length of output: 1151


🏁 Script executed:

#!/bin/bash
# Check the actual lines mentioned in the review (504-510, 588-593, 608-610)
echo "=== Lines 504-510 ==="
sed -n '504,510p' kwave/kspaceFirstOrder1D.py

echo -e "\n=== Lines 588-593 ==="
sed -n '588,593p' kwave/kspaceFirstOrder1D.py

echo -e "\n=== Lines 608-610 ==="
sed -n '608,610p' kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 1312


🏁 Script executed:

#!/bin/bash
# Check if cp_fft is imported
grep -n "cp_fft\|import.*fft" kwave/kspaceFirstOrder1D.py | head -20

Repository: waltsims/k-wave-python

Length of output: 143


🏁 Script executed:

#!/bin/bash
# Look for the function definition and GPU-related code
sed -n '1,100p' kwave/kspaceFirstOrder1D.py | grep -A 50 "def kspace_first_order"

Repository: waltsims/k-wave-python

Length of output: 551


🏁 Script executed:

#!/bin/bash
# Find the GPU initialization logic and xp setup
sed -n '50,150p' kwave/kspaceFirstOrder1D.py | grep -A 30 "using_gpu\|get_array_module"

Repository: waltsims/k-wave-python

Length of output: 48


🏁 Script executed:

#!/bin/bash
# Look for get_array_module function or xp assignment
grep -n "using_gpu\|get_array_module\|xp.*=" kwave/kspaceFirstOrder1D.py | head -30

Repository: waltsims/k-wave-python

Length of output: 2178


🏁 Script executed:

#!/bin/bash
# Look at broader context around the GPU setup
sed -n '200,250p' kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 333


🏁 Script executed:

#!/bin/bash
# Check what arrays are being used in the problematic FFT calls
# First, let's see the context around line 508
sed -n '490,520p' kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 1869


🏁 Script executed:

#!/bin/bash
# Look at how arrays like p_k, ddx_k, etc. are created
grep -n "p_k\s*=\|ddx_k\s*=\|rho0\s*=" kwave/kspaceFirstOrder1D.py | head -20

Repository: waltsims/k-wave-python

Length of output: 466


🏁 Script executed:

#!/bin/bash
# Look for all scipy.fft usage in the file
grep -n "scipy\.fft" kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 1222


🏁 Script executed:

#!/bin/bash
# Also check xp.fft usage to see the pattern
grep -n "xp\.fft" kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 905


🏁 Script executed:

#!/bin/bash
# Check if there's any handling for medium.absorb_* arrays with xp casting
sed -n '580,620p' kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 2321


🏁 Script executed:

#!/bin/bash
# Check the get_array_module function implementation
sed -n '34,50p' kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 659


🏁 Script executed:

#!/bin/bash
# Let's verify if medium.absorb_* arrays are created as numpy or xp arrays
grep -n "absorb_nabla\|absorb_tau\|absorb_eta" kwave/kspaceFirstOrder1D.py | head -20

Repository: waltsims/k-wave-python

Length of output: 797


🏁 Script executed:

#!/bin/bash
# Check if these absorb attributes come from the medium object and how they're handled
grep -n "medium\\.absorb" kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 797


🏁 Script executed:

#!/bin/bash
# Check the kWaveMedium class to understand where absorb_* arrays come from
find . -name "*.py" -path "*/kmedium*" -o -name "*medium*" | head -5

Repository: waltsims/k-wave-python

Length of output: 266


🏁 Script executed:

#!/bin/bash
# Find where absorb_nabla etc. are defined/set
fd "kmedium" -t f

Repository: waltsims/k-wave-python

Length of output: 105


🏁 Script executed:

#!/bin/bash
# Check if absorb_nabla1, absorb_nabla2 etc. are created in kmedium.py
grep -n "absorb_nabla\|absorb_tau\|absorb_eta" kwave/kmedium.py | head -20

Repository: waltsims/k-wave-python

Length of output: 48


🏁 Script executed:

#!/bin/bash
# Let's verify the actual issue: check if scipy.fft can work on CuPy arrays
# First, let's understand the problem by testing the assertion
python3 << 'EOF'
# Check if scipy.fft works with CuPy arrays (it won't)
try:
    import scipy.fft
    print("scipy.fft imported successfully")
    print("scipy.fft.ifft works on numpy arrays")
    print("scipy.fft.ifft will NOT work on CuPy arrays - it requires NumPy arrays")
except Exception as e:
    print(f"Error: {e}")
EOF

Repository: waltsims/k-wave-python

Length of output: 152


🏁 Script executed:

#!/bin/bash
# Now let's check what happens at the problematic lines
# Line 508: scipy.fft.ifft is called on the result of:
# ddx_k * ddx_k_shift_pos * kappa * p_k
# All of these are xp arrays (CuPy when using_gpu=True)

# Check if ddx_k, ddx_k_shift_pos, kappa, p_k are converted to xp
sed -n '437,455p' kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 863


🏁 Script executed:

#!/bin/bash
# Look at line 343 onwards - where are these arrays created initially?
sed -n '343,360p' kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 1044


🏁 Script executed:

#!/bin/bash
# Check the kmedium.py file for where absorb_nabla is created
head -100 kwave/kmedium.py | grep -A 5 "absorb"

Repository: waltsims/k-wave-python

Length of output: 357


🏁 Script executed:

#!/bin/bash
# Search more broadly in kmedium.py
grep -n "absorb_nabla\|absorb_tau\|absorb_eta" kwave/kmedium.py | head -30

Repository: waltsims/k-wave-python

Length of output: 48


🏁 Script executed:

#!/bin/bash
# Let's check if medium.absorb_nabla* arrays come from NumPy
# by looking at how medium is populated with these attributes
rg "absorb_nabla1\|absorb_nabla2" --type py -A 2 -B 2 | head -50

Repository: waltsims/k-wave-python

Length of output: 48


🏁 Script executed:

#!/bin/bash
# Search for where absorb_nabla is assigned in the codebase
rg "absorb_nabla\s*=" --type py | head -20

Repository: waltsims/k-wave-python

Length of output: 134


🏁 Script executed:

#!/bin/bash
# Search for absorb_nabla assignment with escaped quotes
grep -r "absorb_nabla.*=" kwave/ --include="*.py" | head -20

Repository: waltsims/k-wave-python

Length of output: 1500


🏁 Script executed:

#!/bin/bash
# Let's check the simulation file that processes the medium
rg "absorb_nabla" --type py -B 3 -A 3 | head -80

Repository: waltsims/k-wave-python

Length of output: 4054


🏁 Script executed:

#!/bin/bash
# Check if there are any helper files that process the medium
find kwave -name "*helper*" -o -name "*process*" | grep -i medium

Repository: waltsims/k-wave-python

Length of output: 48


🏁 Script executed:

#!/bin/bash
# Look for files that might process medium attributes for GPU
rg "medium\\.absorb" kwave/ --type py -B 2 -A 2 | head -60

Repository: waltsims/k-wave-python

Length of output: 3142


🏁 Script executed:

#!/bin/bash
# Let's check the create_absorption_variables to see what dtype these arrays are
cat kwave/kWaveSimulation_helper/create_absorption_variables.py | head -100

Repository: waltsims/k-wave-python

Length of output: 3610


🏁 Script executed:

#!/bin/bash
# Verify that lines 534, 549, etc. correctly use xp.fft
echo "=== Correct usage with xp.fft ===" 
sed -n '534p' kwave/kspaceFirstOrder1D.py
sed -n '549p' kwave/kspaceFirstOrder1D.py
sed -n '488p' kwave/kspaceFirstOrder1D.py
echo ""
echo "=== Incorrect usage with scipy.fft ===" 
sed -n '508p' kwave/kspaceFirstOrder1D.py
sed -n '591p' kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 808


🏁 Script executed:

#!/bin/bash
# Check if there are tests for GPU mode to understand the severity
find . -name "*test*" -type f | xargs grep -l "using_gpu\|gpu" | head -5

Repository: waltsims/k-wave-python

Length of output: 234


🏁 Script executed:

#!/bin/bash
# Verify the critical issue: what happens when scipy.fft receives CuPy arrays
python3 << 'EOF'
# Document the actual incompatibility
print("Critical Issue Summary:")
print("=" * 60)
print("1. scipy.fft.ifft/ifftn/fftn do NOT support CuPy arrays")
print("2. When using_gpu=True, arrays like p_k, ddx_k, etc. are CuPy")
print("3. scipy.fft operations on CuPy arrays will raise TypeError")
print("")
print("Lines with scipy.fft on CuPy arrays:")
print("  - Line 508: scipy.fft.ifft(ddx_k * ...) where ddx_k is CuPy")
print("  - Lines 591-592: scipy.fft.fftn/ifftn on CuPy arrays")
print("  - Lines 608-609: scipy.fft.fftn/ifftn on CuPy arrays")
print("")
print("Correct pattern (used elsewhere):")
print("  - Lines 488, 534, 549: xp.fft.ifft/ifftn/fftn (works for both NumPy and CuPy)")
EOF

Repository: waltsims/k-wave-python

Length of output: 653


GPU path is broken: scipy.fft cannot operate on CuPy arrays.

The code calls scipy.fft.ifft/ifftn/fftn on CuPy arrays in GPU mode, which will raise at runtime (lines 508, 591–592, 608–609). Additionally, medium.absorb_nabla1/2 arrays remain NumPy after setup, causing errors when multiplied with CuPy arrays like rho0 and duxdx. Meanwhile, cp_fft is imported but never used, and the correct pattern (xp.fft) is already used elsewhere (lines 488, 534, 549).

Select an FFT backend once during initialization and use it consistently:

Suggested approach
 import scipy.fft
@@
 def kspace_first_order_1D(...):
@@
     if execution_options.is_gpu_simulation:
         xp, using_gpu = get_array_module()
         print(f"Using {'cupy (GPU)' if using_gpu else 'numpy (CPU)'}")
     else:
         xp = np
         using_gpu = False
+
+    # Pick an FFT backend that matches the array module
+    fft = cp_fft if using_gpu else scipy.fft
@@
-            ux_sgx = pml_x_sgx * (pml_x_sgx * ux_sgx - 
-                                  dt * rho0_sgx_inv * k_sim.kgrid.dxudxn_sgx * xp.real(scipy.fft.ifft(ddx_k * ddx_k_shift_pos * kappa * p_k)) )
+            ux_sgx = pml_x_sgx * (pml_x_sgx * ux_sgx -
+                                  dt * rho0_sgx_inv * k_sim.kgrid.dxudxn_sgx * xp.real(fft.ifft(ddx_k * ddx_k_shift_pos * kappa * p_k)) )
@@
-                    p = xp.squeeze(c0**2 * (rhox
-                        + medium.absorb_tau * xp.real(scipy.fft.ifftn(medium.absorb_nabla1 * scipy.fft.fftn(rho0 * duxdx, axes=(0,)), axes=(0,) ))  
-                        - medium.absorb_eta * xp.real(scipy.fft.ifftn(medium.absorb_nabla2 * scipy.fft.fftn(rhox, axes=(0,)), axes=(0,))) ) )
+                    p = xp.squeeze(c0**2 * (rhox
+                        + medium.absorb_tau * xp.real(fft.ifftn(medium.absorb_nabla1 * fft.fftn(rho0 * duxdx, axes=(0,)), axes=(0,)))
+                        - medium.absorb_eta * xp.real(fft.ifftn(medium.absorb_nabla2 * fft.fftn(rhox, axes=(0,)), axes=(0,))) ) )

Also convert medium.absorb_nabla1/2 to xp arrays when using_gpu=True.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 504 - 510, The GPU path fails
because scipy.fft is being called on CuPy arrays and some medium.absorb_nabla1/2
arrays remain NumPy; pick a single FFT backend at init (alias it as xp.fft or
fft_backend) and replace direct scipy.fft.ifft/fftn/fft calls (e.g., the ifft
used in the ux_sgx update that references ddx_k, ddx_k_shift_pos, kappa, p_k)
with the selected backend's fft functions so they operate on xp arrays, and
ensure cp_fft import is removed or used consistently; additionally, when
using_gpu=True convert medium.absorb_nabla1 and medium.absorb_nabla2 (and any
similar medium arrays) to xp arrays so multiplications with rho0 and duxdx use
the same array type.
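The backend-selection approach recommended above can be sketched as follows. This is a minimal illustration, not the PR's actual initialization code: it picks the array module and a matching FFT module once, then uses the pair consistently (`cupyx.scipy.fft` mirrors the `scipy.fft` interface on CuPy arrays; everything falls back to NumPy when no GPU is available).

```python
import numpy as np

# Select the array module and a matching FFT backend once at startup.
try:
    import cupy as xp
    import cupyx.scipy.fft as fft
    xp.zeros(1)  # raises if no CUDA device is available
    using_gpu = True
except Exception:
    import numpy as xp
    using_gpu = False
    try:
        import scipy.fft as fft
    except ImportError:
        fft = np.fft  # scipy-free fallback

# Every spectral step then uses the same (xp, fft) pair, so the
# scipy.fft-on-CuPy mismatch cannot occur.
x = xp.linspace(0.0, 1.0, 64, endpoint=False)
p = xp.sin(2.0 * np.pi * x)
roundtrip = xp.real(fft.ifft(fft.fft(p)))
print(bool(xp.allclose(p, roundtrip)))  # True
```

Medium coefficient arrays built during setup (e.g. `absorb_nabla1`/`absorb_nabla2`) would likewise be passed through `xp.asarray` when `using_gpu` is true, so multiplications with device arrays stay on one array type.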

Comment on lines +561 to +579
# add in the pre-scaled pressure source term as a mass source
if (k_sim.source_p is not False and t_index < xp.shape(k_sim.source.p)[1]):
match k_sim.source.p_mode:
case 'dirichlet':
# enforce source values as a dirichlet boundary condition
rhox[k_sim.p_source_pos_index] = k_sim.source.p[k_sim.p_source_sig_index, t_index]
case 'additive':
# extract the source values into a matrix
source_mat = source_mat.fill(0)
source_mat[k_sim.p_source_pos_index] = k_sim.source.p[k_sim.p_source_sig_index, t_index]
# apply the k-space correction
source_mat = xp.real(xp.fft.ifft(source_kappa * xp.fft.fft(source_mat)))
# add the source values to the existing field values
# including the k-space correction
rhox = rhox + source_mat
case 'additive-no-correction':
# add the source values to the existing field values
rhox[k_sim.p_source_pos_index] = rhox[k_sim.p_source_pos_index] + source.p[k_sim.p_source_sig_index, t_index]


⚠️ Potential issue | 🔴 Critical

Bug: source_mat = source_mat.fill(0) sets source_mat to None.

In the pressure-source additive path (lines 569-571), ndarray.fill returns None, so this will crash on the next line when indexing.

Proposed fix
                 case 'additive':
                     # extract the source values into a matrix
-                    source_mat = source_mat.fill(0)
+                    source_mat.fill(0)
                     source_mat[k_sim.p_source_pos_index] = k_sim.source.p[k_sim.p_source_sig_index, t_index]
🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 561 - 579, The bug is that
source_mat = source_mat.fill(0) assigns None (since ndarray.fill mutates
in-place), so replace that assignment with an in-place call or reinitialize the
array; e.g. call source_mat.fill(0) without assignment or use source_mat =
xp.zeros_like(source_mat) before populating it; ensure this change is made in
the 'additive' case where source_mat, k_sim.p_source_pos_index, and
k_sim.source.p[k_sim.p_source_sig_index, t_index] are used so subsequent
FFT/ifft with source_kappa and xp will operate on a valid array.
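The failure mode and both correct forms can be demonstrated in isolation (plain NumPy, independent of the solver code):

```python
import numpy as np

source_mat = np.ones(8)

# ndarray.fill mutates in place and returns None, so the assignment
# form `source_mat = source_mat.fill(0)` would bind None.
result = source_mat.fill(0)
assert result is None
assert np.all(source_mat == 0)  # the array itself was zeroed in place

# Either correct form keeps source_mat a valid array:
source_mat.fill(0)                      # in-place call, no assignment
source_mat = np.zeros_like(source_mat)  # or reallocate
print(source_mat.shape)  # (8,)
```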

Comment on lines +653 to +664
# extract required sensor data from the pressure and particle velocity
# fields if the number of time steps elapsed is greater than
# sensor.record_start_index (defaults to 1)
if k_sim.use_sensor and (t_index >= sensor.record_start_index):

# update index for data storage
file_index: int = t_index - sensor.record_start_index

# run sub-function to extract the required data from the acoustic variables
sensor_data = extract_sensor_data(kdim, xp, sensor_data, file_index, sensor_mask_index,
extract_options, record, p, ux_sgx)

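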

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

rg -n --hidden -S "record_start_index" kwave | head -n 200

Repository: waltsims/k-wave-python

Length of output: 1988


🏁 Script executed:

rg -n -B 5 "t_index >= sensor.record_start_index" kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 368


🏁 Script executed:

sed -n '600,680p' kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 3645


🏁 Script executed:

rg -n "for.*t_index" kwave/kspaceFirstOrder1D.py

Repository: waltsims/k-wave-python

Length of output: 137


🏁 Script executed:

rg -n -B 3 "index_start" kwave/kspaceFirstOrder1D.py | head -n 30

Repository: waltsims/k-wave-python

Length of output: 370


🏁 Script executed:

rg -n -A 10 "record_start_index" kwave/ksensor.py | head -n 50

Repository: waltsims/k-wave-python

Length of output: 1317


Convert sensor.record_start_index to 0-based before using it in the comparison and calculation.

The record_start_index is 1-based (defaults to 1, validated as >= 1 per MATLAB convention), but t_index is 0-based (starts at 0 from range(0, ...)). The condition t_index >= sensor.record_start_index causes an off-by-one shift: when record_start_index=1, recording skips at t_index=0 and starts at t_index=1, losing the first time step. Use t_index >= sensor.record_start_index - 1 or convert record_start_index to 0-based at the start of the loop.

🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 653 - 664,
sensor.record_start_index is 1-based while t_index is 0-based, causing an
off-by-one skip; convert record_start_index to 0-based before comparing and
computing file_index. Replace uses of sensor.record_start_index in the condition
and file_index calculation (the block gating on k_sim.use_sensor and t_index,
and the file_index = t_index - sensor.record_start_index) so they use a
zero-based value (e.g., sensor.record_start_index - 1) or compute a local
start_index0 = sensor.record_start_index - 1 and use t_index >= start_index0 and
file_index = t_index - start_index0 when calling extract_sensor_data.
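The suggested conversion can be sketched with a toy loop (hypothetical variable names matching the prompt above, not the solver's full time-stepping code):

```python
# record_start_index keeps the MATLAB 1-based convention ("record from
# the first time step"), while t_index is 0-based from range().
record_start_index = 1
num_steps = 5

start_index0 = record_start_index - 1  # convert once, before the loop
recorded = []
for t_index in range(num_steps):
    if t_index >= start_index0:
        file_index = t_index - start_index0
        recorded.append((t_index, file_index))

print(recorded[0], len(recorded))  # (0, 0) 5 -- the first step is kept
```

With the unconverted comparison `t_index >= record_start_index`, the same loop would record only 4 of the 5 steps and silently drop `t_index=0`.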


Labels

enhancement New feature or request
