Merge the feature to download all the plots (including group comparison) from statmodel into Development #174

swaraj-neu wants to merge 6 commits into devel from
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
📝 Walkthrough

The download plot handler signature was extended to accept (input, contrast, preprocess_data) and now branches: a ResponseCurve path that validates contrasts, prepares dose-response data, generates a PDF, and returns a ZIP; and a fallback legacy path that preserves summary-plot download behavior.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant Handler as DownloadHandler
    participant Contrast
    participant Data as PreprocessData
    participant PlotGen as PlotGenerator
    participant FS as FileSystem
    User->>Handler: initiate download
    Handler->>Handler: inspect plot type
    alt Response Curve
        Handler->>Contrast: request/validate contrast matrix
        Contrast-->>Handler: matrix or NULL
        Handler->>Data: preprocess / prepare dose-response data
        Data-->>Handler: dose-response dataset
        Handler->>PlotGen: generate response-curve plot (PDF)
        PlotGen-->>FS: write temp PDF
        Handler->>FS: zip PDF -> create ZIP
        FS-->>Handler: ZIP path
        Handler->>User: serve ZIP download
        Handler->>FS: cleanup temp files
    else Summary Plot
        Handler->>FS: find most recent Ex_* file
        FS-->>Handler: file path
        Handler->>FS: copy file to download target or zip as needed
        Handler->>User: serve file download
    end
```
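The flow in the diagram above can be sketched as a Shiny `downloadHandler` skeleton. This is a simplified illustration, not the PR's exact code: the real implementation resolves plot types via `NAMESPACE_STATMODEL`/`CONSTANTS_STATMODEL` lookups rather than the literal input id and strings used here.

```r
library(shiny)

# Sketch of the two-branch download handler; input$plot_type is illustrative.
create_download_plot_handler_sketch <- function(output, input, contrast, preprocess_data) {
  output$download_plot <- downloadHandler(
    filename = function() {
      if (identical(input$plot_type, "ResponseCurve")) {
        paste0("ResponseCurvePlot-", Sys.Date(), ".zip")
      } else {
        paste0("SummaryPlot-", Sys.Date(), ".zip")
      }
    },
    content = function(file) {
      if (identical(input$plot_type, "ResponseCurve")) {
        # Response-curve path: render to a temp PDF, zip it, serve the ZIP
        pdf_path <- tempfile("ResponseCurvePlot-", fileext = ".pdf")
        zip_path <- tempfile("ResponseCurvePlot-", fileext = ".zip")
        on.exit(unlink(c(pdf_path, zip_path)), add = TRUE)
        # ... generate the dose-response plot into pdf_path here ...
        utils::zip(zipfile = zip_path, files = pdf_path, flags = "-j")
        file.copy(zip_path, file, overwrite = TRUE)
      } else {
        # Legacy path: copy the most recently modified Ex_* artifact
        files <- list.files(getwd(), pattern = "^Ex_", full.names = TRUE)
        latest <- files[which.max(file.info(files)$mtime)]
        file.copy(latest, file, overwrite = TRUE)
      }
    }
  )
}
```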
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)
Actionable comments posted: 2
🧹 Nitpick comments (3)
tests/testthat/test-module-statmodel-server.R (1)
642-660: Test validates initialization but not download functionality. The test confirms the server initializes without error after the download handler signature change, which is valuable for regression protection. However, it doesn't test the actual download handler behavior (filename generation, content creation, or zip file output).
Consider adding functional tests that mock the input state for the response curve plot type and verify:
- The filename function returns "ResponseCurvePlot-<date>.zip" for response curves
- The content function generates a valid PDF and zips it correctly
Would you like me to help draft a more comprehensive test that verifies the download handler behavior?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/testthat/test-module-statmodel-server.R` around lines 642-660: the test test_that("statmodelServer initializes with updated download handler without error") only checks initialization, not the download handler behavior; add functional assertions that exercise the download handler returned by statmodelServer. Call the handler's filename-generating function and assert it matches the pattern "ResponseCurvePlot-<date>.zip" for response curve inputs, then invoke the handler's content function (mocking any file/plot generation dependencies) and verify it writes a valid PDF inside a ZIP (open the ZIP, check for a .pdf entry and non-zero size). Locate the server object via statmodelServer in the test harness and reference the download handler object (the reactive or list element named for the response curve download in statmodelServer) when adding these assertions, mocking input state to set the plot type to "response curve" and supplying temporary file paths to capture output.

R/statmodel-server-visualization.R (1)
151-161: Consider cleaning up temporary files after use. The PDF and ZIP files created in `tempdir()` are not explicitly removed after copying to the download destination. While R will eventually clean these up, explicitly removing them after use is good practice to avoid accumulating temp files during long-running sessions.

♻️ Proposed cleanup
```diff
   zip_path <- file.path(temp_dir, "Ex_ResponseCurvePlot.zip")
   utils::zip(zip_path, files = pdf_path, flags = "-j")
   file.copy(zip_path, file)
+  # Clean up temp files
+  unlink(c(pdf_path, zip_path))
 } else {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/statmodel-server-visualization.R` around lines 151-161: the temp PDF and ZIP created in the block using temp_dir / pdf_path / zip_path (after ggplot2::ggsave, utils::zip and file.copy) are never removed; add cleanup to delete these temp artifacts after the file.copy completes (or use an on.exit handler) by calling unlink on pdf_path and zip_path (and temp_dir if appropriate) to ensure temporary files are removed even on error.

tests/testthat/test-statmodel-ui-options-visualization.R (1)
68-87: Consider consolidating duplicate tests. This test is nearly identical to the one in `test-module-statmodel-server.R` (lines 642-660). Both tests verify the same initialization behavior with the updated download handler. Consider keeping only one copy to avoid maintenance overhead and potential drift between duplicates.

Additionally, this test only verifies that initialization doesn't error and that `contrast$matrix` is NULL; it doesn't actually test the download handler functionality (e.g., that the correct filename is generated or that the content function produces a valid zip file for response curves). Consider adding functional tests for the download handler behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/testthat/test-statmodel-ui-options-visualization.R` around lines 68 - 87, This test duplicates coverage in test-module-statmodel-server.R; remove or consolidate this test from test-statmodel-ui-options-visualization.R and keep a single initialization test that uses testServer(statmodelServer, args = list(parent_session = MockShinySession$new(), loadpage_input = reactive(...), qc_input = reactive(...), get_data = reactive(...), preprocess_data = reactive(...))) and still asserts expect_null(contrast$matrix). Then extend the retained test in test-module-statmodel-server.R (or the consolidated one) to add functional checks for the download handler: locate and exercise create_download_plot_handler (or the server export that wraps it) to capture the generated filename and invoke its content function, asserting the filename format and that the produced bytes form a valid zip containing the expected response-curve files/plots.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@R/statmodel-server-visualization.R`:
- Around line 164-167: The code assumes there is at least one "Ex_*" file; guard
the block that builds files, file_info, latest_file and calls file.copy by
checking that files is non-empty (e.g., if (length(files) == 0) stop/return with
a clear message for non-response-curve downloads), and only compute
which.max(file_info$mtime) and call file.copy(latest_file, file) when files
exists; reference the variables/files: files, file_info, latest_file, and the
file.copy(...) invocation so you validate and fail-fast before attempting the
copy.
- Around line 131-168: Before calling merge() on contrast$matrix inside the
downloadHandler content (content = function(file)), add a null-check for
contrast$matrix and short-circuit with a clear validation/error to avoid the
merge crash; e.g. right before "matrix <- contrast$matrix" check
is.null(contrast$matrix) and call
shiny::validate(shiny::need(!is.null(contrast$matrix), "Contrast matrix not
built; please create it before downloading the response curve.")) or stop() with
an informative message, so prepare_dose_response_fit and
visualizeResponseProtein are only run when contrast$matrix is present.
---
Nitpick comments:
In `@R/statmodel-server-visualization.R`:
- Around line 151-161: The temp PDF and ZIP created in the block using temp_dir
/ pdf_path / zip_path (after ggplot2::ggsave, utils::zip and file.copy) are
never removed; add cleanup to delete these temp artifacts after the file.copy
completes (or use an on.exit handler) by calling unlink on pdf_path and zip_path
(and temp_dir if appropriate) to ensure temporary files are removed even on
error.
In `@tests/testthat/test-module-statmodel-server.R`:
- Around line 642-660: The test test_that("statmodelServer initializes with
updated download handler without error") only checks initialization but not the
download handler behavior; add functional assertions that exercise the download
handler returned by statmodelServer: call or invoke the handler's
filename-generating function and assert it matches the pattern
"ResponseCurvePlot-<date>.zip" for response curve inputs, then invoke the
handler's content function (mocking any file/plot generation dependencies) and
verify it writes a valid PDF inside a ZIP (open the ZIP, check for a .pdf entry
and non-zero size). Locate the server object via statmodelServer in the test
harness and reference the download handler object (the reactive or list element
named for response curve download in statmodelServer) when adding these
assertions, mocking input state to set plot type to "response curve" and
supplying temporary file paths to capture output.
In `@tests/testthat/test-statmodel-ui-options-visualization.R`:
- Around line 68-87: This test duplicates coverage in
test-module-statmodel-server.R; remove or consolidate this test from
test-statmodel-ui-options-visualization.R and keep a single initialization test
that uses testServer(statmodelServer, args = list(parent_session =
MockShinySession$new(), loadpage_input = reactive(...), qc_input =
reactive(...), get_data = reactive(...), preprocess_data = reactive(...))) and
still asserts expect_null(contrast$matrix). Then extend the retained test in
test-module-statmodel-server.R (or the consolidated one) to add functional
checks for the download handler: locate and exercise
create_download_plot_handler (or the server export that wraps it) to capture the
generated filename and invoke its content function, asserting the filename
format and that the produced bytes form a valid zip containing the expected
response-curve files/plots.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- R/module-statmodel-server.R
- R/statmodel-server-visualization.R
- tests/testthat/test-module-statmodel-server.R
- tests/testthat/test-statmodel-ui-options-visualization.R
Please don't delete the feature branch post merge.
R/statmodel-server-visualization.R (outdated)

```r
# Save plot to a temp PDF, then zip it
temp_dir <- tempdir()
pdf_path <- file.path(temp_dir, "Ex_ResponseCurvePlot.pdf")
ggplot2::ggsave(pdf_path,
```
Can you import the dependency with `@importFrom ggplot2 ggsave`?
R/statmodel-server-visualization.R (outdated)

```r
)

zip_path <- file.path(temp_dir, "Ex_ResponseCurvePlot.zip")
utils::zip(zip_path, files = pdf_path, flags = "-j")
```
Similarly, import the dependency with `@importFrom utils zip`.
```r
# DOWNLOAD PLOT HANDLER TESTS
# ============================================================================

test_that("statmodelServer initializes with updated download handler without error", {
```
Could we try writing tests verifying the logic of the create_download_plot_handler instead? You'd need to mock certain functions and verify certain functions get called, e.g. ggplot2::ggsave, utils::zip, etc.
As we had discussed earlier, see this file for smaller unit tests that don't use testServer
Force-pushed from 95ccddb to 182ced8.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@R/statmodel-server-visualization.R`:
- Around line 125-131: The filename() helper always returns a ".zip" name even
when the served content (latest_file) is a PDF/PNG; update filename() or the
download logic so the returned name matches the actual content or actually
creates a ZIP for the summary branch: either (A) detect the real file extension
of latest_file and return paste("SummaryPlot-", Sys.Date(), ".",
tools::file_ext(latest_file), sep="") for the summary branch, or (B) package
latest_file into a real ZIP archive and return the .zip name; apply this change
where filename() and the code that copies latest_file (variables: filename,
latest_file, input[[NAMESPACE_STATMODEL$visualization_plot_type]],
CONSTANTS_STATMODEL$plot_type_response_curve) are used (also update the summary
branch referenced around the 169-179 region).
- Around line 157-168: The code currently writes fixed filenames into tempdir()
(pdf_path, zip_path) causing cross-session collisions; change to use unique temp
files (e.g., tempfile() or tempfile(pattern=...)) for pdf_path and zip_path,
pass those unique paths into ggplot2::ggsave and utils::zip, ensure
file.copy(file, destination, overwrite=TRUE) or equivalent is used to atomically
move the zip to the final location, and update the cleanup to unlink only those
specific tempfile paths (pdf_path, zip_path) to avoid accidentally deleting
others.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 74743fcb-8d06-4dbf-9ae0-2a8c4dfe68a7
📒 Files selected for processing (3)
- NAMESPACE
- R/statmodel-server-visualization.R
- tests/testthat/test-module-statmodel-server.R
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/testthat/test-module-statmodel-server.R
Actionable comments posted: 1
🧹 Nitpick comments (1)
R/statmodel-server-visualization.R (1)
133-193: Centralize error handling for the full download pipeline.

`merge()`, `prepare_dose_response_fit()`, `visualizeResponseProtein()`, `ggsave()`, and `utils::zip()` failures can still bubble up without consistent notification. Wrapping the full `content` body in one `tryCatch` would make the failure UX predictable.

🔧 Proposed refactor
```diff
 content = function(file) {
-  if (input[[NAMESPACE_STATMODEL$visualization_plot_type]] ==
-    CONSTANTS_STATMODEL$plot_type_response_curve) {
+  tryCatch({
+    if (input[[NAMESPACE_STATMODEL$visualization_plot_type]] ==
+      CONSTANTS_STATMODEL$plot_type_response_curve) {
       # Generate response curve plot
       matrix <- contrast$matrix
       if (is.null(matrix)) {
         showNotification("Please build a contrast matrix first.", type = "error")
         return(NULL)
       }
       protein_level_data <- merge(preprocess_data()$ProteinLevelData, matrix, by = "GROUP")
       dia_prepared <- prepare_dose_response_fit(data = protein_level_data)
@@
-  } else {
+    } else {
       # Existing behavior for other plot types (TODO: refactor in future issue)
       files <- list.files(getwd(), pattern = "^Ex_", full.names = TRUE)
@@
-  }
+    }
+  }, error = function(e) {
+    showNotification(conditionMessage(e), type = "error")
+    return(NULL)
+  })
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/statmodel-server-visualization.R` around lines 133 - 193, Wrap the entire content = function(file) { ... } body in a single tryCatch so any errors from merge, prepare_dose_response_fit, visualizeResponseProtein, ggplot2::ggsave, utils::zip, file.copy, etc. are caught; in the error handler call showNotification with the error message and return(NULL). Ensure the existing on.exit(unlink(...), add = TRUE) calls still execute for cleanup (preserve them inside the try block or set up a top-level on.exit) and rethrow or return NULL consistently on error so the download pipeline (including branches that use list.files/getwd and summary zip creation) has a predictable failure path; reference the content function, contrast$matrix, preprocess_data(), prepare_dose_response_fit(), visualizeResponseProtein(), ggplot2::ggsave(), utils::zip(), and file.copy() when making the changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@R/statmodel-server-visualization.R`:
- Around line 174-180: The code currently uses getwd() to discover files (files
<- list.files(getwd(), pattern = "^Ex_", ...)), which can leak other sessions'
artifacts; change the handler to use a session-scoped artifact directory passed
into the handler (e.g., an argument like artifact_dir or a session-specific
value such as session$userData$artifact_dir) instead of getwd(), validate the
directory exists, then call list.files(artifact_dir, pattern="^Ex_",
full.names=TRUE) and continue using file.info(files) / latest_file as before;
update the handler signature and any call sites to supply the session-scoped
path.
---
Nitpick comments:
In `@R/statmodel-server-visualization.R`:
- Around line 133-193: Wrap the entire content = function(file) { ... } body in
a single tryCatch so any errors from merge, prepare_dose_response_fit,
visualizeResponseProtein, ggplot2::ggsave, utils::zip, file.copy, etc. are
caught; in the error handler call showNotification with the error message and
return(NULL). Ensure the existing on.exit(unlink(...), add = TRUE) calls still
execute for cleanup (preserve them inside the try block or set up a top-level
on.exit) and rethrow or return NULL consistently on error so the download
pipeline (including branches that use list.files/getwd and summary zip creation)
has a predictable failure path; reference the content function, contrast$matrix,
preprocess_data(), prepare_dose_response_fit(), visualizeResponseProtein(),
ggplot2::ggsave(), utils::zip(), and file.copy() when making the changes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f3391295-9a00-4b63-9a49-9d6d96a3a91b
📒 Files selected for processing (1)
R/statmodel-server-visualization.R
♻️ Duplicate comments (1)
R/statmodel-server-visualization.R (1)
175-181: ⚠️ Potential issue | 🟠 Major: Shared working directory scan risks cross-session artifact leakage.

The `getwd()` scan for `Ex_*` files remains unaddressed from the previous review. In multi-user Shiny deployments, this can return another session's artifact (wrong download plus a privacy risk). Consider passing a session-scoped artifact directory into the handler:
💡 Suggested direction
```diff
-create_download_plot_handler <- function(output, input, contrast, preprocess_data) {
+create_download_plot_handler <- function(output, input, contrast, preprocess_data, artifact_dir) {
 ...
-  files <- list.files(getwd(), pattern = "^Ex_", full.names = TRUE)
+  files <- list.files(artifact_dir(), pattern = "^Ex_", full.names = TRUE)
```

At the call site, pass a session-scoped reactive like `session$userData$artifact_dir` or a `reactiveVal` initialized per session.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/statmodel-server-visualization.R` around lines 175 - 181, Replace the global getwd() scan with a session-scoped artifact directory: change the list.files call that builds files (and subsequent file_info/which.max logic) to use a provided per-session path (e.g. a parameter or reactive such as session$userData$artifact_dir or a reactiveVal passed into the handler) instead of getwd(); ensure the handler validates the directory exists and is within allowed bounds before calling list.files(pattern = "^Ex_"), then proceed with the existing file_info and latest_file logic using that session-scoped directory.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@R/statmodel-server-visualization.R`:
- Around line 175-181: Replace the global getwd() scan with a session-scoped
artifact directory: change the list.files call that builds files (and subsequent
file_info/which.max logic) to use a provided per-session path (e.g. a parameter
or reactive such as session$userData$artifact_dir or a reactiveVal passed into
the handler) instead of getwd(); ensure the handler validates the directory
exists and is within allowed bounds before calling list.files(pattern = "^Ex_"),
then proceed with the existing file_info and latest_file logic using that
session-scoped directory.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 836c9e06-d988-4c97-b02b-d7a5978fdeb2
📒 Files selected for processing (1)
R/statmodel-server-visualization.R
The getwd() scan is pre-existing code, not introduced by this PR. It's tracked for replacement in the follow-up issue for on-demand group comparison plot downloads.
```r
} else {
  # Generate group comparison plot using a session-scoped temp directory
  plot_type <- input[[NAMESPACE_STATMODEL$visualization_plot_type]]
  fold_change_cutoff <- ifelse(
```
I think this should be zero if NULL.
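A NULL-safe default could look like the sketch below; the input id is illustrative, not the actual one. Note that `ifelse()` is vectorized and meant for element-wise selection, so a plain `if` is the idiomatic tool for a scalar NULL check:

```r
# Fall back to 0 when the cutoff input has not been set yet
# (input$fold_change_cutoff is a hypothetical id for illustration)
cutoff <- input$fold_change_cutoff
fold_change_cutoff <- if (is.null(cutoff)) 0 else cutoff
```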
```r
pdf_path <- tempfile("Ex_ResponseCurvePlot-", fileext = ".pdf")
ggplot2::ggsave(pdf_path,
  plot = response_plot, device = "pdf",
  width = 10, height = 8
)

zip_path <- file.path(temp_dir, "Ex_ResponseCurvePlot.zip")
utils::zip(zip_path, files = pdf_path, flags = "-j")
file.copy(zip_path, file)
unlink(c(pdf_path, zip_path))
} else {
  # Existing behavior for other plot types (TODO: refactor in future issue)
  files <- list.files(getwd(), pattern = "^Ex_", full.names = TRUE)
  if (length(files) == 0) {
    showNotification("No plot files found to download.", type = "error")
zip_path <- tempfile("Ex_ResponseCurvePlot-", fileext = ".zip")
on.exit(unlink(c(pdf_path, zip_path), force = TRUE), add = TRUE)
utils::zip(zipfile = zip_path, files = pdf_path, flags = "-j")
copied <- file.copy(zip_path, file, overwrite = TRUE)
if (!isTRUE(copied)) {
  showNotification("Failed to copy response curve ZIP for download.", type = "error")
  return(NULL)
}
```
This code plus the temp directory creation seems to be duplicated between both plot types. Can this be consolidated into one code block? I don't think the zipfile names necessarily need to be different here; alternatively, a helper function could handle zipping files based on ggplot outputs.
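One possible shape for that consolidation, as a minimal sketch: the helper name `save_plot_as_zip` and its signature are assumptions, not code from this PR.

```r
# Hypothetical helper: render a ggplot to a temp PDF, zip it, and copy the
# ZIP to the download destination. Returns TRUE on success.
save_plot_as_zip <- function(plot, dest_file, prefix = "Ex_Plot-",
                             width = 10, height = 8) {
  pdf_path <- tempfile(prefix, fileext = ".pdf")
  zip_path <- tempfile(prefix, fileext = ".zip")
  # Clean up both temp artifacts even if an intermediate step errors
  on.exit(unlink(c(pdf_path, zip_path), force = TRUE), add = TRUE)
  ggplot2::ggsave(pdf_path, plot = plot, device = "pdf",
                  width = width, height = height)
  utils::zip(zipfile = zip_path, files = pdf_path, flags = "-j")
  isTRUE(file.copy(zip_path, dest_file, overwrite = TRUE))
}
```

Both plot-type branches could then call `save_plot_as_zip(<plot object>, file)` and differ only in how the plot object is produced.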
```r
test_that("download handler filename returns ResponseCurvePlot for response curves", {
  # Test the filename logic directly
  plot_type <- CONSTANTS_STATMODEL$plot_type_response_curve
  filename <- if (plot_type == CONSTANTS_STATMODEL$plot_type_response_curve) {
    paste("ResponseCurvePlot-", Sys.Date(), ".zip", sep = "")
  } else {
    paste("SummaryPlot-", Sys.Date(), ".zip", sep = "")
  }
  expect_true(grepl("ResponseCurvePlot", filename))
  expect_true(grepl("\\.zip$", filename))
})

test_that("download handler filename returns SummaryPlot for other plot types", {
  plot_type <- CONSTANTS_STATMODEL$plot_type_volcano_plot
  filename <- if (plot_type == CONSTANTS_STATMODEL$plot_type_response_curve) {
    paste("ResponseCurvePlot-", Sys.Date(), ".zip", sep = "")
  } else {
    paste("SummaryPlot-", Sys.Date(), ".zip", sep = "")
  }
  expect_true(grepl("SummaryPlot", filename))
  expect_true(grepl("\\.zip$", filename))
})

# ============================================================================
# GROUP COMPARISON PLOT DOWNLOAD TESTS
# ============================================================================

test_that("download handler filename returns SummaryPlot for Volcano plot", {
  plot_type <- CONSTANTS_STATMODEL$plot_type_volcano_plot
  filename <- if (plot_type == CONSTANTS_STATMODEL$plot_type_response_curve) {
    paste("ResponseCurvePlot-", Sys.Date(), ".zip", sep = "")
  } else {
    paste("SummaryPlot-", Sys.Date(), ".zip", sep = "")
  }
  expect_true(grepl("SummaryPlot", filename))
  expect_true(grepl("\\.zip$", filename))
})

test_that("download handler filename returns SummaryPlot for Heatmap", {
  plot_type <- CONSTANTS_STATMODEL$plot_type_heatmap
  filename <- if (plot_type == CONSTANTS_STATMODEL$plot_type_response_curve) {
    paste("ResponseCurvePlot-", Sys.Date(), ".zip", sep = "")
  } else {
    paste("SummaryPlot-", Sys.Date(), ".zip", sep = "")
  }
  expect_true(grepl("SummaryPlot", filename))
  expect_true(grepl("\\.zip$", filename))
})

test_that("download handler filename returns SummaryPlot for ComparisonPlot", {
  plot_type <- CONSTANTS_STATMODEL$plot_type_comparison_plot
  filename <- if (plot_type == CONSTANTS_STATMODEL$plot_type_response_curve) {
    paste("ResponseCurvePlot-", Sys.Date(), ".zip", sep = "")
  } else {
    paste("SummaryPlot-", Sys.Date(), ".zip", sep = "")
  }
  expect_true(grepl("SummaryPlot", filename))
  expect_true(grepl("\\.zip$", filename))
})
```
You're not testing any helper functions here. Could you adjust these tests to exercise a helper function instead?
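One way to make these tests exercise real code is to extract the filename branching into a small helper and test that directly. The helper name `get_download_filename` below is hypothetical; only the constants are taken from the PR:

```r
# Hypothetical helper that the downloadHandler's filename function would call
get_download_filename <- function(plot_type) {
  prefix <- if (plot_type == CONSTANTS_STATMODEL$plot_type_response_curve) {
    "ResponseCurvePlot-"
  } else {
    "SummaryPlot-"
  }
  paste0(prefix, Sys.Date(), ".zip")
}

test_that("get_download_filename branches on plot type", {
  expect_match(
    get_download_filename(CONSTANTS_STATMODEL$plot_type_response_curve),
    "^ResponseCurvePlot-.*\\.zip$"
  )
  expect_match(
    get_download_filename(CONSTANTS_STATMODEL$plot_type_heatmap),
    "^SummaryPlot-.*\\.zip$"
  )
})
```

This collapses the five copy-pasted tests into one test per branch and removes the duplicated inline branching logic from the test file.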
```r
  )
})

test_that("all plot type constants are defined for download handler branching", {
```
Similarly, this test only verifies that the constants are different; what would be more useful is verifying the logic flow of the helper function `create_download_plot_handler`.
```r
{
  # Server initialized: create_download_plot_handler was called internally
  # Verify the server doesn't crash with the new 4-argument signature
  expect_null(contrast$matrix)
```
I'm not sure if this is a valid test that would validate create_download_plot_handler was even called. You'd need to verify the mock for create_download_plot_handler was invoked.
```r
{
  # Verify server initializes without error with 6-argument download handler
  # (output, input, contrast, preprocess_data, data_comparison, loadpage_input)
  expect_null(contrast$matrix)
```
Similarly, I'm not sure if this is a valid test that would validate create_download_plot_handler was even called. You'd need to verify the mock for create_download_plot_handler was invoked with the 6 arguments
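A sketch of how that invocation check could look, assuming the `mockery` package is available; whether `stub()` can reach the call site depends on how `create_download_plot_handler` is invoked inside `statmodelServer`, so treat this as a direction rather than working code:

```r
test_that("statmodelServer calls create_download_plot_handler", {
  handler_mock <- mockery::mock()
  # Replace the handler factory within statmodelServer's environment
  mockery::stub(statmodelServer, "create_download_plot_handler", handler_mock)
  shiny::testServer(statmodelServer, args = list(
    # supply the module's reactives / session arguments here
  ), {
    session$flushReact()
  })
  # Verify the factory was invoked exactly once during initialization
  mockery::expect_called(handler_mock, 1)
  # The captured call can also be inspected for the expected argument count:
  # length(mockery::mock_args(handler_mock)[[1]])
})
```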
…lots to be exported as a zip file
…response curve plots
…que temp files and error handling
…atmap & Comparison
Force-pushed from 979921a to 7215528.
feat: update create_download_plot_handler to support exporting response curve plots as a zip file
Motivation and Context
This PR extends MSstatsShiny's plot download capability to allow exporting dynamically generated response-curve plots as zipped artifacts. Response-curve plots require reactive context (input, contrast, preprocess_data) to build dose-response visualizations from ProteinLevelData, so the download handler was extended to receive that context. The implementation adds a response-curve generation and zip-export path while preserving the legacy behavior for other plot types.
Solution summary: the download handler API was changed to accept (output, input, contrast, preprocess_data). For response-curve requests it validates the contrast matrix, prepares dose-response data, builds the response-curve plot (via the existing visualizeResponseProtein flow), saves the plot to a temporary PDF, zips it, and serves the ZIP. Non-response-curve downloads continue to use the legacy artifact-copy/zip fallback.
Detailed Changes
R/statmodel-server-visualization.R
R/module-statmodel-server.R
NAMESPACE
Misc
Unit Tests Added / Modified
Coding Guidelines and Notes (violations / technical debt)