
Resource workflow v2 #306

Draft

Kerem-G wants to merge 4 commits into master from resourceWorkflowV2

Conversation

Kerem-G (Collaborator) commented Feb 24, 2026

Implements changes to the resources workflow, including file structure changes and update functionality.

Closes #288 - File structure
Closes #301 - Updated resources, strains, and sample design workflow
Closes #303 - Implements an "update" button


Copilot AI left a comment


Pull request overview

This PR implements version 2 of the resources workflow, introducing a new file structure with separate uploads directories, automatic file naming collision resolution, and an update feature for resources, strains, and sample designs. The changes reorganize how workflow files are stored and tracked, moving from a flat structure to a hierarchical one with JSON metadata files and separate uploads subdirectories.

Changes:

  • Moved upload functionality from ResourcesWizard to ImportFile component with integrated SynBioHub collection browser
  • Implemented automatic file naming collision resolution that appends numeric suffixes to prevent overwrites
  • Added FileUpdate command that allows updating previously uploaded resources while tracking upload history

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 7 comments.

  • frontend/src/components/panels/resources-editor/ResourcesWizard.jsx: Removed the upload button and handler; now displays upload history in reverse chronological order with visual distinction for older versions
  • frontend/src/components/activities/explorer/ImportFile.jsx: Added workflow subdirectory handling, file naming collision resolution, and integrated upload to SynBioHub during import
  • frontend/src/commands.js: Enhanced file deletion to handle the uploads subdirectory, improved download to read from the new file structure, and added a comprehensive file update command with authentication


Comment on lines +135 to +143
upload_resource(
filePath,
result.sbh_credential_check?.selectedRepo,
result.authToken,
collection.displayId,
collection.description,
dirName,
result.sbh_overwrite
);

Copilot AI Feb 24, 2026


The upload_resource call is not awaited, which means errors from the upload won't be caught by the surrounding try-catch block and the user won't be notified if the upload fails after the local file has been saved and workflow JSON has been created. This could leave the system in an inconsistent state where the file exists locally but was never uploaded to SynBioHub. Consider adding await before upload_resource and handling any upload errors appropriately.
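A minimal sketch of the fix the comment suggests, with the upload function and a notifier injected purely for illustration (they are not the PR's actual signatures): awaiting the call lets failures propagate to the surrounding error handling instead of being dropped.

```javascript
// Hypothetical sketch: await the upload so a failure propagates to the
// caller's try-catch instead of being silently lost. uploadResource and
// notify are injected here for illustration only.
async function importWithUpload(uploadResource, filePath, opts, notify) {
  try {
    await uploadResource(filePath, opts);
  } catch (err) {
    // Surface the failure so the user learns the local file was saved
    // but never reached SynBioHub.
    notify(`Upload to SynBioHub failed: ${err.message}`);
    throw err;
  }
}
```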

Comment on lines +279 to +287
try{ await upload_resource(
newFilePath,
selectedRepo,
authToken,
collectionDisplayId,
"",
dirHandle,
3
);} catch (e) {

Copilot AI Feb 24, 2026


The upload_resource call is wrapped in a try-catch block that silently swallows all errors (empty catch block). According to issue #303, if SBS_Server fails, the new file should be removed and the old file and workflow JSON should be left as they were originally. However, this code continues to delete the old file, update the JSON, and show a success notification even if the upload fails. This violates the rollback requirement specified in issue #303. The error should be propagated or handled with proper rollback logic to remove the new file and restore the original state.

Suggested change

Before:

try{ await upload_resource(
newFilePath,
selectedRepo,
authToken,
collectionDisplayId,
"",
dirHandle,
3
);} catch (e) {

After:

try {
  await upload_resource(
    newFilePath,
    selectedRepo,
    authToken,
    collectionDisplayId,
    "",
    dirHandle,
    3
  );
} catch (e) {
  // Roll back: remove staged file and propagate error so we don't
  // delete the old file, update JSON, or show a success notification.
  try {
    await uploadsDir.removeEntry(stagingName);
  } catch {
    // Best-effort cleanup; ignore errors during rollback.
  }
  throw e;

collectionDisplayId,
"",
dirHandle,
3

Copilot AI Feb 24, 2026


The hardcoded value 3 for sbh_overwrite is inconsistent with the API usage pattern elsewhere in the codebase. Looking at the upload_resource function in API.js, sbh_overwrite is converted to 2 (if true) or 0 (if false) before being sent to the server. The value 3 appears to be undocumented and its behavior is unclear. The TODO comment suggests this is temporary, but using an undefined overwrite mode in production code could lead to unexpected behavior. Consider using a documented value (0, 2, or the boolean value that gets converted) or documenting what overwrite mode 3 represents.
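As a sketch of the normalization the comment describes, assuming the 2/0 convention it attributes to API.js (normalizeOverwrite is a hypothetical helper, not part of the PR):

```javascript
// Hypothetical helper: map the overwrite flag to the documented wire
// values (2 = overwrite, 0 = no overwrite) rather than passing an
// undocumented code like 3 straight through to the server.
function normalizeOverwrite(sbhOverwrite) {
  return sbhOverwrite === true || sbhOverwrite === 2 ? 2 : 0;
}
```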

Suggested change

Before:

3

After:

true

Comment on lines +290 to +296
if (sameFilename) {
try { await uploadsDir.removeEntry(existingFileName); } catch {}
const finalFH = await uploadsDir.getFileHandle(newFileName, { create: true });
const finalWritable = await finalFH.createWritable();
await finalWritable.write(newFile);
await finalWritable.close();
try { await uploadsDir.removeEntry(stagingName); } catch {}

Copilot AI Feb 24, 2026


The file replacement logic has a potential data loss issue. When sameFilename is true, the code deletes the existing file (line 291), creates a new file (lines 292-295), and then tries to delete the staging file (line 296). However, if any error occurs between deleting the existing file and successfully writing the new file, the original data is lost with no way to recover it. A safer approach would be to verify the new file is written successfully before deleting the old one, or to keep the staging file as a backup until the entire operation completes successfully.
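One way to sketch the safer ordering the comment asks for, using the same File System Access API shapes as the quoted code (safeReplace and the `.tmp` naming are hypothetical, not from the PR):

```javascript
// Hypothetical sketch of a safer replacement order: write the new content
// under a temporary name first, and only remove the original after the
// replacement is fully written, so an interrupted write cannot lose data.
async function safeReplace(dirHandle, fileName, newFile) {
  const tmpName = `${fileName}.tmp`;
  const tmpHandle = await dirHandle.getFileHandle(tmpName, { create: true });
  const tmpWritable = await tmpHandle.createWritable();
  await tmpWritable.write(newFile);
  await tmpWritable.close(); // the new content is durable from this point
  try { await dirHandle.removeEntry(fileName); } catch {}
  // Recreate under the final name; the tmp copy survives until this succeeds.
  const finalHandle = await dirHandle.getFileHandle(fileName, { create: true });
  const finalWritable = await finalHandle.createWritable();
  await finalWritable.write(await tmpHandle.getFile());
  await finalWritable.close();
  await dirHandle.removeEntry(tmpName);
}
```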


const newFilePath = `${directory}/uploads/${newFileName}`;

// TODO: Remove once SBS Server implementation works correctly

Copilot AI Feb 24, 2026


This TODO comment indicates that the upload implementation is incomplete or incorrect. The comment states "Remove once SBS Server implementation works correctly", suggesting this is a workaround rather than the final implementation. Given that the entire try-catch block silently swallows upload errors, this temporary code could mask real upload failures and create data inconsistencies. Before merging, either the SBS Server should be fixed and this workaround removed, or this should be properly implemented with error handling and rollback logic as specified in issue #303.

Comment on lines +19 to +31
async function getAvailableBaseName(objectTypeDir, uploadsDir, baseName, ext) {
let candidate = baseName;
let counter = 1;
while (true) {
let jsonExists = false;
let fileExists = false;
try { await objectTypeDir.getFileHandle(`${candidate}.json`); jsonExists = true; } catch {}
try { await uploadsDir.getFileHandle(`${candidate}${ext}`); fileExists = true; } catch {}
if (!jsonExists && !fileExists) return candidate;
candidate = `${baseName} (${counter})`;
counter++;
}
}

Copilot AI Feb 24, 2026


The infinite while loop in getAvailableBaseName could potentially run indefinitely if there's an issue with the file system operations or if an extremely large number of files with the same base name exist. While unlikely in practice, this could cause the application to hang. Consider adding a maximum iteration limit (e.g., 1000 attempts) and throwing an error if exceeded to prevent potential infinite loops.
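A bounded variant along the lines the comment suggests; the existence check is injected here for testability, so this is a sketch rather than a drop-in replacement for the PR's directory-handle version:

```javascript
// Hypothetical bounded variant of getAvailableBaseName: give up after
// maxAttempts candidates so a file-system fault cannot hang the app.
// fileExists(name) -> Promise<boolean> is injected for illustration.
async function getAvailableBaseName(fileExists, baseName, ext, maxAttempts = 1000) {
  let candidate = baseName;
  for (let counter = 1; counter <= maxAttempts; counter++) {
    const jsonTaken = await fileExists(`${candidate}.json`);
    const fileTaken = await fileExists(`${candidate}${ext}`);
    if (!jsonTaken && !fileTaken) return candidate;
    candidate = `${baseName} (${counter})`;
  }
  throw new Error(`No available name for "${baseName}" after ${maxAttempts} attempts`);
}
```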


const selectedRepo = lastUpload.selectedRepo;
const expectedEmail = lastUpload.userEmail || null;
const collectionDisplayId = lastUpload.uri.split('/').slice(-2, -1)[0] || lastUpload.collectionName;

Copilot AI Feb 24, 2026


The collectionDisplayId extraction logic assumes the URI has a specific format where the collection display ID is the second-to-last segment. This could fail if the URI format is different than expected. For example, if the URI is just "https://synbiohub.org/collection1" without trailing slashes, slice(-2, -1)[0] would return an empty string. Consider adding validation to ensure the extraction succeeds, or use a more robust method to extract the collection ID from the URI, possibly falling back to parsing the URI more carefully or using the collection's actual displayId if it's stored in the upload record.
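A sketch of the more robust extraction the comment asks for, using the standard URL parser instead of raw string splitting (extractCollectionDisplayId is a hypothetical helper, and the second-to-last-segment convention is the one the review describes):

```javascript
// Hypothetical helper: extract a SynBioHub collection display ID from a
// URI, tolerating short or malformed URIs instead of returning ''.
function extractCollectionDisplayId(uri, fallbackName) {
  try {
    const path = new URL(uri).pathname.split('/').filter((s) => s.length > 0);
    // Typical URIs end in .../<displayId>/<version>; with only one path
    // segment, treat that segment itself as the display ID.
    if (path.length >= 2) return path[path.length - 2];
    if (path.length === 1) return path[0];
  } catch {
    // Not a parseable URL: fall through to the stored collection name.
  }
  return fallbackName;
}
```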


Development

Merging this pull request may close these issues: Add "Update" for Resources; Implement Workflow for Resources, Strains and SampleDesigns; Fix Resources File Pointing.