
Conversation

@viktormarinho
Contributor

@viktormarinho viktormarinho commented Dec 2, 2025

Summary by CodeRabbit

  • Refactor

    • Migrated the runtime from Cloudflare Workers to a Bun-based server, replacing the AWS SDK S3 client with Bun's built-in S3 client
    • Restructured the build pipeline into separate client and server builds with a concurrent development workflow
    • Updated dependencies and configuration (added hono, removed AWS SDK and Cloudflare packages) for maintainability and platform compatibility
  • Chores

    • Refreshed development and production build configurations
    • Updated environment management and variable naming conventions


- Changed the version of @decocms/runtime to 0.0.1-testing-beta.3 in package.json and bun.lock.
- Added hono as a new dependency.
- Refactored the development scripts in package.json for improved client and server build processes.
- Removed the wrangler.toml file as part of the project restructuring.
- Updated the environment context references in server code from DECO_CHAT_REQUEST_CONTEXT to DECO_REQUEST_CONTEXT for consistency.
- Adjusted Vite configuration to remove Cloudflare plugin integration, streamlining the build process for Bun.
- Replaced AWS SDK imports with Bun's built-in S3 client functionality in s3-client.ts.
- Updated storage tools to utilize the new S3 client methods for listing, retrieving, and deleting objects.
- Removed AWS SDK dependencies from package.json, streamlining the project and reducing external dependencies.
- Adjusted methods for generating presigned URLs and handling object metadata to align with the new S3 client structure.

coderabbitai bot commented Dec 2, 2025

Walkthrough

The pull request migrates the object-storage project from AWS SDK S3 and Cloudflare Workers to a Bun-based S3 client. It introduces containerization via Dockerfile, updates build scripts and dependencies, removes Wrangler configuration, eliminates legacy Vite plugin code, and renames environment context properties across shared utilities.

Changes

  • Build & Runtime Infrastructure (object-storage/Dockerfile, object-storage/vite.config.ts, object-storage/tsconfig.json, object-storage/package.json)
    Added multi-stage Dockerfile for Bun-based builds. Replaced the Cloudflare Vite plugin with the new deco plugin (target: bun). Removed @cloudflare/workers-types from the TypeScript config. Restructured package.json scripts: removed dev/configure/deploy; added a concurrent dev flow, build:client/server/combined, and a production start command. Replaced AWS S3 and Cloudflare dependencies with @decocms/runtime 1.0.0-alpha.2 and hono. Swapped the Vite plugin from Cloudflare to @decocms/vite-plugin 1.0.0-alpha.1.
  • S3 Client & Storage Operations Migration (object-storage/server/lib/s3-client.ts, object-storage/server/tools/storage.ts)
    Replaced the AWS SDK v3 S3Client with Bun's S3Client. Updated s3-client.ts: changed the env property from DECO_CHAT_REQUEST_CONTEXT to DECO_REQUEST_CONTEXT; flattened the credentials config; added a bucketName field. Refactored tools/storage.ts: replaced AWS command operations (ListObjectsV2Command, HeadObjectCommand, etc.) with direct Bun S3Client method calls (list, file().stat(), file().presign(), file().delete()). Adapted response field mappings (Key→key, Size→size, LastModified→lastModified, ETag→etag). Updated DELETE_OBJECTS to use Promise.allSettled.
  • Server Runtime Configuration (object-storage/server/main.ts)
    Removed the ASSETS property from the Env type. Added an import of createAssetServerFetcher from bun-asset-server. Replaced env.ASSETS.fetch delegation with a direct createAssetServerFetcher call using NODE_ENV and assetsDirectory "./dist/client".
  • Removed Wrangler Deployment Configuration (object-storage/wrangler.toml)
    Deleted the entire Wrangler configuration file, eliminating Cloudflare Workers deployment and assets binding setup.
  • Removed Legacy Vite Plugin & Updated Exports (shared/deco-vite-plugin.ts, shared/package.json)
    Removed the entire shared/deco-vite-plugin.ts file, eliminating the deco, importSqlStringPlugin, and vitePlugins exports. Removed the "./vite-plugin" export mapping from shared/package.json. Added @cloudflare/vite-plugin 1.13.4 as a devDependency to shared.
  • Environment Context Renaming (shared/tools/user.ts)
    Renamed the UserToolsEnv interface property from DECO_CHAT_REQUEST_CONTEXT to DECO_REQUEST_CONTEXT. Updated all references in createGetUserTool accordingly.
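The field mappings noted in the storage cohort (Key→key, Size→size, LastModified→lastModified, ETag→etag) amount to a small normalization layer between the AWS SDK response shape and the tool's output schema. A minimal sketch with hypothetical names, not the PR's actual code:

```typescript
// PascalCase shape returned by AWS SDK v3's ListObjectsV2Command entries.
interface AwsListedObject {
  Key?: string;
  Size?: number;
  LastModified?: Date;
  ETag?: string;
}

// camelCase shape used by the tool's output schema after the migration.
interface ListedObject {
  key: string;
  size: number;
  lastModified: string;
  etag: string;
}

// Normalize one AWS-style entry, defaulting any missing optional fields.
function toListedObject(obj: AwsListedObject): ListedObject {
  return {
    key: obj.Key ?? "",
    size: obj.Size ?? 0,
    lastModified: obj.LastModified?.toISOString() ?? "",
    etag: obj.ETag ?? "",
  };
}
```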

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Areas requiring extra attention:
    • object-storage/server/tools/storage.ts — Verify all AWS SDK method replacements map correctly to Bun S3Client API; confirm pagination logic (nextContinuationToken derivation) and error handling in DELETE_OBJECTS (Promise.allSettled aggregation).
    • object-storage/server/lib/s3-client.ts — Confirm Bun S3Client credential flattening and bucket configuration align with expected S3 backend behavior.
    • object-storage/package.json — Validate new script orchestration (build, dev, start) works across dev and production environments.
    • object-storage/Dockerfile — Review multi-stage build and dist output locations match package.json start command entry point.

Possibly related PRs

Poem

🐰 From AWS clouds to Bun we fly,
Wrangler's wrangled out, Deco up on high!
S3 commands now just method calls,
Cloudflare's gone—Bun catches all! ✨🚀

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
• Title check: ❓ Inconclusive. The title 'Objectstorage outside cf' is vague and uses abbreviated/cryptic terms that don't clearly convey the main changes to someone scanning PR history. Resolution: consider a more descriptive title such as 'Migrate object storage from Cloudflare to Bun runtime' to clearly communicate the main architectural change.
✅ Passed checks (2 passed)
• Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
• Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
shared/tools/user.ts (1)

31-39: Update the stale documentation.

The comment references the old property name DECO_CHAT_REQUEST_CONTEXT, but the property has been renamed to DECO_REQUEST_CONTEXT.

Apply this diff to update the comment:

 /**
  * `createPrivateTool` is a wrapper around `createTool` that
- * will call `env.DECO_CHAT_REQUEST_CONTEXT.ensureAuthenticated`
+ * will call `env.DECO_REQUEST_CONTEXT.ensureAuthenticated`
  * before executing the tool.
  *
  * It automatically returns a 401 error if valid user credentials
  * are not present in the request. You can also call it manually
  * to get the user object.
  */
object-storage/server/main.ts (1)

1-6: Update stale documentation.

The comment still references "Cloudflare workers app," but this has been migrated to a Bun-based runtime.

Apply this diff to update the comment:

 /**
  * This is the main entry point for your application and
- * MCP server. This is a Cloudflare workers app, and serves
- * both your MCP server at /mcp and your views as a react
- * application at /.
+ * MCP server. This is a Bun-based runtime that serves both 
+ * your MCP server at /mcp and your views as a React 
+ * application at /.
  */
🧹 Nitpick comments (2)
object-storage/Dockerfile (1)

1-37: Consider adding security and operational improvements.

The Dockerfile builds and runs correctly, but consider these enhancements:

  1. Security: The container runs as root. Add a non-root user for better security:
# Add before CMD in runtime stage
RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 bunuser && \
    chown -R bunuser:nodejs /app
USER bunuser
  2. Observability: Add a health check for container orchestration:
# Add before CMD
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1
  3. Port configuration: Consider using an environment variable instead of hardcoding port 3000 for flexibility.
object-storage/server/tools/storage.ts (1)

266-283: Consider rate limiting for large batch deletes.

The parallel delete implementation using Promise.allSettled is correct and handles errors appropriately. However, be aware that deleting 1000 objects in parallel (max allowed per input schema) could:

  1. Overwhelm the S3 service with concurrent requests, potentially triggering rate limits
  2. Create performance issues compared to AWS SDK's native batch delete

Consider adding concurrency control for large batches, e.g., using a library like p-limit to cap concurrent deletes.

Example with concurrency control:

import pLimit from 'p-limit';

// In the execute function:
const limit = pLimit(50); // Max 50 concurrent deletes
const results = await Promise.allSettled(
  keys.map((key: string) => limit(() => s3Client.file(key).delete()))
);
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5c0edfd and e5b43f7.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (11)
  • object-storage/Dockerfile (1 hunks)
  • object-storage/package.json (3 hunks)
  • object-storage/server/lib/s3-client.ts (3 hunks)
  • object-storage/server/main.ts (3 hunks)
  • object-storage/server/tools/storage.ts (6 hunks)
  • object-storage/tsconfig.json (1 hunks)
  • object-storage/vite.config.ts (1 hunks)
  • object-storage/wrangler.toml (0 hunks)
  • shared/deco-vite-plugin.ts (0 hunks)
  • shared/package.json (1 hunks)
  • shared/tools/user.ts (2 hunks)
💤 Files with no reviewable changes (2)
  • shared/deco-vite-plugin.ts
  • object-storage/wrangler.toml
🧰 Additional context used
🧬 Code graph analysis (1)
object-storage/server/main.ts (3)
nanobanana/server/main.ts (1)
  • Env (22-27)
template-minimal/server/main.ts (1)
  • Env (18-23)
template-with-view/server/main.ts (1)
  • Env (20-25)
🔇 Additional comments (9)
object-storage/tsconfig.json (1)

36-36: LGTM!

The removal of @cloudflare/workers-types is consistent with the migration away from Cloudflare Workers to a Bun-based runtime.

shared/package.json (1)

20-20: This dependency is required and actively used across the monorepo.

The @cloudflare/vite-plugin is imported in vite.config.ts files across all major projects (nanobanana, template-minimal, template-with-view, veo, sora, replicate, pinecone, readonly-sql, openrouter, gemini-pro-vision, data-for-seo, apify, and datajud). It's also referenced in scripts/new.ts as a standard devDependency for template generation. The dependency in shared/package.json is appropriate and not an unintended leftover.

object-storage/server/lib/s3-client.ts (1)

5-5: Bun's S3Client is properly configured with full compatibility verified.

The implementation correctly supports S3-compatible providers (R2, MinIO, GCS) through custom endpoint configuration. The API surface is compatible with all operations used in storage.ts (list, stat, presign with GET/PUT methods, delete), and presigned URLs work correctly with custom endpoints via the s3Client.file().presign() method.

object-storage/package.json (1)

19-19: These alpha dependencies appear to be private/internal packages from DecoCMS, not publicly published on npm.

The project depends on @decocms/runtime@1.0.0-alpha.2 and @decocms/vite-plugin@1.0.0-alpha.1. Since these packages are not found in the public npm registry, they are likely distributed through a private registry or internal package source specific to DecoCMS. This changes the risk profile from the typical open-source alpha stability concerns—stability and breaking changes would need to be managed directly with the DecoCMS team rather than through public npm tracking. Clarify the package distribution mechanism and establish communication channels with the DecoCMS maintainers for updates and breaking change notifications.

object-storage/vite.config.ts (1)

1-16: LGTM! Note the alpha dependency.

The Vite configuration correctly integrates the Bun-focused Deco tooling. The migration to Deco plugin with target: "bun" is consistent with the broader PR changes.

Note: The project uses @decocms/vite-plugin version 1.0.0-alpha.1 (in devDependencies). As a pre-release version, monitor for breaking changes and stability issues before production deployment.

object-storage/server/tools/storage.ts (4)

151-154: LGTM!

The migration to Bun's presign API for GET operations is clean and preserves the expiration logic.


195-199: LGTM!

The migration to Bun's presign API for PUT operations correctly passes the content type.


61-82: The code correctly uses Bun's S3Client API as implemented.

The concerns raised are based on incorrect assumptions:

  1. eTag field naming (line 76): Bun's list() method returns objects with eTag (capital T), not lowercase etag. The code correctly accesses obj.eTag ?? "", which is verified by comparing with the stat() method that returns etag (lowercase) — these are two different response formats from different Bun APIs. The mapping to the output schema field etag (lowercase) is correct.

  2. Continuation token approach (lines 78-80): Bun's list() API supports the startAfter parameter which accepts a string key for pagination. Using the last object's key as the nextContinuationToken is the correct pagination approach for Bun's S3Client, as confirmed by the README documentation stating "Handle large buckets with continuation tokens."

Likely an incorrect or invalid review comment.
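The startAfter-style pagination described above (using the last listed key as the next token) can be isolated into a pure helper. A sketch under the assumption that a non-truncated page means no more results; the name is hypothetical, not the PR's code:

```typescript
// Derive the next-page token from the current page of listed keys:
// the last key when the listing was truncated, otherwise undefined.
function nextContinuationToken(
  keys: string[],
  isTruncated: boolean,
): string | undefined {
  if (!isTruncated || keys.length === 0) return undefined;
  return keys[keys.length - 1];
}
```

The caller would then pass the returned key back as the startAfter option on the next list() call.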


111-119: The metadata limitation is documented correctly and does not break existing code.

The metadata field is optional in the GET_OBJECT_METADATAOutput interface, so returning undefined is valid and expected. The implementation correctly reflects that Bun's stat() method does not provide custom S3 metadata. This is not a breaking change since consumers should already handle the optional metadata field gracefully.

Regarding error handling: the function intentionally allows errors to propagate when the object doesn't exist. This is consistent with other single-object operations in the same file (e.g., GET_PRESIGNED_URL, DELETE_OBJECT) and is an acceptable design pattern.

Likely an incorrect or invalid review comment.

Comment on lines +9 to +16
"gen": "deco gen --output=shared/deco.gen.ts",
"dev": "concurrently \"bun run dev:client\" \"NODE_ENV=development bun --hot run server/main.ts\"",
"dev:tunnel": "deco link -p 3000 -- bun run dev",
"dev:client": "bun --bun vite dev",
"build:client": "bun --bun vite build",
"build:server": "NODE_ENV=production bun build server/main.ts --target=bun --outfile=dist/server/main.js",
"build": "bun run build:client && bun run build:server",
"start": "NODE_ENV=production bun run dist/server/main.js"

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify build artifacts match Dockerfile expectations

echo "=== Checking Dockerfile expectations ==="
if [ -f "object-storage/Dockerfile" ]; then
  rg -n "dist/(client|server)" object-storage/Dockerfile
else
  echo "Dockerfile not found at object-storage/Dockerfile"
fi

echo -e "\n=== Checking build script outputs ==="
rg -n "outfile=|outdir=" object-storage/package.json

echo -e "\n=== Checking vite build output config ==="
if [ -f "object-storage/vite.config.ts" ]; then
  rg -n "build\.(outDir|rollupOptions)" object-storage/vite.config.ts
else
  echo "vite.config.ts not found at object-storage/vite.config.ts"
fi

Repository: decocms/mcps

Length of output: 292


🏁 Script executed:

#!/bin/bash
# Find Dockerfile and check full vite config

echo "=== Listing Dockerfile possibilities ==="
find object-storage -name "Dockerfile*" -type f 2>/dev/null

echo -e "\n=== Full vite.config.ts content ==="
if [ -f "object-storage/vite.config.ts" ]; then
  cat object-storage/vite.config.ts
else
  echo "vite.config.ts not found"
fi

echo -e "\n=== Check for root Dockerfile ==="
if [ -f "Dockerfile" ]; then
  echo "Root Dockerfile exists"
  head -50 Dockerfile
fi

Repository: decocms/mcps

Length of output: 600


🏁 Script executed:

#!/bin/bash
cat object-storage/Dockerfile

Repository: decocms/mcps

Length of output: 689


🏁 Script executed:

#!/bin/bash
# Check for any build output references or issues

echo "=== Checking package.json build commands ==="
rg -A 2 '"build"' object-storage/package.json

echo -e "\n=== Checking if there are any .gitignore entries for dist ==="
rg -n "^dist" object-storage/.gitignore 2>/dev/null || echo "No .gitignore or dist entry not found"

echo -e "\n=== Checking build:client vite command specifics ==="
rg -B 2 -A 2 "dev:client|build:client" object-storage/package.json

Repository: decocms/mcps

Length of output: 946


Verify the vite build output configuration and document the expected build structure.

The build scripts orchestrate client and server builds, but there's a critical configuration gap: vite.config.ts does not explicitly set build.outDir, so bun run build:client will output to dist/ by default rather than dist/client. This creates a mixed directory structure where client assets and the server bundle (dist/server/main.js) coexist at the root of dist/.

To match the apparent intent (separate client and server builds):

  • Either add build: { outDir: 'dist/client' } to vite.config.ts
  • Or update documentation to clarify that both outputs are intentionally merged in dist/

The Dockerfile copies all of dist/ without issues, but the build structure should be explicitly defined to avoid confusion and prevent accidental asset collisions.

🤖 Prompt for AI Agents
In object-storage/package.json around lines 9 to 16, the client build currently
uses Vite's default output (dist/) which will collide with the server bundle at
dist/server/main.js; either set an explicit outDir in vite.config.ts (e.g.,
build.outDir = "dist/client") so client assets go to dist/client, or update
README/docs to state builds intentionally share dist/ and ensure Dockerfile and
any static-serving paths expect that layout; pick one approach and make the
corresponding change (edit vite.config.ts to add build.outDir or update
documentation and any downstream copy paths) so the build output structure is
explicit and deterministic.
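If the first option is chosen, the explicit output directory would look roughly like this (a sketch only; the project's actual vite.config.ts also registers the deco plugin):

```typescript
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    // Keep client assets separate from the server bundle at
    // dist/server/main.js, matching the "./dist/client"
    // assetsDirectory used in server/main.ts.
    outDir: "dist/client",
  },
});
```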

Comment on lines 18 to 28
const config: ConstructorParameters<typeof S3Client>[0] = {
region: state.region,
credentials: {
accessKeyId: state.accessKeyId,
secretAccessKey: state.secretAccessKey,
},
accessKeyId: state.accessKeyId,
secretAccessKey: state.secretAccessKey,
bucket: state.bucketName,
};

// Add custom endpoint if provided (for S3-compatible storage like R2, MinIO, etc.)
if (state.endpoint) {
config.endpoint = state.endpoint;
// For S3-compatible services, we often need to force path style
config.forcePathStyle = true;
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Bun S3Client path-style URLs custom endpoint configuration

💡 Result:

Short answer: use S3Client's endpoint + virtualHostedStyle option. Set endpoint to your custom URL and set virtualHostedStyle: false (or omit virtualHostedStyle if you need path-style and Bun's default is path-style for custom endpoints), e.g.:

import { S3Client } from "bun";

const s3 = new S3Client({
  accessKeyId: "AKIA...",
  secretAccessKey: "...",
  bucket: "my-bucket",
  endpoint: "http://localhost:9000", // or your custom S3-compatible URL
  virtualHostedStyle: false,         // force path-style URLs
});

Bun docs show using endpoint for custom/backends and the virtualHostedStyle flag to choose virtual-hosted vs path-style behavior. [1][2]

Note: AWS recommends virtual-hosted style and has communicated deprecation plans for path-style access, so prefer virtualHostedStyle: true for AWS S3 (or when interoperable). [3]

Sources:

  • Bun S3 docs / S3Client (endpoint, virtualHostedStyle examples). [1]
  • Bun API reference for S3Client. [2]
  • AWS documentation on path-style vs virtual-hosted–style and deprecation guidance. [3]

🏁 Script executed:

# Check for any previous implementations or history of forcePathStyle in the repo
git log --all --source --remotes -S "forcePathStyle" -- "object-storage/" 2>/dev/null | head -20

Repository: decocms/mcps

Length of output: 273


🏁 Script executed:

# Search for any test files or documentation related to S3 configuration
fd -e test.ts -e spec.ts -e test.js -e docs.md -o -e "\.md" . object-storage/ 2>/dev/null | head -20

Repository: decocms/mcps

Length of output: 38


🏁 Script executed:

# Check the full s3-client.ts file for context
wc -l object-storage/server/lib/s3-client.ts

Repository: decocms/mcps

Length of output: 97


🏁 Script executed:

cat -n object-storage/server/lib/s3-client.ts

Repository: decocms/mcps

Length of output: 1934


🏁 Script executed:

# Search for virtualHostedStyle usage in the codebase
rg "virtualHostedStyle" object-storage/

Repository: decocms/mcps

Length of output: 38


🏁 Script executed:

# Check if there are any S3-compatible provider tests or documentation
find object-storage/ -type f \( -name "*.md" -o -name "*.test.ts" -o -name "*.spec.ts" \) | xargs cat 2>/dev/null | head -100

Repository: decocms/mcps

Length of output: 2914


Add virtualHostedStyle: false when using S3-compatible endpoints.

Bun's S3Client does not automatically handle path-style URLs for custom endpoints—it requires explicit configuration via the virtualHostedStyle option. S3-compatible providers like MinIO and R2 often require path-style URLs, which means the current code will fail or behave unexpectedly with these providers.

Update the configuration to set virtualHostedStyle: false when a custom endpoint is provided:

if (state.endpoint) {
  config.endpoint = state.endpoint;
  config.virtualHostedStyle = false;
}
🤖 Prompt for AI Agents
In object-storage/server/lib/s3-client.ts around lines 18 to 28, the S3Client
config does not disable virtualHostedStyle for custom endpoints, which causes
S3-compatible providers (MinIO, R2) to require path-style URLs and fail; update
the config when state.endpoint is present to set config.endpoint =
state.endpoint and also set config.virtualHostedStyle = false so the client uses
path-style addressing for custom S3-compatible endpoints.

Comment on lines +96 to +99
fetch: createAssetServerFetcher({
env: process.env.NODE_ENV as "development" | "production" | "test",
assetsDirectory: "./dist/client",
}),

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check NODE_ENV usage in package.json scripts, Dockerfile, and fallbacks

echo "=== Checking package.json scripts ==="
if [ -f "object-storage/package.json" ]; then
  rg -n "NODE_ENV" object-storage/package.json
else
  echo "package.json not found at object-storage/package.json"
fi

echo -e "\n=== Checking Dockerfile ==="
if [ -f "object-storage/Dockerfile" ]; then
  rg -n "NODE_ENV" object-storage/Dockerfile
else
  echo "No Dockerfile found at object-storage/Dockerfile, searching for any Dockerfile..."
  find object-storage -name "Dockerfile*" -type f 2>/dev/null
fi

echo -e "\n=== Checking if there's a fallback for undefined NODE_ENV ==="
rg -n "NODE_ENV.*\|\|" object-storage/ 2>/dev/null || echo "No fallback patterns found"

echo -e "\n=== Checking main.ts around the problematic lines ==="
sed -n '90,110p' object-storage/server/main.ts 2>/dev/null || echo "File not accessible"

Repository: decocms/mcps

Length of output: 985


🏁 Script executed:

#!/bin/bash
# Find and examine createAssetServerFetcher implementation

echo "=== Searching for createAssetServerFetcher definition ==="
rg -n "createAssetServerFetcher" object-storage/ -A 5

echo -e "\n=== Checking the import location ==="
rg -n "import.*createAssetServerFetcher" object-storage/

echo -e "\n=== Searching across entire repo ==="
rg -n "function createAssetServerFetcher\|export.*createAssetServerFetcher" --type ts

Repository: decocms/mcps

Length of output: 1056


Add explicit fallback for NODE_ENV to improve type safety.

The code relies on process.env.NODE_ENV being set, and while it is properly set in all deployment scripts (dev, build, start, and Dockerfile), the type cast as "development" | "production" | "test" provides no runtime validation. If executed outside the documented deployment paths, undefined would be passed to createAssetServerFetcher.

Add an explicit fallback:

env: (process.env.NODE_ENV || "production") as "development" | "production" | "test",

This ensures type safety and handles edge cases where NODE_ENV might not be set.

🤖 Prompt for AI Agents
In object-storage/server/main.ts around lines 96 to 99, the code casts
process.env.NODE_ENV to the union type without a runtime fallback which may pass
undefined to createAssetServerFetcher; update the env value to provide a default
(e.g. "production") when NODE_ENV is falsy, then cast to the union type so
createAssetServerFetcher always receives a valid "development" | "production" |
"test" string at runtime.

});

- await s3Client.send(command);
+ await s3Client.file(key).delete();

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

wc -l object-storage/server/tools/storage.ts

Repository: decocms/mcps

Length of output: 98


🏁 Script executed:

sed -n '210,240p' object-storage/server/tools/storage.ts

Repository: decocms/mcps

Length of output: 872


🏁 Script executed:

rg -n "createDeleteObjectTool" object-storage/server/tools/storage.ts -A 20

Repository: decocms/mcps

Length of output: 847


🏁 Script executed:

rg -n "\.delete\(\)" object-storage/server/tools/storage.ts -B 5 -A 5

Repository: decocms/mcps

Length of output: 751


🏁 Script executed:

rg -n "createDeleteObjectTool\|DELETE_OBJECT" object-storage/server/tools/storage.ts -B 2 -A 2

Repository: decocms/mcps

Length of output: 38


🏁 Script executed:

rg -n "DELETE_OBJECT" --type=ts -B 3 -A 3 | head -50

Repository: decocms/mcps

Length of output: 2708


🏁 Script executed:

grep -r "\.delete\(\)" object-storage --type=ts -B 3 -A 3

Repository: decocms/mcps

Length of output: 176


🏁 Script executed:

sed -n '250,280p' object-storage/server/tools/storage.ts

Repository: decocms/mcps

Length of output: 1045


🏁 Script executed:

rg -n "Promise.allSettled" object-storage --type=ts -B 5 -A 10

Repository: decocms/mcps

Length of output: 1359


🏁 Script executed:

rg -n "execute.*async" object-storage/server/tools/storage.ts -A 15

Repository: decocms/mcps

Length of output: 3027


Add error handling to match the batch delete pattern.

The single delete operation lacks error handling, while the batch delete implementation (line 267–277) correctly uses Promise.allSettled() to handle failures. When s3Client.file(key).delete() fails (404, 403, network errors), it throws an unhandled exception. Update the output schema to include an optional error field and wrap the delete in error handling to gracefully report failures to callers, consistent with the DELETE_OBJECTS tool.

🤖 Prompt for AI Agents
In object-storage/server/tools/storage.ts around line 226, the single-object
delete call awaits s3Client.file(key).delete() without error handling and must
mirror the batch delete pattern: update the function's output schema/type to
include an optional error?: string field, wrap the delete in a try/catch, and on
success return the same result shape as batch (e.g., { key, status: 'fulfilled'
}), while on failure catch the error and return { key, status: 'rejected',
error: error.message || String(error) } so callers receive graceful, consistent
failure details.
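One way to apply the suggested fix is a small wrapper that normalizes success and failure into the same result shape the batch tool produces via Promise.allSettled. A sketch with hypothetical names, not the PR's code; the delete function is injected so the error path is visible without an S3 backend:

```typescript
interface DeleteResult {
  key: string;
  status: "fulfilled" | "rejected";
  error?: string;
}

// Run a single delete, reporting failure as data instead of throwing,
// so DELETE_OBJECT and DELETE_OBJECTS return consistent shapes.
async function deleteWithResult(
  key: string,
  doDelete: (key: string) => Promise<void>,
): Promise<DeleteResult> {
  try {
    await doDelete(key);
    return { key, status: "fulfilled" };
  } catch (err) {
    return {
      key,
      status: "rejected",
      error: err instanceof Error ? err.message : String(err),
    };
  }
}
```

In the real tool, doDelete would be something like `(k) => s3Client.file(k).delete()`.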
