Objectstorage outside cf #33
Conversation
- Changed the version of @decocms/runtime to 0.0.1-testing-beta.3 in package.json and bun.lock.
- Added hono as a new dependency.
- Refactored the development scripts in package.json for improved client and server build processes.
- Removed the wrangler.toml file as part of the project restructuring.
- Updated the environment context references in server code from DECO_CHAT_REQUEST_CONTEXT to DECO_REQUEST_CONTEXT for consistency.
- Adjusted Vite configuration to remove Cloudflare plugin integration, streamlining the build process for Bun.
- Replaced AWS SDK imports with Bun's built-in S3 client functionality in s3-client.ts.
- Updated storage tools to use the new S3 client methods for listing, retrieving, and deleting objects.
- Removed AWS SDK dependencies from package.json, streamlining the project and reducing external dependencies.
- Adjusted methods for generating presigned URLs and handling object metadata to align with the new S3 client structure.
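For context on what this swap looks like at a call site, here is a minimal sketch assuming Bun's documented S3Client API; the region, bucket name, and object key are illustrative, not taken from this PR:

import { S3Client } from "bun";

// Bun's built-in client replaces @aws-sdk/client-s3 and
// @aws-sdk/s3-request-presigner with a single constructor.
const s3 = new S3Client({
  region: "us-east-1", // illustrative
  accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  bucket: "example-bucket", // illustrative
});

// Presigned GET URL (formerly GetObjectCommand + getSignedUrl);
// presign() returns the URL string synchronously.
const url = s3.file("docs/report.pdf").presign({ expiresIn: 3600 });

// Delete a single object (formerly DeleteObjectCommand via client.send()).
await s3.file("docs/report.pdf").delete();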
Walkthrough
The pull request migrates the object-storage project from AWS SDK S3 and Cloudflare Workers to a Bun-based S3 client. It introduces containerization via a Dockerfile, updates build scripts and dependencies, removes the Wrangler configuration, eliminates legacy Vite plugin code, and renames environment context properties across shared utilities.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
shared/tools/user.ts (1)
31-39: Update the stale documentation. The comment references the old property name DECO_CHAT_REQUEST_CONTEXT, but the property has been renamed to DECO_REQUEST_CONTEXT. Apply this diff to update the comment:
 /**
  * `createPrivateTool` is a wrapper around `createTool` that
- * will call `env.DECO_CHAT_REQUEST_CONTEXT.ensureAuthenticated`
+ * will call `env.DECO_REQUEST_CONTEXT.ensureAuthenticated`
  * before executing the tool.
  *
  * It automatically returns a 401 error if valid user credentials
  * are not present in the request. You can also call it manually
  * to get the user object.
  */

object-storage/server/main.ts (1)
1-6: Update stale documentation. The comment still references "Cloudflare workers app," but this has been migrated to a Bun-based runtime.
Apply this diff to update the comment:
 /**
  * This is the main entry point for your application and
- * MCP server. This is a Cloudflare workers app, and serves
- * both your MCP server at /mcp and your views as a react
- * application at /.
+ * MCP server. This is a Bun-based runtime that serves both
+ * your MCP server at /mcp and your views as a React
+ * application at /.
  */
🧹 Nitpick comments (2)
object-storage/Dockerfile (1)
1-37: Consider adding security and operational improvements. The Dockerfile builds and runs correctly, but consider these enhancements:
- Security: The container runs as root. Add a non-root user for better security:
# Add before CMD in runtime stage
RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 bunuser && \
    chown -R bunuser:nodejs /app
USER bunuser
- Observability: Add a health check for container orchestration:
# Add before CMD
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1
- Port configuration: Consider using an environment variable instead of hardcoding port 3000 for flexibility.
object-storage/server/tools/storage.ts (1)
266-283: Consider rate limiting for large batch deletes. The parallel delete implementation using Promise.allSettled is correct and handles errors appropriately. However, be aware that deleting 1000 objects in parallel (the maximum allowed per the input schema) could:
- Overwhelm the S3 service with concurrent requests, potentially triggering rate limits
- Create performance issues compared to AWS SDK's native batch delete
Consider adding concurrency control for large batches, e.g., using a library like p-limit to cap concurrent deletes. Example with concurrency control:

import pLimit from 'p-limit';

// In the execute function:
const limit = pLimit(50); // Max 50 concurrent deletes
const results = await Promise.allSettled(
  keys.map((key: string) => limit(() => s3Client.file(key).delete()))
);
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
bun.lock is excluded by !**/*.lock
📒 Files selected for processing (11)
- object-storage/Dockerfile (1 hunks)
- object-storage/package.json (3 hunks)
- object-storage/server/lib/s3-client.ts (3 hunks)
- object-storage/server/main.ts (3 hunks)
- object-storage/server/tools/storage.ts (6 hunks)
- object-storage/tsconfig.json (1 hunks)
- object-storage/vite.config.ts (1 hunks)
- object-storage/wrangler.toml (0 hunks)
- shared/deco-vite-plugin.ts (0 hunks)
- shared/package.json (1 hunks)
- shared/tools/user.ts (2 hunks)
💤 Files with no reviewable changes (2)
- shared/deco-vite-plugin.ts
- object-storage/wrangler.toml
🧰 Additional context used
🧬 Code graph analysis (1)
object-storage/server/main.ts (3)
- nanobanana/server/main.ts (1): Env (22-27)
- template-minimal/server/main.ts (1): Env (18-23)
- template-with-view/server/main.ts (1): Env (20-25)
🔇 Additional comments (9)
object-storage/tsconfig.json (1)
36-36: LGTM! The removal of @cloudflare/workers-types is consistent with the migration away from Cloudflare Workers to a Bun-based runtime.
shared/package.json (1)
20-20: This dependency is required and actively used across the monorepo. The @cloudflare/vite-plugin is imported in vite.config.ts files across all major projects (nanobanana, template-minimal, template-with-view, veo, sora, replicate, pinecone, readonly-sql, openrouter, gemini-pro-vision, data-for-seo, apify, and datajud). It's also referenced in scripts/new.ts as a standard devDependency for template generation. The dependency in shared/package.json is appropriate and not an unintended leftover.
object-storage/server/lib/s3-client.ts (1)
5-5: Bun's S3Client is properly configured with full compatibility verified. The implementation correctly supports S3-compatible providers (R2, MinIO, GCS) through custom endpoint configuration. The API surface is compatible with all operations used in storage.ts (list, stat, presign with GET/PUT methods, delete), and presigned URLs work correctly with custom endpoints via the s3Client.file().presign() method.
object-storage/package.json (1)
19-19: These alpha dependencies appear to be private/internal packages from DecoCMS, not publicly published on npm. The project depends on @decocms/runtime@1.0.0-alpha.2 and @decocms/vite-plugin@1.0.0-alpha.1. Since these packages are not found in the public npm registry, they are likely distributed through a private registry or an internal package source specific to DecoCMS. This changes the risk profile from the typical open-source alpha stability concerns: stability and breaking changes would need to be managed directly with the DecoCMS team rather than through public npm tracking. Clarify the package distribution mechanism and establish communication channels with the DecoCMS maintainers for updates and breaking-change notifications.
object-storage/vite.config.ts (1)
1-16: LGTM! Note the alpha dependency. The Vite configuration correctly integrates the Bun-focused Deco tooling. The migration to the Deco plugin with target: "bun" is consistent with the broader PR changes.
Note: The project uses @decocms/vite-plugin version 1.0.0-alpha.1 (in devDependencies). As a pre-release version, monitor it for breaking changes and stability issues before production deployment.
object-storage/server/tools/storage.ts (4)
151-154: LGTM! The migration to Bun's presign API for GET operations is clean and preserves the expiration logic.
195-199: LGTM! The migration to Bun's presign API for PUT operations correctly passes the content type.
61-82: The code correctly uses Bun's S3Client API as implemented. The concerns raised are based on incorrect assumptions:
- eTag field naming (line 76): Bun's list() method returns objects with eTag (capital T), not lowercase etag. The code correctly accesses obj.eTag ?? "", which is verified by comparing with the stat() method that returns etag (lowercase); these are two different response formats from different Bun APIs. The mapping to the output schema field etag (lowercase) is correct.
- Continuation token approach (lines 78-80): Bun's list() API supports the startAfter parameter, which accepts a string key for pagination. Using the last object's key as the nextContinuationToken is the correct pagination approach for Bun's S3Client, as confirmed by the README documentation stating "Handle large buckets with continuation tokens."
Likely an incorrect or invalid review comment.
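To illustrate, a sketch of that startAfter-based pagination under the assumptions above (Bun's list() returning { contents: [{ key, eTag, ... }] }); the generator name and page size are hypothetical:

import { S3Client } from "bun";

// Walk a bucket page by page, feeding the last key of each page back in
// as the next startAfter cursor, as described in the comment above.
async function* listAllObjects(s3: S3Client, prefix: string, pageSize = 100) {
  let startAfter: string | undefined;
  while (true) {
    const page = await s3.list({ prefix, maxKeys: pageSize, startAfter });
    const contents = page.contents ?? [];
    yield* contents;
    if (contents.length < pageSize) break; // short page: nothing left
    startAfter = contents[contents.length - 1].key;
  }
}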
111-119: The metadata limitation is documented correctly and does not break existing code. The metadata field is optional in the GET_OBJECT_METADATAOutput interface, so returning undefined is valid and expected. The implementation correctly reflects that Bun's stat() method does not provide custom S3 metadata. This is not a breaking change, since consumers should already handle the optional metadata field gracefully.
Regarding error handling: the function intentionally allows errors to propagate when the object doesn't exist. This is consistent with other single-object operations in the same file (e.g., GET_PRESIGNED_URL, DELETE_OBJECT) and is an acceptable design pattern.
Likely an incorrect or invalid review comment.
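For illustration, a sketch of what stat() does expose under the behavior described above (the key is hypothetical; custom x-amz-meta-* values are notably absent):

// stat() reports size, etag, lastModified, and type, but no custom
// metadata, which is why GET_OBJECT_METADATA returns metadata: undefined.
const info = await s3Client.file("docs/report.pdf").stat();
console.log(info.size, info.etag, info.lastModified, info.type);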
| "gen": "deco gen --output=shared/deco.gen.ts", | ||
| "dev": "concurrently \"bun run dev:client\" \"NODE_ENV=development bun --hot run server/main.ts\"", | ||
| "dev:tunnel": "deco link -p 3000 -- bun run dev", | ||
| "dev:client": "bun --bun vite dev", | ||
| "build:client": "bun --bun vite build", | ||
| "build:server": "NODE_ENV=production bun build server/main.ts --target=bun --outfile=dist/server/main.js", | ||
| "build": "bun run build:client && bun run build:server", | ||
| "start": "NODE_ENV=production bun run dist/server/main.js" |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify build artifacts match Dockerfile expectations
echo "=== Checking Dockerfile expectations ==="
if [ -f "object-storage/Dockerfile" ]; then
rg -n "dist/(client|server)" object-storage/Dockerfile
else
echo "Dockerfile not found at object-storage/Dockerfile"
fi
echo -e "\n=== Checking build script outputs ==="
rg -n "outfile=|outdir=" object-storage/package.json
echo -e "\n=== Checking vite build output config ==="
if [ -f "object-storage/vite.config.ts" ]; then
rg -n "build\.(outDir|rollupOptions)" object-storage/vite.config.ts
else
echo "vite.config.ts not found at object-storage/vite.config.ts"
fi
Repository: decocms/mcps
Length of output: 292
🏁 Script executed:
#!/bin/bash
# Find Dockerfile and check full vite config
echo "=== Listing Dockerfile possibilities ==="
find object-storage -name "Dockerfile*" -type f 2>/dev/null
echo -e "\n=== Full vite.config.ts content ==="
if [ -f "object-storage/vite.config.ts" ]; then
cat object-storage/vite.config.ts
else
echo "vite.config.ts not found"
fi
echo -e "\n=== Check for root Dockerfile ==="
if [ -f "Dockerfile" ]; then
echo "Root Dockerfile exists"
head -50 Dockerfile
fi
Repository: decocms/mcps
Length of output: 600
🏁 Script executed:
#!/bin/bash
cat object-storage/Dockerfile
Repository: decocms/mcps
Length of output: 689
🏁 Script executed:
#!/bin/bash
# Check for any build output references or issues
echo "=== Checking package.json build commands ==="
rg -A 2 '"build"' object-storage/package.json
echo -e "\n=== Checking if there are any .gitignore entries for dist ==="
rg -n "^dist" object-storage/.gitignore 2>/dev/null || echo "No .gitignore or dist entry not found"
echo -e "\n=== Checking build:client vite command specifics ==="
rg -B 2 -A 2 "dev:client|build:client" object-storage/package.json
Repository: decocms/mcps
Length of output: 946
Verify the vite build output configuration and document the expected build structure.
The build scripts orchestrate client and server builds, but there's a critical configuration gap: vite.config.ts does not explicitly set build.outDir, so bun run build:client will output to dist/ by default rather than dist/client. This creates a mixed directory structure where client assets and the server bundle (dist/server/main.js) coexist at the root of dist/.
To match the apparent intent (separate client and server builds):
- Either add build: { outDir: 'dist/client' } to vite.config.ts, as sketched below.
- Or update documentation to clarify that both outputs are intentionally merged in dist/.
The Dockerfile copies all of dist/ without issues, but the build structure should be explicitly defined to avoid confusion and prevent accidental asset collisions.
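A minimal sketch of the first option, assuming the Deco plugin wiring this PR introduces in vite.config.ts (the plugin's default-export name is an assumption):

// vite.config.ts: pin the client bundle under dist/client so it cannot
// collide with the server bundle emitted at dist/server/main.js.
import { defineConfig } from "vite";
import deco from "@decocms/vite-plugin"; // assumed default export

export default defineConfig({
  plugins: [deco({ target: "bun" })],
  build: {
    outDir: "dist/client",
  },
});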
🤖 Prompt for AI Agents
In object-storage/package.json around lines 9 to 16, the client build currently
uses Vite's default output (dist/) which will collide with the server bundle at
dist/server/main.js; either set an explicit outDir in vite.config.ts (e.g.,
build.outDir = "dist/client") so client assets go to dist/client, or update
README/docs to state builds intentionally share dist/ and ensure Dockerfile and
any static-serving paths expect that layout; pick one approach and make the
corresponding change (edit vite.config.ts to add build.outDir or update
documentation and any downstream copy paths) so the build output structure is
explicit and deterministic.
 const config: ConstructorParameters<typeof S3Client>[0] = {
   region: state.region,
-  credentials: {
-    accessKeyId: state.accessKeyId,
-    secretAccessKey: state.secretAccessKey,
-  },
+  accessKeyId: state.accessKeyId,
+  secretAccessKey: state.secretAccessKey,
+  bucket: state.bucketName,
 };

 // Add custom endpoint if provided (for S3-compatible storage like R2, MinIO, etc.)
 if (state.endpoint) {
   config.endpoint = state.endpoint;
   // For S3-compatible services, we often need to force path style
   config.forcePathStyle = true;
 }
🧩 Analysis chain
🌐 Web query:
Bun S3Client path-style URLs custom endpoint configuration
💡 Result:
Short answer: use S3Client's endpoint option together with virtualHostedStyle. Set endpoint to your custom URL and set virtualHostedStyle: false to force path-style URLs (or omit it if Bun's default is already path-style for custom endpoints), e.g.:
import { S3Client } from "bun";
const s3 = new S3Client({
accessKeyId: "AKIA...",
secretAccessKey: "...",
bucket: "my-bucket",
endpoint: "http://localhost:9000", // or your custom S3-compatible URL
virtualHostedStyle: false, // force path-style URLs
});
Bun docs show using endpoint for custom backends and the virtualHostedStyle flag to choose virtual-hosted vs path-style behavior. [1][2]
Note: AWS recommends virtual-hosted style and has communicated deprecation plans for path-style access, so prefer virtualHostedStyle: true for AWS S3 (or when interoperable). [3]
Sources:
- Bun S3 docs / S3Client (endpoint, virtualHostedStyle examples). [1]
- Bun API reference for S3Client. [2]
- AWS documentation on path-style vs virtual-hosted–style and deprecation guidance. [3]
🏁 Script executed:
# Check for any previous implementations or history of forcePathStyle in the repo
git log --all --source --remotes -S "forcePathStyle" -- "object-storage/" 2>/dev/null | head -20
Repository: decocms/mcps
Length of output: 273
🏁 Script executed:
# Search for any test files or documentation related to S3 configuration
fd -e test.ts -e spec.ts -e test.js -e docs.md -o -e "\.md" . object-storage/ 2>/dev/null | head -20
Repository: decocms/mcps
Length of output: 38
🏁 Script executed:
# Check the full s3-client.ts file for context
wc -l object-storage/server/lib/s3-client.ts
Repository: decocms/mcps
Length of output: 97
🏁 Script executed:
cat -n object-storage/server/lib/s3-client.ts
Repository: decocms/mcps
Length of output: 1934
🏁 Script executed:
# Search for virtualHostedStyle usage in the codebase
rg "virtualHostedStyle" object-storage/Repository: decocms/mcps
Length of output: 38
🏁 Script executed:
# Check if there are any S3-compatible provider tests or documentation
find object-storage/ -type f \( -name "*.md" -o -name "*.test.ts" -o -name "*.spec.ts" \) | xargs cat 2>/dev/null | head -100
Repository: decocms/mcps
Length of output: 2914
Add virtualHostedStyle: false when using S3-compatible endpoints.
Bun's S3Client does not automatically handle path-style URLs for custom endpoints—it requires explicit configuration via the virtualHostedStyle option. S3-compatible providers like MinIO and R2 often require path-style URLs, which means the current code will fail or behave unexpectedly with these providers.
Update the configuration to set virtualHostedStyle: false when a custom endpoint is provided:
if (state.endpoint) {
  config.endpoint = state.endpoint;
  config.virtualHostedStyle = false;
}

🤖 Prompt for AI Agents
In object-storage/server/lib/s3-client.ts around lines 18 to 28, the S3Client
config does not disable virtualHostedStyle for custom endpoints, which causes
S3-compatible providers (MinIO, R2) to require path-style URLs and fail; update
the config when state.endpoint is present to set config.endpoint =
state.endpoint and also set config.virtualHostedStyle = false so the client uses
path-style addressing for custom S3-compatible endpoints.
fetch: createAssetServerFetcher({
  env: process.env.NODE_ENV as "development" | "production" | "test",
  assetsDirectory: "./dist/client",
}),
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check NODE_ENV usage in package.json scripts, Dockerfile, and fallbacks
echo "=== Checking package.json scripts ==="
if [ -f "object-storage/package.json" ]; then
rg -n "NODE_ENV" object-storage/package.json
else
echo "package.json not found at object-storage/package.json"
fi
echo -e "\n=== Checking Dockerfile ==="
if [ -f "object-storage/Dockerfile" ]; then
rg -n "NODE_ENV" object-storage/Dockerfile
else
echo "No Dockerfile found at object-storage/Dockerfile, searching for any Dockerfile..."
find object-storage -name "Dockerfile*" -type f 2>/dev/null
fi
echo -e "\n=== Checking if there's a fallback for undefined NODE_ENV ==="
rg -n "NODE_ENV.*\|\|" object-storage/ 2>/dev/null || echo "No fallback patterns found"
echo -e "\n=== Checking main.ts around the problematic lines ==="
sed -n '90,110p' object-storage/server/main.ts 2>/dev/null || echo "File not accessible"
Repository: decocms/mcps
Length of output: 985
🏁 Script executed:
#!/bin/bash
# Find and examine createAssetServerFetcher implementation
echo "=== Searching for createAssetServerFetcher definition ==="
rg -n "createAssetServerFetcher" object-storage/ -A 5
echo -e "\n=== Checking the import location ==="
rg -n "import.*createAssetServerFetcher" object-storage/
echo -e "\n=== Searching across entire repo ==="
rg -n "function createAssetServerFetcher\|export.*createAssetServerFetcher" --type tsRepository: decocms/mcps
Length of output: 1056
Add explicit fallback for NODE_ENV to improve type safety.
The code relies on process.env.NODE_ENV being set, and while it is properly set in all deployment scripts (dev, build, start, and Dockerfile), the type cast as "development" | "production" | "test" provides no runtime validation. If executed outside the documented deployment paths, undefined would be passed to createAssetServerFetcher.
Add an explicit fallback:
env: (process.env.NODE_ENV || "production") as "development" | "production" | "test",
This ensures type safety and handles edge cases where NODE_ENV might not be set.
🤖 Prompt for AI Agents
In object-storage/server/main.ts around lines 96 to 99, the code casts
process.env.NODE_ENV to the union type without a runtime fallback which may pass
undefined to createAssetServerFetcher; update the env value to provide a default
(e.g. "production") when NODE_ENV is falsy, then cast to the union type so
createAssetServerFetcher always receives a valid "development" | "production" |
"test" string at runtime.
 });

-await s3Client.send(command);
+await s3Client.file(key).delete();
🧩 Analysis chain
🏁 Script executed:
wc -l object-storage/server/tools/storage.ts
Repository: decocms/mcps
Length of output: 98
🏁 Script executed:
sed -n '210,240p' object-storage/server/tools/storage.ts
Repository: decocms/mcps
Length of output: 872
🏁 Script executed:
rg -n "createDeleteObjectTool" object-storage/server/tools/storage.ts -A 20Repository: decocms/mcps
Length of output: 847
🏁 Script executed:
rg -n "\.delete\(\)" object-storage/server/tools/storage.ts -B 5 -A 5Repository: decocms/mcps
Length of output: 751
🏁 Script executed:
rg -n "createDeleteObjectTool\|DELETE_OBJECT" object-storage/server/tools/storage.ts -B 2 -A 2Repository: decocms/mcps
Length of output: 38
🏁 Script executed:
rg -n "DELETE_OBJECT" --type=ts -B 3 -A 3 | head -50Repository: decocms/mcps
Length of output: 2708
🏁 Script executed:
grep -r "\.delete\(\)" object-storage --type=ts -B 3 -A 3
Repository: decocms/mcps
Length of output: 176
🏁 Script executed:
sed -n '250,280p' object-storage/server/tools/storage.ts
Repository: decocms/mcps
Length of output: 1045
🏁 Script executed:
rg -n "Promise.allSettled" object-storage --type=ts -B 5 -A 10Repository: decocms/mcps
Length of output: 1359
🏁 Script executed:
rg -n "execute.*async" object-storage/server/tools/storage.ts -A 15Repository: decocms/mcps
Length of output: 3027
Add error handling to match the batch delete pattern.
The single delete operation lacks error handling, while the batch delete implementation (lines 267–277) correctly uses Promise.allSettled() to handle failures. When s3Client.file(key).delete() fails (404, 403, network errors), it throws an unhandled exception. Update the output schema to include an optional error field and wrap the delete in error handling so failures are reported gracefully to callers, consistent with the DELETE_OBJECTS tool.
🤖 Prompt for AI Agents
In object-storage/server/tools/storage.ts around line 226, the single-object
delete call awaits s3Client.file(key).delete() without error handling and must
mirror the batch delete pattern: update the function's output schema/type to
include an optional error?: string field, wrap the delete in a try/catch, and on
success return the same result shape as batch (e.g., { key, status: 'fulfilled'
}), while on failure catch the error and return { key, status: 'rejected',
error: error.message || String(error) } so callers receive graceful, consistent
failure details.