Claude/test concierge modal jje yj #84
Open
mirai-gpro wants to merge 109 commits into aigc3d:master from
Conversation
Root cause: defaults.py's default_setup() and default_config_parser() assume a distributed training environment with a writable filesystem. On Cloud Run (read-only /app), this causes silent init failures.
Changes:
- app.py: Skip default_setup() entirely; manually set CPU/single-process config
- app.py: Redirect save_path to /tmp (the only writable dir on Cloud Run)
- app.py: Add GCS FUSE mount path resolution with a Docker-baked fallback
- cloudbuild.yaml: Add a Cloud Storage FUSE volume mount for model serving
- cloudbuild.yaml: Increase max-instances to 4
- Include handoff docs and the full LAM_Audio2Expression codebase
https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
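A minimal sketch of the FUSE-mount fallback and /tmp redirection described above (the mount point, fallback directory, and variable names are assumptions, not the PR's actual code):

```python
import os

GCS_FUSE_MOUNT = "/mnt/models"  # Cloud Storage FUSE volume from cloudbuild.yaml
BAKED_MODELS = "/app/models"    # fallback weights baked into the Docker image

def resolve_model_dir() -> str:
    # Prefer the FUSE mount when present and populated; otherwise fall back
    # to the image-baked copy so a missing mount never crashes startup.
    if os.path.isdir(GCS_FUSE_MOUNT) and os.listdir(GCS_FUSE_MOUNT):
        return GCS_FUSE_MOUNT
    return BAKED_MODELS

# /tmp is the only writable directory on Cloud Run's read-only filesystem.
SAVE_PATH = "/tmp/lam_output"
os.makedirs(SAVE_PATH, exist_ok=True)
```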
The LAM model file was misidentified as .tar but is actually a PyTorch weights file. Gemini renamed it to .pth on GCS. Also source wav2vec2 config.json from the model directory instead of LAM configs/. https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
- Import gourmet-sp from the implementation-testing branch
- Add sendAudioToExpression() to the shop-introduction TTS flow (firstShop and remainingShops now get lip-sync data before playback)
- Remove legacy event hooks in concierge-controller init() (replaced with a clean linkTtsPlayer helper)
- Clean up LAMAvatar.astro: remove legacy frame-playback code (startFramePlaybackFromQueue, stopFramePlayback, frameQueue, etc.)
- Simplify to a single sync mechanism: frameBuffer + ttsPlayer.currentTime
- Reduce the health-check interval from 2s to 10s
https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
Using official LAM sample avatar as placeholder. Will be replaced with custom-generated avatar later. https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
- Add fade-in/fade-out smoothing (6 frames / 200ms) to prevent Gaussian Splat visual distortion at speech start/end
- Parallelize expression generation with TTS synthesis: the remaining sentences' expression is pre-fetched during first-sentence playback, eliminating wait time between segments
- Add fetchExpressionFrames() for background expression fetch with a pendingExpressionFrames buffer-swap pattern
- Apply the same optimization to the shop-introduction flow
https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
The sendAudioToExpression fetch could hang indefinitely (Cloud Run cold start / service down), blocking the await and preventing TTS play().
- Add an AbortController timeout (8s) to all expression API fetches
- Wrap the expression await in Promise.race so TTS plays even if the expression API is slow or down (lip sync degrades gracefully)
- Applied to speakTextGCP, speakResponseInChunks, and the shop flow
https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
Root cause: the sendAudioToExpression fetch hung in the browser, blocking the await and preventing TTS play() from ever being called. Fix: all expression API calls are now fire-and-forget; TTS playback starts immediately without waiting for expression frames. Frames arrive asynchronously and getExpressionData() picks them up in real time from the frameBuffer.
- Remove await/Promise.race from all sendAudioToExpression calls
- Remove fetchExpressionFrames and pendingExpressionFrames (no longer needed; direct fire-and-forget is simpler)
- Keep the AbortController timeout (8s) inside sendAudioToExpression to prevent leaked connections
https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
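The actual change lives in the TypeScript frontend; as a rough illustration of the fire-and-forget shape (never await the expression fetch, bound it with a timeout, let frames land in a buffer), here is a Python asyncio analogue with illustrative names:

```python
import asyncio

EXPRESSION_TIMEOUT_S = 8.0            # mirrors the 8s AbortController budget
frame_buffer: list[list[float]] = []  # renderer reads this in real time

async def fetch_expression_frames(audio: bytes) -> list[list[float]]:
    await asyncio.sleep(0.15)         # stand-in for the audio2exp request
    return [[0.0] * 52]               # one ARKit-style blendshape frame

async def send_audio_to_expression(audio: bytes) -> None:
    try:
        frames = await asyncio.wait_for(
            fetch_expression_frames(audio), EXPRESSION_TIMEOUT_S)
        frame_buffer.extend(frames)
    except asyncio.TimeoutError:
        pass                          # hung request: lip sync degrades gracefully

async def speak(audio: bytes) -> None:
    # Fire-and-forget: schedule the fetch but do NOT await it, so TTS
    # playback starts immediately.
    asyncio.create_task(send_audio_to_expression(audio))
    print("play() starts now, without waiting for expression frames")
    await asyncio.sleep(0.5)          # demo only: let the background task finish

asyncio.run(speak(b"mp3-bytes"))
```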
… calls
Architecture change: expression frames are now returned WITH TTS audio
from the backend, instead of the frontend calling audio2exp directly.
Backend (app_customer_support_modified.py):
- Replace fire-and-forget send_to_audio2exp with get_expression_frames
that returns {names, frames, frame_rate}
- Send MP3 directly to audio2exp (no separate PCM generation needed)
- TTS response: {success, audio, expression: {...}}
- Server-to-server communication: no CORS, stable, fast
Frontend (concierge-controller.ts):
- New queueExpressionFromTtsResponse() reads expression from TTS response
- Remove sendAudioToExpression (direct browser→audio2exp REST calls)
- Remove audio2expApiUrl, audio2expWsUrl, connectLAMAvatarWebSocket
- Remove EXPRESSION_API_TIMEOUT_MS, AbortController timeout
- Existing 1st-sentence-ahead pattern now automatically includes
expression data (no separate API call needed)
https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
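A rough Python sketch of the server-to-server flow this describes (the audio2exp URL, request format, and helper names are assumptions):

```python
import base64
import requests

AUDIO2EXP_URL = "http://audio2exp.internal/generate"  # assumed internal endpoint

def get_expression_frames(mp3_bytes: bytes) -> dict:
    # MP3 is sent directly to audio2exp; no separate PCM generation needed.
    resp = requests.post(AUDIO2EXP_URL,
                         files={"audio": ("tts.mp3", mp3_bytes, "audio/mpeg")},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()  # {"names": [...], "frames": [...], "frame_rate": ...}

def build_tts_response(mp3_bytes: bytes) -> dict:
    # Shape matches the commit message: {success, audio, expression: {...}}
    return {"success": True,
            "audio": base64.b64encode(mp3_bytes).decode(),
            "expression": get_expression_frames(mp3_bytes)}
```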
…orget proxy
- Backend: TTS endpoint no longer blocks on expression generation
- Backend: New /api/audio2expression proxy (server-to-server, CORS-free)
- Frontend: All expression calls use fireAndForgetExpression() (never blocks TTS play)
- Removes the ~2s first-sentence delay caused by synchronous expression in TTS
https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
…aining
Two bugs fixed:
1. Buffer corruption: frames from segment 1 mixed with segment 2 (ttsPlayer.currentTime resets, but the frameBuffer was concatenated) → now clear the buffer before each new TTS segment
2. 3-second delay: expression frames arrived after TTS started playing → pre-fetch the remaining segment's expression during first-segment playback → when the second segment starts, the pre-fetched frames are immediately available
The new prefetchExpression() method returns a Promise with parsed frames, applied non-blocking via .then() so it never delays TTS playback.
https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
Architecture change: the backend includes expression data in the TTS response (server-to-server audio2exp call, ~150ms) instead of using a separate proxy.
- Backend TTS endpoint calls audio2exp synchronously and includes the result
- Frontend applyExpressionFromTts(): instant buffer queue from TTS data
- Proxy fireAndForgetExpression kept as a fallback (timeout/error cases)
- All 5 call sites (speakTextGCP, speakResponseInChunks x2, shop x2) updated
- Removes prefetch complexity (the TTS response already carries expression)
Result: lip sync starts from frame 0, with no 2-3 second gap.
https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
Architecture redesign for true zero-delay TTS playback:
- Backend TTS endpoint starts audio2exp in a background thread and returns audio + expression_token immediately (no blocking)
- New /api/expression/poll endpoint: the frontend polls for the result
- Frontend pollExpression(): fire-and-forget polling at 150ms intervals
- Removes the sync-expression, proxy, and prefetch approaches
Timeline: TTS returns in ~500ms; audio2exp completes ~150ms later (in the background); the frontend's first poll arrives ~200ms after TTS → expression available ~350ms after playback starts. Previously: 2-3 seconds of delay, or TTS blocked.
https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
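A FastAPI-style sketch of the token-and-poll protocol described here (the framework, helper names, and route shapes beyond /api/expression/poll are assumptions; note the next commit reverts to the synchronous design):

```python
import base64
import threading
import uuid

from fastapi import FastAPI

app = FastAPI()
_results: dict[str, list] = {}         # expression_token -> frames

def _synthesize(text: str) -> bytes:   # stand-in for the real TTS backend
    return text.encode()

def _audio2exp(audio: bytes) -> list:  # stand-in for the ~150ms server call
    return [[0.0] * 52]

def _generate(token: str, audio: bytes) -> None:
    _results[token] = _audio2exp(audio)

@app.post("/api/tts")
def tts(text: str) -> dict:
    audio = _synthesize(text)
    token = uuid.uuid4().hex
    # audio2exp runs in a background thread; the TTS response returns at once.
    threading.Thread(target=_generate, args=(token, audio), daemon=True).start()
    return {"success": True,
            "audio": base64.b64encode(audio).decode(),
            "expression_token": token}

@app.get("/api/expression/poll")
def poll(token: str) -> dict:
    frames = _results.pop(token, None)  # frontend polls every ~150ms
    return {"ready": frames is not None, "expression": frames}
```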
…aster response
Backend: revert to sync expression in the TTS response (remove the async cache/polling).
Frontend: replace pollExpression with applyExpressionFromTts (sync from the TTS response).
Frontend: fire sendMessage() immediately while the ack plays (don't await firstAckPromise). pendingAckPromise is awaited before TTS playback to prevent a ttsPlayer conflict.
https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
…nterrupt)
unlockAudioParams() does play→pause→reset on ttsPlayer for the iOS audio unlock. When called during ack playback (parallel LLM mode), it kills the ack audio. Skip it when pendingAckPromise is active (the audio is already unlocked by the ack).
https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
…rentAudio safety
Root cause: the ack "はい" ("yes") gets paused (not ended) by some interruption, so pendingAckPromise never resolves → speakResponseInChunks is stuck forever.
Fix 1: resolve pendingAckPromise on both 'ended' and 'pause' events.
Fix 2: call stopCurrentAudio() after pendingAckPromise resolves to ensure ttsPlayer is clean before new TTS playback.
https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
- Container: max-height 650px → height calc(100dvh - 40px), max-height 960px
- Avatar stage: 140px → 300px (desktop), 100px → 200px (mobile)
- Chat area: min-height 150px guaranteed for message display
https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
Post-init camera: Z 1→0.6 (closer), Y 1.8→1.75 (slight down), FOV 50→36 (zoom in). Eliminates wasted space above avatar head in the 300px avatar-stage. https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
Previous: lookAt y=1.8 (head center) + tight zoom → mouth cut off at bottom. Fix: lower target to y=1.62 (nose/mouth center), adjust OrbitControls target to match. Camera Z=0.55, FOV=38 for balanced framing. https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
targetY 1.62→1.66 (avatar lower in frame), camera Y 1.62→1.72 (above target, slight downward angle instead of looking up from below) https://claude.ai/code/session_01C6n4TZ9PPdx46jCevmVo7P
Key improvements over the existing lam_modal.py:
- @modal.asgi_app() + Gradio 4.x instead of subprocess + patching
- Direct Python integration with the LAM pipeline (no regex patching)
- Blender 4.2 included for GLB generation (OpenAvatarChat format)
- Focused UI for concierge.zip generation with progress feedback
- Proper ASGI serving resolves the Gradio UI display issue on Modal
Pipeline: Image → FLAME Tracking → LAM Inference → Blender GLB → ZIP
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
Major update to concierge_modal.py:
- Custom video upload: VHAP FLAME tracking extracts per-frame expression/pose parameters from the user's own motion video
- Video preprocessing pipeline: frame extraction, face detection (VGGHead), background matting, landmark detection per frame
- VHAP GlobalTracker integration for multi-frame optimization
- Export to NeRF dataset format (transforms.json + flame_param/*.npz)
- Gradio UI: motion-source selector (custom video or sample)
- Preview video with optional audio from the source video
- Max 300 frames (10s @ 30fps) cap for manageable processing
This enables generating a high-quality concierge.zip with custom expressions/movements instead of being limited to preset samples.
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
- Replace add_local_dir("./assets") with HuggingFace downloads for all
required model assets (FLAME tracking, parametric models, LAM assets)
- Remove REQUIRED_ASSET local check since assets are fetched at build time
- Build VHAP config programmatically instead of loading from YAML file
- Remove deprecated allow_concurrent_inputs parameter
- Add flame_vhap symlink for VHAP tracking compatibility
- Add critical file verification in _download_models()
Fixes FileNotFoundError: flame2023.pkl not found in container
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
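A minimal sketch of build-time fetching with a critical-file check (the repo ID follows later commits; target paths and the check list are assumptions, and the real script also verifies files such as flame2023.pkl):

```python
import os
from huggingface_hub import hf_hub_download

def _download_models(root: str = "/root/LAM/model_zoo") -> None:
    # Fetch weights at image build time instead of mounting local assets.
    hf_hub_download(repo_id="3DAIGC/LAM-20K",
                    filename="model.safetensors",
                    local_dir=root)
    # Verify critical files so a bad build fails loudly, not with a
    # FileNotFoundError deep inside FLAME tracking at runtime.
    for name in ("model.safetensors",):
        path = os.path.join(root, name)
        if not os.path.exists(path):
            raise FileNotFoundError(f"critical model file missing: {path}")
```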
Replace container-build-time HuggingFace downloads with add_local_dir to mount model files from the user's local LAM repo. This is faster and avoids a dependency on HuggingFace availability.
- Add _has_model_zoo / _has_assets detection at module level
- Mount ./model_zoo and ./assets via add_local_dir (conditional)
- Add _setup_paths() to bridge directory-layout differences:
  - assets/human_parametric_models → model_zoo/human_parametric_models
  - flame_assets/flame2023.pkl → flame_assets/flame/ (flat layout)
  - flame_vhap symlink for the VHAP tracker
- Add model-file verification with a find-based search
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
Modal requires add_local_dir to be the last image build step. Move _setup_model_paths() from run_function (build time) to _init_lam_pipeline() (container startup) to comply with this. https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
User keeps all models under assets/ (not model_zoo/). Instead of symlinking individual subdirectories, symlink the entire model_zoo -> assets when model_zoo doesn't exist. This bridges lam_models, flame_tracking_models, and human_parametric_models all at once. Also adds model.safetensors to the verification checklist. https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
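The whole-directory bridge amounts to roughly this (container paths are assumptions):

```python
import os

ASSETS = "/root/LAM/assets"        # where the user keeps all models
MODEL_ZOO = "/root/LAM/model_zoo"  # layout the LAM code expects

if not os.path.exists(MODEL_ZOO) and os.path.isdir(ASSETS):
    # A single symlink bridges lam_models, flame_tracking_models, and
    # human_parametric_models at once.
    os.symlink(ASSETS, MODEL_ZOO, target_is_directory=True)
```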
Three files are not available locally and must be downloaded:
- model.safetensors (LAM-20K model weights from 3DAIGC/LAM-20K)
- template_file.fbx and animation.glb (from Ethan18/test_model LAM_assets.tar)
The download runs via run_function BEFORE add_local_dir to satisfy Modal's ordering constraint. Downloads are cached in the image layer.
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
1. Downloaded LAM assets (template_file.fbx, animation.glb) were being overwritten by the add_local_dir mount of assets/. Fix: copy the extracted assets into model_zoo/ during the build so they survive the mount. Update all path references accordingly.
2. Pin gradio==4.44.0 and gradio_client==1.3.0 to avoid the json_schema_to_python_type TypeError on additionalProperties.
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
1. Switch the assets download from Ethan18/test_model (incomplete) to the official 3DAIGC/LAM-assets, which includes sample_oac/ with template_file.fbx and animation.glb.
2. Monkey-patch gradio_client._json_schema_to_python_type to handle a boolean additionalProperties schema (TypeError on bool).
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
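The patch presumably looks something like the following sketch; it assumes the private helper lives in gradio_client.utils and keeps a (schema, defs) signature:

```python
import gradio_client.utils as gc_utils

_orig = gc_utils._json_schema_to_python_type

def _patched(schema, defs=None):
    # JSON Schema permits a bare bool for additionalProperties; older
    # gradio_client versions assume a dict here and raise TypeError.
    if isinstance(schema, bool):
        return "Any"
    return _orig(schema, defs)

gc_utils._json_schema_to_python_type = _patched
```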
Replaces the always-on Gradio UI with a one-shot CLI workflow:
modal run concierge_modal.py --image face.png --video motion.mp4
Flow:
1. GPU spins up, generates concierge.zip (~5-15 min)
2. ZIP auto-downloaded to local disk
3. All containers stop → zero ongoing charges
Also:
- Add a lightweight ui_image for web() (gradio+fastapi only, no CUDA)
- Add a read_volume_file() helper for cross-container ZIP download
- The Gradio UI (modal serve/deploy) still works but is no longer required
Credit impact:
- Before: heavy CUDA image running 24/7 via modal deploy
- After: GPU charges only during generation, then everything stops
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
Root cause of "bird monster" rendering: - Old pipeline ran TWO separate Blender processes: 1) convertFBX2GLB.py: FBX import → GLB export 2) generateVertexIndices.py: OBJ import + 90° rotation → Z-sort - FBX import applies automatic Y-up→Z-up axis conversion - OBJ import + manual 90° rotation produces DIFFERENT Z coordinates - Different Z values → different sort order → vertex_order.json doesn't match actual GLB vertex positions → broken rendering Also confirmed: reference skin.glb has 20,018 vertices (no normals), generated had 54,467 (with normals causing vertex splitting). Fix: single Blender script that imports FBX once, generates vertex_order.json from that mesh, then exports GLB. Guarantees the Z-sorted vertex order matches the actual GLB vertex layout. https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
ui_image and dl_image were defined after the web() function that references them, causing NameError at import time. https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
…on order
Moved the lightweight image definitions right after the main GPU image build section. This ensures they are defined well before any @app.function decorator references them, regardless of the file layout.
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
Shows the full resolved path so users can see exactly where it looked, plus a hint to use absolute paths or place files in CWD. https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
…ocal files
Made --image optional. When omitted, main() keeps the app alive so the Gradio web endpoint stays reachable; the user uploads files via the browser. When --image IS provided, the original headless CLI mode still works.
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
…d mesh
The inline Blender script called bpy.ops.object.transform_apply() on the skinned FBX mesh before GLB export. This bakes the FBX axis-conversion rotation into vertex positions but leaves the armature bone rest poses unchanged, completely breaking the skin binding. The result is a distorted, unrecognizable avatar instead of a proper face.
The original tools/convertFBX2GLB.py correctly omits transform_apply; the glTF exporter handles the object transform during export.
Also corrected misleading comments: the renderer (gaussian-splat-renderer-for-lam) has replaceIndexes=false, so vertex_order.json is NOT used for vertex remapping. The renderer uses a direct 1:1 mapping: ply[i] + morphedMesh[i].
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
Root cause analysis of the "bird monster" avatar:
- Compared the broken ZIP (concierge_now.zip) vs the working ZIP (concierge_fne.zip)
- The GLB mesh is identical: same skinning, inverse bind matrices, bone weights
- The PLY Gaussian attributes are the issue: 83% opacity > 0.9 (vs 4% in the working ZIP), 2-3x larger scales, darker colors, larger offsets from the mesh surface
- The pipeline code is functionally identical to the official app_lam.py
- The model code has @torch.compile decorators on forward_latent_points and the DINOv2 encoder, and torch.compile can produce different numerical results on different GPU architectures (Modal's shared GPU pool)
Fixes:
1. Disable torch._dynamo before model loading, matching the official inference runner (base_inferrer.py line 34), which also disables it
2. Add Gaussian quality validation after inference: warn when >50% of splats are highly opaque or offsets are abnormally large; these are signs of bad model inference that would produce a distorted avatar
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
gs_model.shs can be [N,K,3] (3D) not just [N,3], causing _rgb_mean to be a nested list. Use %-formatting with explicit float() conversions for robustness. https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
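The validation heuristic from these two commits reduces to roughly this (tensor names and thresholds are assumptions about the LAM Gaussian output, not its actual API):

```python
import torch

def validate_gaussians(opacity: torch.Tensor, offset: torch.Tensor) -> None:
    # Healthy runs have ~4% of splats with opacity > 0.9; the broken
    # "bird monster" output had ~83%.
    frac_opaque = (opacity > 0.9).float().mean().item()
    mean_offset = float(offset.norm(dim=-1).mean())
    if frac_opaque > 0.5:
        print(f"[WARN] {frac_opaque:.0%} of splats are near-opaque; "
              "likely bad model inference")
    if mean_offset > 0.05:  # threshold is an assumption
        print(f"[WARN] mean offset {mean_offset:.3f} from the mesh surface "
              "is abnormally large")
```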
…ction
The original _build_model() iterated over checkpoint keys but never checked for model parameters MISSING from the checkpoint. Those params would silently retain their random initialization, potentially causing the 83%-opacity blob.
Changes:
- Replace the _build_model() call with direct ModelLAM creation + load_state_dict(strict=False)
- Log missing keys (model params not in the checkpoint) and unexpected keys
- Add checkpoint file-size logging for integrity verification
- Spot-check critical GS decoder layer weights (mean/std)
- Save the first rendered frame as a PNG for visual quality inspection
- Report weight-loading issues in the generation diagnostics
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
Helps trace whether the divergence starts at the DINOv2 encoder or downstream in the transformer/MLP. https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
The LAM-20K model.safetensors from HuggingFace (3DAIGC/LAM-20K) is 2,356,560,889 bytes. Log a warning if the actual size differs, which could indicate an incomplete download or wrong file. https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
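Concretely, something like this (the function name is assumed; the byte count is the one stated above):

```python
import os

EXPECTED_BYTES = 2_356_560_889  # model.safetensors from 3DAIGC/LAM-20K

def check_checkpoint_size(path: str) -> None:
    actual = os.path.getsize(path)
    if actual != EXPECTED_BYTES:
        print(f"[WARN] {path} is {actual:,} bytes, expected "
              f"{EXPECTED_BYTES:,}; possible incomplete download or wrong file")
```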
FLAME model buffers are loaded from .pkl files during init, so missing from the checkpoint is expected. Only flag non-FLAME missing keys as critical issues that indicate randomly initialized parameters. https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
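Taken together, the loading-and-triage logic of the last two commits looks roughly like this (the model class and key naming are assumptions):

```python
from safetensors.torch import load_file

def load_weights(model, ckpt_path: str) -> None:
    state = load_file(ckpt_path)
    missing, unexpected = model.load_state_dict(state, strict=False)
    # FLAME buffers are populated from .pkl files at init, so their absence
    # from the checkpoint is expected and harmless.
    critical = [k for k in missing if "flame" not in k.lower()]
    if critical:
        print(f"[CRITICAL] {len(critical)} non-FLAME params missing from the "
              f"checkpoint (randomly initialized), e.g. {critical[:3]}")
    if unexpected:
        print(f"[WARN] {len(unexpected)} unexpected checkpoint keys")
```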
Tests model loading + inference directly on the Modal GPU, using app_lam.py's core code path. Reports weight-loading status, Gaussian quality stats, and a verdict on whether the issue is in our pipeline or the environment.
Usage:
modal run concierge_modal.py --smoke-test
modal run concierge_modal.py --smoke-test --image face.png
This isolates the question: is the model on Modal producing bad Gaussians (an environment problem), or is our pipeline code wrong?
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
…M env
Root cause: DINOv2's MemEffAttention (attention.py:72-89) uses xformers.ops.memory_efficient_attention when available, but falls back to the standard Attention.forward() when xformers is not installed. The LAM model was TRAINED with xformers. Without it, the fallback attention produces different features across the 24 DINOv2 ViT-L layers, causing the GS decoder to output ~83% opacity > 0.9 instead of ~4%.
Changes:
- PyTorch 2.2.0 → 2.3.0 (matches scripts/install/install_cu118.sh)
- Add xformers 0.0.26.post1 (matches scripts/install/install_cu118.sh)
- pytorch3d: unpin the version (matches the official requirements.txt)
- Add an xformers availability check at container startup
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
Shows [ENV] line in diagnostics so we can verify the Modal image was actually rebuilt with PyTorch 2.3.0 + xformers 0.0.26.post1. https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
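The startup check plus the [ENV] line can be sketched as follows (the print format is illustrative):

```python
import torch

def log_runtime_env() -> None:
    try:
        import xformers
        import xformers.ops  # memory_efficient_attention lives here
        xformers_ver = xformers.__version__
    except ImportError:
        xformers_ver = None
    print(f"[ENV] torch={torch.__version__} cuda={torch.version.cuda} "
          f"xformers={xformers_ver or 'MISSING'}")
    if xformers_ver is None:
        # Without xformers, the DINOv2 attention fallback diverges from the
        # training-time features, which is exactly the failure above.
        print("[ENV][CRITICAL] xformers not installed")
```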
Extracts the verified inference pipeline from concierge_modal.py into a standalone Dockerfile + app_concierge.py that runs on the HF Spaces Docker SDK or any Docker+GPU environment.
- Dockerfile: nvidia/cuda:11.8 base with PyTorch 2.3.0, xformers 0.0.26.post1, Blender 4.2, all CUDA extensions pre-built
- app_concierge.py: single-process Gradio app, no Modal dependencies, same generation pipeline (FLAME tracking + VHAP + LAM + Blender GLB)
- download_models.py: fetches all model weights during the Docker build
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
Integrated several fixes to improve model loading and GLB export. Updated comments for clarity and ensured proper usage of official tools.
concierge_modal.py had a hand-written inline convert_and_order.py Blender script that re-implemented the GLB-generation logic from tools/generateARKITGLBWithBlender.py. This "re-invention" diverged from the official pipeline in subtle ways:
1. The vertex order was generated from the FBX mesh (using matrix_world for Z), while the official generateVertexIndices.py imports the OBJ and applies a 90-degree rotation before sorting by Z.
2. The inline script combined GLB export + vertex ordering in one Blender session, bypassing the official convertFBX2GLB.py and generateVertexIndices.py scripts.
Now we call generate_glb() directly (the same function app_lam.py uses), which runs the official Blender scripts (convertFBX2GLB.py, generateVertexIndices.py) and handles temp-file cleanup internally. This eliminates the inline Blender script as a potential source of quality divergence, while keeping all other improvements intact (xformers, weight validation, diagnostics, torch.compile disabled).
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
…leanup
Root cause: lam/, vhap/, configs/, external/, and app_lam.py were never mounted into the Modal container. The container was using stale upstream code from `git clone github.com/aigc3d/LAM`; local modifications had no effect.
Changes:
- Mount lam/, configs/, vhap/, external/, app_lam.py, app_concierge.py
- Add a BUILD_VERSION env var to force an image rebuild on every deploy
- Clear old Volume output (concierge.zip, preview.mp4, etc.) before each generation to prevent returning stale cached results
- Log BUILD_VERSION on GPU worker startup for verification
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
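In Modal terms, the mount-and-version pattern is roughly the following sketch (paths, dependencies, and the app name are illustrative):

```python
import datetime
import modal

BUILD_VERSION = datetime.datetime.utcnow().isoformat()  # changes every deploy

image = (
    modal.Image.debian_slim(python_version="3.10")
    .pip_install("torch")                   # heavy deps elided
    .env({"BUILD_VERSION": BUILD_VERSION})  # busts the image cache
    # add_local_dir must come last; local edits are now always reflected.
    .add_local_dir("lam", remote_path="/root/LAM/lam")
    .add_local_dir("configs", remote_path="/root/LAM/configs")
    .add_local_dir("vhap", remote_path="/root/LAM/vhap")
)
app = modal.App("concierge", image=image)
```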
…ling
- Replace bare-bones UI with polished design using gr.themes.Soft() and
custom CSS (gradient header, step labels, tip/usage boxes)
- Add step-by-step guided input flow (Step 1/2/3 labels with tips)
- GPU worker now writes intermediate progress to per-job JSON on Volume,
so UI displays real pipeline step names instead of just elapsed time
- Poll interval reduced from 5s to 2s for snappier feedback
- Per-job file scoping (tracked_{job_id}.png etc.) for multi-user safety
- Proper error propagation: GPU thread errors surface in UI instead of
being silently swallowed by except/pass
- 30-minute timeout guard to prevent infinite polling
- Output section with autoplay preview, usage instructions, and labeled
visualization panels
https://claude.ai/code/session_01XXVR6KsYFAQiJjHvdzCzoK
Updated the concierge ZIP generator script with final fixes and optimizations. Adjusted error handling, improved file management, and ensured consistent behavior with the official tools.
…pping
generate_glb() internally calls gen_vertex_order_with_blender(), which creates the correct Blender-based vertex_order.json in oac_dir. The removed code was overwriting it with a simple list(range(N)), which destroys the FLAME-to-GLB vertex remapping needed for correct animation.
https://claude.ai/code/session_01TDgrP1FjR9uk15gX5rJebC
…s (v3)
Root cause of the "bird monster" and "changes not reflected" issues:
- lam/, vhap/, external/, and configs/ were NOT mounted into the container. They came only from the git clone during the image build, so local code changes were never reflected in the running container.
- An old concierge.zip persisted in the Volume across runs, so the UI kept serving stale results even after redeployment.
Changes:
1. Mount all local source dirs (lam/, vhap/, external/, configs/, app_lam.py) into the container so edits are always reflected.
2. Volume cleanup: remove stale outputs (zip, mp4, png) before each generation on both the UI and GPU sides.
3. CUDA arch list: add 8.9 (Ada/L4) alongside 8.6 (Ampere).
4. Runtime diagnostics: log xformers availability, PyTorch/CUDA versions, GPU name, and weight-loading statistics, with [CRITICAL] alerts for shape mismatches.
5. Image version tag (_IMAGE_VERSION) to force a Modal image rebuild.
https://claude.ai/code/session_01TDgrP1FjR9uk15gX5rJebC
The external/ directory contains cpu_nms.pyx, which is compiled to cpu_nms.cpython-310-*.so during the image build (git clone + build_ext). Mounting the local external/ (which has only source files, no .so) overwrites the compiled extension, causing an ImportError at runtime → Internal Server Error.
Also:
- Wrap the gradio_client._json_schema_to_python_type patch in try/except for compatibility with newer gradio versions
- Bump IMAGE_VERSION to v3.1 to force a rebuild
https://claude.ai/code/session_01TDgrP1FjR9uk15gX5rJebC
add_local_file() was added in Modal SDK v0.66.40. If the user has an older version, the method doesn't exist and the module fails to load at import time, causing Internal Server Error on all endpoints. app_lam.py is already in the container from git clone, so no need to mount it separately. https://claude.ai/code/session_01TDgrP1FjR9uk15gX5rJebC
@modal.concurrent is incompatible with @modal.asgi_app() in some Modal SDK versions and can cause Internal Server Error. ASGI apps handle concurrency natively via FastAPI/uvicorn, so the decorator is unnecessary. https://claude.ai/code/session_01TDgrP1FjR9uk15gX5rJebC