
perf: Optimize long-content Pretty performance in sessions/messages #860

Open
tesgth032 wants to merge 30 commits into ding113:dev from tesgth032:perf/sessions-messages-pretty

Conversation

@tesgth032
Contributor

@tesgth032 tesgth032 commented Mar 2, 2026

Summary

Optimize the performance of the CodeDisplay component on the /dashboard/sessions/[sessionId]/messages page when handling large content (large JSON, headers, long text). A multi-level degradation strategy (plain-text display → virtualized syntax highlighting → Web Worker processing) resolves the long main-thread tasks and page freezes caused by react-syntax-highlighter generating massive amounts of DOM.

Problem

On the session message detail page, when the Pretty view renders very long content:

  • react-syntax-highlighter generates a huge number of DOM nodes, blocking the main thread
  • The page stutters noticeably or freezes entirely
  • The user experience degrades severely; large content cannot be viewed smoothly

Related Issues & PRs

Related Issues

Related PRs

Solution

This PR adopts a three-layer degradation strategy that renders large content smoothly without pulling in a heavyweight editor such as CodeMirror or Monaco:

Strategy 1: Plain-text fallback (enabled by default)

  • When content exceeds a threshold (CCH_CODEDISPLAY_HIGHLIGHT_MAX_CHARS, default 30000 characters), automatically fall back to a plain textarea
  • Copy, download, and search remain available
  • Avoids the DOM explosion caused by syntax highlighting
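The fallback decision above can be sketched as a small pure function. This is a hedged illustration: the function and type names are hypothetical, not the PR's actual API; only the threshold semantics come from the description.

```typescript
// Hypothetical sketch of the plain-text fallback decision.
// `maxChars` corresponds to CCH_CODEDISPLAY_HIGHLIGHT_MAX_CHARS (default 30000).
type RenderStrategy = "highlight" | "plain";

function pickRenderStrategy(
  content: string,
  largePlainEnabled: boolean,
  maxChars = 30000,
): RenderStrategy {
  // Fall back to a plain textarea only when the feature is on and
  // the content exceeds the highlight threshold.
  if (largePlainEnabled && content.length > maxChars) return "plain";
  return "highlight";
}
```

The check is intentionally a cheap length comparison, so it can run on every render without measuring the content itself.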

Strategy 3: Virtualized syntax highlighting (opt-in)

  • Enabled via CCH_CODEDISPLAY_VIRTUAL_HIGHLIGHT=true
  • Only content within the visible window is syntax-highlighted (virtualization)
  • The UI offers a Plain/Virtual view toggle
  • If index building fails, the component falls back to the Plain view automatically, avoiding a blank page
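The visible-window math behind this kind of virtualization can be sketched as follows. The defaults mirror the CCH_CODEDISPLAY_VIRTUAL_LINE_HEIGHT_PX and CCH_CODEDISPLAY_VIRTUAL_OVERSCAN_LINES settings documented below; the function name is an assumption, not the component's real export.

```typescript
// Compute the inclusive range of lines to render for a virtualized viewer.
// Only lines in [start, end] are passed to the syntax highlighter.
function computeVisibleRange(
  scrollTop: number,
  viewportHeight: number,
  totalLines: number,
  lineHeightPx = 18, // CCH_CODEDISPLAY_VIRTUAL_LINE_HEIGHT_PX
  overscan = 50,     // CCH_CODEDISPLAY_VIRTUAL_OVERSCAN_LINES
): { start: number; end: number } {
  const firstVisible = Math.floor(scrollTop / lineHeightPx);
  const visibleCount = Math.ceil(viewportHeight / lineHeightPx);
  // Overscan renders extra lines above and below so fast scrolling
  // does not show blank gaps before the next highlight pass.
  const start = Math.max(0, firstVisible - overscan);
  const end = Math.min(totalLines - 1, firstVisible + visibleCount + overscan);
  return { start, end };
}
```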

Worker-based architecture

  • JSON Pretty formatting (with progress, cancellation, and an output cap) moved to a Web Worker
  • Line-index building moved to a Web Worker
  • only-matches search moved to a Web Worker
  • Automatic fallback to a main-thread implementation when the Worker is unavailable

Changes

Core Changes

File Change
src/components/ui/code-display.tsx Core component refactor integrating the multi-level strategy, Worker communication, and virtualization support (+996/-235)
src/components/ui/code-display.worker.ts Web Worker implementation handling JSON pretty, line indexing, and search (+847)
src/components/ui/code-display-worker-client.ts Worker communication client with progress callbacks and cancellation (+432)
src/components/ui/code-display-virtual-highlighter.tsx Virtualized syntax-highlighting component rendering only the visible area (+309)
src/components/ui/code-display-config.ts Configuration definitions, including env-var reading and defaults (+154)

Supporting Changes

File Change
src/components/ui/code-display-matches-list.tsx Search-results list component (+131)
tests/unit/code-display-perf-ui.test.tsx Unit tests covering large-content strategies and fallback paths (+323)
src/components/ui/code-display-plain-textarea.tsx Plain-text display component (+40)
src/components/ui/code-display-config-context.tsx Configuration context provider (+27)
.env.example New CodeDisplay-related configuration (+17)
messages/*/dashboard.json i18n translations (5 languages) (+8 per file)
src/app/[locale]/layout.tsx Inject the configuration context (+3/-1)
src/app/providers.tsx Add the configuration Provider (+17/-12)

Files Changed Summary

  • New files: 9
  • Modified files: 6
  • Total lines added: ~3700+
  • Total lines removed: ~250+

Configuration

New environment variable configuration (already added to .env.example):

# Dashboard CodeDisplay (long-content pretty-print performance)
# Strategy 1: render long content in the Pretty view as plain text (textarea), avoiding the jank caused by syntax-highlighting DOM explosion (enabled by default)
CCH_CODEDISPLAY_LARGE_PLAIN=true

# Strategy 3: syntax-highlight only the visible window (virtualized highlighting). Disabled by default; must be enabled explicitly
CCH_CODEDISPLAY_VIRTUAL_HIGHLIGHT=false

# Optional tuning (usually no need to change)
CCH_CODEDISPLAY_WORKER_ENABLE=true
CCH_CODEDISPLAY_PERF_DEBUG=false
CCH_CODEDISPLAY_HIGHLIGHT_MAX_CHARS=30000
CCH_CODEDISPLAY_VIRTUAL_OVERSCAN_LINES=50
CCH_CODEDISPLAY_VIRTUAL_CONTEXT_LINES=50
CCH_CODEDISPLAY_VIRTUAL_LINE_HEIGHT_PX=18
CCH_CODEDISPLAY_MAX_PRETTY_OUTPUT_BYTES=20000000
CCH_CODEDISPLAY_MAX_LINE_INDEX_LINES=200000
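The PR describes env parsing with defaults and boundary validation. A plausible sketch of such helpers follows; the helper names and the exact validation rules are assumptions, not the actual contents of code-display-config.ts.

```typescript
// Parse a boolean env var with a default; only "true"/"false" override it.
function parseBoolEnv(raw: string | undefined, fallback: boolean): boolean {
  if (raw === "true") return true;
  if (raw === "false") return false;
  return fallback;
}

// Parse a positive integer env var, rejecting NaN and non-positive values.
function parsePositiveIntEnv(raw: string | undefined, fallback: number): number {
  if (raw === undefined) return fallback;
  const n = Number.parseInt(raw, 10);
  return Number.isFinite(n) && n > 0 ? n : fallback;
}

// Example: building part of the config object with the documented defaults.
const cfg = {
  largePlain: parseBoolEnv("true", true),
  highlightMaxChars: parsePositiveIntEnv("30000", 30000),
  virtualOverscanLines: parsePositiveIntEnv(undefined, 50),
};
```

Rejecting malformed values rather than throwing keeps a bad .env entry from taking down the page; the component just runs with the default.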

Testing

Automated Tests

  • Unit tests added: tests/unit/code-display-perf-ui.test.tsx (+323 lines)
    • Large-content strategy tests
    • Fallback-path tests
    • UI interaction tests

Build & Quality Checks

  • bun run build - production build passes
  • bun run lint - Biome checks pass
  • bun run lint:fix - auto-fixes applied
  • bun run typecheck - TypeScript type checks pass
  • bun run test - Vitest unit tests pass

Manual Testing Checklist

  • Large JSON content (>30KB) correctly falls back to plain text
  • With virtualized highlighting enabled, large content scrolls smoothly
  • Falls back to the main thread correctly when the Worker fails
  • Search works on large content
  • Copy/download work in all modes
  • All i18n translations display correctly

Breaking Changes

No breaking changes - all new features are disabled by default or enabled via configuration; existing behavior is unchanged.

Screenshots / Demo

Suggested screenshots/GIFs:

  1. Large JSON content automatically falling back to plain text
  2. Smooth scrolling in virtualized-highlight mode
  3. The Plain/Virtual view toggle UI

Checklist

  • Code follows project conventions (Biome formatting)
  • Self code review completed
  • All tests pass
  • i18n translations added (5 languages)
  • Environment variable examples updated
  • Configuration docs updated

Description enhanced by Claude AI

Greptile Summary

This PR introduces a well-structured three-layer degradation strategy for large content in CodeDisplay: plain-textarea fallback, optional virtual syntax highlighting, and Web Worker offloading for JSON formatting, line-indexing, and search. The architecture is sound and most issues from the previous review rounds have been addressed (CRLF consistency, getTextKey consolidation, isCancelled TTL cleanup, worker cancellation interval alignment, WORKER_UNAVAILABLE typing, retry button for canceled state, and workerEnabled threading through to buildLineIndex).

Key remaining observations:

  • Worker queue blocked by the synchronous formatter — formatJsonPrettyStreaming runs without any await, so cancel messages and new search/index jobs are queued and cannot execute until the formatter finishes. For very large files this can delay user-visible operations by several seconds after clicking Cancel.
  • stringifyJsonPretty is dead code — exported from the worker client and handled in the worker, but never imported by any component; the corresponding PendingJob arm and worker message branch are unreachable.
  • One-frame null render in CodeDisplayVirtualHighlighter — between indexStatus becoming "ready" and the first updateWindow effect firing, the component returns null, causing a brief visual flash.
  • No match highlighting in CodeDisplayMatchesList — result rows display the full matching line but do not highlight the search term within it, making it harder for users to spot the match in long lines.
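A common remedy for the queue-blocking issue noted above is to chunk the formatter's work and check a cancellation flag between chunks; in a real worker each chunk boundary would also yield to the event loop (e.g. by awaiting a macrotask) so the queued cancel message can actually run. The following is a simplified synchronous sketch of the chunk-and-check loop only — the function name and shape are illustrative, not the PR's code.

```typescript
// Assemble output in fixed-size chunks, checking cancellation between chunks.
// In an actual worker this loop would be async so queued messages can run.
function emitInChunks(
  pretty: string,
  isCancelled: () => boolean,
  chunkSize = 64 * 1024,
): string | null {
  let out = "";
  for (let i = 0; i < pretty.length; i += chunkSize) {
    // Abort promptly instead of finishing a job nobody is waiting for.
    if (isCancelled()) return null;
    out += pretty.slice(i, i + chunkSize);
  }
  return out;
}
```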

Confidence Score: 3/5

  • Merge with caution — the synchronous formatter can hold the worker thread hostage for seconds after cancellation, blocking queued search/index jobs; other issues are minor.
  • The core degradation logic is correct and the majority of prior-round issues have been resolved. The PR is a significant improvement over the status quo. Score is held at 3 (rather than 4) because the synchronous formatJsonPrettyStreaming blocking the worker job queue is a real responsiveness regression for users on large files who cancel formatting and then try to search — they will see searches appear to "hang" until the worker finishes the cancelled format job. The dead-code stringifyJsonPretty adds unnecessary maintenance surface but is not a runtime risk.
  • Pay close attention to src/components/ui/code-display.worker.ts (synchronous formatter blocks queue) and src/components/ui/code-display-worker-client.ts (unused stringifyJsonPretty export).

Important Files Changed

Filename Overview
src/components/ui/code-display.tsx Core component refactored with Worker-based JSON formatting, SSE virtualization, and multi-strategy large content rendering. Complex state machine with several known issues addressed from prior review round; one new blocking-queue concern and a minor null-render window remain.
src/components/ui/code-display.worker.ts Web Worker implementation with a synchronous streaming JSON formatter, async line-index builder, and async search. The synchronous formatter cannot respond to cancel messages sent during its execution, effectively blocking the worker job queue until it finishes.
src/components/ui/code-display-worker-client.ts Worker communication layer with proper abort-signal wiring and fallback implementations. stringifyJsonPretty is exported and handled by the worker but never imported or called by any component — dead code.
src/components/ui/code-display-virtual-highlighter.tsx Virtual syntax highlighter that builds a line index then renders only the visible window. Returns null briefly between index completion and first range calculation, causing a one-frame flicker.
src/components/ui/code-display-config.ts Clean configuration module with safe env-var parsing and sensible defaults. No issues found.
src/components/ui/code-display-matches-list.tsx Virtualized list of matching lines. Correctly handles CRLF line endings. No syntax/logic issues; matched term is not highlighted within lines (UX consideration).
src/lib/utils/text-key.ts Dual-mode text fingerprinting: full FNV-1a for texts ≤200k chars, sampled hash for larger texts. Well documented with acknowledged precision trade-offs.
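The dual-mode fingerprint described for src/lib/utils/text-key.ts can be illustrated with a sketch. The window size and output format here are assumptions; only the ≤200k full-hash / sampled-hash split and the FNV-1a choice come from the summary above.

```typescript
// 32-bit FNV-1a hash over UTF-16 code units.
function fnv1a(text: string): number {
  let h = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < text.length; i++) {
    h ^= text.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept in uint32 range
  }
  return h >>> 0;
}

// Full hash for texts up to the limit; sampled head/middle/tail windows
// beyond it — faster, at the cost of a small, acknowledged collision risk.
function getTextKeySketch(text: string, limit = 200_000, window = 4096): string {
  if (text.length <= limit) return `${text.length}:${fnv1a(text)}`;
  const midStart = (text.length >> 1) - (window >> 1);
  return [
    text.length,
    fnv1a(text.slice(0, window)),
    fnv1a(text.slice(midStart, midStart + window)),
    fnv1a(text.slice(text.length - window)),
  ].join(":");
}
```

Including the length in the key means two texts of different sizes can never collide, regardless of hash quality.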

Sequence Diagram

sequenceDiagram
    participant UI as CodeDisplay (Main Thread)
    participant Client as WorkerClient
    participant Worker as code-display.worker

    Note over UI,Worker: Large JSON → Pretty Tab
    UI->>Client: formatJsonPretty({ text, signal })
    Client->>Worker: postMessage(formatJsonPretty, jobId)
    Worker-->>Client: postMessage(progress, 30%)
    Client-->>UI: onProgress(30%)
    UI->>UI: show spinner + Cancel button

    Note over UI,Worker: User clicks Cancel
    UI->>Client: signal.abort()
    Client->>Client: cancelWorkerJob(jobId) → resolve(CANCELED)
    Client->>Worker: postMessage(cancel, jobId)
    UI->>UI: show "Canceled" + Retry button
    Note over Worker: cancel message queued — formatter still running synchronously
    Worker->>Worker: formatJsonPrettyStreaming finishes
    Worker-->>Client: postMessage(result) — ignored (no pending entry)
    Worker->>Worker: handleCancel fires (TTL cleanup)

    Note over UI,Worker: Search on large content
    UI->>Client: buildLineIndex({ text, signal })
    Client->>Worker: postMessage(buildLineIndex, jobId2)
    Worker-->>Client: postMessage(progress, 50%)
    Client-->>UI: onProgress(50%)
    UI->>Client: searchLines({ text, query, signal })
    Client->>Worker: postMessage(searchLines, jobId3)
    Worker-->>Client: postMessage(searchLinesResult)
    Client-->>UI: matches Int32Array (transferred)
    UI->>UI: render CodeDisplayMatchesList

Last reviewed commit: 030de03

- Strategy 1 (default on): degrade the long-content Pretty view to a plain textarea, avoiding SyntaxHighlighter DOM explosion
- Strategy 3 (opt-in): virtual-window syntax highlighting (render only the visible slice), with a Plain/Virtual view toggle
- Move JSON pretty / line indexing / only-matches search to a Web Worker, with progress and cancellation support and automatic fallback on failure
- Add CodeDisplay-related env configuration and update .env.example
- Add unit tests covering large-content strategies and fallback paths
@coderabbitai

coderabbitai bot commented Mar 2, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds runtime configuration parsing and a context for CodeDisplay, a Worker (formatting/indexing/search) with a no-Worker fallback, virtual-highlight/plain-text/matches-list/SSE rendering implementations, a text-fingerprint utility, plus extensive localized strings and tests; the root layout passes the parsed configuration into the provider.

Changes

Cohort / File(s) Summary
Environment example
.env.example
Adds several CodeDisplay-related environment variable keys (view toggles, Worker enablement, performance tuning and thresholds).
Config types and parsing
src/components/ui/code-display-config.ts
Adds the CodeDisplayConfig interface, DEFAULT_CODE_DISPLAY_CONFIG, and parseCodeDisplayConfigFromEnv (parses from env with boundary validation).
Config context
src/components/ui/code-display-config-context.tsx
Adds the client-side React context CodeDisplayConfigProvider and the useCodeDisplayConfig hook for injecting runtime configuration.
App integration points
src/app/[locale]/layout.tsx, src/app/providers.tsx
The root layout parses env and passes codeDisplayConfig as a new prop to AppProviders; AppProviders wraps its children in CodeDisplayConfigProvider (the component signature gains a new prop).
Internationalized strings
messages/*/dashboard.json
Adds multilingual strings under the codeDisplay node (e.g., prettyWorking, viewPlain, viewVirtual, searchWorking, cancel, retry, virtual.indexWorking, and other progress/error/toggle strings).
Core UI components
src/components/ui/code-display.tsx, src/components/ui/code-display-*.tsx
Adds/extends CodeDisplay rendering and controls: JSON pretty printing (with Worker cooperation), large-file paths, view switching (plain text / virtual highlight), only-matches optimization, index/search flows, SSE event rendering, progress/cancel/retry UI, and several new exported types (e.g., DisplaySseEvent).
Worker and client orchestration
src/components/ui/code-display.worker.ts, src/components/ui/code-display-worker-client.ts
Adds the Worker implementation and main-thread client orchestration: streaming JSON formatting, stringify, line-index building, line search, progress callbacks, cancellation support, an error-code contract, and a main-thread fallback when no Worker is available (many type and API exports).
Utilities
src/lib/utils/text-key.ts, src/lib/utils/index.ts
Adds the getTextKey text-fingerprint function and exports it from the utils index (full hash for short texts, sampled windows for very long texts).
Tests
tests/unit/*, src/components/ui/__tests__/*, tests/integration/*
Adds/updates many tests: performance/large-file UI flows, config parsing, worker-client behavior and fallbacks, component mode-switching behavior, and minor test refactors.
Minor project config changes
biome.json, package.json
biome.json drops the $schema field; package.json pins the @biomejs/biome devDependency to 2.4.5.

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 7.41%, which is below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Title check ✅ Passed The PR title accurately describes the main change — optimizing long-content Pretty performance — consistent with all key improvements in the changeset (Worker architecture, virtualization, degradation strategy, multilingual support).
Description check ✅ Passed The PR description details the three-layer degradation strategy for large-content performance, the related problem and solution, configuration parameters, testing, and a breaking-change assessment, all highly relevant to the changeset.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses the performance problems on the /dashboard/sessions/[sessionId]/messages page when displaying very long content, where react-syntax-highlighter produced massive amounts of DOM. By introducing smart content-rendering strategies and offloading computation to a Web Worker, it significantly improves page responsiveness and user experience, ensuring even large payloads can be viewed and interacted with smoothly.

Highlights

  • Performance strategies: introduces two approaches for handling long-content Pretty views on the /dashboard/sessions/[sessionId]/messages page: falling back to plain text when content exceeds a threshold (Strategy 1, enabled by default), and optional visible-window syntax highlighting (virtualization, Strategy 3).
  • Web Worker offloading: moves compute-heavy tasks — JSON pretty-printing, line-index building, and only-matches search — into a Web Worker to avoid blocking the main thread, with a fallback mechanism when the Worker is unavailable.
  • Configuration and i18n: adds several environment variables to configure these performance features and updates the localization files with new UI text and progress messages.
  • Component refactor and additions: refactors the CodeDisplay component and adds the CodeDisplayConfig context, CodeDisplayMatchesList, CodeDisplayPlainTextarea, and CodeDisplayVirtualHighlighter components, plus the code-display-worker-client and code-display.worker.ts files to support the new architecture.
Changelog
  • .env.example
    • Adds CCH_CODEDISPLAY_LARGE_PLAIN, CCH_CODEDISPLAY_VIRTUAL_HIGHLIGHT, and other environment variables configuring the code-display performance features.
  • messages/en/dashboard.json
    • Adds translation keys such as 'Formatting...', 'Plain', 'Virtual highlight', 'Searching...', 'Cancel', and 'Indexing...' to support the new UI elements and Worker progress messages.
  • messages/ja/dashboard.json
    • Adds the corresponding translation keys: '整形中...', 'テキスト', '仮想ハイライト', '検索中...', 'キャンセル', and '索引作成中...'.
  • messages/ru/dashboard.json
    • Adds the corresponding translation keys: 'Форматирование...', 'Текст', 'Виртуальная подсветка', 'Поиск...', 'Отмена', and 'Индексация...'.
  • messages/zh-CN/dashboard.json
    • Adds the corresponding translation keys: '美化中...', '纯文本', '虚拟高亮', '搜索中...', '取消', and '建立索引中...'.
  • messages/zh-TW/dashboard.json
    • Adds the corresponding translation keys: '格式化中...', '純文字', '虛擬高亮', '搜尋中...', '取消', and '建立索引中...'.
  • src/app/[locale]/layout.tsx
    • Imports the parseCodeDisplayConfigFromEnv function and passes the parsed codeDisplayConfig to AppProviders.
  • src/app/providers.tsx
    • Updates the AppProviders component to accept a codeDisplayConfig prop and wrap its children in CodeDisplayConfigProvider.
  • src/components/ui/code-display-config-context.tsx
    • New file defining the React context for CodeDisplayConfig, providing CodeDisplayConfigProvider and the useCodeDisplayConfig hook.
  • src/components/ui/code-display-config.ts
    • New file defining the CodeDisplayConfig interface, the default DEFAULT_CODE_DISPLAY_CONFIG, and the parseCodeDisplayConfigFromEnv utility that parses configuration from environment variables.
  • src/components/ui/code-display-matches-list.tsx
    • New component that displays search matches efficiently as a virtualized list.
  • src/components/ui/code-display-plain-textarea.tsx
    • New component for displaying plain text, optimized for large text by avoiding frequent controlled React updates.
  • src/components/ui/code-display-virtual-highlighter.tsx
    • New component implementing virtualized syntax highlighting for large content, using a Web Worker to build the line index.
  • src/components/ui/code-display-worker-client.ts
    • New file managing communication with the Web Worker, providing client functions for JSON formatting, line indexing, and search, including main-thread fallback logic.
  • src/components/ui/code-display.tsx
    • Refactors the CodeDisplay component to integrate the new performance logic, including the CodeDisplayConfig context, plain-text fallback and virtualized highlighting for large content, offloading JSON formatting and search to the Worker, and UI updates for progress indicators and view-toggle options.
    • Removes the parseSSEDataForDisplay import.
  • src/components/ui/code-display.worker.ts
    • New Web Worker file containing streaming JSON pretty-printing, line-index building, and line-search logic designed to run off the main thread.
  • tests/unit/code-display-perf-ui.test.tsx
    • New unit tests verifying the large-content performance strategies, including plain-text fallback, virtualized highlighting, and Worker-based search.
Activity
  • This pull request introduces significant performance improvements for handling large content on the sessions/messages page.
  • It implements two main strategies: a plain-text fallback for very large content, and optional virtualized highlighting for large content.
  • Heavy computation such as JSON formatting, line indexing, and search now runs in a Web Worker to prevent UI freezes.
  • New configuration options are exposed via environment variables, allowing fine-grained tuning of these performance features.
  • The UI now includes progress indicators for Worker tasks and view-toggle options for large pretty-printed content.
  • Comprehensive unit tests have been added to validate the new performance logic.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This PR introduces significant performance optimizations for displaying long content, offloading heavy computation to a Web Worker and using virtualized rendering to avoid page jank. The overall implementation is excellent: the code is clean and robust, with a fallback for environments that don't support Workers. The new configuration options and components are well thought out. I found only one minor issue, related to expanded-state management when searching SSE events, and have suggested a fix. Overall, this is high-quality performance work.


const [viewportHeight, setViewportHeight] = useState(0);
const [scrollTop, setScrollTop] = useState(0);
const [expandedRows, setExpandedRows] = useState<Set<number>>(() => new Set());
Contributor


medium

When the events list changes due to search filtering, the row indices stored in expandedRows become stale, and the wrong rows may end up expanded. To fix this and improve the user experience, consider adding a useEffect hook that resets the expandedRows state whenever the events prop changes. That ensures the expanded state is cleared when the filtered results change, avoiding confusion.

  const [expandedRows, setExpandedRows] = useState<Set<number>>(() => new Set());

  useEffect(() => {
    // Reset the expanded-row state when the events list changes (e.g., due to search) to avoid inconsistency
    setExpandedRows(new Set());
  }, [events]);


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (5)
src/components/ui/code-display-virtual-highlighter.tsx (1)

79-133: Suggestion: consider stabilizing the onRequestPlainView callback dependency

onRequestPlainView is in the useEffect dependency array; if the parent component does not wrap this callback in useCallback, every parent render re-runs the index-building effect, triggering unnecessary re-indexing.

Suggested fix

Store the latest callback in a useRef to keep it out of the dependency array:

+ const onRequestPlainViewRef = useRef(onRequestPlainView);
+ onRequestPlainViewRef.current = onRequestPlainView;

  useEffect(() => {
    // ... existing code ...
    
    void buildLineIndex({
      // ...
    }).then((res) => {
      // ...
      if (!res.ok) {
        setIndexStatus("error");
        setIndexProgress(null);
-       onRequestPlainView?.();
+       onRequestPlainViewRef.current?.();
        return;
      }
      // ...
    });

    return () => controller.abort();
- }, [maxLines, onRequestPlainView, perfDebugEnabled, textKey]);
+ }, [maxLines, perfDebugEnabled, textKey]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display-virtual-highlighter.tsx` around lines 79 -
133, The effect depends on onRequestPlainView which may be unstable and cause
unnecessary reruns; stabilize by capturing the latest callback in a ref (e.g.
create onRequestPlainViewRef and update it whenever onRequestPlainView changes)
and use onRequestPlainViewRef.current inside the useEffect promise/result
handling instead of referencing onRequestPlainView in the dependency array, then
remove onRequestPlainView from the useEffect deps while keeping other deps
(maxLines, perfDebugEnabled, textKey) so buildLineIndex and the abort logic
(indexAbortRef/controller/jobId) only rerun when truly needed.
src/components/ui/code-display.tsx (3)

871-886: resolveTextForAction should be wrapped in useCallback.

The function depends on several pieces of state and is a dependency of handleDownload and handleCopy. The current implementation is correct, but useCallback would express the dependencies more clearly and avoid potential stale-closure pitfalls.

That said, given how many pieces of state the function reads, the current inline implementation is also reasonable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display.tsx` around lines 871 - 886, Wrap the
resolveTextForAction function in useCallback to make its dependencies explicit
and avoid closure issues: replace the inline declaration of resolveTextForAction
with a useCallback that returns the same logic and include all referenced
variables in its dependency array (language, content, mode, resolvedPrettyText,
showOnlyMatches, onlyMatchesQuery, shouldOptimizeOnlyMatches,
nonSseFilteredText, onlyMatchesLineStarts, onlyMatchesMatches, and
buildOnlyMatchesText). Then ensure handleDownload and handleCopy list
resolveTextForAction in their dependency arrays so they update when the
resolved-text logic changes.

288-373: The SSE row key may not be unique.

Line 295 uses `${index}-${evt.event}` as the key; if the same index ever produced multiple events of the same event type (unlikely), the keys would collide. Consider a unique identifier, or just index.

Suggested change
-              key={`${index}-${evt.event}`}
+              key={index}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display.tsx` around lines 288 - 373, The key for each
rendered SSE row (in the rows.map callback that returns the <div>) is currently
`${index}-${evt.event}`, which can collide; update the key on the root element
returned by rows.map to use a truly unique identifier—prefer using an explicit
unique field from the event if available (e.g., evt.id or evt.timestamp) or fall
back to the numeric index alone (key={index}) or a stable combination like
`${index}-${evt.id}`; change the key prop on the element created in the rows.map
callback to the chosen unique value (replace the `${index}-${evt.event}` usage).

224-249: Potential rAF handle leak.

On unmount, the second useEffect (lines 244-249) cleans up rafRef, but if scrollRef.current is cleared before unmount there could be an ordering issue. Consider merging the rAF cleanup into the first useEffect's cleanup function, or keeping the two effects' dependencies consistent.

In practice the current implementation should be safe, since React runs cleanup functions in order.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display.tsx` around lines 224 - 249, The rAF handle
cleanup (rafRef and cancelAnimationFrame) may race with the resize observer
cleanup; move the rAF cancellation into the same useEffect that sets up
scrollRef/ResizeObserver (the effect that defines el, update, ro) so both
ro?.disconnect() and if (rafRef.current !== null)
cancelAnimationFrame(rafRef.current) run together in the same cleanup, or
alternatively make both effects share the same dependency array to guarantee
ordering; update references: scrollRef, rafRef, useEffect, ResizeObserver,
update, ro, and cancelAnimationFrame to implement this change.
src/components/ui/code-display-worker-client.ts (1)

251-268: Remove the unnecessary await.

The await in return await new Promise(...) is redundant; returning the Promise directly is enough. The pattern recurs in several functions.

Suggested change
-  return await new Promise((resolve) => {
+  return new Promise((resolve) => {

The same applies to stringifyJsonPretty (line 296), buildLineIndex (line 356), and searchLines (line 415).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display-worker-client.ts` around lines 251 - 268, The
returned Promise in functions like the one creating jobId via genJobId() is
being awaited unnecessarily (e.g., "return await new Promise(...)"); remove the
redundant await and just "return new Promise(...)" in this block and the other
similar functions stringifyJsonPretty, buildLineIndex, and searchLines so the
promise is returned directly; keep the existing abort handling
(signal?.addEventListener/onAbort, cancelWorkerJob(jobId)), pending.set(...) and
w.postMessage(...) logic intact and only remove the await keyword.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/components/ui/code-display-matches-list.tsx`:
- Around line 16-18: The code unconditionally uses nextStart - 1 as an inclusive
end which, combined with the later slice(start, end) (exclusive upper bound),
drops the final character for the last line or single-line texts; change the end
computation to produce an exclusive end index: compute nextStart = lineNo + 1 <
lineCount ? (lineStarts[lineNo + 1] ?? text.length) : text.length, then set end
= (text[nextStart - 1] === '\n' ? nextStart - 1 : nextStart) and return [start,
end]; this preserves the final character while still excluding a trailing
newline when present (references: variables nextStart, lineStarts, lineNo,
lineCount, text and the later slice(start, end)).

---

Nitpick comments:
In `@src/components/ui/code-display-virtual-highlighter.tsx`:
- Around line 79-133: The effect depends on onRequestPlainView which may be
unstable and cause unnecessary reruns; stabilize by capturing the latest
callback in a ref (e.g. create onRequestPlainViewRef and update it whenever
onRequestPlainView changes) and use onRequestPlainViewRef.current inside the
useEffect promise/result handling instead of referencing onRequestPlainView in
the dependency array, then remove onRequestPlainView from the useEffect deps
while keeping other deps (maxLines, perfDebugEnabled, textKey) so buildLineIndex
and the abort logic (indexAbortRef/controller/jobId) only rerun when truly
needed.

In `@src/components/ui/code-display-worker-client.ts`:
- Around line 251-268: The returned Promise in functions like the one creating
jobId via genJobId() is being awaited unnecessarily (e.g., "return await new
Promise(...)"); remove the redundant await and just "return new Promise(...)" in
this block and the other similar functions stringifyJsonPretty, buildLineIndex,
and searchLines so the promise is returned directly; keep the existing abort
handling (signal?.addEventListener/onAbort, cancelWorkerJob(jobId)),
pending.set(...) and w.postMessage(...) logic intact and only remove the await
keyword.

In `@src/components/ui/code-display.tsx`:
- Around line 871-886: Wrap the resolveTextForAction function in useCallback to
make its dependencies explicit and avoid closure issues: replace the inline
declaration of resolveTextForAction with a useCallback that returns the same
logic and include all referenced variables in its dependency array (language,
content, mode, resolvedPrettyText, showOnlyMatches, onlyMatchesQuery,
shouldOptimizeOnlyMatches, nonSseFilteredText, onlyMatchesLineStarts,
onlyMatchesMatches, and buildOnlyMatchesText). Then ensure handleDownload and
handleCopy list resolveTextForAction in their dependency arrays so they update
when the resolved-text logic changes.
- Around line 288-373: The key for each rendered SSE row (in the rows.map
callback that returns the <div>) is currently `${index}-${evt.event}`, which can
collide; update the key on the root element returned by rows.map to use a truly
unique identifier—prefer using an explicit unique field from the event if
available (e.g., evt.id or evt.timestamp) or fall back to the numeric index
alone (key={index}) or a stable combination like `${index}-${evt.id}`; change
the key prop on the element created in the rows.map callback to the chosen
unique value (replace the `${index}-${evt.event}` usage).
- Around line 224-249: The rAF handle cleanup (rafRef and cancelAnimationFrame)
may race with the resize observer cleanup; move the rAF cancellation into the
same useEffect that sets up scrollRef/ResizeObserver (the effect that defines
el, update, ro) so both ro?.disconnect() and if (rafRef.current !== null)
cancelAnimationFrame(rafRef.current) run together in the same cleanup, or
alternatively make both effects share the same dependency array to guarantee
ordering; update references: scrollRef, rafRef, useEffect, ResizeObserver,
update, ro, and cancelAnimationFrame to implement this change.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

📥 Commits

Reviewing files that changed from the base of the PR and between 2769a75 and 355812e.

📒 Files selected for processing (17)
  • .env.example
  • messages/en/dashboard.json
  • messages/ja/dashboard.json
  • messages/ru/dashboard.json
  • messages/zh-CN/dashboard.json
  • messages/zh-TW/dashboard.json
  • src/app/[locale]/layout.tsx
  • src/app/providers.tsx
  • src/components/ui/code-display-config-context.tsx
  • src/components/ui/code-display-config.ts
  • src/components/ui/code-display-matches-list.tsx
  • src/components/ui/code-display-plain-textarea.tsx
  • src/components/ui/code-display-virtual-highlighter.tsx
  • src/components/ui/code-display-worker-client.ts
  • src/components/ui/code-display.tsx
  • src/components/ui/code-display.worker.ts
  • tests/unit/code-display-perf-ui.test.tsx

@github-actions github-actions bot added the size/XL Extra Large PR (> 1000 lines) label Mar 2, 2026

function getTextKey(text: string): string {
const len = text.length;
if (len === 0) return "0:0:0:0:0:0";
Contributor


[MEDIUM] [SIMPLIFY-COMPLEXITY] Duplicate function definition

The getTextKey function is defined identically in two separate files:

  • src/components/ui/code-display-virtual-highlighter.tsx (this file, lines 31-42)
  • src/components/ui/code-display.tsx (added lines 89-100 in the diff)

Why this is a problem: Code duplication increases maintenance burden and risk of inconsistency if one copy is updated without the other.

Suggested fix: Extract to a shared utility file and import from both locations:

// src/lib/utils/text-key.ts
export function getTextKey(text: string): string {
  const len = text.length;
  if (len === 0) return "0:0:0:0:0:0";

  const first = text.charCodeAt(0);
  const second = len > 1 ? text.charCodeAt(1) : 0;
  const mid = text.charCodeAt(len >> 1);
  const penultimate = len > 1 ? text.charCodeAt(len - 2) : 0;
  const last = text.charCodeAt(len - 1);

  return `${len}:${first}:${second}:${mid}:${penultimate}:${last}`;
}

Contributor

@github-actions github-actions bot left a comment


Code Review Summary

This PR introduces significant performance optimizations for the CodeDisplay component when handling large content. The approach uses Web Workers for heavy computation, virtual highlighting for large text, and plain textarea fallbacks to prevent DOM explosion.

Important Note: The PR title contains "[未完成]" ("unfinished"), indicating this is work in progress. Consider whether a full review is appropriate at this stage.

PR Size: XL

  • Lines changed: 3,584 (3,336 additions, 248 deletions)
  • Files changed: 17

For XL PRs, consider splitting into smaller PRs:

  1. Core infrastructure: Config types, context provider, worker client
  2. UI components: Plain textarea, virtual highlighter, matches list
  3. Main component refactor: CodeDisplay changes
  4. Tests: Test coverage for new features

Issues Found

| Category | Critical | High | Medium | Low |
| --- | --- | --- | --- | --- |
| Logic/Bugs | 0 | 0 | 0 | 0 |
| Security | 0 | 0 | 0 | 0 |
| Error Handling | 0 | 0 | 0 | 0 |
| Types | 0 | 0 | 0 | 0 |
| Comments/Docs | 0 | 0 | 0 | 0 |
| Tests | 0 | 0 | 1 | 0 |
| Simplification | 0 | 0 | 1 | 0 |

Medium Priority Issues (Should Fix)

  1. [SIMPLIFY-COMPLEXITY] Duplicate function definition (inline comment posted)

    • The getTextKey function is duplicated in two files
    • Extract to shared utility to reduce maintenance burden
  2. [TEST-INCOMPLETE] Missing test coverage for worker error paths

    • Tests exist for happy paths but not for:
      • Worker client fallback scenarios (when Worker unavailable)
      • Error handling in formatJsonPretty/stringifyJsonPretty
      • VirtualHighlighter index failure fallback

Positive Observations

  • i18n properly implemented for all 5 languages
  • Good use of AbortController for cancellable operations
  • Appropriate fallback mechanisms when Worker unavailable
  • Clean separation of concerns with config/context/worker

Review Coverage

  • Logic and correctness - Clean
  • Security (OWASP Top 10) - Clean (uses safe Blob types, no XSS vectors)
  • Error handling - Clean (errors properly surfaced via result types)
  • Type safety - Clean (good use of discriminated unions)
  • Documentation accuracy - Clean (comments match implementation)
  • Test coverage - Needs improvement (worker error paths untested)
  • Code clarity - Minor issue (duplicate function)

Automated review by Claude AI

@greptile-apps greptile-apps bot left a comment

25 files reviewed, 54 comments


Comment on lines +55 to +66
function getTextKey(text: string): string {
const len = text.length;
if (len === 0) return "0:0:0:0:0:0";

const first = text.charCodeAt(0);
const second = len > 1 ? text.charCodeAt(1) : 0;
const mid = text.charCodeAt(len >> 1);
const penultimate = len > 1 ? text.charCodeAt(len - 2) : 0;
const last = text.charCodeAt(len - 1);

return `${len}:${first}:${second}:${mid}:${penultimate}:${last}`;
}

Duplicated code - this getTextKey function is identical to the one in code-display-virtual-highlighter.tsx:12-23. Extract to a shared utility to avoid duplication.

(Path: src/components/ui/code-display.tsx, lines 55-66)

Comment on lines +96 to +98
let workerSingleton: Worker | null = null;
let initialized = false;
let nextJobId = 1;

Worker singleton could cause test isolation issues. Consider instance-based workers or proper cleanup between tests.
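One way to implement the cleanup option is a test-only reset hook. A minimal sketch (the object wrapper and the `__resetWorkerForTests` name are inventions for illustration, not code from this PR):

```typescript
// Module-level singleton state, mirroring the shape in the reviewed file
// but wrapped in an object for this sketch.
const workerState = {
  worker: null as { terminate(): void } | null,
  initialized: false,
  nextJobId: 1,
};

// Hypothetical test-only hook: terminate and clear the singleton so each
// test starts from a clean slate instead of sharing one Worker.
function __resetWorkerForTests(): void {
  workerState.worker?.terminate();
  workerState.worker = null;
  workerState.initialized = false;
  workerState.nextJobId = 1;
}
```

A test suite would call this from afterEach so state leaked by one test cannot affect the next.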

(Path: src/components/ui/code-display-worker-client.ts, lines 96-98)

Comment on lines +179 to +188
w.onerror = () => {
// Worker crashed or failed to load: clear pending jobs and fall back to the main thread from then on
for (const [jobId, job] of pending.entries()) {
job.resolve({ ok: false, errorCode: "UNKNOWN" });
pending.delete(jobId);
}
workerSingleton?.terminate();
workerSingleton = null;
initialized = false;
};

Worker crashes reset the initialized flag but don't surface any error feedback to the UI. Consider adding retry logic or a user-visible error state.
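A bounded-retry sketch of that suggestion (MAX_WORKER_RETRIES, workerDisabled, and the notifyUi callback are hypothetical names, not part of the PR):

```typescript
// Count consecutive Worker crashes; after a bounded number of retries,
// stop recreating the Worker and tell the UI about the fallback.
const MAX_WORKER_RETRIES = 2;
let workerCrashCount = 0;
let workerDisabled = false;

function onWorkerCrash(notifyUi: (msg: string) => void): void {
  workerCrashCount += 1;
  if (workerCrashCount > MAX_WORKER_RETRIES) {
    workerDisabled = true;
    // Surface the degradation instead of failing silently.
    notifyUi("Worker unavailable; falling back to main-thread processing");
  }
}
```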

(Path: src/components/ui/code-display-worker-client.ts, lines 179-188)

- Fix last-line truncation in CodeDisplayMatchesList / buildOnlyMatchesText (with CRLF support)
- Reset the expanded state after the SSE event list is filtered by search, to avoid index mismatches
- Extract getTextKey into a shared util to avoid duplicate implementations
- Stabilize onRequestPlainView with a ref in the virtual highlighter to avoid re-indexing
- Simplify Promise returns in the Worker client (remove redundant await)
- Add unit-test coverage for the non-truncated last line
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (1)
tests/unit/code-display-perf-ui.test.tsx (1)

256-259: Avoid hard-coding the "Show all" label in the assertion.

Read the corresponding field from dashboardMessages instead, so that changes to translation copy do not cause unrelated test failures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/code-display-perf-ui.test.tsx` around lines 256 - 259, the test
hard-codes the "Show all" label inside a waitFor assertion; read the
corresponding field from the localized message object and assert on it instead.
Locate the code that uses waitFor and
container.querySelector('[data-testid="code-display-only-matches-toggle"]'),
and replace (btn?.textContent || "").includes("Show all") with the matching key
from dashboardMessages (for example dashboardMessages.showAll or
dashboardMessages.codeDisplay.onlyMatchesToggle), asserting that the variable
is included, so the test no longer fails when translation copy changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/components/ui/code-display-worker-client.ts`:
- Around line 110-118: getWorker currently constructs a new Worker synchronously
and can throw (bypassing callers' null-check fallback); wrap the Worker creation
in a try/catch around the new Worker(...) call inside getWorker, log or swallow
the error, ensure workerSingleton remains null on failure, and return null so
callers relying on if (!w) can perform fallback; reference getWorker,
workerSingleton and supportsWorker to locate the change.

In `@src/lib/utils/text-key.ts`:
- Around line 10-21: The current getTextKey samples 5 chars and causes high
collision risk; replace the sampling scheme with a full-text hash: update
getTextKey to compute a deterministic hash of the entire text (for example a
fast non-crypto hash like FNV-1a or a SHA-1/sha256 hex) and return that
(optionally prefixed by text length for extra safety), so keys uniquely
represent content; ensure the function still accepts a string and returns a
string and update any consumers expecting the old `${len}:...` format if needed.
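Under the assumption described above (hash the whole text, prefix with the length), a minimal FNV-1a sketch could look like this; it is the reviewer's suggested direction, not the PR's final windowed variant:

```typescript
// FNV-1a over the full text, 32-bit, length-prefixed for extra safety.
function getTextKey(text: string): string {
  let hash = 2166136261; // FNV-1a 32-bit offset basis
  for (let i = 0; i < text.length; i += 1) {
    hash ^= text.charCodeAt(i);
    hash = Math.imul(hash, 16777619); // FNV-1a 32-bit prime
  }
  return `${text.length}:${(hash >>> 0).toString(16)}`;
}
```

The trade-off is an O(n) pass over the entire text on every key computation, which is why the PR later limits full hashing to small inputs and samples fixed windows for large ones.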


ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

📥 Commits

Reviewing files that changed from the base of the PR and between 355812e and 7ae0cca.

📒 Files selected for processing (7)
  • src/components/ui/code-display-matches-list.tsx
  • src/components/ui/code-display-virtual-highlighter.tsx
  • src/components/ui/code-display-worker-client.ts
  • src/components/ui/code-display.tsx
  • src/lib/utils/index.ts
  • src/lib/utils/text-key.ts
  • tests/unit/code-display-perf-ui.test.tsx

Comment on lines +604 to +605
if (text.charCodeAt(i) === 10) {
lineCount += 1;

Only counts \n (LF) as line breaks, but doesn't handle \r\n (CRLF) or \r (CR) line endings. This creates inconsistency with code-display-matches-list.tsx:22-26 which does handle CRLF. Files with Windows line endings will have incorrect line indices.

Suggested change
-    if (text.charCodeAt(i) === 10) {
-      lineCount += 1;
+    if (text.charCodeAt(i) === 10 || text.charCodeAt(i) === 13) {
+      // Skip CRLF as a single line break
+      if (text.charCodeAt(i) === 13 && i + 1 < text.length && text.charCodeAt(i + 1) === 10) {
+        i += 1;
+      }
+      lineCount += 1;
(Path: src/components/ui/code-display.worker.ts, lines 604-605)

errorCode: "CANCELED" | "UNKNOWN";
};

const cancelledJobs = new Set<number>();

Memory leak - cancelledJobs Set grows unbounded. Jobs are deleted in finally block after completion, but if a job crashes before reaching finally or if cancellations accumulate, this Set will grow indefinitely.

Add periodic cleanup or size limit check.
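A size-bounded variant of the set could look like the sketch below (MAX_CANCELLED_JOBS is a made-up constant; the real fix might differ):

```typescript
// Bound the cancelled-job set so abandoned entries cannot accumulate
// forever; evict the oldest id once the cap is exceeded.
const MAX_CANCELLED_JOBS = 1024;
const cancelledJobs = new Set<number>();

function markCancelled(jobId: number): void {
  cancelledJobs.add(jobId);
  if (cancelledJobs.size > MAX_CANCELLED_JOBS) {
    // Sets iterate in insertion order, so the first value is the oldest.
    const oldest = cancelledJobs.values().next().value as number;
    cancelledJobs.delete(oldest);
  }
}
```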

(Path: src/components/ui/code-display.worker.ts, line 98)

}
}

const starts = new Int32Array(lineCount);

Using Int32Array limits maximum file size to ~2GB (Int32 max value is 2,147,483,647). Files larger than this will cause integer overflow in line start positions. Consider documenting this limitation or using BigInt64Array for very large file support.

(Path: src/components/ui/code-display.worker.ts, line 621)

Comment on lines +108 to +112
if (!res.ok) {
setIndexStatus("error");
setIndexProgress(null);
onRequestPlainViewRef.current?.();
return;

Silent fallback - when line indexing fails, it automatically calls onRequestPlainView() without showing an error message to the user. Users won't know why the virtual highlight view suddenly switched to plain text. Consider showing a toast notification explaining the fallback reason.

(Path: src/components/ui/code-display-virtual-highlighter.tsx, lines 108-112)

Comment on lines +140 to +158
const chunks: string[] = [];
let outLen = 0;

const indentCache = new Map<number, string>();
const getIndent = (level: number) => {
const cached = indentCache.get(level);
if (cached) return cached;
const str = " ".repeat(level * indentSize);
indentCache.set(level, str);
return str;
};

const push = (s: string): { ok: true } | { ok: false; errorCode: "OUTPUT_TOO_LARGE" } => {
chunks.push(s);
outLen += s.length;
if (estimateUtf16Bytes(outLen) > maxOutputBytes) {
return { ok: false, errorCode: "OUTPUT_TOO_LARGE" };
}
return { ok: true };

The chunks array grows without upfront size validation. For malformed JSON with deeply nested structures, this could accumulate large amounts of data before hitting maxOutputBytes check. Consider adding periodic size checks or early termination if chunk count exceeds a threshold.

(Path: src/components/ui/code-display.worker.ts, lines 140-158)

const code = text.charCodeAt(i);
if (!isWhitespaceCharCode(code)) break;
i += 1;
if ((i & 8191) === 0) maybeReportProgress();

Cancellation check happens every 8192 iterations ((i & 8191) === 0). For very large files, this might be too coarse - it could take seconds to respond to cancellation. Consider reducing to 2048 or 4096 for more responsive cancellation.

(Path: src/components/ui/code-display.worker.ts, line 203)

return JSON.stringify(value, null, 2);
}

function estimateUtf8BytesUpToLimit(text: string, limitBytes: number): number {

Resolved duplication: getTextKey is now exported from @/lib/utils/text-key.ts and imported here (line 33). The duplication issue from the previous review has been addressed.

(Path: src/components/ui/code-display.tsx, line 55)

- Return null when Worker construction fails (CSP/resource errors) to ensure the main-thread fallback is taken
- Update state synchronously when the JSON Pretty cancel button is pressed, so loading cannot get stuck
- Switch getTextKey to an FNV-1a hash (full text for small inputs, fixed windows for large ones) to reduce collision risk
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/components/ui/code-display-worker-client.ts`:
- Around line 272-274: The call to w.postMessage(...) after pending.set(...) can
throw synchronously and leave pending entries and abort listeners dangling; wrap
each w.postMessage(...) (the sites that set pending with { kind: "...", resolve:
wrappedResolve, onProgress } and then post) in a try/catch, and in the catch
remove the pending entry (pending.delete(jobId)), remove any associated
abort/cleanup listener, and reject the original promise (call
wrappedResolve.reject or otherwise reject the Promise) with the caught error so
callers don’t hang; ensure you apply this same try/catch + cleanup pattern for
all occurrences that use pending, jobId, wrappedResolve, and onProgress before
posting to the worker.

In `@src/components/ui/code-display.tsx`:
- Around line 516-525: The effect that starts JSON pretty tasks is incorrectly
allowing a duplicate run when jsonPrettyStatus changes to "loading" because the
early-return check (comparing jsonPrettyTextKey === jsonSourceKey and
jsonPrettyStatus in ["ready","invalid","tooLarge","canceled","error"]) omits
"loading"; update that conditional to also treat "loading" as an
in-progress/terminal state so the effect returns early and doesn't abort the
previous controller then start a duplicate task. Apply the same change to the
other similar checks referenced (the blocks around the symbols
jsonPrettyTextKey, jsonSourceKey, jsonPrettyStatus and the controller abort
logic in the other effect ranges) so all duplicate-trigger locations include
"loading" in their excluded statuses.

ℹ️ Review info


📥 Commits

Reviewing files that changed from the base of the PR and between 7ae0cca and 6c3fcc7.

📒 Files selected for processing (3)
  • src/components/ui/code-display-worker-client.ts
  • src/components/ui/code-display.tsx
  • src/lib/utils/text-key.ts


useEffect(() => {
// events may be a new list produced by search filtering: reset the expanded state to avoid index mismatches
void events.length;

Redundant statement - accessing events.length has no effect. The [events] dependency in the effect (line 222) already triggers the reset. Remove this line.

Suggested change
-    void events.length;
+    // events may be a new list produced by search filtering: reset the expanded state to avoid index mismatches
(Path: src/components/ui/code-display.tsx, line 220)

if (!ws.ok) return ws;

if (stack.length === 0) {
if ((rootState as "value" | "end") === "end") {

Unnecessary type assertion - rootState is already typed as "value" | "end" (line 184). The cast has no effect.

Suggested change
-      if ((rootState as "value" | "end") === "end") {
+      if (rootState === "end") {
(Path: src/components/ui/code-display.worker.ts, line 428)

- Return early when JSON pretty enters loading for the same key, avoiding a redundant abort + restart
- Clean up pending entries and listeners when Worker postMessage throws synchronously, so Promises never hang forever
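The second bullet's cleanup pattern can be sketched as follows (WorkerLike, postJob, and the result shape are illustrative, not the PR's exact API):

```typescript
// If postMessage throws synchronously, drop the pending entry and
// resolve with an error result so the caller's Promise never hangs.
type WorkerLike = { postMessage(msg: unknown): void };
type JobResult = { ok: true } | { ok: false; errorCode: "UNKNOWN" };

const pending = new Map<number, (r: JobResult) => void>();

function postJob(w: WorkerLike, jobId: number, payload: object): Promise<JobResult> {
  return new Promise((resolve) => {
    pending.set(jobId, resolve);
    try {
      w.postMessage({ jobId, ...payload });
    } catch {
      // Synchronous failure (e.g., detached worker): clean up and resolve.
      pending.delete(jobId);
      resolve({ ok: false, errorCode: "UNKNOWN" });
    }
  });
}
```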
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/components/ui/code-display-worker-client.ts`:
- Around line 349-368: The fallback long-loop routines in buildLineIndex and
searchLines need to check for cancellation inside their loops: add periodic
checks of signal.aborted (or equivalent AbortSignal) within both the
line-counting loop and the line-starts building loop and immediately return a
consistent abort result (e.g., { ok: false, errorCode: "ABORTED" }) when
aborted; apply the same pattern to the analogous loop block referenced (the code
at the other fallback around lines 420-432) so that both counting and indexing
loops exit promptly on cancellation.
- Around line 224-227: The cancel postMessage call is missing the try/catch used
elsewhere; wrap the call to getWorker() and w.postMessage({ type: "cancel",
jobId }) in a try-catch (same pattern as
formatJsonPretty/stringifyJsonPretty/buildLineIndex/searchLines) so any errors
from worker state are caught and handled (log or ignore consistent with existing
handlers) and do not throw; reference getWorker(), w.postMessage, type: "cancel"
and jobId when applying the change.

In `@src/components/ui/code-display.tsx`:
- Line 432: The component initializes mode with getDefaultMode(language) but
never updates it when language changes, causing stale mode; add logic in the
CodeDisplay component to update mode whenever language changes by invoking
setMode(getDefaultMode(language)) (e.g., in a useEffect that depends on
language) so mode stays in sync with language while preserving user toggles when
appropriate; reference the existing state variables mode and setMode and the
helper getDefaultMode to locate where to add the effect.
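The in-loop abort check requested in the first inline comment above could be sketched like this (the result shape is simplified to a plain number[]; the real code uses Int32Array):

```typescript
// Main-thread fallback: build line-start offsets, checking the
// AbortSignal periodically so cancellation takes effect promptly.
function buildLineIndexFallback(
  text: string,
  signal?: AbortSignal
): { ok: true; lineStarts: number[] } | { ok: false; errorCode: "CANCELED" } {
  const lineStarts: number[] = [0];
  for (let i = 0; i < text.length; i += 1) {
    // Check roughly every 8192 chars, mirroring the worker's cadence.
    if ((i & 8191) === 0 && signal?.aborted) {
      return { ok: false, errorCode: "CANCELED" };
    }
    if (text.charCodeAt(i) === 10 /* \n */ && i + 1 < text.length) {
      lineStarts.push(i + 1);
    }
  }
  return { ok: true, lineStarts };
}
```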

ℹ️ Review info


📥 Commits

Reviewing files that changed from the base of the PR and between 6c3fcc7 and 783cc98.

📒 Files selected for processing (2)
  • src/components/ui/code-display-worker-client.ts
  • src/components/ui/code-display.tsx

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/components/ui/code-display-worker-client.ts`:
- Around line 127-133: The initialized flag in ensureInitialized is set too
early (initialized = true) before getWorker() and binding handlers; move the
assignment so it occurs only after a non-null worker is returned and after you
attach the worker.onmessage and worker.onerror handlers in ensureInitialized (so
that when getWorker() succeeds you won't skip binding), and keep initialized
false when getWorker() fails so subsequent calls can retry binding pending
handlers; reference ensureInitialized, initialized, getWorker, worker.onmessage,
worker.onerror, and pending to locate the changes.

ℹ️ Review info


📥 Commits

Reviewing files that changed from the base of the PR and between 783cc98 and 86cee4b.

📒 Files selected for processing (1)
  • src/components/ui/code-display-worker-client.ts

@tesgth032
Contributor Author

Fixed ensureInitialized per CodeRabbit's latest actionable comment: initialized = true is now set only after getWorker() returns non-null and the onmessage/onerror handlers are bound, so a failed first getWorker() call can no longer leave a later, successful Worker without handlers and its jobs permanently pending.

Also added:

  • try/catch around postMessage in cancelWorkerJob
  • In-loop abort checks for buildLineIndex / searchLines in the main-thread fallback (no Worker), so cancellation takes effect promptly

Local verification: bun run lint / bun run typecheck / bun run test / bun run build all pass.

Please have CodeRabbit re-review and update the status.

Comment on lines +767 to +808
if (cached && cached.key === nonSseTextKey) {
setOnlyMatchesLineStarts(cached.lineStarts);
setOnlyMatchesIndexStatus("ready");
setOnlyMatchesIndexProgress(null);
} else {
const controller = new AbortController();
onlyMatchesIndexAbortRef.current?.abort();
onlyMatchesIndexAbortRef.current = controller;

setOnlyMatchesLineStarts(null);
setOnlyMatchesIndexStatus("loading");
setOnlyMatchesIndexProgress({ processed: 0, total: text.length });

void buildLineIndex({
text,
maxLines: codeDisplayConfig.maxLineIndexLines,
onProgress: (p) => {
if (controller.signal.aborted) return;
if (p.stage !== "index") return;
setOnlyMatchesIndexProgress({ processed: p.processed, total: p.total });
},
signal: controller.signal,
}).then((res) => {
if (controller.signal.aborted) return;
if (jobId !== onlyMatchesIndexJobRef.current) return;

if (!res.ok) {
setOnlyMatchesLineStarts(null);
setOnlyMatchesIndexStatus("error");
setOnlyMatchesIndexProgress(null);
return;
}

onlyMatchesLineIndexCacheRef.current = {
key: nonSseTextKey,
lineStarts: res.lineStarts,
lineCount: res.lineCount,
};
setOnlyMatchesLineStarts(res.lineStarts);
setOnlyMatchesIndexStatus("ready");
setOnlyMatchesIndexProgress(null);
});

The cache key relies solely on nonSseTextKey, which uses FNV-1a windowed hashing for large texts. For files over 200k chars, a hash collision is theoretically possible if two same-length files differ only in unsampled regions (outside the first/middle/last 8KB windows).

While a collision is rare, when it occurs the cached line index would be wrong, causing incorrect search results. Consider adding a secondary validation (e.g., compare the first 100 chars or the line count) before using the cached index.

(Path: src/components/ui/code-display.tsx, lines 767-808)

Comment on lines +197 to +206
const skipWhitespace = () => {
while (i < total) {
if (isCancelled(jobId)) return { ok: false as const, errorCode: "CANCELED" as const };
const code = text.charCodeAt(i);
if (!isWhitespaceCharCode(code)) break;
i += 1;
if ((i & 8191) === 0) maybeReportProgress();
}
return { ok: true as const };
};

Whitespace skipping reports progress every 8192 chars ((i & 8191) === 0), but for inputs with massive leading whitespace (e.g., gigabytes of spaces), the 200ms throttle check might not provide granular enough updates, since no actual format output is generated.

This could make the UI appear frozen. Consider also checking progress in push(), or reducing the throttle interval for whitespace-heavy inputs.

(Path: src/components/ui/code-display.worker.ts, lines 197-206)

Comment on lines +144 to +147
env.CCH_CODEDISPLAY_MAX_PRETTY_OUTPUT_BYTES,
DEFAULT_CODE_DISPLAY_CONFIG.maxPrettyOutputBytes,
{ min: 1_000_000 }
),

maxPrettyOutputBytes has a 1MB minimum, while highlightMaxChars defaults to 30k chars. For prettified JSON the expansion ratio can be 3-5x, so 30k chars → ~150KB output, well below the 1MB minimum.

However, if the user sets a very low CCH_CODEDISPLAY_HIGHLIGHT_MAX_CHARS (e.g., 5000) while keeping the default maxPrettyOutputBytes, the limits become inconsistent. Consider validating that maxPrettyOutputBytes >= highlightMaxChars * 10 (assuming at most 5x expansion).
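The proposed cross-field check could be sketched like this (field names follow the review comment; clamping rather than rejecting the value is an assumption):

```typescript
interface CodeDisplayLimits {
  highlightMaxChars: number;
  maxPrettyOutputBytes: number;
}

// Keep the output cap at least 10x the highlight threshold
// (~5x pretty-print expansion at 2 bytes per UTF-16 char).
function reconcileLimits(cfg: CodeDisplayLimits): CodeDisplayLimits {
  const minOutputBytes = cfg.highlightMaxChars * 10;
  return cfg.maxPrettyOutputBytes >= minOutputBytes
    ? cfg
    : { ...cfg, maxPrettyOutputBytes: minOutputBytes };
}
```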

(Path: src/components/ui/code-display-config.ts, lines 144-147)

Comment on lines +816 to +850
useEffect(() => {
if (!shouldOptimizeOnlyMatches) return;

const jobId = (onlyMatchesSearchJobRef.current += 1);
const jobTextKey = nonSseTextKey;
const text = nonSseTextForModeRef.current;
const query = debouncedOnlyMatchesQuery;

const controller = new AbortController();
onlyMatchesSearchAbortRef.current?.abort();
onlyMatchesSearchAbortRef.current = controller;

setOnlyMatchesMatches(null);
setOnlyMatchesSearchStatus("loading");
setOnlyMatchesSearchProgress({ processed: 0, total: text.length });

void searchLines({
text,
query,
maxResults: Math.min(resolvedMaxLines, 50_000),
onProgress: (p) => {
if (controller.signal.aborted) return;
if (p.stage !== "search") return;
setOnlyMatchesSearchProgress({ processed: p.processed, total: p.total });
},
signal: controller.signal,
}).then((res) => {
if (controller.signal.aborted) return;
if (jobId !== onlyMatchesSearchJobRef.current) return;
if (jobTextKey !== nonSseTextKeyRef.current) return;

if (!res.ok) {
setOnlyMatchesMatches(null);
setOnlyMatchesSearchStatus("error");
setOnlyMatchesSearchProgress(null);

If the user types a search query rapidly, each keystroke triggers a new searchLines worker job. The debounce on onlyMatchesQuery helps, but useDebounce is mocked to return the value immediately in tests (line 30 of the test file).

In production with a real debounce, rapid typing is fine. But if the debounce delay is very short (e.g., 100ms) and the user types fast, multiple concurrent searches could still be queued. Consider skipping the search when the previous one is still loading, or increasing the debounce delay.

(Path: src/components/ui/code-display.tsx, lines 816-850)

Comment on lines +120 to +135
beforeEach(() => {
workerClientMocks.buildLineIndex.mockReset();
workerClientMocks.searchLines.mockReset();
workerClientMocks.formatJsonPretty.mockReset();

workerClientMocks.buildLineIndex.mockResolvedValue({
ok: true,
lineStarts: new Int32Array([0]),
lineCount: 1,
});
workerClientMocks.searchLines.mockResolvedValue({ ok: true, matches: new Int32Array(0) });
workerClientMocks.formatJsonPretty.mockResolvedValue({
ok: false,
errorCode: "UNKNOWN",
});
});

all worker functions are mocked with default error responses (formatJsonPretty returns UNKNOWN error), requiring each test to override mocks

this pattern is error-prone - if a test forgets to mock a needed function, it silently gets error behavior instead of success. consider defaulting mocks to success states and only override for error-case tests

Prompt To Fix With AI
This is a comment left during a code review.
Path: tests/unit/code-display-perf-ui.test.tsx
Line: 120-135

Comment:
all worker functions are mocked with default error responses (`formatJsonPretty` returns `UNKNOWN` error), requiring each test to override mocks

this pattern is error-prone - if a test forgets to mock a needed function, it silently gets error behavior instead of success. consider defaulting mocks to success states and only override for error-case tests

How can I resolve this? If you propose a fix, please make it concise.

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
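The suggested default-to-success pattern could look like this minimal sketch. The result shapes mirror the mocks in the `beforeEach` above; `defaultWorkerResults` itself is a hypothetical helper, not part of the PR.

```typescript
// Default every worker mock to a success response so a test that forgets to
// override a mock gets the happy path, not a silent error branch.
function defaultWorkerResults() {
  return {
    buildLineIndex: { ok: true as const, lineStarts: new Int32Array([0]), lineCount: 1 },
    searchLines: { ok: true as const, matches: new Int32Array(0) },
    formatJsonPretty: { ok: true as const, pretty: "{}" },
  };
}

// In beforeEach, wire the defaults in; only error-case tests override, e.g.:
//   workerClientMocks.formatJsonPretty.mockResolvedValue({ ok: false, errorCode: "UNKNOWN" });
console.log(defaultWorkerResults().buildLineIndex.ok);
```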

Comment on lines +16 to +31
const FULL_HASH_MAX_CHARS = 200_000;
const WINDOW_CHARS = 8192;

let hash = 2166136261;

const update = (code: number) => {
hash ^= code;
hash = Math.imul(hash, 16777619);
};

if (len <= FULL_HASH_MAX_CHARS) {
for (let i = 0; i < len; i += 1) {
update(text.charCodeAt(i));
}
return `${len}:${(hash >>> 0).toString(36)}`;
}

for texts at exactly 200,000 chars (the FULL_HASH_MAX_CHARS boundary), full hashing is used. for 200,001 chars, windowed hashing is used

this creates a sharp performance cliff - a single character difference causes O(n) → O(1) behavior change. consider using gradual transition (e.g., 180k-220k) or documenting this threshold as critical performance boundary

Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/utils/text-key.ts
Line: 16-31

Comment:
for texts at exactly 200,000 chars (the `FULL_HASH_MAX_CHARS` boundary), full hashing is used. for 200,001 chars, windowed hashing is used

this creates a sharp performance cliff - a single character difference causes O(n) → O(1) behavior change. consider using gradual transition (e.g., 180k-220k) or documenting this threshold as critical performance boundary

How can I resolve this? If you propose a fix, please make it concise.
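For reference, a standalone sketch of the FNV-1a text key with a windowed-plus-strided fallback for large inputs. The function name and the exact sampling scheme are assumptions based on the snippet above, not the PR's code; the point is that the large-text branch still samples the whole string, so a middle edit changes the key even past the threshold.

```typescript
const FULL_HASH_MAX_CHARS = 200_000;
const WINDOW_CHARS = 8192;

// FNV-1a over char codes; above the threshold, hash the head and tail windows
// plus a global stride so the cost stays bounded but edits anywhere still
// perturb the key. The length prefix disambiguates different-size texts.
function textKey(text: string): string {
  const len = text.length;
  let hash = 2166136261;
  const update = (code: number) => {
    hash ^= code;
    hash = Math.imul(hash, 16777619);
  };
  if (len <= FULL_HASH_MAX_CHARS) {
    for (let i = 0; i < len; i += 1) update(text.charCodeAt(i));
  } else {
    for (let i = 0; i < WINDOW_CHARS; i += 1) update(text.charCodeAt(i));
    for (let i = len - WINDOW_CHARS; i < len; i += 1) update(text.charCodeAt(i));
    const stride = Math.max(1, Math.floor(len / WINDOW_CHARS));
    for (let i = 0; i < len; i += stride) update(text.charCodeAt(i));
  }
  return `${len}:${(hash >>> 0).toString(36)}`;
}

console.log(textKey("abc"));
```

The cliff at the boundary remains (full hash vs. sampled hash), but both sides stay O(n)-bounded by the window count, so the behavioral difference is collision probability rather than correctness.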

tesgth032 and others added 2 commits March 2, 2026 22:46
- Sync raw/pretty defaults when language changes (preserving explicit user toggles where possible)
- SSE search: avoid a full toLowerCase copy of data
- Skip main-thread formatting of very large JSON when the Worker is unavailable, avoiding jank
- Add unit tests covering the language-switch scenario
@tesgth032
Contributor Author

After a round of self-review I made 3 more robustness fixes (pushed to this PR branch):

  1. Sync the default raw/pretty mode when CodeDisplay's language prop changes (switching only when the old language's default is still active, so explicit user toggles are preserved)
  2. SSE search: match with a RegExp(i) literal instead of calling toLowerCase() on large evt.data, avoiding a full-string copy and the extra memory/jank it causes
  3. Worker unavailable: no longer fall back to main-thread JSON.parse/stringify for very large JSON pretty-printing (return WORKER_UNAVAILABLE so the caller falls back to plain text/download), avoiding a freeze in extreme cases

Also added a unit test covering the "default mode on language switch" scenario; bun run lint / bun run typecheck / bun run test / bun run build all pass locally.

Could CodeRabbit/the maintainers please re-review and update the CHANGES_REQUESTED status (the points raised in the old review have been fixed)?

@tesgth032
Contributor Author

@coderabbitai review

Fixed per the latest review and pushed (commit b426802):

  • only-matches: no longer start a searchLines full-text scan when the index is not ready or has failed (avoiding the expensive path), plus a regression assertion that searchLines is not called.
  • Worker: formatJsonPretty now checks isCancelled(jobId) before and after the parse/stringify fallback; cancellation takes priority and no result is emitted after cancel.

Please update the review verdict, thanks.

@coderabbitai

coderabbitai bot commented Mar 2, 2026

@tesgth032 Thanks for the continued improvements! I've received your fixes in commit b426802:

only-matches optimization - no longer starts a searchLines full-text scan when the index is not ready or has failed, avoiding the expensive path, with a regression assertion guarding against accidental calls

Worker cancel priority - formatJsonPretty checks isCancelled(jobId) before and after the parse/stringify fallback, so no result is emitted after cancellation

I will trigger a full PR review right away.

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
tests/unit/code-display-perf-ui.test.tsx (1)

591-613: The lineStarts array in the CR-only line-ending test may look off at first glance.

The test uses lineStarts: Int32Array.from([0, 2, 4]), but for the text "a\rb\r" (length 4):

  • Line 0 starts at position 0
  • Line 1 starts (after the first \r) at position 2
  • Line 2 starts (after the second \r) at position 4

Line 2 is actually empty (position 4 is already the end of the string), so matches: Int32Array.from([0, 1]) matching only the first two lines is correct. Consider adding a comment documenting the expected behavior of this edge case.

Suggested comment
  test("matches list strips CR-only line endings", async () => {
+   // Text "a\rb\r" has 3 lines: "a" (0-1), "b" (2-3), "" (4-4 empty)
+   // lineStarts: [0, 2, 4] corresponds to line start positions after CR
    const text = "a\rb\r";

    const { container, unmount } = renderWithIntl(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/code-display-perf-ui.test.tsx` around lines 591 - 613, The test
"matches list strips CR-only line endings" uses text = "a\rb\r" with lineStarts
= Int32Array.from([0, 2, 4]) and matches = Int32Array.from([0, 1]); add a
concise comment in that test explaining that the final lineStart value 4 equals
text.length which represents an empty trailing line (so only lines 0 and 1
contain content and are expected to match), and note that if the intent were to
include the empty trailing line then lineStarts/matches would need to be
adjusted—refer to the variables text, lineStarts and matches when adding the
comment.
src/components/ui/code-display.tsx (2)

374-378: JSON formatting during SSE data rendering can block the main thread.

When an SSE event's data is smaller than highlightMaxChars, safeJsonParse and stringifyPretty run synchronously during render (Lines 375-376). Individual event payloads are usually small, but expanding many events at once can still cause render jank.

The existing threshold check (Line 364: evt.data.length <= highlightMaxChars) already bounds each payload's size, so the practical risk is low.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display.tsx` around lines 374 - 378, The inline
synchronous JSON parse/pretty-print inside the JSX using safeJsonParse and
stringifyPretty on evt.data (rendered in SyntaxHighlighter) can block the main
thread when many events expand; move this work out of the render path by
precomputing the formatted string (or a parse result flag) either when the SSE
message is received (before storing into state) or by wrapping the logic in a
useMemo keyed by evt.data and highlightMaxChars so parsing/formatting only runs
when the input changes; ensure you still respect the existing threshold check
(evt.data.length <= highlightMaxChars) and preserve the fallback to raw evt.data
when parsing fails (parsed.ok).

287-301: The estimated row height for SSE virtualization can cause scroll jumps.

estimatedRowHeight = 44 is a fixed value, but actual row heights vary with content (about 44px when collapsed, taller with long titles). Fast scrolling can therefore produce slight content jumps.

This is a common virtual-list trade-off and is acceptable here (virtualization only activates above 200 events). If users later report a poor scrolling experience, consider dynamic row-height measurement.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display.tsx` around lines 287 - 301, The fixed
estimatedRowHeight (currently estimatedRowHeight = 44) can cause scroll jump
because actual row heights vary; replace the constant with a measured/derived
value: add state (e.g. estimatedRowHeight state) and measure visible row DOM
heights (via refs or a ResizeObserver inside the component that renders rows
referenced by rows, or compute an average height from a sample) and update
estimatedRowHeight accordingly, then recalc startIndex/endIndex/topPad/bottomPad
whenever the measured value changes so virtualization uses a realistic row
height; keep the existing useVirtual, startIndex, endIndex, topPad, bottomPad
and rows logic but derive estimatedRowHeight dynamically to avoid jumps.
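The measured-row-height suggestion can be reduced to two pure helpers, sketched below. All names are illustrative; in the component the height samples would come from a ResizeObserver on rendered rows, and the window math mirrors the existing startIndex/endIndex/topPad/bottomPad logic.

```typescript
// Average measured row heights, falling back to the fixed estimate of 44px
// when no rows have been measured yet.
function estimateRowHeight(measured: number[], fallback = 44): number {
  if (measured.length === 0) return fallback;
  return measured.reduce((acc, h) => acc + h, 0) / measured.length;
}

// Compute the virtual window from a (possibly measured) row height.
function virtualWindow(opts: {
  scrollTop: number;
  viewportHeight: number;
  rowHeight: number;
  rowCount: number;
  overscan: number;
}) {
  const { scrollTop, viewportHeight, rowHeight, rowCount, overscan } = opts;
  const startIndex = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const endIndex = Math.min(
    rowCount,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan,
  );
  return {
    startIndex,
    endIndex,
    topPad: startIndex * rowHeight,
    bottomPad: (rowCount - endIndex) * rowHeight,
  };
}

console.log(virtualWindow({ scrollTop: 0, viewportHeight: 440, rowHeight: 44, rowCount: 1000, overscan: 12 }));
```

Recomputing the window whenever the averaged height changes keeps padding consistent with what is actually on screen, which is what removes the jump.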
src/components/ui/code-display.worker.ts (1)

634-652: The second scan pass lacks progress reporting.

The first counting pass reports progress (Lines 622-628), but the second pass that builds the lineStarts array does not. For very large files the second pass can take noticeable time with no feedback to the user.

That said, the second pass does less work than the first (it needs no maxLines check), and the first pass has already reported 100%, so this is not blocking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display.worker.ts` around lines 634 - 652, the
second pass that builds lineStarts (the loop using text.charCodeAt, starts, idx,
isCancelled and jobId) lacks progress reporting; inside that loop, emit periodic
progress updates using the same mechanism as the first counting pass (the
progress-reporting block used for the first pass in this file) and return early
when isCancelled(jobId) is detected. Use the same progress-reporting
function/format/threshold to compute and send percentages (do not change the
reporting interface or semantics), and keep the frequency consistent with the
first pass so users get visible progress feedback on large files.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/components/ui/code-display.worker.ts`:
- Around line 818-837: The stringifyJsonPretty handler calls
stringifyPretty(msg.value, msg.indentSize) without re-checking cancellation, so
long-running stringify can finish after a cancel and still post a success; after
calling stringifyPretty (but before size checks and before calling post), call
isCancelled(jobId) and if true post the stringifyJsonPrettyResult with ok: false
and errorCode: "CANCELED" (same pattern used earlier) and return; ensure you
reference the existing symbols stringifyJsonPretty handler, stringifyPretty,
isCancelled, post, and the stringifyJsonPrettyResult message shape when adding
this check.


ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

📥 Commits

Reviewing files that changed from the base of the PR and between f13ec01 and b426802.

📒 Files selected for processing (3)
  • src/components/ui/code-display.tsx
  • src/components/ui/code-display.worker.ts
  • tests/unit/code-display-perf-ui.test.tsx


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/components/ui/code-display.worker.ts`:
- Around line 587-728: The long-running loops in buildLineIndex and searchLines
only check cancellation and post progress every 8192 iterations, causing slow
cancel responsiveness; make both functions async and inside their hot loops (the
places using (i & 8191) === 0 and (scan & 8191) === 0) add a periodic yield by
awaiting new Promise(r => setTimeout(r, 0)) (or Promise.resolve() if you prefer
microtask) alongside the existing progress/post and isCancelled checks, keep
using isCancelled(jobId) and post(...) as before, and update callers to await
the new async signatures so behavior and progress reporting remain identical
while cancellation becomes responsive.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

📥 Commits

Reviewing files that changed from the base of the PR and between f13ec01 and b426802.

📒 Files selected for processing (3)
  • src/components/ui/code-display.tsx
  • src/components/ui/code-display.worker.ts
  • tests/unit/code-display-perf-ui.test.tsx

@tesgth032
Contributor Author

@coderabbitai review

Additional fixes in this round (commit 3bda57d):

  • SSE expansion: moved JSON parse/pretty out of the render path into a child component's useMemo, and when virtualization is off, scroll events no longer call setState and re-render the whole table (less jank when scrolling after expanding).
  • Worker: buildLineIndex now builds lineStarts in a single pass; stringifyJsonPretty re-checks isCancelled(jobId) after stringify, keeping cancel semantics consistent.
  • getTextKey: added global strided sampling for very large texts, further reducing cache-key collisions.
  • tests: added an explanatory comment to the CR-only line-ending test case.

Please update the review verdict, thanks.
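The single-pass lineStarts build mentioned above can be sketched as follows (illustrative names, not the PR's exact Worker code; CRLF is treated as one break and a lone CR also ends a line, matching the CR-only test case):

```typescript
// One pass over the text: record each line's start offset, normalizing
// \r\n to a single break and treating a lone \r as a line ending too.
function buildLineStarts(text: string, maxLines = 1_000_000): Int32Array | null {
  const starts: number[] = [0];
  for (let i = 0; i < text.length; i += 1) {
    const code = text.charCodeAt(i);
    if (code === 10 /* \n */) {
      starts.push(i + 1);
    } else if (code === 13 /* \r */) {
      if (text.charCodeAt(i + 1) === 10) i += 1; // consume the \n of \r\n
      starts.push(i + 1);
    }
    if (starts.length > maxLines) return null; // corresponds to TOO_MANY_LINES
  }
  return Int32Array.from(starts);
}

console.log(buildLineStarts("a\rb\r")); // for "a\rb\r": starts 0, 2, 4
```

In the real Worker this loop would also post periodic progress and check cancellation at a fixed iteration stride; those concerns are omitted here to keep the indexing logic visible.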

@coderabbitai

coderabbitai bot commented Mar 2, 2026

@tesgth032 Got it! I've noted the optimizations in commit 3bda57d:

SSE scroll performance - JSON parse/pretty moved into useMemo; with virtualization off, scrolling no longer triggers a full-table rerender
Worker efficiency - buildLineIndex is now a single-pass scan; stringifyJsonPretty adds a cancellation check after stringify
getTextKey collisions - global strided sampling for very large texts lowers the cache-key collision probability
Test readability - explanatory comment added to the CR-only line-ending test case

I will trigger a full PR review right away.

我将立即触发完整的 PR review。


🧠 Learnings used
Learnt from: NieiR
Repo: ding113/claude-code-hub PR: 573
File: src/actions/model-prices.ts:275-335
Timestamp: 2026-01-10T06:20:04.478Z
Learning: In the `ding113/claude-code-hub` repository, Server Actions (files under `src/actions/*.ts`) currently return hardcoded Chinese error messages directly. This is a codebase-wide architectural decision that applies to all action files (e.g., model-prices.ts, users.ts, system-config.ts). Changing this pattern requires a coordinated i18n refactor across all Server Actions, which should be handled as a separate unified task rather than piecemeal changes in individual PRs.
✅ Actions performed

Review triggered.



@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
src/components/ui/code-display.tsx (1)

1029-1039: ⚠️ Potential issue | 🟡 Minor

Large-text downloads can still be mislabeled as application/json.

The current branch skips validation when isCandidateJson && text.length > 200_000 but keeps application/json, mislabeling large non-JSON text as JSON.

Suggested fix (default conservatively to text/plain; use application/json only when the content is verifiably JSON)
   const isCandidateJson = language === "json" && !(showOnlyMatches && onlyMatchesQuery);
-  let downloadType: "application/json" | "text/plain" = isCandidateJson
-    ? "application/json"
-    : "text/plain";
+  let downloadType: "application/json" | "text/plain" = "text/plain";

   // 避免对超大内容在主线程 JSON.parse;小内容可用于更准确决定 MIME 类型。
   const maxValidateJsonChars = 200_000;
-  if (isCandidateJson && text.length <= maxValidateJsonChars) {
-    const parsed = safeJsonParse(text);
-    if (!parsed.ok) downloadType = "text/plain";
+  if (isCandidateJson) {
+    const parsed = text.length <= maxValidateJsonChars ? safeJsonParse(text) : { ok: false as const };
+    if (parsed.ok) {
+      downloadType = "application/json";
+    }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display.tsx` around lines 1029 - 1039, The current
logic leaves downloadType as "application/json" for large texts when
isCandidateJson is true even if unvalidated; change the flow so downloadType
defaults to "text/plain" and only switch to "application/json" when
isCandidateJson AND text.length <= maxValidateJsonChars AND
safeJsonParse(text).ok; update the block around isCandidateJson, downloadType,
maxValidateJsonChars and safeJsonParse to perform this conservative check (use
symbols: isCandidateJson, downloadType, maxValidateJsonChars, safeJsonParse,
text, language, showOnlyMatches, onlyMatchesQuery).
src/components/ui/code-display.worker.ts (1)

587-719: ⚠️ Potential issue | 🟠 Major

Long loops still cannot respond to cancellation promptly.

buildLineIndex/searchLines are still synchronous hot loops, so a cancel message is only processed after the current task returns, which is when it lands in cancelledJobs. As a result, isCancelled(jobId) has essentially no effect on a running task, and cancel/retry lags noticeably on large texts.

Suggested fix (add yield points to the hot loops)
+const yieldToEventLoop = () => new Promise<void>((resolve) => setTimeout(resolve, 0));

-function buildLineIndex({
+async function buildLineIndex({
   text,
   jobId,
   maxLines,
 }: {
   text: string;
   jobId: number;
   maxLines: number;
-}):
+}): Promise<
   | { ok: true; lineStarts: Int32Array; lineCount: number }
-  | { ok: false; errorCode: "CANCELED" | "TOO_MANY_LINES"; lineCount?: number } {
+  | { ok: false; errorCode: "CANCELED" | "TOO_MANY_LINES"; lineCount?: number }
+> {
@@
     if ((i & 8191) === 0) {
+      await yieldToEventLoop();
+      if (isCancelled(jobId)) return { ok: false, errorCode: "CANCELED" };
       const now = performance.now();
       if (now - lastProgressAt > 200) {
         lastProgressAt = now;
         post({ type: "progress", jobId, stage: "index", processed: i, total });
       }
     }
   }
@@
-function searchLines({
+async function searchLines({
   text,
   query,
   jobId,
   maxResults,
 }: {
@@
-}): { ok: true; matches: Int32Array } | { ok: false; errorCode: "CANCELED" } {
+}): Promise<{ ok: true; matches: Int32Array } | { ok: false; errorCode: "CANCELED" }> {
@@
     while (scan < pos) {
       if ((scan & 8191) === 0 && isCancelled(jobId)) return { ok: false, errorCode: "CANCELED" };
+      if ((scan & 8191) === 0) {
+        await yieldToEventLoop();
+        if (isCancelled(jobId)) return { ok: false, errorCode: "CANCELED" };
+      }
       const code = text.charCodeAt(scan);
@@
-(self as DedicatedWorkerGlobalScope).onmessage = (e: MessageEvent<WorkerRequest>) => {
+(self as DedicatedWorkerGlobalScope).onmessage = async (e: MessageEvent<WorkerRequest>) => {
@@
-      const result = buildLineIndex({ text: msg.text, jobId, maxLines: msg.maxLines });
+      const result = await buildLineIndex({ text: msg.text, jobId, maxLines: msg.maxLines });
@@
-      const result = searchLines({
+      const result = await searchLines({
         text: msg.text,
         query: msg.query,
         jobId,
         maxResults: msg.maxResults,
       });

Also applies to: 721-878

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display.worker.ts` around lines 587 - 719,
buildLineIndex and searchLines run long synchronous loops so isCancelled(jobId)
cannot observe incoming cancel messages promptly; make both functions async
(change signatures of buildLineIndex and searchLines) and add periodic yield
points inside their hot loops (e.g. when (i & 8191)===0 in buildLineIndex and on
the scan/pos loop in searchLines) by awaiting a microtask (await
Promise.resolve()) or a short timeout, then re-check isCancelled(jobId) and
continue, and keep the existing progress post logic around the yield; update any
callers to await these functions.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

📥 Commits

Reviewing files that changed from the base of the PR and between b426802 and 3bda57d.

📒 Files selected for processing (4)
  • src/components/ui/code-display.tsx
  • src/components/ui/code-display.worker.ts
  • src/lib/utils/text-key.ts
  • tests/unit/code-display-perf-ui.test.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/unit/code-display-perf-ui.test.tsx
  • src/lib/utils/text-key.ts


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (4)
src/components/ui/code-display.tsx (4)

314-314: The virtualization threshold is hardcoded as 200; extract a constant or config option.

events.length > 200 in the useVirtual condition is a magic number. Consider extracting it as a named constant or adding it to codeDisplayConfig so it can be tuned and maintained later.

Suggested change
+const SSE_VIRTUAL_THRESHOLD = 200;
+
 function CodeDisplaySseEvents({
   // ...
 }) {
   // ...
-  const useVirtual = events.length > 200 && expandedRows.size === 0 && viewportHeight > 0;
+  const useVirtual = events.length > SSE_VIRTUAL_THRESHOLD && expandedRows.size === 0 && viewportHeight > 0;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display.tsx` at line 314, The virtualization threshold
is hardcoded as a magic number in the useVirtual condition (events.length >
200). Replace the literal with a named constant (e.g. VIRTUALIZATION_THRESHOLD)
or move it into the existing configuration object (e.g.
codeDisplayConfig.virtualizationThreshold) and update the useVirtual computation
(which references events, expandedRows and viewportHeight) to use that
constant/config value so the threshold can be tuned and maintained easily.

315-316: Make the SSE virtualization parameters configurable.

estimatedRowHeight = 44 and overscan = 12 are inconsistent with the codeDisplayConfig.virtualLineHeightPx / codeDisplayConfig.virtualOverscanLines values used elsewhere in the component. Either use the config values consistently or document why these differ.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display.tsx` around lines 315 - 316, the hardcoded
estimatedRowHeight and overscan are inconsistent with the
codeDisplayConfig.virtualLineHeightPx / codeDisplayConfig.virtualOverscanLines
values used elsewhere in the component; replace the two constants with values
read from config (codeDisplayConfig.virtualLineHeightPx in place of
estimatedRowHeight, codeDisplayConfig.virtualOverscanLines in place of
overscan), providing sensible fallback defaults for backward compatibility. If
the divergence is intentional, instead add a clear comment at the same location
explaining the difference and keep the config values overridable.

1035-1035: The download MIME validation threshold duplicates another threshold.

maxValidateJsonChars = 200_000 and maxSyncChars in jsonPrettySyncText (line 699) use the same value but are defined independently. Extract a shared constant to keep them consistent.

Suggested change
+const MAX_SYNC_JSON_CHARS = 200_000;
+
 // in jsonPrettySyncText:
-    const maxSyncChars = Math.min(codeDisplayConfig.highlightMaxChars, 200_000);
+    const maxSyncChars = Math.min(codeDisplayConfig.highlightMaxChars, MAX_SYNC_JSON_CHARS);

 // in handleDownload:
-    const maxValidateJsonChars = 200_000;
+    const maxValidateJsonChars = MAX_SYNC_JSON_CHARS;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display.tsx` at line 1035, Two independently defined
thresholds (maxValidateJsonChars and maxSyncChars inside jsonPrettySyncText) use
the same numeric value; extract a shared constant (e.g., JSON_MAX_CHARS) and
replace both usages with that constant so they remain consistent. Update the
constant declaration (export or file-scope) and change references in
maxValidateJsonChars and inside the jsonPrettySyncText function to use the new
shared symbol, removing the duplicated literal.

472-472: The large-content thresholds are hardcoded.

The isLargeContent condition content.length > 4000 || lineCount > 200 uses hardcoded values. These thresholds are independent of other config options (such as highlightMaxChars), which can lead to inconsistent behavior. Consider moving them into the config system as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display.tsx` at line 472, The large-content check
currently hardcodes thresholds in the expression isLargeContent = content.length
> 4000 || lineCount > 200; — change this to derive thresholds from existing
configuration (e.g., reuse highlightMaxChars or a new config like
largeContentMaxChars / largeContentMaxLines) so behavior is consistent with
other settings; update any relevant UI/config entry points that supply
highlightMaxChars to also provide the new threshold or reuse it, and replace the
direct numeric literals in isLargeContent with those config values (referencing
isLargeContent, content.length, lineCount, and highlightMaxChars/new
largeContentMax* config names to locate the code).

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

📥 Commits

Reviewing files that changed from the base of the PR and between b426802 and 3bda57d.

📒 Files selected for processing (4)
  • src/components/ui/code-display.tsx
  • src/components/ui/code-display.worker.ts
  • src/lib/utils/text-key.ts
  • tests/unit/code-display-perf-ui.test.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/unit/code-display-perf-ui.test.tsx
  • src/lib/utils/text-key.ts


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
src/components/ui/code-display.tsx (1)

1038-1048: ⚠️ Potential issue | 🟡 Minor

The download MIME decision can still mislabel over-threshold content as JSON.

The current logic skips validation when text.length > MAX_SYNC_JSON_CHARS yet keeps application/json, mislabeling downloads whose JSON validity is unknown.

Suggested fix
-    const isCandidateJson = language === "json" && !(showOnlyMatches && onlyMatchesQuery);
-    let downloadType: "application/json" | "text/plain" = isCandidateJson
-      ? "application/json"
-      : "text/plain";
+    const isCandidateJson = language === "json" && !(showOnlyMatches && onlyMatchesQuery);
+    let downloadType: "application/json" | "text/plain" = "text/plain";
@@
-    if (isCandidateJson && text.length <= maxValidateJsonChars) {
-      const parsed = safeJsonParse(text);
-      if (!parsed.ok) downloadType = "text/plain";
-    }
+    if (isCandidateJson && text.length <= maxValidateJsonChars) {
+      if (safeJsonParse(text).ok) {
+        downloadType = "application/json";
+      }
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display.tsx` around lines 1038 - 1048, The current
logic leaves downloadType as "application/json" for large texts (>
MAX_SYNC_JSON_CHARS) without validation, which can mislabel non-JSON files;
update the branch around isCandidateJson/downloadType so that if text.length >
MAX_SYNC_JSON_CHARS you set downloadType = "text/plain" (i.e., treat
large/unvalidated content as plain text) instead of keeping application/json;
keep the existing small-text safeJsonParse(path) check for validated JSON and
preserve the showOnlyMatches/onlyMatchesQuery logic used to compute
isCandidateJson.
🧹 Nitpick comments (1)
src/components/ui/code-display.tsx (1)

1001-1010: The non-Worker fallback should reuse the debounced query to reduce input jank on large texts.

This path uses onlyMatchesQuery directly, so when the Worker is unavailable and the text is large, every keystroke triggers a full split/filter. Switch to debouncedOnlyMatchesQuery (or enable debouncing conditionally based on size).

Suggested change
-    if (!onlyMatchesQuery) return null;
+    if (!debouncedOnlyMatchesQuery) return null;
@@
-    const matches = lines.filter((line) => line.includes(onlyMatchesQuery));
+    const matches = lines.filter((line) => line.includes(debouncedOnlyMatchesQuery));
@@
-  }, [language, nonSseTextForMode, onlyMatchesQuery, showOnlyMatches, shouldOptimizeOnlyMatches]);
+  }, [
+    language,
+    nonSseTextForMode,
+    debouncedOnlyMatchesQuery,
+    showOnlyMatches,
+    shouldOptimizeOnlyMatches,
+  ]);
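The debounced query referenced above is typically produced by a small debounce utility. As a framework-free sketch of the underlying idea (the PR's actual React hook may differ), wrapping the expensive filter in a debounce means it fires once per pause in typing rather than on every keystroke:

```typescript
// Generic trailing-edge debounce: the wrapped function only runs after
// `delayMs` of quiet time since the most recent call. Earlier pending
// calls are cancelled, so rapid keystrokes collapse into one invocation.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  delayMs: number,
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer); // drop the stale call
    timer = setTimeout(() => fn(...args), delayMs);
  };
}
```

In the component, the same effect is achieved by deriving a debounced copy of the query string and keying the expensive useMemo off that copy instead of the raw input value.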
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ui/code-display.tsx` around lines 1001 - 1010, The useMemo for
nonSseFilteredText currently uses onlyMatchesQuery directly causing expensive
split/filter on every keystroke for large texts; change it to use the
debouncedOnlyMatchesQuery when shouldOptimizeOnlyMatches is true (i.e., replace
checks and the line.includes(onlyMatchesQuery) with debouncedOnlyMatchesQuery
when optimizing), and update the useMemo dependency array to include
debouncedOnlyMatchesQuery (and retain onlyMatchesQuery if you still use it when
not optimizing); this reuses the debounced query for the non-Worker fallback and
prevents repeated full-text processing on each keypress.


📥 Commits

Reviewing files that changed from the base of the PR and between 3bda57d and eb23ccd.

📒 Files selected for processing (1)
  • src/components/ui/code-display.tsx

@tesgth032
Contributor Author

Addressed the review feedback on Worker cancellation responsiveness (so a cancelled task no longer occupies the Worker for a long time and blocks subsequent jobs):

  • src/components/ui/code-display.worker.ts: introduced a job queue so tasks stay serialized even after adding await/yield points; on cancel, queued jobs that have not started yet are removed best-effort, avoiding pointless computation.
  • buildLineIndex / searchLines: added a periodic yield (roughly every 50 ms) so the Worker event loop can process cancel messages and interrupt long loops promptly.

Verified locally: bun run lint, bun run typecheck, bun run test, and bun run build all pass.

@coderabbitai review

@coderabbitai

coderabbitai bot commented Mar 2, 2026

@tesgth032 Thanks for the continued improvements! I've noted the Worker cancellation-responsiveness optimizations in your latest commit:

Job queue introduced - code-display.worker.ts now implements a task queue that keeps processing serial even with yields added, and best-effort removes cancelled jobs that have not started yet

Periodic yield - the long loops in buildLineIndex and searchLines now yield roughly every 50 ms, so the Worker event loop can process cancel messages and interrupt the job promptly

Local verification passed - lint, typecheck, test, and build all pass

These changes effectively fix the issue of cancelled tasks continuing to occupy the Worker and blocking new jobs. I'll trigger a full PR review right away.

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@tesgth032
Contributor Author

The backticks in my previous comment were escaped inside a PowerShell string, which corrupted its rendering; reposting here (content identical):

  • src/components/ui/code-display.worker.ts: introduced a job queue so tasks stay serialized even after adding await/yield points; on cancel, queued jobs that have not started yet are removed best-effort, avoiding pointless computation.
  • buildLineIndex / searchLines: added a periodic yield (roughly every 50 ms) so the Worker event loop can process cancel messages and interrupt long loops promptly.

Verified locally: bun run lint, bun run typecheck, bun run test, and bun run build all pass.

@tesgth032
Contributor Author

Made further fixes per the AI review comments (only code introduced by this PR was changed):

  • Virtual Pretty: index failure/cancellation no longer falls back silently; the Plain/Virtual toggle area now shows the concrete fallback reason (Too many lines / Canceled / Failed), with i18n added for all 5 languages.
  • getTextKey: switched to a more robust lightweight hash (full FNV-1a for small texts; multi-window + stride sampling + double hash for very large texts) to lower the risk of collisions causing the wrong lineIndex/pretty result to be reused.
  • Added/adjusted the corresponding unit-test assertions; switched the Japanese translation to half-width parentheses to fix the failing i18n conformance test.

Verified locally: bun run lint / bun run typecheck / bun run test / bun run build all pass.
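For reference, the "full FNV-1a for small texts" building block mentioned above looks like this in its standard 32-bit form. This is a generic sketch of the well-known algorithm; the PR's windowed/stride-sampled variant for very large texts is not reproduced here.

```typescript
// Standard 32-bit FNV-1a over a string's UTF-16 code units. Cheap and
// allocation-free, which is why it suits cache keys. The multiply by
// the FNV prime (16777619 = 2^24 + 2^8 + 0x93) is done with shifts to
// stay within uint32 arithmetic.
function fnv1a32(text: string): number {
  let hash = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (let i = 0; i < text.length; i++) {
    hash ^= text.charCodeAt(i);
    hash =
      (hash +
        ((hash << 1) + (hash << 4) + (hash << 7) + (hash << 8) + (hash << 24))) >>>
      0;
  }
  return hash >>> 0;
}
```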

Comment on lines +1113 to +1116
if (!q) return sseEvents;
const re = new RegExp(escapeRegExp(q), "i");
return sseEvents.filter((evt) => re.test(evt.event) || re.test(evt.data));
}, [searchQuery, sseEvents]);

Case-sensitivity inconsistency between SSE and non-SSE search

The SSE filter at this line uses a case-insensitive regex ("i" flag), while the non-SSE optimized search path uses searchLines which internally calls String.prototype.indexOf — a case-sensitive operation.

A user who types "error" to find an occurrence of "Error" will:

  • Get matches in SSE view (case-insensitive)
  • Get zero matches in large JSON/text view (case-sensitive)

This silent behavioral inconsistency will confuse users. Consider lowercasing both query and text in searchLines (or applying .toLowerCase() on both sides in the fallback and passing a caseInsensitive flag to the worker), or documenting the difference in the UI placeholder text.

Suggested change
    const re = new RegExp(escapeRegExp(q), "i");
    return sseEvents.filter((evt) => re.test(evt.event) || re.test(evt.data));

The real fix needs to happen in code-display.worker.ts searchLines to support case-insensitive matching.
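A minimal sketch of such a case-insensitive variant, assuming `searchLines` walks an array of lines with `indexOf` (the names here are illustrative, not the worker's actual signature):

```typescript
// Case-insensitive line search built on indexOf. Lowercasing the query
// once and each line once per search keeps the cost linear in the text
// size, matching the SSE filter's "i"-flag regex behaviour.
function searchLinesInsensitive(lines: string[], query: string): number[] {
  const q = query.toLowerCase();
  const matches: number[] = [];
  for (let i = 0; i < lines.length; i++) {
    if (lines[i].toLowerCase().indexOf(q) !== -1) matches.push(i);
  }
  return matches;
}
```

With a `caseInsensitive` flag threaded through the worker protocol, the worker-side loop would do the same lowercasing before comparison.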


Comment on lines +46 to 48
const SSE_ESTIMATED_ROW_HEIGHT_PX = 44;
const SSE_OVERSCAN = 12;


Hardcoded SSE row height may break virtualization accuracy

SSE_ESTIMATED_ROW_HEIGHT_PX = 44 is used as a fixed estimate for collapsed SSE event row height. This estimate is sensitive to:

  • Font size changes (different user agents / zoom levels)
  • Theme differences (border widths, padding)
  • Tailwind class changes to the <div className="rounded-md border bg-background/50"> wrapper

When the actual row height differs from the estimate, the virtual scroll position will drift — rows will render at wrong positions or overlapping areas will appear empty. Since this component already has a ResizeObserver for viewport height, consider measuring the actual first rendered row height with a ref once it mounts, and using that as the estimatedRowHeight:

const [measuredRowHeight, setMeasuredRowHeight] = useState(SSE_ESTIMATED_ROW_HEIGHT_PX);
// measure via ref after first visible row renders
const estimatedRowHeight = measuredRowHeight;

Comment on lines +21 to +27
if (!ref.current) return;
if (ref.current.value === value) return;
ref.current.value = value;
}, [value]);

return (
<textarea

Mixed controlled/uncontrolled textarea pattern can diverge on fast updates

The component uses defaultValue={value} (uncontrolled — only applied at initial mount) together with a useEffect that manually writes to ref.current.value. If the parent re-renders with a new value before the useEffect fires (e.g., in concurrent mode or under Suspense), the displayed content could briefly lag or show stale text.

A safer pattern is to drop defaultValue entirely and rely only on the imperative DOM write, or use a plain <textarea value={value} onChange={() => {}} readOnly /> (fully controlled) for correctness. For the performance goal of avoiding controlled-input overhead, keeping the current pattern is acceptable, but the stale-render window should be acknowledged in a comment:

// Using uncontrolled textarea + manual DOM write to avoid React re-render overhead
// for very large strings. defaultValue seeds the initial render; subsequent changes
// are applied imperatively in the effect.

@tesgth032
Contributor Author

tesgth032 commented Mar 3, 2026

Covered another easy-to-miss jank path (only code introduced by this PR was changed):

  • With CCH_CODEDISPLAY_WORKER_ENABLE=false (or the Worker unavailable) and very large content, only-matches could previously hit the synchronous split/filter fallback, which still risked jank.
  • Large only-matches now always uses the "incremental line index + incremental search" strategy (cancellable, with progress), and the workerEnabled parameter guarantees no Worker is constructed when Workers are disabled.

Changes:

  • src/components/ui/code-display-worker-client.ts: the no-worker fallback now supports periodic yields plus progress callbacks; buildLineIndex/searchLines take a workerEnabled flag to force the no-worker path.
  • src/components/ui/code-display.tsx: large only-matches no longer depends on workerEnabled to enable the optimization; workerEnabled is passed through when calling the worker client.
  • Unit tests added: tests/unit/code-display-worker-client.test.ts and tests/unit/code-display-perf-ui.test.tsx.

Verified locally: bun run lint / bun run lint:fix / bun run typecheck / bun run test / bun run build all pass.

Comment on lines +77 to +84
const controller = new AbortController();
indexAbortRef.current = controller;

setIndexStatus("loading");
setIndexProgress({ processed: 0, total: currentText.length });
setLineStarts(null);
setLineCount(0);


workerEnabled config setting silently ignored

buildLineIndex is called without the workerEnabled parameter, so it always defaults to true. When CCH_CODEDISPLAY_WORKER_ENABLE=false is set by the user, this component will still attempt to use the Worker (and only fall back if the Worker itself is unavailable), effectively ignoring the explicit configuration.

The parent CodeDisplay passes all other config props (lineHeightPx, overscanLines, contextLines, maxLines, perfDebugEnabled) but omits workerEnabled. Add it to the component's props and thread it through:

// In the component props:
workerEnabled: boolean;

// In the buildLineIndex call:
void buildLineIndex({
  text: currentText,
  maxLines,
  onProgress: (p) => { ... },
  signal: controller.signal,
  workerEnabled,  // ← pass through
}).then((res) => { ... });

And at the call site in code-display.tsx:

<CodeDisplayVirtualHighlighter
  ...
  workerEnabled={codeDisplayConfig.workerEnabled}
/>

Comment on lines +271 to +275
const rafRef = useRef<number | null>(null);
const scrollTopRef = useRef(0);

const [viewportHeight, setViewportHeight] = useState(0);
const [scrollTop, setScrollTop] = useState(0);

Expanding any SSE row disables virtualization for the entire list

useVirtual is false whenever expandedRows.size > 0, which means opening a single row causes all events to be rendered simultaneously. For a list with 1 000+ events, this causes the very main-thread DOM explosion the PR aims to prevent.

Consider keeping virtualization active even with expanded rows by measuring the expanded row's actual rendered height instead of toggling virtualization off entirely. A simpler intermediate approach: cap non-virtual rendering to a safe event count even when rows are expanded:

const NON_VIRTUAL_EXPANDED_LIMIT = 500; // tuneable

const useVirtual =
  events.length > SSE_VIRTUAL_THRESHOLD &&
  viewportHeight > 0 &&
  (expandedRows.size === 0 || events.length > NON_VIRTUAL_EXPANDED_LIMIT);

This restores the performance guard for large lists while still allowing the simpler non-virtual path for smaller ones.


Comment on lines +1 to +5
"use client";

export type FormatJsonPrettyErrorCode =
| "INVALID_JSON"
| "CANCELED"

MAX_SYNC_JSON_CHARS constant duplicated

MAX_SYNC_JSON_CHARS = 200_000 is defined at module level in code-display.tsx (line 41) and also inside the formatJsonPretty function in this file. The two values are in sync today, but having the same magic number in two places risks silent drift.

Extract it to code-display-config.ts (or a shared constants file) and import it in both locations:

// code-display-config.ts (or a new constants.ts)
export const MAX_SYNC_JSON_CHARS = 200_000;

Comment on lines 1 to 3
{
"$schema": "https://biomejs.dev/schemas/2.4.4/schema.json",
"vcs": {
"enabled": true,

Unrelated $schema removal

The "$schema" line was dropped from biome.json — this is unrelated to the performance optimization and removes IDE JSON-schema validation for the biome config. Please revert this change or keep it in a separate commit.


@tesgth032
Copy link
Contributor Author

Followed up on the Greptile/CodeRabbit/Gemini review points (only code added/changed by this PR was touched):

  • Restored $schema in biome.json and removed the @biomejs/biome version pin (back to dev's ^2).
  • Extracted MAX_SYNC_JSON_CHARS into src/components/ui/code-display-config.ts to avoid duplicated-constant drift.
  • The Virtual Highlighter now passes workerEnabled through, so CCH_CODEDISPLAY_WORKER_ENABLE=false no longer implicitly enables the Worker.
  • The SSE list stays virtualized (no more whole-list render on expand); search filtering resets expansion/selection to avoid index mismatches.
  • The download MIME type is more conservative for very large texts; the non-worker only-matches fallback reuses the debounced query to avoid split/filter on every keystroke.
  • Added a note to CodeDisplayPlainTextarea explaining the defaultValue + imperative-write trade-off.

Verified locally: bun run lint / bun run lint:fix / bun run typecheck / bun run test / bun run build all pass.

@gemini-code-assist
Contributor

tesgth032 Thanks for the continued improvements and for addressing the review feedback from Greptile, CodeRabbit, and Gemini!

I've noted the following changes in your latest commit:

  • Config & dependency management: restored $schema in biome.json, adjusted the @biomejs/biome version, and extracted MAX_SYNC_JSON_CHARS into a shared constant.
  • Virtual highlighting & Worker: the Virtual Highlighter now correctly passes the workerEnabled parameter through, so the Worker is no longer implicitly enabled.
  • SSE list: improved virtualized rendering of the SSE list and reset expansion/selection on search filtering, improving both performance and UX.
  • Download & search logic: more conservative download MIME handling for very large texts, and better debouncing of only-matches search in non-Worker mode.
  • Readability: added a design note to CodeDisplayPlainTextarea.

I'll trigger a full PR review now to verify these fixes. Thanks for your effort!

@tesgth032 tesgth032 changed the title perf: optimize Pretty performance for long content in sessions/messages → [WIP] perf: optimize Pretty performance for long content in sessions/messages Mar 3, 2026
Comment on lines +22 to +25
export type BuildLineIndexErrorCode = "CANCELED" | "TOO_MANY_LINES" | "UNKNOWN";

export type BuildLineIndexResult =
| { ok: true; lineStarts: Int32Array; lineCount: number }

stringifyJsonPretty exported but never used in application code

stringifyJsonPretty is exported from this module but is not imported anywhere in the actual application — only in tests/unit/code-display-worker-client.test.ts. Exporting an unused public API adds noise and unnecessary bundle surface.

If this is intentionally reserved for future use, a comment saying so would help. Otherwise, consider removing the export (or the function entirely if it has no caller) to keep the module surface clean:

Suggested change
export type StringifyJsonPrettyErrorCode = "CANCELED" | "OUTPUT_TOO_LARGE" | "UNKNOWN";

export type StringifyJsonPrettyResult =
  | { ok: true; text: string }
  | { ok: false; errorCode: StringifyJsonPrettyErrorCode };

(removing the export keyword on the function itself, or noting intent with a comment)


Comment on lines +271 to +275
const rafRef = useRef<number | null>(null);
const scrollTopRef = useRef(0);

const [measuredRowHeight, setMeasuredRowHeight] = useState(SSE_ESTIMATED_ROW_HEIGHT_PX);
const measuredRowHeightRef = useRef(measuredRowHeight);

SSE virtual mode disables expandedRows-based multi-expand UX silently

When events.length > SSE_VIRTUAL_THRESHOLD (200), the component renders button rows and a single activeRow detail panel — the <details> / expandedRows path is completely bypassed. Any rows a user previously expanded in non-virtual mode (e.g. from a filtered result set that had ≤200 events) would silently switch to the single-active-panel UX if the filter is cleared and the list grows past 200.

More concretely: expandedRows state is maintained inside CodeDisplaySseEvents and resets when events changes. The render branches between virtual (button + active panel) and non-virtual (<details>) based purely on events.length. This creates two completely different interaction models with no visual warning to the user.

Consider at minimum showing a note like "Viewing in compact mode (N events)" when virtual mode is active, or keeping <details> semantics by measuring each expanded row's actual height (as suggested in a previous review round).


Comment on lines +158 to +165
function parseSseForCodeDisplay(sseText: string): DisplaySseEvent[] {
const events: DisplaySseEvent[] = [];

let eventName = "";
let dataLines: string[] = [];

const flushEvent = () => {
if (dataLines.length === 0) {

parseSseForCodeDisplay silently ignores unrecognised field names

The SSE spec allows id: (last-event-id) and retry: fields. Lines carrying those fields reach the end of the loop body without matching any if branch, so they are silently dropped. This is intentional for a display-only parser, but it means that if a server sends an id: line as the last thing before a blank line (no data: line), flushEvent() is still called at the end of the loop and will no-op harmlessly. The behaviour is correct, but a brief comment would help future readers understand why id: / retry: handling is intentionally absent.

// id: and retry: fields are intentionally not handled — this parser
// is display-only and does not track last-event-id or reconnect timing.
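For readers unfamiliar with the field-based SSE format, the behaviour described above can be sketched roughly like this. This is a hypothetical minimal version, not the component's exact code; the `DisplaySseEvent` shape and helper names are assumptions.

```typescript
// Minimal display-only SSE parser: collects `event:` and `data:` fields,
// flushes an event on each blank line, and (like the component's parser)
// intentionally drops id:/retry:/comment lines.
interface DisplaySseEvent {
  event: string;
  data: string;
}

function parseSseMinimal(sseText: string): DisplaySseEvent[] {
  const events: DisplaySseEvent[] = [];
  let eventName = "";
  let dataLines: string[] = [];
  const flush = () => {
    // No data: lines means nothing to show; flushing is a harmless no-op.
    if (dataLines.length > 0) {
      events.push({ event: eventName || "message", data: dataLines.join("\n") });
    }
    eventName = "";
    dataLines = [];
  };
  for (const line of sseText.split(/\r?\n/)) {
    if (line === "") flush(); // blank line terminates the current event
    else if (line.startsWith("event:")) eventName = line.slice(6).trim();
    else if (line.startsWith("data:")) dataLines.push(line.slice(5).trimStart());
    // id:, retry:, and ":" comment lines fall through and are ignored.
  }
  flush(); // trailing event without a final blank line
  return events;
}
```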

Comment on lines 280 to +283
describe("backfill", () => {
test(
"backfill copies non-warmup message_request rows when ledger rows are missing",
{ timeout: 60_000 },
async () => {
const userId = nextUserId();
const providerId = nextProviderId();
const keepA = await insertMessageRequestRow({
key: nextKey("backfill-a"),
userId,
providerId,
costUsd: "1.100000000000000",
});
const keepB = await insertMessageRequestRow({
key: nextKey("backfill-b"),
userId,
providerId,
costUsd: "2.200000000000000",
});
const warmup = await insertMessageRequestRow({
key: nextKey("backfill-warmup"),
userId,
providerId,
blockedBy: "warmup",
});

await db.delete(usageLedger).where(inArray(usageLedger.requestId, [keepA, keepB, warmup]));

const summary = await backfillUsageLedger();
expect(summary.totalProcessed).toBeGreaterThanOrEqual(2);

const rows = await db
.select({ requestId: usageLedger.requestId })
.from(usageLedger)
.where(inArray(usageLedger.requestId, [keepA, keepB, warmup]));
const requestIds = rows.map((row) => row.requestId);

expect(requestIds).toContain(keepA);
expect(requestIds).toContain(keepB);
expect(requestIds).not.toContain(warmup);
}
);
test("backfill copies non-warmup message_request rows when ledger rows are missing", {
timeout: 60_000,
}, async () => {

Unrelated whitespace-only reformatting

This test was reformatted (the { timeout } options object moved inline) without any semantic change. It would be cleaner to keep such cosmetic changes in a separate commit so the diff of this PR stays focused on the performance feature.


Comment on lines +130 to +145
function formatJsonPrettyStreaming({
text,
jobId,
indentSize,
maxOutputBytes,
}: {
text: string;
jobId: number;
indentSize: number;
maxOutputBytes: number;
}):
| { ok: true; text: string }
| { ok: false; errorCode: "INVALID_JSON" | "CANCELED" | "OUTPUT_TOO_LARGE" } {
// Strict streaming pretty printer: no object tree is built; we scan character by character and validate JSON syntax.
// Goal: any valid JSON input yields a complete pretty output; invalid input returns INVALID_JSON (no lenient "repair").


Synchronous formatter blocks worker queue — isCancelled checks are unreachable during execution

formatJsonPrettyStreaming is entirely synchronous. In a Web Worker the event loop is still single-threaded: while this function runs, no onmessage handler can fire. This means:

  1. A cancel message sent during the formatter's execution is queued but not delivered until the function returns.
  2. Every isCancelled(jobId) call inside the formatter checks a cancelledJobs Set that will never be updated during the run — all cancellation checks inside the streaming formatter are effectively dead code.
  3. Any new buildLineIndex or searchLines jobs queued while the formatter runs must wait until it completes, even if the format was already "cancelled" by the user on the main thread.

The main-thread UX is fine (the client resolves the promise immediately via cancelWorkerJob), but for very large JSON files the worker thread can remain occupied for several seconds after the user hits Cancel, delaying all subsequent search/index operations.

Consider either:

  • Adding periodic await yieldToEventLoop() calls inside formatJsonPrettyStreaming at the (i & 8191) === 0 checkpoints (turns it async and allows cancel messages to be processed), or
  • Documenting that cancellation of a running format job only takes effect on the next job.
Path: src/components/ui/code-display.worker.ts, lines 130-145
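A minimal sketch of the first option, with hypothetical names (the real `formatJsonPrettyStreaming` does per-character JSON scanning at each step): yielding a macrotask at the existing `(i & 8191) === 0` checkpoints lets queued `onmessage` events — including `cancel` — be delivered mid-run, so the `cancelledJobs` check can actually observe a cancellation.

```typescript
// Sketch only — names are hypothetical, not the PR's actual implementation.
const cancelledJobs = new Set<number>();

// Yielding via a macrotask lets queued onmessage events (e.g. cancel) run.
function yieldToEventLoop(): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function formatWithYields(
  text: string,
  jobId: number
): Promise<{ ok: true; chars: number } | { ok: false; errorCode: "CANCELED" }> {
  for (let i = 0; i < text.length; i++) {
    // ... per-character scanning/validation would happen here ...
    if ((i & 8191) === 0) {
      await yieldToEventLoop();
      // Now reachable: a cancel message delivered during the yield has
      // already updated cancelledJobs before this check runs.
      if (cancelledJobs.has(jobId)) return { ok: false, errorCode: "CANCELED" };
    }
  }
  return { ok: true, chars: text.length };
}
```

The trade-off is added latency: browsers clamp nested `setTimeout(0)` timers, so a cheaper yield (e.g. a `MessageChannel`-based one) may be preferable if the checkpoint interval stays this frequent.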

Comment on lines +326 to +373
export async function stringifyJsonPretty({
value,
indentSize,
maxOutputBytes,
signal,
}: {
value: unknown;
indentSize: number;
maxOutputBytes: number;
signal?: AbortSignal;
}): Promise<StringifyJsonPrettyResult> {
ensureInitialized();
const w = getWorker();
if (!w) {
if (signal?.aborted) return { ok: false, errorCode: "CANCELED" };
try {
const pretty = JSON.stringify(value, null, indentSize);
if (pretty.length * 2 > maxOutputBytes) return { ok: false, errorCode: "OUTPUT_TOO_LARGE" };
return { ok: true, text: pretty };
} catch {
return { ok: false, errorCode: "UNKNOWN" };
}
}

const jobId = genJobId();
return new Promise((resolve) => {
if (signal?.aborted) {
resolve({ ok: false, errorCode: "CANCELED" });
return;
}

const onAbort = () => cancelWorkerJob(jobId);
signal?.addEventListener("abort", onAbort);

const wrappedResolve: StringifyJsonPrettyResolve = (v) => {
signal?.removeEventListener("abort", onAbort);
resolve(v);
};

pending.set(jobId, { kind: "stringifyJsonPretty", resolve: wrappedResolve });
try {
w.postMessage({ type: "stringifyJsonPretty", jobId, value, indentSize, maxOutputBytes });
} catch {
pending.delete(jobId);
wrappedResolve({ ok: false, errorCode: "UNKNOWN" });
}
});
}

stringifyJsonPretty is exported but never used

stringifyJsonPretty is exported from this module and the worker handles stringifyJsonPretty messages, but no component in the codebase imports or calls this function. It is dead code that adds maintenance surface (the worker message type, the PendingJob union arm, and the response handler all need to stay in sync with it).

If it is genuinely unneeded, remove it from this file, from the worker, and from WorkerRequest/WorkerResponse. If it is intended for future use, add a // TODO: used by X comment so reviewers know it is intentional.

Path: src/components/ui/code-display-worker-client.ts, lines 326-373

Comment on lines +265 to +267
if (indexStatus !== "ready" || !lineStarts || !range) {
return null;
}

One-frame null render causes brief flash when index becomes ready

When buildLineIndex completes it calls setIndexStatus("ready"), setLineStarts(...), etc. in the .then() callback — triggering a React render. The useEffect that calls updateWindow(true) runs after that render. Between those two renders:

  • indexStatus === "ready"
  • range === null (not yet computed)

This condition causes the component to return null for one frame, briefly replacing the loading spinner with empty space before the first window appears.

A simple fix is to also guard on lineStarts:

Suggested change

```diff
-  if (indexStatus !== "ready" || !lineStarts || !range) {
-    return null;
-  }
+  if (indexStatus !== "ready" || !lineStarts || lineCount <= 0 || !range) {
+    return null;
+  }
```

Or, keep the loading spinner visible until range is non-null:

if (indexStatus === "loading" || (indexStatus === "ready" && !range)) {
  // show spinner
}
Path: src/components/ui/code-display-virtual-highlighter.tsx, lines 265-267

Comment on lines +126 to +143
{Array.from({ length: Math.max(0, endIndex - startIndex) }, (_, localIdx) => {
const idx = startIndex + localIdx;
const lineNo = matches[idx] ?? 0;
const [start, end] = getLineRange(text, lineStarts, lineNo);
const lineText = text.slice(start, end);

return (
<div
// idx is sufficient as a key: matches is a stable array (changes only with query/content)
key={idx}
className="flex items-center gap-3 px-3"
style={{ height: lineHeightPx, lineHeight: `${lineHeightPx}px` }}
>
<span className="w-14 shrink-0 text-muted-foreground tabular-nums">{lineNo + 1}</span>
<span className="whitespace-pre">{lineText}</span>
</div>
);
})}

Matched term not highlighted within result lines

Each result row shows the full line text but does not visually highlight where the search query appears. For long lines with the match buried in the middle, users have to manually scan the entire line to find the term.

Consider splitting the line text around the match positions and wrapping the matched fragment in a <mark> or <span className="bg-yellow-200 dark:bg-yellow-800"> element:

// simple example (case-sensitive, query passed as prop)
const idx = lineText.indexOf(query);
if (idx === -1) return <span className="whitespace-pre">{lineText}</span>;
return (
  <>
    <span className="whitespace-pre">{lineText.slice(0, idx)}</span>
    <mark className="bg-yellow-200 dark:bg-yellow-800">{lineText.slice(idx, idx + query.length)}</mark>
    <span className="whitespace-pre">{lineText.slice(idx + query.length)}</span>
  </>
);

This would require passing query as a prop to CodeDisplayMatchesList.

Path: src/components/ui/code-display-matches-list.tsx, lines 126-143
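Extending the reviewer's single-match sketch, a hypothetical helper could split a line into segments around every case-insensitive occurrence of the query; the render path then wraps `matched` segments in `<mark>` and leaves the rest plain. This keeps the string work out of JSX and easy to test. (Names here are illustrative, not part of the PR.)

```typescript
// Illustrative helper, not part of the PR: split lineText into segments
// around every case-insensitive occurrence of query.
type Segment = { text: string; matched: boolean };

function splitByMatches(lineText: string, query: string): Segment[] {
  if (query.length === 0) return [{ text: lineText, matched: false }];
  const lower = lineText.toLowerCase();
  const q = query.toLowerCase();
  const segments: Segment[] = [];
  let pos = 0;
  for (let idx = lower.indexOf(q); idx !== -1; idx = lower.indexOf(q, pos)) {
    // Unmatched text before this occurrence.
    if (idx > pos) segments.push({ text: lineText.slice(pos, idx), matched: false });
    // The occurrence itself, preserving original casing.
    segments.push({ text: lineText.slice(idx, idx + query.length), matched: true });
    pos = idx + query.length;
  }
  if (pos < lineText.length) segments.push({ text: lineText.slice(pos), matched: false });
  return segments;
}
```

A caller would map these to `segment.matched ? <mark>{segment.text}</mark> : segment.text`; since the worker-side search may already be case-insensitive, the helper's casing rule should mirror whatever `searchLines` uses.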


Labels

area:session area:UI enhancement New feature or request size/XL Extra Large PR (> 1000 lines)

Projects

Status: Backlog

Development

Successfully merging this pull request may close these issues.

1 participant