perf: optimize Pretty-view performance for long sessions/messages content #860
Conversation
- Option 1 (default on): degrade the Pretty view for long content to a plain-text textarea, avoiding SyntaxHighlighter DOM explosion
- Option 3 (optional): virtual-window syntax highlighting (render only the visible slice), with a Plain/Virtual view toggle
- Move JSON pretty / line indexing / only-matches search into a Web Worker, with progress reporting and cancellation; fall back automatically on failure
- Add CodeDisplay-related env configuration and update .env.example
- Add unit tests covering the large-content strategies and fallback paths
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough
Adds runtime config parsing and a context for CodeDisplay; a Worker (formatting/indexing/search) with a no-Worker fallback; virtual-highlight, plain-text, matches-list, and SSE rendering implementations; a text-fingerprint utility; and a large set of localization strings and tests. The root layout passes the parsed config into the provider.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request aims to address the Pretty-view performance of long sessions/messages content.
```tsx
const [viewportHeight, setViewportHeight] = useState(0);
const [scrollTop, setScrollTop] = useState(0);
const [expandedRows, setExpandedRows] = useState<Set<number>>(() => new Set());
```
When the events list changes due to search filtering, the row indices stored in expandedRows become stale, which can cause the wrong rows to be expanded. To fix this and improve the user experience, add a useEffect hook that resets the expandedRows state whenever the events prop changes. This ensures the expanded state is cleared after filter results change, avoiding confusion.
const [expandedRows, setExpandedRows] = useState<Set<number>>(() => new Set());
useEffect(() => {
  // Reset expanded-row state when the events list changes (e.g. due to search) to avoid inconsistency
setExpandedRows(new Set());
}, [events]);
Actionable comments posted: 1
🧹 Nitpick comments (5)
src/components/ui/code-display-virtual-highlighter.tsx (1)
79-133: Suggestion: stabilize the onRequestPlainView callback dependency.
onRequestPlainView is in the useEffect dependency array; if the parent component doesn't wrap the callback in useCallback, every parent render re-runs the index-building effect, triggering unnecessary re-indexing. Suggested fix:
Store the latest callback reference in a useRef so it can be dropped from the dependency array:

```diff
+ const onRequestPlainViewRef = useRef(onRequestPlainView);
+ onRequestPlainViewRef.current = onRequestPlainView;

  useEffect(() => {
    // ... existing code ...
    void buildLineIndex({
      // ...
    }).then((res) => {
      // ...
      if (!res.ok) {
        setIndexStatus("error");
        setIndexProgress(null);
-       onRequestPlainView?.();
+       onRequestPlainViewRef.current?.();
        return;
      }
      // ...
    });
    return () => controller.abort();
- }, [maxLines, onRequestPlainView, perfDebugEnabled, textKey]);
+ }, [maxLines, perfDebugEnabled, textKey]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display-virtual-highlighter.tsx` around lines 79 - 133, The effect depends on onRequestPlainView which may be unstable and cause unnecessary reruns; stabilize by capturing the latest callback in a ref (e.g. create onRequestPlainViewRef and update it whenever onRequestPlainView changes) and use onRequestPlainViewRef.current inside the useEffect promise/result handling instead of referencing onRequestPlainView in the dependency array, then remove onRequestPlainView from the useEffect deps while keeping other deps (maxLines, perfDebugEnabled, textKey) so buildLineIndex and the abort logic (indexAbortRef/controller/jobId) only rerun when truly needed.
src/components/ui/code-display.tsx (3)
871-886: resolveTextForAction should be wrapped in useCallback. The function depends on several state variables and is itself a dependency of handleDownload and handleCopy. The current implementation is correct, but useCallback would express the dependency relationships more clearly and avoid potential closure pitfalls. That said, since the function reads so much state, the current inline implementation is also reasonable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display.tsx` around lines 871 - 886, Wrap the resolveTextForAction function in useCallback to make its dependencies explicit and avoid closure issues: replace the inline declaration of resolveTextForAction with a useCallback that returns the same logic and include all referenced variables in its dependency array (language, content, mode, resolvedPrettyText, showOnlyMatches, onlyMatchesQuery, shouldOptimizeOnlyMatches, nonSseFilteredText, onlyMatchesLineStarts, onlyMatchesMatches, and buildOnlyMatchesText). Then ensure handleDownload and handleCopy list resolveTextForAction in their dependency arrays so they update when the resolved-text logic changes.
288-373: The key for SSE row rendering may not be unique. Line 295 uses `${index}-${evt.event}` as the key; if the same index ever carried multiple events of the same type (unlikely as that is), keys would collide. Consider a unique identifier, or just the index. Suggested change:

```diff
- key={`${index}-${evt.event}`}
+ key={index}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display.tsx` around lines 288 - 373, The key for each rendered SSE row (in the rows.map callback that returns the <div>) is currently `${index}-${evt.event}`, which can collide; update the key on the root element returned by rows.map to use a truly unique identifier—prefer using an explicit unique field from the event if available (e.g., evt.id or evt.timestamp) or fall back to the numeric index alone (key={index}) or a stable combination like `${index}-${evt.id}`; change the key prop on the element created in the rows.map callback to the chosen unique value (replace the `${index}-${evt.event}` usage).
224-249: Potential rAF handle leak. On unmount, the second useEffect (lines 244-249) cleans up rafRef, but if scrollRef.current is cleared before unmount there may be an ordering issue. Consider merging the rAF cleanup into the first useEffect's cleanup function, or keeping the two effects' dependencies consistent. In practice the current implementation should be safe, since React runs cleanup functions in order.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display.tsx` around lines 224 - 249, The rAF handle cleanup (rafRef and cancelAnimationFrame) may race with the resize observer cleanup; move the rAF cancellation into the same useEffect that sets up scrollRef/ResizeObserver (the effect that defines el, update, ro) so both ro?.disconnect() and if (rafRef.current !== null) cancelAnimationFrame(rafRef.current) run together in the same cleanup, or alternatively make both effects share the same dependency array to guarantee ordering; update references: scrollRef, rafRef, useEffect, ResizeObserver, update, ro, and cancelAnimationFrame to implement this change.
src/components/ui/code-display-worker-client.ts (1)
251-268: Remove the unnecessary await. The await in `return await new Promise(...)` is redundant; `return new Promise(...)` behaves the same. The pattern repeats in several functions. Suggested change:

```diff
- return await new Promise((resolve) => {
+ return new Promise((resolve) => {
```

The same applies to stringifyJsonPretty (line 296), buildLineIndex (line 356), and searchLines (line 415).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display-worker-client.ts` around lines 251 - 268, The returned Promise in functions like the one creating jobId via genJobId() is being awaited unnecessarily (e.g., "return await new Promise(...)"); remove the redundant await and just "return new Promise(...)" in this block and the other similar functions stringifyJsonPretty, buildLineIndex, and searchLines so the promise is returned directly; keep the existing abort handling (signal?.addEventListener/onAbort, cancelWorkerJob(jobId)), pending.set(...) and w.postMessage(...) logic intact and only remove the await keyword.
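As a minimal sketch of the pattern this comment suggests — the `runJob`/`pending`/`nextJobId` names mirror the review text but are illustrative, not the PR's exact client code — returning the promise directly behaves identically to `return await new Promise(...)` inside an async function (minus one extra microtask tick), and the abort wiring is unaffected:

```typescript
type JobResult = { ok: boolean; errorCode?: "CANCELED" | "UNKNOWN" };

const pending = new Map<number, (r: JobResult) => void>();
let nextJobId = 1;

// Returning the promise directly is equivalent to `return await new Promise(...)`
// inside an async function; the abort handling is unchanged.
function runJob(signal?: AbortSignal): Promise<JobResult> {
  return new Promise((resolve) => {
    const jobId = nextJobId++;
    pending.set(jobId, resolve);
    signal?.addEventListener("abort", () => {
      // Drop the bookkeeping entry and settle the promise on cancellation.
      pending.delete(jobId);
      resolve({ ok: false, errorCode: "CANCELED" });
    });
    // In the real client, this is where w.postMessage({ jobId, ... }) would go.
  });
}
```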
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/ui/code-display-matches-list.tsx`:
- Around line 16-18: The code unconditionally uses nextStart - 1 as an inclusive
end which, combined with the later slice(start, end) (exclusive upper bound),
drops the final character for the last line or single-line texts; change the end
computation to produce an exclusive end index: compute nextStart = lineNo + 1 <
lineCount ? (lineStarts[lineNo + 1] ?? text.length) : text.length, then set end
= (text[nextStart - 1] === '\n' ? nextStart - 1 : nextStart) and return [start,
end]; this preserves the final character while still excluding a trailing
newline when present (references: variables nextStart, lineStarts, lineNo,
lineCount, text and the later slice(start, end)).
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (17)
- .env.example
- messages/en/dashboard.json
- messages/ja/dashboard.json
- messages/ru/dashboard.json
- messages/zh-CN/dashboard.json
- messages/zh-TW/dashboard.json
- src/app/[locale]/layout.tsx
- src/app/providers.tsx
- src/components/ui/code-display-config-context.tsx
- src/components/ui/code-display-config.ts
- src/components/ui/code-display-matches-list.tsx
- src/components/ui/code-display-plain-textarea.tsx
- src/components/ui/code-display-virtual-highlighter.tsx
- src/components/ui/code-display-worker-client.ts
- src/components/ui/code-display.tsx
- src/components/ui/code-display.worker.ts
- tests/unit/code-display-perf-ui.test.tsx
```typescript
function getTextKey(text: string): string {
  const len = text.length;
  if (len === 0) return "0:0:0:0:0:0";
```
[MEDIUM] [SIMPLIFY-COMPLEXITY] Duplicate function definition
The getTextKey function is defined identically in two separate files:
- src/components/ui/code-display-virtual-highlighter.tsx (this file, lines 31-42)
- src/components/ui/code-display.tsx (added lines 89-100 in the diff)
Why this is a problem: Code duplication increases maintenance burden and risk of inconsistency if one copy is updated without the other.
Suggested fix: Extract to a shared utility file and import from both locations:
```typescript
// src/lib/utils/text-key.ts
export function getTextKey(text: string): string {
  const len = text.length;
  if (len === 0) return "0:0:0:0:0:0";
  const first = text.charCodeAt(0);
  const second = len > 1 ? text.charCodeAt(1) : 0;
  const mid = text.charCodeAt(len >> 1);
  const penultimate = len > 1 ? text.charCodeAt(len - 2) : 0;
  const last = text.charCodeAt(len - 1);
  return `${len}:${first}:${second}:${mid}:${penultimate}:${last}`;
}
```
}There was a problem hiding this comment.
Code Review Summary
This PR introduces significant performance optimizations for the CodeDisplay component when handling large content. The approach uses Web Workers for heavy computation, virtual highlighting for large text, and plain textarea fallbacks to prevent DOM explosion.
Important Note: The PR title contains "[未完成]" ("unfinished"), indicating this is work in progress. Consider whether a full review is appropriate at this stage.
PR Size: XL
- Lines changed: 3,584 (3,336 additions, 248 deletions)
- Files changed: 17
For XL PRs, consider splitting into smaller PRs:
- Core infrastructure: Config types, context provider, worker client
- UI components: Plain textarea, virtual highlighter, matches list
- Main component refactor: CodeDisplay changes
- Tests: Test coverage for new features
Issues Found
| Category | Critical | High | Medium | Low |
|---|---|---|---|---|
| Logic/Bugs | 0 | 0 | 0 | 0 |
| Security | 0 | 0 | 0 | 0 |
| Error Handling | 0 | 0 | 0 | 0 |
| Types | 0 | 0 | 0 | 0 |
| Comments/Docs | 0 | 0 | 0 | 0 |
| Tests | 0 | 0 | 1 | 0 |
| Simplification | 0 | 0 | 1 | 0 |
Medium Priority Issues (Should Fix)
- [SIMPLIFY-COMPLEXITY] Duplicate function definition (inline comment posted)
  - The getTextKey function is duplicated in two files
  - Extract to a shared utility to reduce maintenance burden
- [TEST-INCOMPLETE] Missing test coverage for worker error paths
  - Tests exist for happy paths but not for:
    - Worker client fallback scenarios (when Worker unavailable)
    - Error handling in formatJsonPretty/stringifyJsonPretty
    - VirtualHighlighter index failure fallback
Positive Observations
- i18n properly implemented for all 5 languages
- Good use of AbortController for cancellable operations
- Appropriate fallback mechanisms when Worker unavailable
- Clean separation of concerns with config/context/worker
Review Coverage
- Logic and correctness - Clean
- Security (OWASP Top 10) - Clean (uses safe Blob types, no XSS vectors)
- Error handling - Clean (errors properly surfaced via result types)
- Type safety - Clean (good use of discriminated unions)
- Documentation accuracy - Clean (comments match implementation)
- Test coverage - Needs improvement (worker error paths untested)
- Code clarity - Minor issue (duplicate function)
Automated review by Claude AI
src/components/ui/code-display.tsx
Outdated
```tsx
function getTextKey(text: string): string {
  const len = text.length;
  if (len === 0) return "0:0:0:0:0:0";

  const first = text.charCodeAt(0);
  const second = len > 1 ? text.charCodeAt(1) : 0;
  const mid = text.charCodeAt(len >> 1);
  const penultimate = len > 1 ? text.charCodeAt(len - 2) : 0;
  const last = text.charCodeAt(len - 1);

  return `${len}:${first}:${second}:${mid}:${penultimate}:${last}`;
}
```
Duplicated code - this getTextKey function is identical to the one in code-display-virtual-highlighter.tsx:12-23. Extract to a shared utility to avoid duplication.
```typescript
let workerSingleton: Worker | null = null;
let initialized = false;
let nextJobId = 1;
```
Worker singleton could cause test isolation issues. Consider instance-based workers or proper cleanup between tests.
```typescript
w.onerror = () => {
  // Worker crashed or failed to load: clear pending jobs; subsequent calls fall back to the main thread
  for (const [jobId, job] of pending.entries()) {
    job.resolve({ ok: false, errorCode: "UNKNOWN" });
    pending.delete(jobId);
  }
  workerSingleton?.terminate();
  workerSingleton = null;
  initialized = false;
};
```
Worker crashes reset initialized flag but don't provide error feedback to UI. Consider adding retry logic or user-visible error state.
- Fix last-line truncation in CodeDisplayMatchesList / buildOnlyMatchesText (with CRLF support)
- Reset expanded state after the SSE event list is filtered, avoiding index misalignment
- Extract getTextKey into a shared util, avoiding duplicate implementations
- Stabilize onRequestPlainView with a ref in the virtual highlighter, avoiding redundant indexing
- Simplify the Worker client's Promise returns (remove redundant await)
- Add a unit test covering that the last line is not truncated
Actionable comments posted: 3
🧹 Nitpick comments (1)
tests/unit/code-display-perf-ui.test.tsx (1)
256-259: Avoid hard-coding the "Show all" copy in assertions. Read the corresponding field from dashboardMessages directly, so translation copy changes don't cause unrelated failures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/code-display-perf-ui.test.tsx` around lines 256 - 259: the test hard-codes the "Show all" copy inside a waitFor assertion; read the corresponding field from the localized messages object and assert on that value instead. Locate the code using waitFor and container.querySelector('[data-testid="code-display-only-matches-toggle"]'), and replace (btn?.textContent || "").includes("Show all") with the matching key from dashboardMessages (e.g. dashboardMessages.showAll or dashboardMessages.codeDisplay.onlyMatchesToggle), asserting the text contains that variable, so the test doesn't break when translation copy changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/ui/code-display-worker-client.ts`:
- Around line 110-118: getWorker currently constructs a new Worker synchronously
and can throw (bypassing callers' null-check fallback); wrap the Worker creation
in a try/catch around the new Worker(...) call inside getWorker, log or swallow
the error, ensure workerSingleton remains null on failure, and return null so
callers relying on if (!w) can perform fallback; reference getWorker,
workerSingleton and supportsWorker to locate the change.
In `@src/lib/utils/text-key.ts`:
- Around line 10-21: The current getTextKey samples 5 chars and causes high
collision risk; replace the sampling scheme with a full-text hash: update
getTextKey to compute a deterministic hash of the entire text (for example a
fast non-crypto hash like FNV-1a or a SHA-1/sha256 hex) and return that
(optionally prefixed by text length for extra safety), so keys uniquely
represent content; ensure the function still accepts a string and returns a
string and update any consumers expecting the old `${len}:...` format if needed.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (7)
- src/components/ui/code-display-matches-list.tsx
- src/components/ui/code-display-virtual-highlighter.tsx
- src/components/ui/code-display-worker-client.ts
- src/components/ui/code-display.tsx
- src/lib/utils/index.ts
- src/lib/utils/text-key.ts
- tests/unit/code-display-perf-ui.test.tsx
```typescript
if (text.charCodeAt(i) === 10) {
  lineCount += 1;
```
Only counts \n (LF) as line breaks, but doesn't handle \r\n (CRLF) or \r (CR) line endings. This creates inconsistency with code-display-matches-list.tsx:22-26 which does handle CRLF. Files with Windows line endings will have incorrect line indices.
```diff
- if (text.charCodeAt(i) === 10) {
-   lineCount += 1;
+ if (text.charCodeAt(i) === 10 || text.charCodeAt(i) === 13) {
+   // Skip CRLF as single line break
+   if (text.charCodeAt(i) === 13 && i + 1 < text.length && text.charCodeAt(i + 1) === 10) {
+     i += 1;
+   }
+   lineCount += 1;
```
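A self-contained sketch of CRLF-aware line-start scanning in the spirit of this suggestion — the `buildLineStarts` name and the plain-array return type are assumptions for illustration, not the worker's actual implementation:

```typescript
// Scan line starts treating LF, CR, and CRLF uniformly, so files with
// Windows line endings get correct line indices.
function buildLineStarts(text: string): number[] {
  const starts = [0]; // line 0 always starts at offset 0
  for (let i = 0; i < text.length; i++) {
    const code = text.charCodeAt(i);
    if (code === 10) {
      starts.push(i + 1); // LF
    } else if (code === 13) {
      // CRLF counts as a single break; skip the LF half.
      if (i + 1 < text.length && text.charCodeAt(i + 1) === 10) i += 1;
      starts.push(i + 1);
    }
  }
  return starts;
}
```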
```typescript
  errorCode: "CANCELED" | "UNKNOWN";
};

const cancelledJobs = new Set<number>();
```
Memory leak - cancelledJobs Set grows unbounded. Jobs are deleted in finally block after completion, but if a job crashes before reaching finally or if cancellations accumulate, this Set will grow indefinitely.
Add periodic cleanup or size limit check.
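One possible shape for the suggested size limit — the cap value and `markCancelled` helper are illustrative assumptions, not the PR's code. A JavaScript `Set` iterates in insertion order, so the first iterator value is always the oldest entry:

```typescript
// Size-capped cancellation set: bounds memory even if jobs crash before
// their `finally` cleanup runs.
const MAX_CANCELLED = 1024; // assumed cap, tune as needed
const cancelledJobs = new Set<number>();

function markCancelled(jobId: number): void {
  cancelledJobs.add(jobId);
  if (cancelledJobs.size > MAX_CANCELLED) {
    // Evict the oldest marker; Set preserves insertion order.
    const oldest = cancelledJobs.values().next().value;
    if (oldest !== undefined) cancelledJobs.delete(oldest);
  }
}
```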
```typescript
  }
}

const starts = new Int32Array(lineCount);
```
Using Int32Array limits maximum file size to ~2GB (Int32 max value is 2,147,483,647). Files larger than this will cause integer overflow in line start positions. Consider documenting this limitation or using BigInt64Array for very large file support.
```typescript
if (!res.ok) {
  setIndexStatus("error");
  setIndexProgress(null);
  onRequestPlainViewRef.current?.();
  return;
```
Silent fallback - when line indexing fails, it automatically calls onRequestPlainView() without showing an error message to the user. Users won't know why the virtual highlight view suddenly switched to plain text. Consider showing a toast notification explaining the fallback reason.
```typescript
const chunks: string[] = [];
let outLen = 0;

const indentCache = new Map<number, string>();
const getIndent = (level: number) => {
  const cached = indentCache.get(level);
  if (cached) return cached;
  const str = " ".repeat(level * indentSize);
  indentCache.set(level, str);
  return str;
};

const push = (s: string): { ok: true } | { ok: false; errorCode: "OUTPUT_TOO_LARGE" } => {
  chunks.push(s);
  outLen += s.length;
  if (estimateUtf16Bytes(outLen) > maxOutputBytes) {
    return { ok: false, errorCode: "OUTPUT_TOO_LARGE" };
  }
  return { ok: true };
```
The chunks array grows without upfront size validation. For malformed JSON with deeply nested structures, this could accumulate large amounts of data before hitting maxOutputBytes check. Consider adding periodic size checks or early termination if chunk count exceeds a threshold.
```typescript
const code = text.charCodeAt(i);
if (!isWhitespaceCharCode(code)) break;
i += 1;
if ((i & 8191) === 0) maybeReportProgress();
```
Cancellation check happens every 8192 iterations ((i & 8191) === 0). For very large files, this might be too coarse - it could take seconds to respond to cancellation. Consider reducing to 2048 or 4096 for more responsive cancellation.
src/components/ui/code-display.tsx
Outdated
```typescript
  return JSON.stringify(value, null, 2);
}

function estimateUtf8BytesUpToLimit(text: string, limitBytes: number): number {
```
Resolved duplication: getTextKey is now exported from @/lib/utils/text-key.ts and imported here (line 33). The duplication issue from the previous review has been addressed.
- Return null when Worker construction fails (CSP/resource errors), ensuring the main-thread fallback is used
- Update state synchronously when the JSON Pretty cancel button is pressed, avoiding a stuck loading state
- Switch getTextKey to an FNV-1a hash (full text for small inputs, fixed windows for large ones) to reduce collision risk
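A hedged sketch of the FNV-1a-based fingerprint this commit describes — the thresholds and window sizes below are illustrative assumptions, not the PR's actual constants:

```typescript
// 32-bit FNV-1a over a slice of the string, kept in uint32 range via Math.imul.
function fnv1a32(text: string, start: number, end: number): number {
  let h = 0x811c9dc5; // FNV offset basis
  for (let i = start; i < end; i++) {
    h ^= text.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime
  }
  return h >>> 0;
}

function getTextKey(text: string): string {
  const len = text.length;
  const SMALL = 64 * 1024; // assumed threshold: hash everything below this
  if (len <= SMALL) return `${len}:${fnv1a32(text, 0, len)}`;
  // Large inputs: hash fixed windows at the head, middle, and tail.
  const WIN = 16 * 1024; // assumed window size
  const head = fnv1a32(text, 0, WIN);
  const midStart = (len >> 1) - (WIN >> 1);
  const mid = fnv1a32(text, midStart, midStart + WIN);
  const tail = fnv1a32(text, len - WIN, len);
  return `${len}:${head}:${mid}:${tail}`;
}
```

Compared with the earlier five-character sampling, a collision now requires matching hashes rather than matching sampled characters, at the cost of bounded extra scanning for large inputs.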
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/ui/code-display-worker-client.ts`:
- Around line 272-274: The call to w.postMessage(...) after pending.set(...) can
throw synchronously and leave pending entries and abort listeners dangling; wrap
each w.postMessage(...) (the sites that set pending with { kind: "...", resolve:
wrappedResolve, onProgress } and then post) in a try/catch, and in the catch
remove the pending entry (pending.delete(jobId)), remove any associated
abort/cleanup listener, and reject the original promise (call
wrappedResolve.reject or otherwise reject the Promise) with the caught error so
callers don’t hang; ensure you apply this same try/catch + cleanup pattern for
all occurrences that use pending, jobId, wrappedResolve, and onProgress before
posting to the worker.
In `@src/components/ui/code-display.tsx`:
- Around line 516-525: The effect that starts JSON pretty tasks is incorrectly
allowing a duplicate run when jsonPrettyStatus changes to "loading" because the
early-return check (comparing jsonPrettyTextKey === jsonSourceKey and
jsonPrettyStatus in ["ready","invalid","tooLarge","canceled","error"]) omits
"loading"; update that conditional to also treat "loading" as an
in-progress/terminal state so the effect returns early and doesn't abort the
previous controller then start a duplicate task. Apply the same change to the
other similar checks referenced (the blocks around the symbols
jsonPrettyTextKey, jsonSourceKey, jsonPrettyStatus and the controller abort
logic in the other effect ranges) so all duplicate-trigger locations include
"loading" in their excluded statuses.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (3)
- src/components/ui/code-display-worker-client.ts
- src/components/ui/code-display.tsx
- src/lib/utils/text-key.ts
src/components/ui/code-display.tsx (Outdated)

```tsx
useEffect(() => {
  // events 可能由搜索过滤产生新列表:重置展开状态以避免索引错位
  void events.length;
```
Redundant statement - accessing events.length has no effect. The [events] dependency in the effect (line 222) already triggers the reset. Remove this line.
```suggestion
// events 可能由搜索过滤产生新列表:重置展开状态以避免索引错位
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/components/ui/code-display.tsx
Line: 220
Comment:
Redundant statement - accessing `events.length` has no effect. The `[events]` dependency in the effect (line 222) already triggers the reset. Remove this line.
```suggestion
// events 可能由搜索过滤产生新列表:重置展开状态以避免索引错位
```
How can I resolve this? If you propose a fix, please make it concise.

```ts
if (!ws.ok) return ws;

if (stack.length === 0) {
  if ((rootState as "value" | "end") === "end") {
```
Unnecessary type assertion - rootState is already typed as "value" | "end" (line 184). The cast has no effect.
```suggestion
if (rootState === "end") {
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/components/ui/code-display.worker.ts
Line: 428
Comment:
Unnecessary type assertion - `rootState` is already typed as `"value" | "end"` (line 184). The cast has no effect.
```suggestion
if (rootState === "end") {
```
How can I resolve this? If you propose a fix, please make it concise.

- Return early when JSON pretty enters loading for the same key, avoiding a redundant abort + restart
- Clean up pending entries and listeners when Worker postMessage throws synchronously, so Promises never hang forever
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/ui/code-display-worker-client.ts`:
- Around line 349-368: The fallback long-loop routines in buildLineIndex and
searchLines need to check for cancellation inside their loops: add periodic
checks of signal.aborted (or equivalent AbortSignal) within both the
line-counting loop and the line-starts building loop and immediately return a
consistent abort result (e.g., { ok: false, errorCode: "ABORTED" }) when
aborted; apply the same pattern to the analogous loop block referenced (the code
at the other fallback around lines 420-432) so that both counting and indexing
loops exit promptly on cancellation.
- Around line 224-227: The cancel postMessage call is missing the try/catch used
elsewhere; wrap the call to getWorker() and w.postMessage({ type: "cancel",
jobId }) in a try-catch (same pattern as
formatJsonPretty/stringifyJsonPretty/buildLineIndex/searchLines) so any errors
from worker state are caught and handled (log or ignore consistent with existing
handlers) and do not throw; reference getWorker(), w.postMessage, type: "cancel"
and jobId when applying the change.
In `@src/components/ui/code-display.tsx`:
- Line 432: The component initializes mode with getDefaultMode(language) but
never updates it when language changes, causing stale mode; add logic in the
CodeDisplay component to update mode whenever language changes by invoking
setMode(getDefaultMode(language)) (e.g., in a useEffect that depends on
language) so mode stays in sync with language while preserving user toggles when
appropriate; reference the existing state variables mode and setMode and the
helper getDefaultMode to locate where to add the effect.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (2)
- src/components/ui/code-display-worker-client.ts
- src/components/ui/code-display.tsx
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/ui/code-display-worker-client.ts`:
- Around line 127-133: The initialized flag in ensureInitialized is set too
early (initialized = true) before getWorker() and binding handlers; move the
assignment so it occurs only after a non-null worker is returned and after you
attach the worker.onmessage and worker.onerror handlers in ensureInitialized (so
that when getWorker() succeeds you won't skip binding), and keep initialized
false when getWorker() fails so subsequent calls can retry binding pending
handlers; reference ensureInitialized, initialized, getWorker, worker.onmessage,
worker.onerror, and pending to locate the changes.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (1)
src/components/ui/code-display-worker-client.ts
Fixed per CodeRabbit's latest actionable comments, and additionally filled in:

Verified locally. Please re-review / update the status, CodeRabbit.
```tsx
if (cached && cached.key === nonSseTextKey) {
  setOnlyMatchesLineStarts(cached.lineStarts);
  setOnlyMatchesIndexStatus("ready");
  setOnlyMatchesIndexProgress(null);
} else {
  const controller = new AbortController();
  onlyMatchesIndexAbortRef.current?.abort();
  onlyMatchesIndexAbortRef.current = controller;

  setOnlyMatchesLineStarts(null);
  setOnlyMatchesIndexStatus("loading");
  setOnlyMatchesIndexProgress({ processed: 0, total: text.length });

  void buildLineIndex({
    text,
    maxLines: codeDisplayConfig.maxLineIndexLines,
    onProgress: (p) => {
      if (controller.signal.aborted) return;
      if (p.stage !== "index") return;
      setOnlyMatchesIndexProgress({ processed: p.processed, total: p.total });
    },
    signal: controller.signal,
  }).then((res) => {
    if (controller.signal.aborted) return;
    if (jobId !== onlyMatchesIndexJobRef.current) return;

    if (!res.ok) {
      setOnlyMatchesLineStarts(null);
      setOnlyMatchesIndexStatus("error");
      setOnlyMatchesIndexProgress(null);
      return;
    }

    onlyMatchesLineIndexCacheRef.current = {
      key: nonSseTextKey,
      lineStarts: res.lineStarts,
      lineCount: res.lineCount,
    };
    setOnlyMatchesLineStarts(res.lineStarts);
    setOnlyMatchesIndexStatus("ready");
    setOnlyMatchesIndexProgress(null);
  });
```
cache key relies solely on nonSseTextKey which uses FNV-1a windowed hashing for large texts. for files >200k chars, hash collision is theoretically possible if two same-length files differ only in unsampled regions (outside first/middle/last 8KB windows)
while collision is rare, when it occurs the cached line index would be wrong, causing incorrect search results. consider adding a secondary validation (e.g., compare first 100 chars or line count) before using cached index
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/components/ui/code-display.tsx
Line: 767-808
Comment:
cache key relies solely on `nonSseTextKey` which uses FNV-1a windowed hashing for large texts. for files >200k chars, hash collision is theoretically possible if two same-length files differ only in unsampled regions (outside first/middle/last 8KB windows)
while collision is rare, when it occurs the cached line index would be wrong, causing incorrect search results. consider adding a secondary validation (e.g., compare first 100 chars or line count) before using cached index
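The secondary-validation idea above can be sketched as a cache lookup that cross-checks a cheap fingerprint. A minimal illustration; `LineIndexCacheEntry` and `cachedIndexFor` are hypothetical names, not the PR's actual cache shape.

```typescript
interface LineIndexCacheEntry {
  key: string;          // windowed FNV-1a fingerprint
  prefix: string;       // first 100 chars as a cheap secondary check
  lineCount: number;
  lineStarts: Int32Array;
}

// Windowed hashes can (rarely) collide on large inputs, so verify a cheap
// secondary fingerprint before trusting the cached line index.
function cachedIndexFor(
  cache: LineIndexCacheEntry | null,
  key: string,
  text: string
): LineIndexCacheEntry | null {
  if (!cache || cache.key !== key) return null;
  if (cache.prefix !== text.slice(0, 100)) return null;
  return cache;
}
```

A prefix comparison is O(1) regardless of text size, so the extra check costs almost nothing relative to rebuilding the index.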
How can I resolve this? If you propose a fix, please make it concise.

```ts
const skipWhitespace = () => {
  while (i < total) {
    if (isCancelled(jobId)) return { ok: false as const, errorCode: "CANCELED" as const };
    const code = text.charCodeAt(i);
    if (!isWhitespaceCharCode(code)) break;
    i += 1;
    if ((i & 8191) === 0) maybeReportProgress();
  }
  return { ok: true as const };
};
```
whitespace skipping reports progress every 8192 chars ((i & 8191) === 0), but for inputs with massive leading whitespace (e.g., gigabytes of spaces), the 200ms throttle check might not provide enough granular updates since no actual format output is generated
this could make the UI appear frozen. consider also checking progress in push() or reducing throttle interval for whitespace-heavy inputs
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/components/ui/code-display.worker.ts
Line: 197-206
Comment:
whitespace skipping reports progress every 8192 chars (`(i & 8191) === 0`), but for inputs with massive leading whitespace (e.g., gigabytes of spaces), the 200ms throttle check might not provide enough granular updates since no actual format output is generated
this could make the UI appear frozen. consider also checking progress in `push()` or reducing throttle interval for whitespace-heavy inputs
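One way to decouple progress reporting from output volume is to throttle on elapsed time rather than processed characters. A standalone sketch under that assumption; `makeProgressReporter` is an illustrative name, not the worker's actual helper.

```typescript
// Throttle progress on elapsed time instead of character count, so inputs
// dominated by whitespace (which produce no output) still surface updates.
function makeProgressReporter(
  now: () => number,
  post: (processed: number) => void,
  intervalMs = 200
): (processed: number) => void {
  let last = now();
  return (processed: number) => {
    const t = now();
    if (t - last >= intervalMs) {
      last = t;
      post(processed);
    }
  };
}
```

Injecting the clock (`now`) keeps the throttle unit-testable with a fake timer.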
How can I resolve this? If you propose a fix, please make it concise.

```ts
env.CCH_CODEDISPLAY_MAX_PRETTY_OUTPUT_BYTES,
DEFAULT_CODE_DISPLAY_CONFIG.maxPrettyOutputBytes,
{ min: 1_000_000 }
),
```
maxPrettyOutputBytes minimum is 1MB but highlightMaxChars default is 30k chars. for prettified JSON, expansion ratio can be 3-5x, so 30k chars → ~150KB output, well below 1MB min
however if user sets very low CCH_CODEDISPLAY_HIGHLIGHT_MAX_CHARS (e.g., 5000) while keeping default maxPrettyOutputBytes, the limits become inconsistent. consider validating that maxPrettyOutputBytes >= highlightMaxChars * 10 (assuming max 5x expansion)
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/components/ui/code-display-config.ts
Line: 144-147
Comment:
`maxPrettyOutputBytes` minimum is 1MB but `highlightMaxChars` default is 30k chars. for prettified JSON, expansion ratio can be 3-5x, so 30k chars → ~150KB output, well below 1MB min
however if user sets very low `CCH_CODEDISPLAY_HIGHLIGHT_MAX_CHARS` (e.g., 5000) while keeping default `maxPrettyOutputBytes`, the limits become inconsistent. consider validating that `maxPrettyOutputBytes >= highlightMaxChars * 10` (assuming max 5x expansion)
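The cross-limit validation suggested above can be expressed as a small reconciliation step applied after env parsing. A sketch only; `reconcileLimits` is a hypothetical helper, and the 10x factor is the review's assumption, not a measured expansion bound.

```typescript
interface CodeDisplayLimits {
  highlightMaxChars: number;
  maxPrettyOutputBytes: number;
}

// Pretty-printed JSON can expand several times over its input. Clamp
// maxPrettyOutputBytes so it always leaves headroom for the highlight window.
// The 10x factor (max ~5x expansion plus margin) is an assumption.
function reconcileLimits(cfg: CodeDisplayLimits): CodeDisplayLimits {
  const floor = cfg.highlightMaxChars * 10;
  return cfg.maxPrettyOutputBytes >= floor
    ? cfg
    : { ...cfg, maxPrettyOutputBytes: floor };
}
```

Running this after all individual `min`/default checks keeps each env var independently valid while still guaranteeing the pairwise invariant.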
How can I resolve this? If you propose a fix, please make it concise.

```tsx
useEffect(() => {
  if (!shouldOptimizeOnlyMatches) return;

  const jobId = (onlyMatchesSearchJobRef.current += 1);
  const jobTextKey = nonSseTextKey;
  const text = nonSseTextForModeRef.current;
  const query = debouncedOnlyMatchesQuery;

  const controller = new AbortController();
  onlyMatchesSearchAbortRef.current?.abort();
  onlyMatchesSearchAbortRef.current = controller;

  const lineFilteredText = useMemo(() => {
  setOnlyMatchesMatches(null);
  setOnlyMatchesSearchStatus("loading");
  setOnlyMatchesSearchProgress({ processed: 0, total: text.length });

  void searchLines({
    text,
    query,
    maxResults: Math.min(resolvedMaxLines, 50_000),
    onProgress: (p) => {
      if (controller.signal.aborted) return;
      if (p.stage !== "search") return;
      setOnlyMatchesSearchProgress({ processed: p.processed, total: p.total });
    },
    signal: controller.signal,
  }).then((res) => {
    if (controller.signal.aborted) return;
    if (jobId !== onlyMatchesSearchJobRef.current) return;
    if (jobTextKey !== nonSseTextKeyRef.current) return;

    if (!res.ok) {
      setOnlyMatchesMatches(null);
      setOnlyMatchesSearchStatus("error");
      setOnlyMatchesSearchProgress(null);
```
if user types search query rapidly, each keystroke triggers a new searchLines worker job. the debounce on onlyMatchesQuery helps, but useDebounce is mocked to return immediate value in tests (line 30 of test file)
in production with real debounce, rapid typing is fine. but if debounce delay is very short (e.g., 100ms) and user types fast, could still queue multiple concurrent searches. consider adding check to skip search if previous one is still loading, or increase debounce delay
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/components/ui/code-display.tsx
Line: 816-850
Comment:
if user types search query rapidly, each keystroke triggers a new `searchLines` worker job. the debounce on `onlyMatchesQuery` helps, but `useDebounce` is mocked to return immediate value in tests (line 30 of test file)
in production with real debounce, rapid typing is fine. but if debounce delay is very short (e.g., 100ms) and user types fast, could still queue multiple concurrent searches. consider adding check to skip search if previous one is still `loading`, or increase debounce delay
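The "skip if the previous search is still loading" suggestion reduces to a small predicate. A minimal sketch; `shouldStartSearch` and `SearchStatus` are illustrative names, not the component's actual state.

```typescript
type SearchStatus = "idle" | "loading" | "ready" | "error";

// Coalesce rapid keystrokes: if a search for the same query is already in
// flight, skip queueing another worker job; a different query still aborts
// and restarts as before.
function shouldStartSearch(
  status: SearchStatus,
  inFlightQuery: string | null,
  nextQuery: string
): boolean {
  return !(status === "loading" && inFlightQuery === nextQuery);
}
```

This only deduplicates identical queries, so it never delays results for the query the user actually ends up with.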
How can I resolve this? If you propose a fix, please make it concise.

```ts
beforeEach(() => {
  workerClientMocks.buildLineIndex.mockReset();
  workerClientMocks.searchLines.mockReset();
  workerClientMocks.formatJsonPretty.mockReset();

  workerClientMocks.buildLineIndex.mockResolvedValue({
    ok: true,
    lineStarts: new Int32Array([0]),
    lineCount: 1,
  });
  workerClientMocks.searchLines.mockResolvedValue({ ok: true, matches: new Int32Array(0) });
  workerClientMocks.formatJsonPretty.mockResolvedValue({
    ok: false,
    errorCode: "UNKNOWN",
  });
});
```
all worker functions are mocked with default error responses (formatJsonPretty returns UNKNOWN error), requiring each test to override mocks
this pattern is error-prone - if a test forgets to mock a needed function, it silently gets error behavior instead of success. consider defaulting mocks to success states and only override for error-case tests
Prompt To Fix With AI
This is a comment left during a code review.
Path: tests/unit/code-display-perf-ui.test.tsx
Line: 120-135
Comment:
all worker functions are mocked with default error responses (`formatJsonPretty` returns `UNKNOWN` error), requiring each test to override mocks
this pattern is error-prone - if a test forgets to mock a needed function, it silently gets error behavior instead of success. consider defaulting mocks to success states and only override for error-case tests
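The success-by-default idea above can be sketched as a small factory for stub results. This is an illustration of the pattern only; `makeWorkerStubResults` and the result shapes are assumptions, not the test suite's actual fixtures.

```typescript
// Default every stubbed worker result to a success shape; individual tests
// override only the failure they exercise. A forgotten override then takes
// the success path instead of a silent error path.
function makeWorkerStubResults(
  overrides: Partial<Record<"buildLineIndex" | "searchLines" | "formatJsonPretty", unknown>> = {}
) {
  return {
    buildLineIndex: { ok: true, lineStarts: new Int32Array([0]), lineCount: 1 },
    searchLines: { ok: true, matches: new Int32Array(0) },
    formatJsonPretty: { ok: true, pretty: "{}" },
    ...overrides,
  };
}
```

Error-path tests then opt in explicitly, e.g. `makeWorkerStubResults({ formatJsonPretty: { ok: false, errorCode: "UNKNOWN" } })`.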
How can I resolve this? If you propose a fix, please make it concise.Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
```ts
const FULL_HASH_MAX_CHARS = 200_000;
const WINDOW_CHARS = 8192;

let hash = 2166136261;

const update = (code: number) => {
  hash ^= code;
  hash = Math.imul(hash, 16777619);
};

if (len <= FULL_HASH_MAX_CHARS) {
  for (let i = 0; i < len; i += 1) {
    update(text.charCodeAt(i));
  }
  return `${len}:${(hash >>> 0).toString(36)}`;
}
```
for texts at exactly 200,000 chars (the FULL_HASH_MAX_CHARS boundary), full hashing is used. for 200,001 chars, windowed hashing is used
this creates a sharp performance cliff - a single character difference causes O(n) → O(1) behavior change. consider using gradual transition (e.g., 180k-220k) or documenting this threshold as critical performance boundary
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/utils/text-key.ts
Line: 16-31
Comment:
for texts at exactly 200,000 chars (the `FULL_HASH_MAX_CHARS` boundary), full hashing is used. for 200,001 chars, windowed hashing is used
this creates a sharp performance cliff - a single character difference causes O(n) → O(1) behavior change. consider using gradual transition (e.g., 180k-220k) or documenting this threshold as critical performance boundary
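To make the threshold behavior concrete, here is a self-contained sketch of a windowed FNV-1a fingerprint in the spirit of the snippet above. `fnv1a` and `textKey` are illustrative reconstructions, not the PR's `getTextKey` source; the window positions (start, middle, end) are assumptions.

```typescript
// FNV-1a over a character range; `seed` lets windows be chained into one hash.
function fnv1a(text: string, from: number, to: number, seed = 2166136261): number {
  let hash = seed;
  for (let i = from; i < to; i += 1) {
    hash ^= text.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  return hash >>> 0;
}

// Fingerprint: full hash below the threshold, otherwise hash fixed windows at
// the start, middle, and end, prefixed with the length.
function textKey(text: string, fullMax = 200_000, win = 8192): string {
  const len = text.length;
  if (len <= fullMax) return `${len}:${fnv1a(text, 0, len).toString(36)}`;
  const mid = Math.floor(len / 2);
  let h = fnv1a(text, 0, win);
  h = fnv1a(text, mid - win / 2, mid + win / 2, h);
  h = fnv1a(text, len - win, len, h);
  return `${len}:w:${h.toString(36)}`;
}
```

The length prefix means two texts can only collide if they have the same length and identical sampled windows, which is the trade-off the comment above is weighing.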
How can I resolve this? If you propose a fix, please make it concise.

- Sync raw/pretty to the default rule when language changes (preserving the user's toggle where possible)
- Avoid making a full toLowerCase copy of data during SSE search
- Skip main-thread formatting of oversized JSON when the Worker is unavailable, to avoid jank
- Add unit tests covering the language-switch scenario
After a further self-check, three robustness fixes were made (pushed to this PR branch):

Also added a unit test covering the "default mode on language switch" scenario; verified locally. Please re-review / update, CodeRabbit / maintainers.
@coderabbitai review Corrected per the latest review and pushed (commit b426802):

Please update the review conclusion, thanks.
✅ only-matches optimization - when the index is not ready or has failed, searchLines no longer starts a full-text scan, avoiding the expensive path; a regression assertion ensures it is not called by mistake.

✅ Worker cancellation first - added isCancelled(jobId) checks before and after the parse/stringify fallback in formatJsonPretty, so no result is emitted after cancellation.

I will trigger a full PR review now.

✅ Actions performed

Review triggered.
Actionable comments posted: 1
🧹 Nitpick comments (4)
tests/unit/code-display-perf-ui.test.tsx (1)
591-613: The lineStarts array in the CR-only line-ending test may be off. The test uses lineStarts: Int32Array.from([0, 2, 4]), but for the text "a\rb\r" (length 4):

- Line 0 starts at position 0
- Line 1 (after the first \r) starts at position 2
- Line 2 (after the second \r) starts at position 4

Line 2 is actually empty (position 4 is already the end of the string), so matches: Int32Array.from([0, 1]) matching only the first two lines is correct. Still, consider adding a comment documenting the expected behavior of this edge case.

Suggested comment:

```diff
 test("matches list strips CR-only line endings", async () => {
+  // Text "a\rb\r" has 3 lines: "a" (0-1), "b" (2-3), "" (4-4 empty)
+  // lineStarts: [0, 2, 4] corresponds to line start positions after CR
   const text = "a\rb\r";
   const { container, unmount } = renderWithIntl(
```
Verify each finding against the current code and only fix it if needed. In `@tests/unit/code-display-perf-ui.test.tsx` around lines 591 - 613, The test "matches list strips CR-only line endings" uses text = "a\rb\r" with lineStarts = Int32Array.from([0, 2, 4]) and matches = Int32Array.from([0, 1]); add a concise comment in that test explaining that the final lineStart value 4 equals text.length which represents an empty trailing line (so only lines 0 and 1 contain content and are expected to match), and note that if the intent were to include the empty trailing line then lineStarts/matches would need to be adjusted—refer to the variables text, lineStarts and matches when adding the comment.src/components/ui/code-display.tsx (2)
374-378: JSON formatting during SSE data rendering may block the main thread. When an SSE event's data is smaller than highlightMaxChars, safeJsonParse and stringifyPretty run synchronously at render time (lines 375-376). A single event payload is usually small, but expanding many events at once may cause render jank.

The existing threshold check (line 364: evt.data.length <= highlightMaxChars) already bounds each payload, so the practical risk is low.
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display.tsx` around lines 374 - 378, The inline synchronous JSON parse/pretty-print inside the JSX using safeJsonParse and stringifyPretty on evt.data (rendered in SyntaxHighlighter) can block the main thread when many events expand; move this work out of the render path by precomputing the formatted string (or a parse result flag) either when the SSE message is received (before storing into state) or by wrapping the logic in a useMemo keyed by evt.data and highlightMaxChars so parsing/formatting only runs when the input changes; ensure you still respect the existing threshold check (evt.data.length <= highlightMaxChars) and preserve the fallback to raw evt.data when parsing fails (parsed.ok).
287-301: The SSE virtualization row-height estimate may cause scroll jumps. estimatedRowHeight = 44 is fixed, but actual row heights vary with content (about 44px when collapsed, taller with long titles). Fast scrolling may therefore show slight content jumps.

This is a common virtual-list trade-off and acceptable here (virtualization only activates above 200 events). If users later report poor scrolling, consider dynamic row-height measurement.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display.tsx` around lines 287 - 301, The fixed estimatedRowHeight (currently estimatedRowHeight = 44) can cause scroll jump because actual row heights vary; replace the constant with a measured/derived value: add state (e.g. estimatedRowHeight state) and measure visible row DOM heights (via refs or a ResizeObserver inside the component that renders rows referenced by rows, or compute an average height from a sample) and update estimatedRowHeight accordingly, then recalc startIndex/endIndex/topPad/bottomPad whenever the measured value changes so virtualization uses a realistic row height; keep the existing useVirtual, startIndex, endIndex, topPad, bottomPad and rows logic but derive estimatedRowHeight dynamically to avoid jumps.src/components/ui/code-display.worker.ts (1)
634-652: The second pass has no progress reporting. The first (counting) pass reports progress (lines 622-628), but the second pass that builds the lineStarts array does not, so for very large files users get no feedback during it.

Since the second pass does less work than the first (no maxLines check) and the first already reported 100%, this is not a blocker.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display.worker.ts` around lines 634 - 652, the second pass that builds lineStarts (the loop using text.charCodeAt, starts, idx, isCancelled and jobId) lacks progress reporting; inside that loop, emit progress updates using the same mechanism as the first counting pass (the same progress-reporting function, format, and threshold, without changing the reporting interface or semantics), return early when isCancelled(jobId), and keep the frequency consistent with the first pass so large files show visible progress.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/ui/code-display.worker.ts`:
- Around line 818-837: The stringifyJsonPretty handler calls
stringifyPretty(msg.value, msg.indentSize) without re-checking cancellation, so
long-running stringify can finish after a cancel and still post a success; after
calling stringifyPretty (but before size checks and before calling post), call
isCancelled(jobId) and if true post the stringifyJsonPrettyResult with ok: false
and errorCode: "CANCELED" (same pattern used earlier) and return; ensure you
reference the existing symbols stringifyJsonPretty handler, stringifyPretty,
isCancelled, post, and the stringifyJsonPrettyResult message shape when adding
this check.
---
Nitpick comments:
In `@src/components/ui/code-display.tsx`:
- Around line 374-378: The inline synchronous JSON parse/pretty-print inside the
JSX using safeJsonParse and stringifyPretty on evt.data (rendered in
SyntaxHighlighter) can block the main thread when many events expand; move this
work out of the render path by precomputing the formatted string (or a parse
result flag) either when the SSE message is received (before storing into state)
or by wrapping the logic in a useMemo keyed by evt.data and highlightMaxChars so
parsing/formatting only runs when the input changes; ensure you still respect
the existing threshold check (evt.data.length <= highlightMaxChars) and preserve
the fallback to raw evt.data when parsing fails (parsed.ok).
- Around line 287-301: The fixed estimatedRowHeight (currently
estimatedRowHeight = 44) can cause scroll jump because actual row heights vary;
replace the constant with a measured/derived value: add state (e.g.
estimatedRowHeight state) and measure visible row DOM heights (via refs or a
ResizeObserver inside the component that renders rows referenced by rows, or
compute an average height from a sample) and update estimatedRowHeight
accordingly, then recalc startIndex/endIndex/topPad/bottomPad whenever the
measured value changes so virtualization uses a realistic row height; keep the
existing useVirtual, startIndex, endIndex, topPad, bottomPad and rows logic but
derive estimatedRowHeight dynamically to avoid jumps.
In `@src/components/ui/code-display.worker.ts`:
- Around line 634-652: The second pass that builds lineStarts (the loop using
text.charCodeAt, starts, idx, isCancelled and jobId) lacks progress reporting;
inside that loop, emit progress updates using the same mechanism as the first
counting pass and return early when isCancelled(jobId), using the same
progress-reporting function, format, and threshold (do not change the reporting
interface or semantics), and keep the frequency consistent with the first pass
so large files show visible user progress.
In `@tests/unit/code-display-perf-ui.test.tsx`:
- Around line 591-613: The test "matches list strips CR-only line endings" uses
text = "a\rb\r" with lineStarts = Int32Array.from([0, 2, 4]) and matches =
Int32Array.from([0, 1]); add a concise comment in that test explaining that the
final lineStart value 4 equals text.length which represents an empty trailing
line (so only lines 0 and 1 contain content and are expected to match), and note
that if the intent were to include the empty trailing line then
lineStarts/matches would need to be adjusted—refer to the variables text,
lineStarts and matches when adding the comment.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (3)
- src/components/ui/code-display.tsx
- src/components/ui/code-display.worker.ts
- tests/unit/code-display-perf-ui.test.tsx
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/ui/code-display.worker.ts`:
- Around line 587-728: The long-running loops in buildLineIndex and searchLines
only check cancellation and post progress every 8192 iterations, causing slow
cancel responsiveness; make both functions async and inside their hot loops (the
places using (i & 8191) === 0 and (scan & 8191) === 0) add a periodic yield by
awaiting new Promise(r => setTimeout(r, 0)) (or Promise.resolve() if you prefer
microtask) alongside the existing progress/post and isCancelled checks, keep
using isCancelled(jobId) and post(...) as before, and update callers to await
the new async signatures so behavior and progress reporting remain identical
while cancellation becomes responsive.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (3)
- src/components/ui/code-display.tsx
- src/components/ui/code-display.worker.ts
- tests/unit/code-display-perf-ui.test.tsx
@coderabbitai review Additional corrections in this round (commit 3bda57d):

Please update the review conclusion, thanks.
✅ SSE scroll performance - moved JSON parse/pretty into useMemo, so scrolling no longer triggers a full-list rerender when virtualization is off.

I will trigger a full PR review now.

🧠 Learnings used

✅ Actions performed

Review triggered.
♻️ Duplicate comments (2)
src/components/ui/code-display.tsx (1)
1029-1039: ⚠️ Potential issue | 🟡 Minor: large-text downloads may still be mislabeled as application/json.

The current branch skips validation when isCandidateJson && text.length > 200_000 but keeps application/json, so large non-JSON text gets mislabeled as JSON. Suggested fix (default conservatively to text/plain, and use application/json only when the content is verifiably JSON):

```diff
 const isCandidateJson = language === "json" && !(showOnlyMatches && onlyMatchesQuery);
-let downloadType: "application/json" | "text/plain" = isCandidateJson
-  ? "application/json"
-  : "text/plain";
+let downloadType: "application/json" | "text/plain" = "text/plain";
 // 避免对超大内容在主线程 JSON.parse;小内容可用于更准确决定 MIME 类型。
 const maxValidateJsonChars = 200_000;
-if (isCandidateJson && text.length <= maxValidateJsonChars) {
-  const parsed = safeJsonParse(text);
-  if (!parsed.ok) downloadType = "text/plain";
+if (isCandidateJson) {
+  const parsed = text.length <= maxValidateJsonChars ? safeJsonParse(text) : { ok: false as const };
+  if (parsed.ok) {
+    downloadType = "application/json";
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display.tsx` around lines 1029 - 1039, The current logic leaves downloadType as "application/json" for large texts when isCandidateJson is true even if unvalidated; change the flow so downloadType defaults to "text/plain" and only switch to "application/json" when isCandidateJson AND text.length <= maxValidateJsonChars AND safeJsonParse(text).ok; update the block around isCandidateJson, downloadType, maxValidateJsonChars and safeJsonParse to perform this conservative check (use symbols: isCandidateJson, downloadType, maxValidateJsonChars, safeJsonParse, text, language, showOnlyMatches, onlyMatchesQuery).src/components/ui/code-display.worker.ts (1)
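The conservative MIME decision described above can be sketched as a pure function. A minimal illustration; `downloadMime` is a hypothetical helper (the PR uses `safeJsonParse`, sketched here with a plain try/catch around `JSON.parse`).

```typescript
// Default to text/plain; promote to application/json only when the content is
// small enough to validate on the main thread and actually parses.
function downloadMime(
  text: string,
  isCandidateJson: boolean,
  maxValidateChars = 200_000
): "application/json" | "text/plain" {
  if (!isCandidateJson || text.length > maxValidateChars) return "text/plain";
  try {
    JSON.parse(text);
    return "application/json";
  } catch {
    return "text/plain";
  }
}
```

Oversized candidates fall back to text/plain rather than guessing, which is the safer failure mode for downloads.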
587-719:⚠️ Potential issue | 🟠 Major长循环仍无法及时响应取消请求
buildLineIndex/searchLines仍是同步热循环,cancel消息要等当前任务返回后才会进入cancelledJobs。因此这里的isCancelled(jobId)对“运行中任务”基本不生效,取消/重试在大文本下会明显滞后。建议修复(为热循环增加 yield 点)
+const yieldToEventLoop = () => new Promise<void>((resolve) => setTimeout(resolve, 0)); -function buildLineIndex({ +async function buildLineIndex({ text, jobId, maxLines, }: { text: string; jobId: number; maxLines: number; -}): +}): Promise< | { ok: true; lineStarts: Int32Array; lineCount: number } - | { ok: false; errorCode: "CANCELED" | "TOO_MANY_LINES"; lineCount?: number } { + | { ok: false; errorCode: "CANCELED" | "TOO_MANY_LINES"; lineCount?: number } +> { @@ if ((i & 8191) === 0) { + await yieldToEventLoop(); + if (isCancelled(jobId)) return { ok: false, errorCode: "CANCELED" }; const now = performance.now(); if (now - lastProgressAt > 200) { lastProgressAt = now; post({ type: "progress", jobId, stage: "index", processed: i, total }); } } } @@ -function searchLines({ +async function searchLines({ text, query, jobId, maxResults, }: { @@ -}): { ok: true; matches: Int32Array } | { ok: false; errorCode: "CANCELED" } { +}): Promise<{ ok: true; matches: Int32Array } | { ok: false; errorCode: "CANCELED" }> { @@ while (scan < pos) { if ((scan & 8191) === 0 && isCancelled(jobId)) return { ok: false, errorCode: "CANCELED" }; + if ((scan & 8191) === 0) { + await yieldToEventLoop(); + if (isCancelled(jobId)) return { ok: false, errorCode: "CANCELED" }; + } const code = text.charCodeAt(scan); @@ -(self as DedicatedWorkerGlobalScope).onmessage = (e: MessageEvent<WorkerRequest>) => { +(self as DedicatedWorkerGlobalScope).onmessage = async (e: MessageEvent<WorkerRequest>) => { @@ - const result = buildLineIndex({ text: msg.text, jobId, maxLines: msg.maxLines }); + const result = await buildLineIndex({ text: msg.text, jobId, maxLines: msg.maxLines }); @@ - const result = searchLines({ + const result = await searchLines({ text: msg.text, query: msg.query, jobId, maxResults: msg.maxResults, });Also applies to: 721-878
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display.worker.ts` around lines 587 - 719, buildLineIndex and searchLines run long synchronous loops so isCancelled(jobId) cannot observe incoming cancel messages promptly; make both functions async (change signatures of buildLineIndex and searchLines) and add periodic yield points inside their hot loops (e.g. when (i & 8191)===0 in buildLineIndex and on the scan/pos loop in searchLines) by awaiting a microtask (await Promise.resolve()) or a short timeout, then re-check isCancelled(jobId) and continue, and keep the existing progress post logic around the yield; update any callers to await these functions.
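The yield-point technique above can be shown self-contained with a hypothetical `countLines` loop (not the worker's actual functions): awaiting a macrotask every chunk lets a later "cancel" message flip the cancelled flag while the loop is still running.

```typescript
// Yield to the event loop so queued messages (e.g. "cancel") can be handled.
const yieldToEventLoop = () => new Promise<void>((resolve) => setTimeout(resolve, 0));

async function countLines(
  text: string,
  isCancelled: () => boolean,
  chunk = 8192
): Promise<{ ok: true; lines: number } | { ok: false; errorCode: "CANCELED" }> {
  let lines = 1;
  for (let i = 0; i < text.length; i += 1) {
    // Every `chunk` characters, yield and re-check the cancellation flag.
    if (i > 0 && (i & (chunk - 1)) === 0) {
      await yieldToEventLoop();
      if (isCancelled()) return { ok: false, errorCode: "CANCELED" };
    }
    if (text.charCodeAt(i) === 10) lines += 1; // "\n"
  }
  return { ok: true, lines };
}
```

The cost is a scheduling round-trip every 8192 characters, which is usually negligible next to the scan itself.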
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/components/ui/code-display.tsx`:
- Around line 1029-1039: The current logic leaves downloadType as
"application/json" for large texts when isCandidateJson is true even if
unvalidated; change the flow so downloadType defaults to "text/plain" and only
switch to "application/json" when isCandidateJson AND text.length <=
maxValidateJsonChars AND safeJsonParse(text).ok; update the block around
isCandidateJson, downloadType, maxValidateJsonChars and safeJsonParse to perform
this conservative check (use symbols: isCandidateJson, downloadType,
maxValidateJsonChars, safeJsonParse, text, language, showOnlyMatches,
onlyMatchesQuery).
In `@src/components/ui/code-display.worker.ts`:
- Around line 587-719: buildLineIndex and searchLines run long synchronous loops
so isCancelled(jobId) cannot observe incoming cancel messages promptly; make
both functions async (change signatures of buildLineIndex and searchLines) and
add periodic yield points inside their hot loops (e.g. when (i & 8191)===0 in
buildLineIndex and on the scan/pos loop in searchLines) by awaiting a microtask
(await Promise.resolve()) or a short timeout, then re-check isCancelled(jobId)
and continue, and keep the existing progress post logic around the yield; update
any callers to await these functions.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (4)
- src/components/ui/code-display.tsx
- src/components/ui/code-display.worker.ts
- src/lib/utils/text-key.ts
- tests/unit/code-display-perf-ui.test.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/unit/code-display-perf-ui.test.tsx
- src/lib/utils/text-key.ts
🧹 Nitpick comments (4)
src/components/ui/code-display.tsx (4)
314-314: Virtualization threshold is hardcoded to 200; extract it as a constant or config option

The `events.length > 200` in the `useVirtual` condition is a magic number. Consider extracting it into a named constant or into `codeDisplayConfig` so it is easier to tune and maintain. Suggested change:

```diff
+const SSE_VIRTUAL_THRESHOLD = 200;
+
 function CodeDisplaySseEvents({ /* ... */ }) {
   // ...
-  const useVirtual = events.length > 200 && expandedRows.size === 0 && viewportHeight > 0;
+  const useVirtual = events.length > SSE_VIRTUAL_THRESHOLD && expandedRows.size === 0 && viewportHeight > 0;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display.tsx` at line 314, The virtualization threshold is hardcoded as a magic number in the useVirtual condition (events.length > 200). Replace the literal with a named constant (e.g. VIRTUALIZATION_THRESHOLD) or move it into the existing configuration object (e.g. codeDisplayConfig.virtualizationThreshold) and update the useVirtual computation (which references events, expandedRows and viewportHeight) to use that constant/config value so the threshold can be tuned and maintained easily.
315-316: Make the SSE virtualization parameters configurable

`estimatedRowHeight = 44` and `overscan = 12` are inconsistent with the `codeDisplayConfig.virtualLineHeightPx` / `codeDisplayConfig.virtualOverscanLines` values used elsewhere in the component. Either use the config values uniformly or document why they differ.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display.tsx` around lines 315 - 316, the hardcoded estimatedRowHeight and overscan are inconsistent with the codeDisplayConfig.virtualLineHeightPx / codeDisplayConfig.virtualOverscanLines used elsewhere in the component; replace these two constants with values read from config (codeDisplayConfig.virtualLineHeightPx in place of estimatedRowHeight, codeDisplayConfig.virtualOverscanLines in place of overscan), providing sensible fallback defaults for backward compatibility; if the divergence from config is truly intentional, add a clear comment at the same location explaining the difference and keep the config values overridable.
1035-1035: Download MIME validation threshold is inconsistent with other thresholds

`maxValidateJsonChars = 200_000` uses the same value as `maxSyncChars` in `jsonPrettySyncText` (line 699), but the two are defined independently. Extract a shared constant to keep them in sync. Suggested change:

```diff
+const MAX_SYNC_JSON_CHARS = 200_000;
+
 // in jsonPrettySyncText:
-  const maxSyncChars = Math.min(codeDisplayConfig.highlightMaxChars, 200_000);
+  const maxSyncChars = Math.min(codeDisplayConfig.highlightMaxChars, MAX_SYNC_JSON_CHARS);
 // in handleDownload:
-  const maxValidateJsonChars = 200_000;
+  const maxValidateJsonChars = MAX_SYNC_JSON_CHARS;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display.tsx` at line 1035, Two independently defined thresholds (maxValidateJsonChars and maxSyncChars inside jsonPrettySyncText) use the same numeric value; extract a shared constant (e.g., JSON_MAX_CHARS) and replace both usages with that constant so they remain consistent. Update the constant declaration (export or file-scope) and change references in maxValidateJsonChars and inside the jsonPrettySyncText function to use the new shared symbol, removing the duplicated literal.
472-472: Large-content thresholds are hardcoded

The `isLargeContent` condition `content.length > 4000 || lineCount > 200` uses hardcoded values. These thresholds are independent of other config options (such as `highlightMaxChars`), which can lead to inconsistent behavior. Consider bringing them into the configuration system as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display.tsx` at line 472, The large-content check currently hardcodes thresholds in the expression isLargeContent = content.length > 4000 || lineCount > 200; — change this to derive thresholds from existing configuration (e.g., reuse highlightMaxChars or a new config like largeContentMaxChars / largeContentMaxLines) so behavior is consistent with other settings; update any relevant UI/config entry points that supply highlightMaxChars to also provide the new threshold or reuse it, and replace the direct numeric literals in isLargeContent with those config values (referencing isLargeContent, content.length, lineCount, and highlightMaxChars/new largeContentMax* config names to locate the code).
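A config-driven variant of this check could look like the sketch below. The field names `largeContentMaxChars` / `largeContentMaxLines` are hypothetical, not existing `codeDisplayConfig` fields:

```typescript
// Hypothetical config shape: thresholds pulled from config instead of literals.
interface CodeDisplayPerfConfig {
  largeContentMaxChars: number;
  largeContentMaxLines: number;
}

const defaultPerfConfig: CodeDisplayPerfConfig = {
  largeContentMaxChars: 4000,
  largeContentMaxLines: 200,
};

function isLargeContent(
  content: string,
  config: CodeDisplayPerfConfig = defaultPerfConfig,
): boolean {
  if (content.length > config.largeContentMaxChars) return true;
  // Count lines lazily and bail out as soon as the line threshold is crossed.
  let lineCount = 1;
  for (let i = 0; i < content.length; i++) {
    if (content.charCodeAt(i) === 10) lineCount++; // "\n"
    if (lineCount > config.largeContentMaxLines) return true;
  }
  return false;
}
```

Early-exit counting also avoids a full `split("\n")` allocation just to learn that the content is large.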
♻️ Duplicate comments (1)
src/components/ui/code-display.tsx (1)
1038-1048: ⚠️ Potential issue | 🟡 Minor — the download MIME decision can still mislabel content as JSON in the over-threshold branch.

The current logic skips validation when `text.length > MAX_SYNC_JSON_CHARS` yet keeps `application/json`, which labels a download whose JSON validity is unknown as JSON. Suggested fix:

```diff
-  const isCandidateJson = language === "json" && !(showOnlyMatches && onlyMatchesQuery);
-  let downloadType: "application/json" | "text/plain" = isCandidateJson
-    ? "application/json"
-    : "text/plain";
+  const isCandidateJson = language === "json" && !(showOnlyMatches && onlyMatchesQuery);
+  let downloadType: "application/json" | "text/plain" = "text/plain";
@@
-  if (isCandidateJson && text.length <= maxValidateJsonChars) {
-    const parsed = safeJsonParse(text);
-    if (!parsed.ok) downloadType = "text/plain";
-  }
+  if (isCandidateJson && text.length <= maxValidateJsonChars) {
+    if (safeJsonParse(text).ok) {
+      downloadType = "application/json";
+    }
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display.tsx` around lines 1038 - 1048, The current logic leaves downloadType as "application/json" for large texts (> MAX_SYNC_JSON_CHARS) without validation, which can mislabel non-JSON files; update the branch around isCandidateJson/downloadType so that if text.length > MAX_SYNC_JSON_CHARS you set downloadType = "text/plain" (i.e., treat large/unvalidated content as plain text) instead of keeping application/json; keep the existing small-text safeJsonParse(path) check for validated JSON and preserve the showOnlyMatches/onlyMatchesQuery logic used to compute isCandidateJson.
🧹 Nitpick comments (1)
src/components/ui/code-display.tsx (1)
1001-1010: The non-Worker fallback path should reuse the debounced query to reduce input jank on large texts.

This code uses `onlyMatchesQuery` directly, so when the Worker is unavailable and the text is large, every keystroke triggers a full `split`/`filter`. Switch to `debouncedOnlyMatchesQuery` (or enable debouncing conditionally by size). Suggested change:

```diff
-    if (!onlyMatchesQuery) return null;
+    if (!debouncedOnlyMatchesQuery) return null;
@@
-    const matches = lines.filter((line) => line.includes(onlyMatchesQuery));
+    const matches = lines.filter((line) => line.includes(debouncedOnlyMatchesQuery));
@@
-  }, [language, nonSseTextForMode, onlyMatchesQuery, showOnlyMatches, shouldOptimizeOnlyMatches]);
+  }, [
+    language,
+    nonSseTextForMode,
+    debouncedOnlyMatchesQuery,
+    showOnlyMatches,
+    shouldOptimizeOnlyMatches,
+  ]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ui/code-display.tsx` around lines 1001 - 1010, The useMemo for nonSseFilteredText currently uses onlyMatchesQuery directly causing expensive split/filter on every keystroke for large texts; change it to use the debouncedOnlyMatchesQuery when shouldOptimizeOnlyMatches is true (i.e., replace checks and the line.includes(onlyMatchesQuery) with debouncedOnlyMatchesQuery when optimizing), and update the useMemo dependency array to include debouncedOnlyMatchesQuery (and retain onlyMatchesQuery if you still use it when not optimizing); this reuses the debounced query for the non-Worker fallback and prevents repeated full-text processing on each keypress.
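The debouncing this prompt asks for boils down to a standard trailing-edge debounce. A framework-free sketch is below; the real code presumably wraps this in a `useDebouncedValue`-style hook, which is an assumption here:

```typescript
// Trailing-edge debounce: only the last value observed within delayMs
// is delivered to the callback. Earlier pending calls are cancelled.
function debounce<T>(fn: (value: T) => void, delayMs: number): (value: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (value: T) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(value), delayMs);
  };
}
```

With this shape, three rapid keystrokes produce a single filter pass instead of three, which is exactly the fix suggested for the fallback path.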
📒 Files selected for processing (1)
src/components/ui/code-display.tsx
Worker cancellation responsiveness has been added per the review suggestions (so that a canceled task no longer occupies the Worker for a long time and blocks subsequent new tasks):

Locally verified: `bun run lint`, `bun run typecheck`, `bun run test`, and `bun run build` all pass. @coderabbitai review
✅ Job queue introduced
✅ Periodic yield added in long loops
✅ Local verification passed: lint, typecheck, test, and build all pass

These improvements effectively resolve the issue of a canceled task continuing to occupy the Worker for a long time and blocking subsequent new tasks. I will trigger a full PR review right away.

✅ Actions performed
Review triggered.
The backticks in the previous comment were escaped inside a PowerShell string, which garbled the display; reposting here (content unchanged):

Locally verified:
Made further fixes based on the AI review comments (only code introduced in this PR was changed):

Locally verified: bun run lint / bun run typecheck / bun run test / bun run build all pass.
```tsx
  if (!q) return sseEvents;
  const re = new RegExp(escapeRegExp(q), "i");
  return sseEvents.filter((evt) => re.test(evt.event) || re.test(evt.data));
}, [searchQuery, sseEvents]);
```
Case-sensitivity inconsistency between SSE and non-SSE search
The SSE filter at this line uses a case-insensitive regex ("i" flag), while the non-SSE optimized search path uses searchLines which internally calls String.prototype.indexOf — a case-sensitive operation.
A user who types "error" to find an occurrence of "Error" will:
- Get matches in SSE view (case-insensitive)
- Get zero matches in large JSON/text view (case-sensitive)
This silent behavioral inconsistency will confuse users. Consider lowercasing both query and text in searchLines (or applying .toLowerCase() on both sides in the fallback and passing a caseInsensitive flag to the worker), or documenting the difference in the UI placeholder text.
```suggestion
  const re = new RegExp(escapeRegExp(q), "i");
  return sseEvents.filter((evt) => re.test(evt.event) || re.test(evt.data));
```
The real fix needs to happen in code-display.worker.ts searchLines to support case-insensitive matching.
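A minimal sketch of what a `caseInsensitive` option for `searchLines` could look like. The flag and the array-of-lines signature are assumptions for illustration; the real worker scans a flat string with `indexOf`:

```typescript
// Case-insensitivity sketch: lower-case the needle once, and each haystack
// line before scanning. The caseInsensitive flag is the proposed addition,
// not an existing worker parameter.
function searchLines(
  lines: string[],
  query: string,
  opts: { caseInsensitive?: boolean } = {},
): number[] {
  const q = opts.caseInsensitive ? query.toLowerCase() : query;
  const hits: number[] = [];
  for (let i = 0; i < lines.length; i++) {
    const line = opts.caseInsensitive ? lines[i].toLowerCase() : lines[i];
    if (line.indexOf(q) !== -1) hits.push(i);
  }
  return hits;
}
```

Lower-casing both sides keeps the fast `indexOf` path and avoids per-line regex construction, at the cost of one extra string allocation per scanned line.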
```ts
const SSE_ESTIMATED_ROW_HEIGHT_PX = 44;
const SSE_OVERSCAN = 12;
```
Hardcoded SSE row height may break virtualization accuracy
SSE_ESTIMATED_ROW_HEIGHT_PX = 44 is used as a fixed estimate for collapsed SSE event row height. This estimate is sensitive to:
- Font size changes (different user agents / zoom levels)
- Theme differences (border widths, padding)
- Tailwind class changes to the `<div className="rounded-md border bg-background/50">` wrapper
When the actual row height differs from the estimate, the virtual scroll position will drift — rows will render at wrong positions or overlapping areas will appear empty. Since this component already has a ResizeObserver for viewport height, consider measuring the actual first rendered row height with a ref once it mounts, and using that as the estimatedRowHeight:
const [measuredRowHeight, setMeasuredRowHeight] = useState(SSE_ESTIMATED_ROW_HEIGHT_PX);
// measure via ref after first visible row renders
const estimatedRowHeight = measuredRowHeight;
```tsx
  if (!ref.current) return;
  if (ref.current.value === value) return;
  ref.current.value = value;
}, [value]);

return (
  <textarea
```
Mixed controlled/uncontrolled textarea pattern can diverge on fast updates
The component uses defaultValue={value} (uncontrolled — only applied at initial mount) together with a useEffect that manually writes to ref.current.value. If the parent re-renders with a new value before the useEffect fires (e.g., in concurrent mode or under Suspense), the displayed content could briefly lag or show stale text.
A safer pattern is to drop defaultValue entirely and rely only on the imperative DOM write, or use a plain <textarea value={value} onChange={() => {}} readOnly /> (fully controlled) for correctness. For the performance goal of avoiding controlled-input overhead, keeping the current pattern is acceptable, but the stale-render window should be acknowledged in a comment:
// Using uncontrolled textarea + manual DOM write to avoid React re-render overhead
// for very large strings. defaultValue seeds the initial render; subsequent changes
// are applied imperatively in the effect.
Also covered a jank path that is easy to overlook (only code introduced in this PR was changed):

Changes:

Locally verified:
```ts
const controller = new AbortController();
indexAbortRef.current = controller;

setIndexStatus("loading");
setIndexProgress({ processed: 0, total: currentText.length });
setLineStarts(null);
setLineCount(0);
```
workerEnabled config setting silently ignored
buildLineIndex is called without the workerEnabled parameter, so it always defaults to true. When CCH_CODEDISPLAY_WORKER_ENABLE=false is set by the user, this component will still attempt to use the Worker (and only fall back if the Worker itself is unavailable), effectively ignoring the explicit configuration.
The parent CodeDisplay passes all other config props (lineHeightPx, overscanLines, contextLines, maxLines, perfDebugEnabled) but omits workerEnabled. Add it to the component's props and thread it through:
// In the component props:
workerEnabled: boolean;
// In the buildLineIndex call:
void buildLineIndex({
text: currentText,
maxLines,
onProgress: (p) => { ... },
signal: controller.signal,
workerEnabled, // ← pass through
}).then((res) => { ... });

And at the call site in `code-display.tsx`:
<CodeDisplayVirtualHighlighter
...
workerEnabled={codeDisplayConfig.workerEnabled}
/>
```ts
const rafRef = useRef<number | null>(null);
const scrollTopRef = useRef(0);

const [viewportHeight, setViewportHeight] = useState(0);
const [scrollTop, setScrollTop] = useState(0);
```
Expanding any SSE row disables virtualization for the entire list
useVirtual is false whenever expandedRows.size > 0, which means opening a single row causes all events to be rendered simultaneously. For a list with 1 000+ events, this causes the very main-thread DOM explosion the PR aims to prevent.
Consider keeping virtualization active even with expanded rows by measuring the expanded row's actual rendered height instead of toggling virtualization off entirely. A simpler intermediate approach: cap non-virtual rendering to a safe event count even when rows are expanded:
const NON_VIRTUAL_EXPANDED_LIMIT = 500; // tuneable
const useVirtual =
events.length > SSE_VIRTUAL_THRESHOLD &&
viewportHeight > 0 &&
  (expandedRows.size === 0 || events.length > NON_VIRTUAL_EXPANDED_LIMIT);

This restores the performance guard for large lists while still allowing the simpler non-virtual path for smaller ones.
```ts
"use client";

export type FormatJsonPrettyErrorCode =
  | "INVALID_JSON"
  | "CANCELED"
```
MAX_SYNC_JSON_CHARS constant duplicated
MAX_SYNC_JSON_CHARS = 200_000 is defined at module level in code-display.tsx (line 41) and also inside the formatJsonPretty function in this file. The two values are in sync today, but having the same magic number in two places risks silent drift.
Extract it to code-display-config.ts (or a shared constants file) and import it in both locations:
// code-display-config.ts (or a new constants.ts)
export const MAX_SYNC_JSON_CHARS = 200_000;
```json
{
  "$schema": "https://biomejs.dev/schemas/2.4.4/schema.json",
  "vcs": {
    "enabled": true,
```
Unrelated $schema removal
The "$schema" line was dropped from biome.json — this is unrelated to the performance optimization and removes IDE JSON-schema validation for the biome config. Please revert this change or keep it in a separate commit.
Followed up on the Greptile/CodeRabbit/Gemini review points and made fixes (only code added or changed in this PR was touched):

Locally verified: bun run lint / bun run lint:fix / bun run typecheck / bun run test / bun run build all pass.
tesgth032 Thanks for the continued improvements and for addressing the review comments from Greptile, CodeRabbit, and Gemini!

I have noted the following optimizations in your latest commits:

I will trigger a full PR review right away to verify that these fixes meet expectations. Thanks for your effort!
```ts
export type BuildLineIndexErrorCode = "CANCELED" | "TOO_MANY_LINES" | "UNKNOWN";

export type BuildLineIndexResult =
  | { ok: true; lineStarts: Int32Array; lineCount: number }
```
stringifyJsonPretty exported but never used in application code
stringifyJsonPretty is exported from this module but is not imported anywhere in the actual application — only in tests/unit/code-display-worker-client.test.ts. Exporting an unused public API adds noise and unnecessary bundle surface.
If this is intentionally reserved for future use, a comment saying so would help. Otherwise, consider removing the export (or the function entirely if it has no caller) to keep the module surface clean:
```suggestion
export type StringifyJsonPrettyErrorCode = "CANCELED" | "OUTPUT_TOO_LARGE" | "UNKNOWN";
export type StringifyJsonPrettyResult =
  | { ok: true; text: string }
  | { ok: false; errorCode: StringifyJsonPrettyErrorCode };
```
(removing the export keyword on the function itself, or noting intent with a comment)
```ts
const rafRef = useRef<number | null>(null);
const scrollTopRef = useRef(0);

const [measuredRowHeight, setMeasuredRowHeight] = useState(SSE_ESTIMATED_ROW_HEIGHT_PX);
const measuredRowHeightRef = useRef(measuredRowHeight);
```
SSE virtual mode disables expandedRows-based multi-expand UX silently
When events.length > SSE_VIRTUAL_THRESHOLD (200), the component renders button rows and a single activeRow detail panel — the <details> / expandedRows path is completely bypassed. Any rows a user previously expanded in non-virtual mode (e.g. from a filtered result set that had ≤200 events) would silently switch to the single-active-panel UX if the filter is cleared and the list grows past 200.
More concretely: expandedRows state is maintained inside CodeDisplaySseEvents and resets when events changes. The render branches between virtual (button + active panel) and non-virtual (<details>) based purely on events.length. This creates two completely different interaction models with no visual warning to the user.
Consider at minimum showing a note like "Viewing in compact mode (N events)" when virtual mode is active, or keeping <details> semantics by measuring each expanded row's actual height (as suggested in a previous review round).
```ts
function parseSseForCodeDisplay(sseText: string): DisplaySseEvent[] {
  const events: DisplaySseEvent[] = [];

  let eventName = "";
  let dataLines: string[] = [];

  const flushEvent = () => {
    if (dataLines.length === 0) {
```
**`parseSseForCodeDisplay` silently ignores unrecognised field names**

The SSE spec allows `id:` (last-event-id) and `retry:` fields. Lines carrying those fields reach the end of the loop body without matching any `if` branch, so they are silently dropped. This is intentional for a display-only parser, but it means that if a server sends an `id:` line as the last thing before a blank line (no `data:` line), `flushEvent()` is still called at the end of the loop and will no-op harmlessly. The behaviour is correct, but a brief comment would help future readers understand why `id:` / `retry:` handling is intentionally absent.

```ts
// id: and retry: fields are intentionally not handled — this parser
// is display-only and does not track last-event-id or reconnect timing.
```
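For illustration, a self-contained sketch of such a display-only parsing loop with the suggested comment in place (the `DisplaySseEvent` shape and function name are assumptions based on the quoted snippet, not the actual implementation):

```typescript
// Hypothetical event shape mirroring the quoted snippet.
interface DisplaySseEvent {
  event: string;
  data: string;
}

function parseSseForDisplay(sseText: string): DisplaySseEvent[] {
  const events: DisplaySseEvent[] = [];
  let eventName = "";
  let dataLines: string[] = [];

  const flushEvent = () => {
    if (dataLines.length === 0) return; // no data: lines -> nothing to emit
    events.push({ event: eventName || "message", data: dataLines.join("\n") });
    eventName = "";
    dataLines = [];
  };

  for (const rawLine of sseText.split(/\r?\n/)) {
    if (rawLine === "") {
      flushEvent(); // blank line terminates an event
      continue;
    }
    if (rawLine.startsWith(":")) continue; // comment line
    if (rawLine.startsWith("event:")) {
      // spec strips a single leading space; trimStart() is a display simplification
      eventName = rawLine.slice("event:".length).trimStart();
      continue;
    }
    if (rawLine.startsWith("data:")) {
      dataLines.push(rawLine.slice("data:".length).trimStart());
      continue;
    }
    // id: and retry: fields are intentionally not handled — this parser
    // is display-only and does not track last-event-id or reconnect timing.
  }
  flushEvent(); // trailing event without a final blank line
  return events;
}
```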
```ts
describe("backfill", () => {
  test(
    "backfill copies non-warmup message_request rows when ledger rows are missing",
    { timeout: 60_000 },
    async () => {
      const userId = nextUserId();
      const providerId = nextProviderId();
      const keepA = await insertMessageRequestRow({
        key: nextKey("backfill-a"),
        userId,
        providerId,
        costUsd: "1.100000000000000",
      });
      const keepB = await insertMessageRequestRow({
        key: nextKey("backfill-b"),
        userId,
        providerId,
        costUsd: "2.200000000000000",
      });
      const warmup = await insertMessageRequestRow({
        key: nextKey("backfill-warmup"),
        userId,
        providerId,
        blockedBy: "warmup",
      });

      await db.delete(usageLedger).where(inArray(usageLedger.requestId, [keepA, keepB, warmup]));

      const summary = await backfillUsageLedger();
      expect(summary.totalProcessed).toBeGreaterThanOrEqual(2);

      const rows = await db
        .select({ requestId: usageLedger.requestId })
        .from(usageLedger)
        .where(inArray(usageLedger.requestId, [keepA, keepB, warmup]));
      const requestIds = rows.map((row) => row.requestId);

      expect(requestIds).toContain(keepA);
      expect(requestIds).toContain(keepB);
      expect(requestIds).not.toContain(warmup);
    }
  );
  test("backfill copies non-warmup message_request rows when ledger rows are missing", {
    timeout: 60_000,
  }, async () => {
```
**Unrelated whitespace-only reformatting**
This test was reformatted (the { timeout } options object moved inline) without any semantic change. It would be cleaner to keep such cosmetic changes in a separate commit so the diff of this PR stays focused on the performance feature.
```ts
function formatJsonPrettyStreaming({
  text,
  jobId,
  indentSize,
  maxOutputBytes,
}: {
  text: string;
  jobId: number;
  indentSize: number;
  maxOutputBytes: number;
}):
  | { ok: true; text: string }
  | { ok: false; errorCode: "INVALID_JSON" | "CANCELED" | "OUTPUT_TOO_LARGE" } {
  // Strict streaming pretty printer: no object tree is built; the input is
  // scanned character by character while validating JSON syntax.
  // Goal: any valid JSON input yields a complete pretty output; invalid input
  // returns INVALID_JSON (no "tolerant repair" is attempted).
```
**Synchronous formatter blocks worker queue — `isCancelled` checks are unreachable during execution**

`formatJsonPrettyStreaming` is entirely synchronous. In a Web Worker the event loop is still single-threaded: while this function runs, no `onmessage` handler can fire. This means:
1. A `cancel` message sent during the formatter's execution is queued but **not delivered** until the function returns.
2. Every `isCancelled(jobId)` call inside the formatter checks a `cancelledJobs` Set that will never be updated during the run — all cancellation checks inside the streaming formatter are effectively dead code.
3. Any new `buildLineIndex` or `searchLines` jobs queued while the formatter runs **must wait** until it completes, even if the format was already "cancelled" by the user on the main thread.
The main-thread UX is fine (the client resolves the promise immediately via cancelWorkerJob), but for very large JSON files the worker thread can remain occupied for several seconds after the user hits Cancel, delaying all subsequent search/index operations.
Consider either:
- Adding periodic `await yieldToEventLoop()` calls inside `formatJsonPrettyStreaming` at the `(i & 8191) === 0` checkpoints (turns it async and allows cancel messages to be processed), or
- Documenting that cancellation of a running format job only takes effect on the next job.
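A rough sketch of the first option, with hypothetical names (`cancelledJobs`, `yieldToEventLoop`) standing in for the worker's real internals; the per-character body is a placeholder for the actual formatting step:

```typescript
const cancelledJobs = new Set<number>();

// Yields control back to the worker's event loop so queued messages
// (including "cancel") can be delivered in the middle of a job.
const yieldToEventLoop = () => new Promise<void>((resolve) => setTimeout(resolve, 0));

async function formatWithCancellation(text: string, jobId: number): Promise<string | null> {
  const out: string[] = [];
  for (let i = 0; i < text.length; i++) {
    // Same cadence as the quoted code: checkpoint every 8192 characters.
    if ((i & 8191) === 0) {
      await yieldToEventLoop(); // lets a pending cancel message update cancelledJobs
      if (cancelledJobs.has(jobId)) return null; // treated as CANCELED by the caller
    }
    out.push(text[i]); // placeholder for the real per-character formatting step
  }
  return out.join("");
}
```

The trade-off is that the formatter becomes async and slightly slower due to the timer round-trips, but cancel messages and queued index/search jobs no longer wait for the whole format to finish.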
How can I resolve this? If you propose a fix, please make it concise.| export async function stringifyJsonPretty({ | ||
| value, | ||
| indentSize, | ||
| maxOutputBytes, | ||
| signal, | ||
| }: { | ||
| value: unknown; | ||
| indentSize: number; | ||
| maxOutputBytes: number; | ||
| signal?: AbortSignal; | ||
| }): Promise<StringifyJsonPrettyResult> { | ||
| ensureInitialized(); | ||
| const w = getWorker(); | ||
| if (!w) { | ||
| if (signal?.aborted) return { ok: false, errorCode: "CANCELED" }; | ||
| try { | ||
| const pretty = JSON.stringify(value, null, indentSize); | ||
| if (pretty.length * 2 > maxOutputBytes) return { ok: false, errorCode: "OUTPUT_TOO_LARGE" }; | ||
| return { ok: true, text: pretty }; | ||
| } catch { | ||
| return { ok: false, errorCode: "UNKNOWN" }; | ||
| } | ||
| } | ||
|
|
||
| const jobId = genJobId(); | ||
| return new Promise((resolve) => { | ||
| if (signal?.aborted) { | ||
| resolve({ ok: false, errorCode: "CANCELED" }); | ||
| return; | ||
| } | ||
|
|
||
| const onAbort = () => cancelWorkerJob(jobId); | ||
| signal?.addEventListener("abort", onAbort); | ||
|
|
||
| const wrappedResolve: StringifyJsonPrettyResolve = (v) => { | ||
| signal?.removeEventListener("abort", onAbort); | ||
| resolve(v); | ||
| }; | ||
|
|
||
| pending.set(jobId, { kind: "stringifyJsonPretty", resolve: wrappedResolve }); | ||
| try { | ||
| w.postMessage({ type: "stringifyJsonPretty", jobId, value, indentSize, maxOutputBytes }); | ||
| } catch { | ||
| pending.delete(jobId); | ||
| wrappedResolve({ ok: false, errorCode: "UNKNOWN" }); | ||
| } | ||
| }); | ||
| } |
**`stringifyJsonPretty` is exported but never used**
stringifyJsonPretty is exported from this module and the worker handles stringifyJsonPretty messages, but no component in the codebase imports or calls this function. It is dead code that adds maintenance surface (the worker message type, the PendingJob union arm, and the response handler all need to stay in sync with it).
If it is genuinely unneeded, remove it from this file, from the worker, and from WorkerRequest/WorkerResponse. If it is intended for future use, add a // TODO: used by X comment so reviewers know it is intentional.
| return null; | ||
| } |
**One-frame null render causes brief flash when index becomes ready**

When `buildLineIndex` completes it calls `setIndexStatus("ready")`, `setLineStarts(...)`, etc. in the `.then()` callback — triggering a React render. The `useEffect` that calls `updateWindow(true)` runs **after** that render. Between those two renders:

- `indexStatus === "ready"` ✓
- `range === null` (not yet computed)

This condition causes the component to return `null` for one frame, briefly replacing the loading spinner with empty space before the first window appears. A simple fix is to also guard on `lineCount`:
```diff
-  if (indexStatus !== "ready" || !lineStarts || !range) {
-    return null;
-  }
+  if (indexStatus !== "ready" || !lineStarts || lineCount <= 0 || !range) {
+    return null;
+  }
```
Or, keep the loading spinner visible until `range` is non-null:

```tsx
if (indexStatus === "loading" || (indexStatus === "ready" && !range)) {
  // show spinner
}
```
| const idx = startIndex + localIdx; | ||
| const lineNo = matches[idx] ?? 0; | ||
| const [start, end] = getLineRange(text, lineStarts, lineNo); | ||
| const lineText = text.slice(start, end); | ||
|
|
||
| return ( | ||
| <div | ||
| // idx 作为 key 已足够:matches 是稳定数组(仅随查询/内容变化) | ||
| key={idx} | ||
| className="flex items-center gap-3 px-3" | ||
| style={{ height: lineHeightPx, lineHeight: `${lineHeightPx}px` }} | ||
| > | ||
| <span className="w-14 shrink-0 text-muted-foreground tabular-nums">{lineNo + 1}</span> | ||
| <span className="whitespace-pre">{lineText}</span> | ||
| </div> | ||
| ); | ||
| })} |
**Matched term not highlighted within result lines**

Each result row shows the full line text but does not visually highlight where the search query appears. For long lines with the match buried in the middle, users have to manually scan the entire line to find the term.

Consider splitting the line text around the match positions and wrapping the matched fragment in a `<mark>` or `<span className="bg-yellow-200 dark:bg-yellow-800">` element:

```tsx
// simple example (case-sensitive, query passed as prop)
const idx = lineText.indexOf(query);
if (idx === -1) return <span className="whitespace-pre">{lineText}</span>;
return (
  <>
    <span className="whitespace-pre">{lineText.slice(0, idx)}</span>
    <mark className="bg-yellow-200 dark:bg-yellow-800">{lineText.slice(idx, idx + query.length)}</mark>
    <span className="whitespace-pre">{lineText.slice(idx + query.length)}</span>
  </>
);
```

This would require passing `query` as a prop to `CodeDisplayMatchesList`.
Summary
Optimizes the performance of the `CodeDisplay` component on the `/dashboard/sessions/[sessionId]/messages` page when handling large content (large JSON, headers, long text). By introducing a multi-level degradation strategy (plain-text display → virtualized syntax highlighting → Web Worker processing), it resolves the main-thread long tasks and page freezes caused by `react-syntax-highlighter` generating massive amounts of DOM.

Problem
On the session message detail page, when the Pretty view renders very long content:
- `react-syntax-highlighter` generates a large number of DOM nodes, blocking the main thread

Related Issues & PRs
Related Issues
Related PRs
Solution
This PR adopts a three-tier degradation strategy that renders large content smoothly without pulling in a heavyweight editor such as CodeMirror or Monaco:
Option 1: plain-text fallback (enabled by default)

- When content exceeds the highlight threshold (`CCH_CODEDISPLAY_HIGHLIGHT_MAX_CHARS`, default 30000 characters), the Pretty view automatically degrades to a plain-text `textarea`

Option 3: virtualized syntax highlighting (opt-in)

- Enabled via `CCH_CODEDISPLAY_VIRTUAL_HIGHLIGHT=true`

Worker-based architecture
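The tier selection described above could look roughly like this (the config shape, type names, and defaults are assumptions drawn from this description, not the actual `code-display-config.ts`):

```typescript
// Hypothetical config shape mirroring the env vars described in this PR.
interface CodeDisplayConfig {
  highlightMaxChars: number; // CCH_CODEDISPLAY_HIGHLIGHT_MAX_CHARS, default 30000
  virtualHighlight: boolean; // CCH_CODEDISPLAY_VIRTUAL_HIGHLIGHT, default false
}

type RenderTier = "highlight" | "virtual-highlight" | "plain-textarea";

// Picks the rendering tier for the Pretty view based on content size.
function pickRenderTier(contentLength: number, config: CodeDisplayConfig): RenderTier {
  if (contentLength <= config.highlightMaxChars) return "highlight";
  // Over the threshold: virtual highlighting if opted in, otherwise plain text.
  return config.virtualHighlight ? "virtual-highlight" : "plain-textarea";
}
```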
Changes
Core Changes (main implementation)
- `src/components/ui/code-display.tsx`
- `src/components/ui/code-display.worker.ts`
- `src/components/ui/code-display-worker-client.ts`
- `src/components/ui/code-display-virtual-highlighter.tsx`
- `src/components/ui/code-display-config.ts`

Supporting Changes
- `src/components/ui/code-display-matches-list.tsx`
- `tests/unit/code-display-perf-ui.test.tsx`
- `src/components/ui/code-display-plain-textarea.tsx`
- `src/components/ui/code-display-config-context.tsx`
- `.env.example`
- `messages/*/dashboard.json`
- `src/app/[locale]/layout.tsx`
- `src/app/providers.tsx`

Files Changed Summary
Configuration
New environment variable configuration (`.env.example` updated):

Testing
Automated Tests
- `tests/unit/code-display-perf-ui.test.tsx` (+323 lines)

Build & Quality Checks
- `bun run build` - production build passes
- `bun run lint` - Biome checks pass
- `bun run lint:fix` - auto-fixes applied
- `bun run typecheck` - TypeScript type check passes
- `bun run test` - Vitest unit tests pass

Manual Testing Checklist
Breaking Changes
No breaking changes - all new features are disabled by default or enabled via configuration; existing behavior is unchanged.
Screenshots / Demo
Checklist
Description enhanced by Claude AI
Greptile Summary
This PR introduces a well-structured three-layer degradation strategy for large content in `CodeDisplay`: plain-textarea fallback, optional virtual syntax highlighting, and Web Worker offloading for JSON formatting, line-indexing, and search. The architecture is sound and most issues from the previous review rounds have been addressed (CRLF consistency, `getTextKey` consolidation, `isCancelled` TTL cleanup, worker cancellation interval alignment, `WORKER_UNAVAILABLE` typing, retry button for canceled state, and `workerEnabled` threading through to `buildLineIndex`).

Key remaining observations:
- `formatJsonPrettyStreaming` runs without any `await`, so cancel messages and new search/index jobs are queued and cannot execute until the formatter finishes. For very large files this can delay user-visible operations several seconds after clicking Cancel.
- `stringifyJsonPretty` is dead code — exported from the worker client and handled in the worker, but never imported by any component; the corresponding `PendingJob` arm and worker message branch are unreachable.
- `CodeDisplayVirtualHighlighter` — between `indexStatus` becoming `"ready"` and the first `updateWindow` effect firing, the component returns `null`, causing a brief visual flash.
- `CodeDisplayMatchesList` — result rows display the full matching line but do not highlight the search term within it, making it harder for users to spot the match in long lines.

Confidence Score: 3/5
`formatJsonPrettyStreaming` blocking the worker job queue is a real responsiveness regression for users on large files who cancel formatting and then try to search — they will see searches appear to "hang" until the worker finishes the cancelled format job. The dead-code `stringifyJsonPretty` adds unnecessary maintenance surface but is not a runtime risk. The key files are `src/components/ui/code-display.worker.ts` (synchronous formatter blocks queue) and `src/components/ui/code-display-worker-client.ts` (unused `stringifyJsonPretty` export).

Important Files Changed
`stringifyJsonPretty` is exported and handled by the worker but never imported or called by any component — dead code.

Sequence Diagram
```mermaid
sequenceDiagram
    participant UI as CodeDisplay (Main Thread)
    participant Client as WorkerClient
    participant Worker as code-display.worker
    Note over UI,Worker: Large JSON → Pretty Tab
    UI->>Client: formatJsonPretty({ text, signal })
    Client->>Worker: postMessage(formatJsonPretty, jobId)
    Worker-->>Client: postMessage(progress, 30%)
    Client-->>UI: onProgress(30%)
    UI->>UI: show spinner + Cancel button
    Note over UI,Worker: User clicks Cancel
    UI->>Client: signal.abort()
    Client->>Client: cancelWorkerJob(jobId) → resolve(CANCELED)
    Client->>Worker: postMessage(cancel, jobId)
    UI->>UI: show "Canceled" + Retry button
    Note over Worker: cancel message queued — formatter still running synchronously
    Worker->>Worker: formatJsonPrettyStreaming finishes
    Worker-->>Client: postMessage(result) — ignored (no pending entry)
    Worker->>Worker: handleCancel fires (TTL cleanup)
    Note over UI,Worker: Search on large content
    UI->>Client: buildLineIndex({ text, signal })
    Client->>Worker: postMessage(buildLineIndex, jobId2)
    Worker-->>Client: postMessage(progress, 50%)
    Client-->>UI: onProgress(50%)
    UI->>Client: searchLines({ text, query, signal })
    Client->>Worker: postMessage(searchLines, jobId3)
    Worker-->>Client: postMessage(searchLinesResult)
    Client-->>UI: matches Int32Array (transferred)
    UI->>UI: render CodeDisplayMatchesList
```

Last reviewed commit: 030de03