- Add expandable/collapsible chat section with chevron toggle in header
- Chat header becomes clickable to expand/hide kanban board and panels
- Smooth 0.3s chevron rotation animation when expanding/collapsing
- Create new AgentStatusDot component with animated indicators
  - Blue pulsing dots (1.5s) for active agents showing progress
  - Green pulsing dots (2s) for completed agents
  - Static red dots for failed agents
- Integrate animated dots into AgentsPanel for visual status feedback
…unction

Add new `get_project_dirs_with_code_workdir()` and `get_project_dirs_for_workdir()` functions to handle the code_workdir parameter consistently across tools. Update file edit, cat, mv, rm, tree, and tool_strategic_planning to use the new functions instead of duplicating workdir logic.
- Restructure Chat component with flex layout for better content scrolling
- Add model parameter to createChatWithId action for task agent chats
- Update ChatRawJSON to accept generic thread objects instead of ChatHistoryItem
- Relax copyChatHistoryToClipboard type to accept Record<string, unknown>
- Fix ModelSelector to use nullish coalescing (`||` → `??`)
- Add task_meta handling in reducer snapshot event for task chat detection
- Support task name updates in UpdateTaskMetaRequest and handle_update_task_meta
- Add model field to updateTaskMeta mutation in tasks service
- Pass default_agent_model when creating agent chats in TaskWorkspace
- Add selectThreadById fallback in ThreadHistory for active thread lookup
- Enable task tab renaming in Toolbar with updateTaskMeta integration
…trategic_planning, memory_bank)
Disable message compression logic in history limiting to simplify token management. Add parse_depends_on helper to accept both array and comma-separated string formats for task card dependencies, improving API flexibility.
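The dual-format `parse_depends_on` behavior described above can be sketched as follows. This is a minimal, dependency-free sketch: the `DependsOn` enum is a hypothetical stand-in for the two JSON shapes the API is said to accept (an array of IDs, or one comma-separated string), and the trimming/empty-filtering details are assumptions.

```rust
// Hypothetical stand-in for the two accepted JSON shapes.
enum DependsOn {
    List(Vec<String>), // e.g. ["T-1", "T-2"]
    Csv(String),       // e.g. "T-1, T-2"
}

// Normalize either shape into a clean list of dependency IDs:
// trim whitespace and drop empty entries (trailing commas, blanks).
fn parse_depends_on(raw: &DependsOn) -> Vec<String> {
    match raw {
        DependsOn::List(items) => items
            .iter()
            .map(|s| s.trim().to_string())
            .filter(|s| !s.is_empty())
            .collect(),
        DependsOn::Csv(s) => s
            .split(',')
            .map(|part| part.trim().to_string())
            .filter(|part| !part.is_empty())
            .collect(),
    }
}
```

Accepting both shapes lets older clients keep sending strings while newer ones send proper arrays.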
…9b0bb8f1/card/T-1/5d257e1f
…9b0bb8f1/card/T-3/1bafe783
…9b0bb8f1/card/T-5/fdd3b6cb
…9b0bb8f1/card/T-4/1cc14148
Update handle_v1_trajectories_get and handle_v1_trajectories_delete to use find_trajectory_path() for consistent trajectory resolution across workspace and task directories. Filter task trajectories from main history view.
- Rename response types: PreviewResponse → TransformPreviewResponse/HandoffPreviewResponse
- Update response fields to match backend: stats → individual token counts and reduction percent
- Simplify transform options: remove summarize_conversation, add dedup_and_compress_context
- Expand handoff options: add include_last_user_plus, llm_summary_for_excluded
- Refactor TrajectoryPopover into separate button and content components
- Update trajectory hook to request SSE refresh after successful transform apply
- Add sse_refresh_requested field to chat state for reconnection signaling
- Update useChatSubscription to listen for refresh requests and reconnect
- Wrap request bodies in options envelope for API consistency
…d stats

- Add drop_all_memories and drop_project_information options to CompressOptions
- Add should_preserve_tool() to preserve deep_research, subagent, strategic_planning
- Improve tool compression logic with name lookup to avoid unnecessary truncation
- Add cd_instruction message with compression summary after compress_in_place
- Calculate and display token reduction percentage in compression stats
- Refactor handoff_select logic for cleaner diff handling
- Unify API responses to use TransformStats struct instead of custom fields
- Add describe_handoff_actions() helper for consistent action descriptions
- Update all response types (Transform/Handoff Preview/Apply) to include stats
- Simplify frontend logic by removing redundant calculations
- Remove debug logging from useTrajectoryOps hook
…nships

Add support for tracking parent-child relationships between chat sessions through new parent_id and link_type fields. This enables:

- Handoff operations to create linked chat sessions
- Subagent spawning to track parent-child relationships
- History tree visualization showing chat lineage
- Trajectory compression and handoff with parent metadata preservation

Key changes:

- Add parent_id and link_type to ThreadParams and TrajectorySnapshot
- Implement handoff_select with generate_summary parameter for LLM summaries
- Update subagent and task spawn tools to set parent relationships
- Enhance history tree building to display handoff relationships
- Add UI controls for trajectory compression options
- Preserve parent metadata when saving chat history

Fixes #[issue_number]
- Add tool filtering in generation based on thread.tool_use comma-separated list
- Support optional code_workdir parameter in scope resolution and search tools
- Refactor subchat to use stateful/stateless modes with SubchatConfig
- Improve trajectory handling and handoff relationship inversion in history
- Simplify tool subagent execution with new run_subchat API
- Add visual indicators for chat relationships (subagent, handoff) in history tree
- Preserve system message prefix in handoff operations
- Add test utilities for GlobalContext creation
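The comma-separated `tool_use` filtering mentioned above might look roughly like this. The function name and the "blank list means no filtering" rule are assumptions for the sketch, not the repository's actual API.

```rust
// Sketch: filter the available tool names against a thread's `tool_use`
// field, assumed to be a comma-separated allowlist like "cat, shell".
fn filter_tools(available: &[&str], tool_use: &str) -> Vec<String> {
    let allowed: Vec<&str> = tool_use
        .split(',')
        .map(str::trim)
        .filter(|name| !name.is_empty())
        .collect();
    if allowed.is_empty() {
        // An empty or blank allowlist is treated as "no filtering".
        return available.iter().map(|s| s.to_string()).collect();
    }
    available
        .iter()
        .copied()
        .filter(|name| allowed.contains(name))
        .map(String::from)
        .collect()
}
```

Trimming each entry keeps `"cat, shell"` and `"cat,shell"` equivalent, and filtering out empty segments makes trailing commas harmless.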
- Remove MAX_AGENT_CYCLES constant and simplify generation loop
- Fix tool_use filtering to handle empty string case
- Improve subchat state tracking with prev_state variable
- Fix context window calculation to respect model limits
- Fix string truncation to use character count instead of bytes
- Add dead_code attributes for unused compression functions
- Refactor notification handling in command queue processor
- Reset abort flag when starting new stream
- Remove unnecessary "1337" message filtering in subchat bridge
- Update test expectations for empty array handling
## Core Changes

- Created chat/config.rs with unified subchat APIs:
  - run_subchat(config) for multi-step conversations
  - run_subchat_once(tool_name, messages) for single-turn calls
- All model params now resolved from YAML subchat_tool_parameters
- Strict validation: error if tool missing from YAML (no fallbacks)
- Statefulness controlled by config.stateful flag

## Bug Fixes

- Fixed HTTP model mismatch in subchat endpoint (serialization vs generation)
- Added n_ctx=0 guards in 4 files (subchat, tools, prepare, generation)
- Enforced n=1 in streaming layer (removed parameter exposure)
- Fixed budget underflow validation in strategic_planning
- Replaced unwrap() calls with proper error handling

## API Simplification

- Removed n_choices parameter from run_subchat_once (always 1)
- Removed ChatPost.n field (unused)
- Migrated all 12 tool callers to new unified API

## Dead Code Cleanup

- Removed count_matches() from files_correction_cache.rs
- Removed shortest_path() from files_correction_cache.rs
- Removed run_at_commands_remotely() from at_commands/execute_at.rs

## Files Changed (31)

- NEW: chat/config.rs (subchat configuration and resolution)
- Core: subchat.rs, stream_core.rs, generation.rs, prepare.rs
- HTTP: routers/v1/subchat.rs
- Tools: 7 tool files updated to new API
- Config: customization_compiled_in.yaml (subchat params)

All 496 tests passing.
Allow temperature to be None in subchat configurations, enabling models to use their default temperature behavior when not explicitly specified. Update SubchatConfig, SubchatParameters, and related functions to handle Option<f32> temperature values. Adjust customization YAML defaults accordingly.
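The `Option<f32>` handling described above can be sketched in a few lines: only emit the sampling parameter when it is explicitly set, so an unset temperature falls through to the model's own default. The struct and function names here are illustrative, not the actual types.

```rust
// Minimal sketch of an optional temperature in a subchat config.
struct SubchatConfig {
    temperature: Option<f32>, // None = let the model use its default
}

// Build the sampling params for the outgoing request; temperature is
// included only when explicitly configured.
fn sampling_params(cfg: &SubchatConfig) -> Vec<(String, String)> {
    let mut params = Vec::new();
    if let Some(t) = cfg.temperature {
        params.push(("temperature".to_string(), t.to_string()));
    }
    params
}
```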
Add real-time voice input streaming with live transcript updates:
- Backend: Implement streaming voice sessions with debounced transcription
- New StreamingSession for managing audio buffers and events
- Session worker processes audio with configurable debounce (300ms)
- Emit live transcripts and handle final transcription on stop
- Support for language selection per session
- Frontend: Replace batch transcription with streaming pipeline
- New useStreamingVoiceRecording hook with real-time audio capture
- Float32 to PCM conversion and base64 encoding
- Live transcript updates via EventSource subscription
- Integrate with ChatForm for immediate user feedback
- UI improvements:
- Add live transcript display in textarea during recording
- Disable send button while recording/finishing
- New finishing animation state for microphone button
- Readonly textarea styling during voice input
- API endpoints:
- GET /voice/stream/{session_id}/subscribe for SSE events
- POST /voice/stream/{session_id}/chunk for audio chunks
- Support concurrent streaming sessions
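The Float32-to-PCM step above happens in the TypeScript frontend; as a language-consistent illustration, the same conversion in Rust might look like the sketch below. The 16-bit little-endian PCM target and the clamping behavior are assumptions about the wire format, not confirmed details of the endpoint.

```rust
// Convert f32 audio samples in [-1.0, 1.0] (as produced by Web Audio
// capture) into 16-bit little-endian PCM bytes.
fn f32_to_pcm16_le(samples: &[f32]) -> Vec<u8> {
    let mut out = Vec::with_capacity(samples.len() * 2);
    for &s in samples {
        // Clamp first: out-of-range floats would otherwise saturate
        // unpredictably when cast.
        let clamped = s.clamp(-1.0, 1.0);
        let v = (clamped * i16::MAX as f32) as i16;
        out.extend_from_slice(&v.to_le_bytes());
    }
    out
}
```

The resulting byte buffer would then be base64-encoded before being posted to the chunk endpoint.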
- Refactor chat error handling to clear per-thread errors
- Allow handoff from error state in addition to idle state
- Add cancelRecording function to voice recording hook with keyboard shortcuts (Enter to finish, Escape to cancel)
- Add synthetic tool results for server-executed tools (e.g., Anthropic's web_search) to prevent LLM response regeneration
- Expose cancelRecording in useVoiceInput hook interface
Add support for strict tool schemas and improve handoff workflow with automatic regeneration. Includes trajectory operations enhancements and voice recording improvements.

- Add supports_strict_tools field to ChatModelRecord for strict parameter validation
- Add ToolChoice enum and tool_choice/parallel_tool_calls to ChatPrepareOptions
- Update make_openai_tool_value to generate strict schemas with additionalProperties
- Pass strict flag through tool description conversion pipeline
- Update anthropic.yaml and openai.yaml with supports_strict_tools: true
- Forward tool_choice and parallel_tool_calls in passthrough messages
- Refactor handoff_select to bundle context files and preserve agentic tools
- Add regenerate call after handoff to continue conversation in new chat
- Add recording start time tracking for live transcript window in voice recording
- Add imports for port and apiKey selectors in trajectory operations hook
… logic Add tool_choice and parallel_tool_calls fields to ChatPrepareOptions struct. Update handoff_select to generate summary from entire conversation instead of excluded messages only. Improve session state validation in transform_apply. Replace switchToThread with createChatWithId in useTrajectoryOps hook.
Add context usage monitoring with automatic trajectory panel opening when context reaches 97% capacity. Sanitize messages during handoff to remove transient metadata. Update usage thresholds (warning: 85%, full: 97%).
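The threshold check behind that monitoring can be sketched as a small classifier, using the percentages stated above (warning at 85%, full at 97%). The enum and function names are illustrative.

```rust
// Sketch of the context-usage classification described in the commit.
#[derive(Debug, PartialEq)]
enum ContextUsage {
    Ok,
    Warning, // >= 85% of the context window
    Full,    // >= 97% — e.g. trigger the trajectory panel
}

fn classify_context_usage(used_tokens: u64, context_limit: u64) -> ContextUsage {
    if context_limit == 0 {
        // Degenerate limit: treat as exhausted rather than divide by zero.
        return ContextUsage::Full;
    }
    let percent = used_tokens as f64 / context_limit as f64 * 100.0;
    if percent >= 97.0 {
        ContextUsage::Full
    } else if percent >= 85.0 {
        ContextUsage::Warning
    } else {
        ContextUsage::Ok
    }
}
```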
Add support for cleaning up message metadata and normalizing tool IDs when users change the model in a chat session. This ensures compatibility across different model providers that may have different tool ID formats.

- Import sanitize_messages_for_model_switch in handlers
- Detect model changes in SetParams command and trigger sanitization
- Add sanitize_messages_for_model_switch function to normalize tool IDs
- Strip usage, finish_reason, reasoning_content, and extra fields
- Add validation for tool ID format (alphanumeric, underscore, hyphen only)
- Generate valid tool IDs using UUID-based format when needed
- Include comprehensive test coverage for sanitization logic
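The validation rule named above (alphanumeric, underscore, hyphen only) is simple to express directly. In this sketch the replacement-ID generation is a sequential stand-in, since the real code is described as using a UUID-based format; treat the function names and fallback scheme as assumptions.

```rust
// Tool IDs must be non-empty and contain only alphanumerics,
// underscores, and hyphens — the rule stated in the commit message.
fn is_valid_tool_id(id: &str) -> bool {
    !id.is_empty()
        && id
            .chars()
            .all(|c| c.is_ascii_alphanumeric() || c == '_' || c == '-')
}

// Keep valid IDs as-is; replace invalid ones with a freshly generated
// ID. The real implementation uses a UUID; a sequence number keeps this
// sketch dependency-free.
fn normalize_tool_id(id: &str, fallback_seq: u64) -> String {
    if is_valid_tool_id(id) {
        id.to_string()
    } else {
        format!("toolu_regen_{fallback_seq}")
    }
}
```

The same normalization must be applied to both the assistant's tool call and its matching tool result so the pair stays linked after the rewrite.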
Remove automatic_patch from PersistedThreadParams interface and stop persisting it across sessions. The automatic_patch setting is now always reset to false when creating new chats instead of being restored from previous thread parameters.
…ss tracking

Add new `tail_needs_assistant` function to determine if the conversation tail requires an assistant response. This checks whether the last message is a user message, and whether there are unanswered client tool calls (i.e., tools other than server-executed ones with the "srvtoolu_*" prefix). Update `start_generation` to use this logic for continuing generation when no tool calls are pending, enabling multi-turn agentic conversations.

Improve subchat progress tracking by:

- Adding `parent_tool_call_id` to SubchatConfig for linking subchats to parent tools
- Passing step index and max_steps to tool execution for better progress reporting
- Truncating tool arguments in progress messages (max 200 chars)
- Updating progress format to "step/max: tool(args)" for clarity

Enhance UI progress display:

- Refactor `parseProgressEntry` to handle both step-based and raw log entries
- Add loading spinners to tool components while results are pending
- Show current progress step in tool headers (cat, tree, search, shell, subagent, etc.)
- Improve progress message parsing in subagent and strategic planning tools

Update trajectory UI labels for clarity:

- "Compress" → "Compress in-place"
- "Compress tool results" → "Truncate tool results"
- "Include context files" → "Include all opened files"
- "Include tool results" → "Include research, subagent & planning results"
- "From last user message only" → "Include last user message + responses"
- Update default handoff options (disable context/tools by default, enable summary)

Add comprehensive tests for `tail_needs_assistant` covering:

- Assistant messages with/without tool calls
- Server tool calls (srvtoolu_*) vs client tool calls
- Context file and tool result messages
- Empty messages and edge cases
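The `tail_needs_assistant` check can be sketched as below. The message model is heavily simplified, and the exact pending-call semantics are inferred from the description (a user message at the tail needs a reply; tool calls that are all answered, or server-executed `srvtoolu_*` calls, mean generation should continue; an unanswered client call means the client must run the tool first), so treat this as an approximation rather than the actual implementation.

```rust
// Minimal message model for the sketch; the real chat message type has
// many more fields.
struct Msg {
    role: &'static str,                   // "user" | "assistant" | "tool"
    tool_call_ids: Vec<String>,           // calls emitted by an assistant msg
    answers_tool_call_id: Option<String>, // set on tool-result messages
}

fn msg(role: &'static str, calls: &[&str], answers: Option<&str>) -> Msg {
    Msg {
        role,
        tool_call_ids: calls.iter().map(|s| s.to_string()).collect(),
        answers_tool_call_id: answers.map(String::from),
    }
}

fn tail_needs_assistant(messages: &[Msg]) -> bool {
    let Some(last) = messages.last() else {
        return false;
    };
    // A trailing user message always needs an assistant response.
    if last.role == "user" {
        return true;
    }
    let Some(assistant) = messages.iter().rev().find(|m| m.role == "assistant") else {
        return false;
    };
    // Plain assistant reply with no tool calls and no new user input:
    // the turn is complete.
    if assistant.tool_call_ids.is_empty() {
        return false;
    }
    let answered: Vec<&String> = messages
        .iter()
        .filter_map(|m| m.answers_tool_call_id.as_ref())
        .collect();
    // Continue generating only when every call is either server-executed
    // ("srvtoolu_*") or already has a tool result.
    assistant
        .tool_call_ids
        .iter()
        .all(|id| id.starts_with("srvtoolu_") || answered.contains(&id))
}
```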
- Fix no-misused-promises errors in TrajectoryPopover.tsx by wrapping async handlers
- Fix no-unnecessary-condition in historySlice.ts using 'in' operator
- Remove unnecessary type assertions in trajectory.ts
- Fix no-unnecessary-condition in useStreamingVoiceRecording.ts
- Add comments to empty catch handlers in useVoiceInput.ts
- Add eslint-disable-next-line for debug console statements in useChatSubscription.ts
- Fix react-hooks/exhaustive-deps in TaskWorkspace.tsx and ThreadHistory.tsx with useMemo
- Fix react-refresh/only-export-components by extracting utilities to internalLinkUtils.ts
- Format code with Prettier

All frontend quality checks now passing:

- ESLint: 0 errors, 0 warnings
- Tests: 315 passing
- TypeScript: compilation successful
…rting by update timestamp
…ditor

Introduce dynamic YAML schema parsing for provider configuration with:

- Separate important/extra fields based on f_extra flag
- Per-field save with backend YAML merge preserving secrets
- Remove legacy OAuth UI in favor of direct credential input
- Add API key support alongside OAuth tokens for Claude Code

Backend changes:

- Simplify settings merge logic
- Remove obsolete OAuth endpoints
- Dual auth support (API key + OAuth) for Anthropic/Claude Code
Implement full OpenAI OAuth2 flow with PKCE for secure browser-based login:
- New openai_codex_oauth module with verifier/challenge generation
- Support for both in-app OAuth tokens and Codex CLI credentials
- HTTP callback endpoint for automatic auth completion
- Updated provider schema to use oauth: { supported: true }
- Enhanced GUI with auto-polling and provider-specific labels
- Simplified OpenAICodexProvider auth resolution
Supports ChatGPT Plus/Pro subscriptions for GPT-5-Codex model access.
Apply consistent line breaking for multiline JSX elements and expressions in ToolCard components, ProviderForm, and related files.
…ries surfacing

Introduce KnowledgeIndex for O(1) retrieval by files/tags/entities/related fields. Auto-builds in background from all knowledge dirs (local + global).

Key improvements:

- Surface "Related memories (short form)" in 20+ tools (cat, search, tree, edit tools, knowledge creation, subagents, etc.)
- <50ms in-memory lookup
- Richer frontmatter (created_at, summary, entities, related_files/entities, hashes)
- Scoped VecDB search (knowledge/trajectory dirs) with de-dup + usefulness scoring
- Consistent archiving/indexing across local/global knowledge roots

Supports both new rich docs and legacy frontmatter.
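An O(1)-lookup index of this kind is typically an inverted map from each key (file path, tag, entity) to the matching documents. The sketch below shows only the file-path dimension with invented names; the real KnowledgeIndex indexes several fields and richer document metadata.

```rust
use std::collections::HashMap;

// Simplified sketch of an inverted index: file path -> memory doc ids.
#[derive(Default)]
struct KnowledgeIndex {
    by_file: HashMap<String, Vec<usize>>,
    docs: Vec<String>, // doc id -> short-form text
}

impl KnowledgeIndex {
    fn add(&mut self, short_form: &str, files: &[&str]) -> usize {
        let id = self.docs.len();
        self.docs.push(short_form.to_string());
        for f in files {
            self.by_file.entry(f.to_string()).or_default().push(id);
        }
        id
    }

    // One hash lookup, then a copy of the matching short forms — this is
    // what keeps per-tool surfacing in the sub-50ms range.
    fn related_to_file(&self, path: &str) -> Vec<&str> {
        self.by_file
            .get(path)
            .map(|ids| ids.iter().map(|&i| self.docs[i].as_str()).collect())
            .unwrap_or_default()
    }
}
```

Because the index is rebuilt in the background, tools can query it synchronously without touching the filesystem on the hot path.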
Replace `supports_reasoning: Option<String>` and `supports_boost_reasoning` with explicit `reasoning_effort_options`, `supports_thinking_budget`, and `supports_adaptive_thinking_budget` fields throughout model records and caps.

- Simplify model adaptation logic in chat/prepare.rs to use capability checks instead of string matching
- Update OAuth token exchange to use form params
- Add background OAuth token refresh task
- Improve knowledge index ranking
- Fix diff rendering to separate JSON from related memories
- Update known_models.json, TypeScript types, and tests accordingly
…de only

- Remove openai_codex handling from OAuth start/exchange/reset endpoints
- Delete openai_codex OAuth callback handler entirely
- Simplify provider checks to only support "claude_code"
- Rename save_provider_oauth_tokens to save_oauth_tokens_to_provider
- Update redirect URI in openai_codex_oauth (likely for migration)
- Hardcode claude_code.yaml config path and provider creation
Move completion_models to completion_presets.json and embedding_models to embedding_presets.json for better modularity. Update CONTRIBUTING.md reference. Retain chat_models in known_models.json (not shown in diff).
… discovery

Remove dependency on the OpenAI models API call and instead use:

- Hardcoded known Codex model IDs
- Regex discovery of Codex models from model_caps
- Simplified model matching without date suffix stripping

Improves reliability by eliminating the external API dependency and enables multimodal support by default.
…form endpoints

Improve OpenAI Codex auth resolution to prefer OPENAI_API_KEY (from token-exchange) over OAuth access tokens for api.openai.com compatibility. Also add subchat thinking progress streaming for deep_research/strategic_planning/code_review tools, plus minor GUI provider label/icon additions and YAML compat fixes.

- Update auth priority: in-app API key → Codex CLI API key → OAuth tokens
- Add token-exchange flow to obtain API key during OAuth
- Preserve existing oauth_tokens in YAML refresh
- Expose OPENAI_API_KEY at config top-level for backward compat
- Implement SubchatProgressCollector for real-time thinking previews
- Update GUI constants/icons for openai_codex/claude_code providers
Replace deprecated reqwest-eventsource with eventsource-stream for SSE streaming. Add HTTP status validation before streaming and improve error handling with format_llm_error_body.

refactor(openai-codex): support ChatGPT backend OAuth endpoint

Add ChatGptBackendOAuth variant with chatgpt_account_id extraction from JWT. Support both Platform API (/v1/responses) and ChatGPT backend (/backend-api/codex/responses) endpoints with conditional request params.

refactor(subchat): add run_subchat_once_with_parent and subchat_depth

Introduce run_subchat_once_with_parent for nested subchats with proper parent tx/abort/depth propagation. Remove entertainment message helpers in favor of native progress streaming. Add unicode-safe truncation.

feat(anthropic)!: filter orphaned web search citations and server blocks

Strip citations with encrypted_index lacking server_content_blocks, and server_tool_use blocks without matching web_search_tool_result. Prevents invalid multi-turn requests.

refactor(tools): migrate to run_subchat_once_with_parent

Update code_review, strategic_planning, deep_research to use new run_subchat_once_with_parent API. Remove hardcoded entertainment messages.

chore: add ChatGPT OAuth diagnostics and GUI polling fix

Add api_key_exchange_error field and status checks. Fix ProviderOAuth polling to stop on terminal backend errors.

fix(adapter): ChatGPT backend param compatibility

Detect chatgpt.com/backend-api endpoint and omit unsupported params (max_output_tokens, temperature, stop). Set store:false.

fix(refact): sanitize thinking_blocks and citations

Filter thinking_blocks to valid types only. Strip web citations without server_content_blocks.
- Add AppendReasoning delta and reasoning_tail tracking for OpenAI Responses API
- Implement FinalizeToolCalls to handle complete tool calls from .done events
- Fix thinking blocks deduplication by ID to prevent duplicates
- Enhance UI streaming progress with auto-scroll, markdown rendering, and preserved newlines
- Improve scrollbar UX with stable gutters and hover-only thumbs
- Robustify OpenAI adapter with ChatGPT backend param filtering and comprehensive event handling
- Consistent null-safe reasoning capability checks across UI components

Fixes streaming progress truncation and tool call argument replacement issues.
…nt blocks

- Add dedicated ToolCard components for OpenAI server tools (web_search_call, file_search_call, code_interpreter_call, computer_call, image_generation, audio, refusal, mcp_call, mcp_list_tools)
- Implement OpenAI Responses API stateful multi-turn support (previous_response_id, store=true, tail-only message sending)
- Add server_content_blocks display and server-executed tool result formatting
- Enhance stream parsing for lifecycle events, output_items, and citations
- Update ThreadParams/TrajectorySnapshot with previous_response_id persistence
- Rich rendering: web results with links, file matches, code outputs, images, transcripts with proper icons and summaries

BREAKING CHANGE: OpenAI Responses API now requires store=true and chains via previous_response_id. UI expects srvtoolu_* prefixed server tool calls.

Fixes #model-capabilities-resolution
- Disable store=true, previous_response_id, and include fields for chatgpt.com/backend-api
- Filter reasoning items from input (not persisted server-side)
- Skip redundant server content blocks for already-streamed data (output_text.done, reasoning.done, etc.)
- Refine output_item handling to avoid premature tool call emissions

Fixes compatibility issues causing 404s and incorrect streaming behavior.
…uter provider routing

Improve provider model management and streaming robustness:

**Provider Enhancements**

- Add vLLM, Ollama, LM Studio API model discovery (`fetch_available_models`)
- OpenRouter provider routing: `selected_provider`, `provider_variants`, endpoints API
- Live model filtering, pricing, capabilities from provider APIs
- Google Gemini API model listing and health checks
- Cache-aware provider enabled state (`enabled: true` in YAML)

**Streaming Improvements**

- Anthropic interleaved thinking: per-block reasoning via `block_index`
- Robust thinking block deduplication by `(id, type+index, type+signature)`
- Server content block ordering preservation (`_order_index`)
- Cache guard: prompt prefix validation, ephemeral cache_control injection
- OpenAI Responses/LiteLLM: `FinalizeToolCalls`, `AppendReasoning` deltas

**Fixes**

- Fix streaming truncation, tool call deduplication, argument replacement
- Model switch clears `previous_response_id` (Responses API)
- Filter orphaned `server_tool_use` blocks in multi-turn
- UI: ServerContentBlocks rendering, auto-scroll, markdown

**UI/UX**

- Provider model cards: search, grouping (OpenRouter families), provider selection table
- Cache guard confirmation dialog with diff/estimated cost
- OpenRouter account balance, health status badges

Closes streaming progress and tool finalization issues
Add optional parameters to both att_search and regex_search tools:

- context_lines: show line-numbered previews around matches
- max_files/max_recs_per_file/max_total_recs: control context attachment
- Similar limits for regex_search (max_matches_per_file/max_total_matches)

Includes parsing helpers, preview formatters, bounded emission with warnings when limits are hit, and improved output formatting with match markers (>). Replaces hardcoded limits with configurable defaults.

BREAKING CHANGE: New optional parameters added to tool schemas.
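A preview formatter of the kind described (line-numbered context around a match, with a `>` marker on the matching line) can be sketched as follows; the function name, column width, and separator are assumptions for illustration.

```rust
// Render `context_lines` lines of context around a match, 1-based line
// numbers, with ">" marking the matching line.
fn format_preview(lines: &[&str], match_line: usize, context_lines: usize) -> String {
    let start = match_line.saturating_sub(context_lines);
    let end = (match_line + context_lines + 1).min(lines.len());
    let mut out = String::new();
    for i in start..end {
        let marker = if i == match_line { '>' } else { ' ' };
        out.push_str(&format!("{marker}{:>5} | {}\n", i + 1, lines[i]));
    }
    out
}
```

`saturating_sub` and the `min` on the upper bound keep the window valid for matches near the start or end of the file.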
…mary_at

Add task metadata field to track when agent completion summaries are emitted, preventing duplicate notifications for historical cards. Include only newly completed cards (since the last summary) in "all agents finished" messages.

Includes:

- task_agent_monitor and task_agent_finish logic updates
- TaskMeta schema changes and storage initialization
- task_spawn_agent tracking of agent run starts
- GUI polling optimizations and notification event types
- Updated task schema tests
- Improve chat queue tool handling for task_done/ask_questions states
- Optimize multi-thread SSE subscriptions and visibility refresh
- Fix signature_delta test and logic: treat signatures as opaque tokens (latest replaces, rather than concatenating)
- Improve Anthropic adapter block interleaving: proper text/thinking/server/tool_use ordering with dual-key sort (stream index + seq)
- Attach citations to matching text blocks by index instead of appending
- Add server_content_blocks parsing and sanitization for model switching
- Preserve _anthropic_text_blocks for Refact proxy compatibility
- Update login UI to support provider selection workflow

Fixes streaming order issues with interleaved thinking/server content.
Apply consistent formatting across TypeScript, Rust, and JSX files:

- Fix long dependency arrays in useEffect/useCallback
- Break up long template literals and conditional expressions
- Format multiline JSX props and object literals
- Improve readability of complex Rust lock patterns
- Remove trailing whitespace and empty lines
Extract noop function and disable ESLint for clarity, avoiding inline empty functions in Promise.then() chain.
Filter out thinking blocks where signature is empty string (not just null), as some proxies send these during streaming before final signature is ready. Prevents resending invalid blocks in multi-turn conversations. Add test for empty signature handling.
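The filter described above treats a missing signature and an empty-string signature the same way, so a partially streamed block is never resent. A minimal sketch, with an invented block type standing in for the real one:

```rust
// Simplified thinking block; the real type carries more fields.
struct ThinkingBlock {
    text: String,
    signature: Option<String>,
}

// Keep only blocks with a non-empty signature: both None and Some("")
// indicate a block whose final signature never arrived.
fn filter_resendable(blocks: Vec<ThinkingBlock>) -> Vec<ThinkingBlock> {
    blocks
        .into_iter()
        .filter(|b| matches!(b.signature.as_deref(), Some(sig) if !sig.is_empty()))
        .collect()
}
```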
…verflow Introduce MAX_LINE_LENGTH constant (10k chars) and truncate excessively long lines with "..." suffix using floor_char_boundary for safe Unicode handling. Prevents minified single-line files from consuming excessive context tokens.
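The boundary-safe truncation described above can be sketched as follows. Note that `str::floor_char_boundary` is an unstable (nightly) API at the time of writing, so this sketch finds the boundary by hand with the stable `is_char_boundary`; the function name and the exact `"..."` suffix placement are assumptions.

```rust
const MAX_LINE_LENGTH: usize = 10_000;

// Truncate a line to at most `max_len` bytes, walking backward to the
// nearest UTF-8 char boundary so the cut never lands inside a
// multi-byte character, then appending "...".
fn truncate_line(line: &str, max_len: usize) -> String {
    if line.len() <= max_len {
        return line.to_string();
    }
    let mut cut = max_len;
    while cut > 0 && !line.is_char_boundary(cut) {
        cut -= 1;
    }
    format!("{}...", &line[..cut])
}
```

Without the boundary walk, slicing a minified line at a fixed byte offset would panic whenever the offset falls mid-character.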
Reduce comprehensive developer guide to essential reference:

- Preserve all key technical details (stack, architecture, APIs)
- Remove verbose tables, code examples, tutorials
- Keep critical state invariants, patterns, checklists
- Update paths, features, endpoints to match current codebase

docs(gui): condense AGENTS.md from 4345 to 291 lines

Streamline React GUI documentation:

- Maintain full feature list, hooks, components, flows
- Eliminate redundant examples, deep implementation details
- Retain SSE protocol, Redux patterns, IDE integration specs
- Update stats, file layouts, test coverage info
Align columns and expand descriptions for better readability in SSE events, API endpoints, hooks, and message role tables.
- Update IntelliJ, VSCode, and engine versions from 7.0.0 to 7.0.1
- Add BYOK mode support with graceful cloud fallback
- Improve error handling when cloud caps fetch fails
- Add provider auto-initialization for model toggles
- Update inference endpoint from app.refact.ai to inference.smallcloud.ai
- Fix API key validation in provider enabled check
- Improve cache token calculation in LLM adapters
- Consolidate scrollbar styling across UI components
- Extract ModelSamplingParams into reusable component
- Add metering aggregation tests and balance tracking
- Update help text for address_url configuration

Fixes cloud availability issues and improves local provider experience
- Add login button for unauthenticated Refact Cloud users in Dropdown
- Enhance collectEvents with SSE block parsing, stopWhen callback, and proper cleanup
- Add withRetry utility for flaky integration test operations
- Filter empty-name tool calls in openai_merge and trajectories

Fixes unreliable chat subscription tests and improves auth UX.
- Fix OpenRouter provider selection to handle empty available_providers list
- Update OpenAI usage parsing test with accurate token math (1200 - 800 - 200 = 200)
- Simplify uptime percentage formatting in AvailableModelCard
- Improve test readability with line-break formatting
Implement streaming parser that extracts content between <think> and </think> tags (case-insensitive) into a separate reasoning stream while preserving the content stream. Handles split chunks, partial tags, and nested scenarios. Includes comprehensive tests for tag parsing edge cases.

refactor(file-edit): strip workspace prefix from patch paths

feat(gui): extract provider access logic with hasAnyUsableActiveProvider
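The tricky part of such a parser is a tag split across chunk boundaries (`"a<thi"` then `"nk>…"`). One common shape is a small state machine that holds back any suffix that could be the start of a tag until the next chunk arrives. The sketch below follows that shape with invented names; it matches tags case-insensitively on bytes (avoiding `to_lowercase`, whose byte offsets can shift for some Unicode), does not track nesting (the first close tag ends the think span), and omits the end-of-stream flush a real implementation would need.

```rust
struct ThinkSplitter {
    in_think: bool,
    pending: String, // held-back text that may contain a partial tag
}

// Case-insensitive find for an ASCII needle; returned index is a valid
// char boundary because the matched bytes are ASCII.
fn find_ci(haystack: &str, needle: &str) -> Option<usize> {
    let (h, n) = (haystack.as_bytes(), needle.as_bytes());
    if n.is_empty() || h.len() < n.len() {
        return None;
    }
    (0..=h.len() - n.len()).find(|&i| h[i..i + n.len()].eq_ignore_ascii_case(n))
}

// Length of the longest suffix of `pending` that is a proper prefix of
// `tag` — text we must hold back in case the tag completes next chunk.
fn held_tag_prefix(pending: &str, tag: &str) -> usize {
    let (p, t) = (pending.as_bytes(), tag.as_bytes());
    let max = (t.len() - 1).min(p.len());
    (1..=max).rev().find(|&k| p[p.len() - k..].eq_ignore_ascii_case(&t[..k])).unwrap_or(0)
}

impl ThinkSplitter {
    fn new() -> Self {
        Self { in_think: false, pending: String::new() }
    }

    // Feed one chunk; returns (content, reasoning) emitted for it.
    fn feed(&mut self, chunk: &str) -> (String, String) {
        let (mut content, mut reasoning) = (String::new(), String::new());
        self.pending.push_str(chunk);
        loop {
            let tag = if self.in_think { "</think>" } else { "<think>" };
            if let Some(pos) = find_ci(&self.pending, tag) {
                let target = if self.in_think { &mut reasoning } else { &mut content };
                target.push_str(&self.pending[..pos]);
                self.pending.drain(..pos + tag.len());
                self.in_think = !self.in_think;
            } else {
                // Flush everything except a possible partial tag suffix.
                let keep = held_tag_prefix(&self.pending, tag);
                let flush_to = self.pending.len() - keep;
                let target = if self.in_think { &mut reasoning } else { &mut content };
                target.push_str(&self.pending[..flush_to]);
                self.pending.drain(..flush_to);
                break;
            }
        }
        (content, reasoning)
    }
}
```

The `loop` handles chunks that contain both an open and a close tag, so `"<think>x</think>y"` in a single chunk still splits correctly.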