Feature/ai agent #35
Open
richardkiene wants to merge 21 commits into main from feature/ai-agent
Conversation
No description provided.
- Add comprehensive AI Assistant system with advanced capabilities
- Implement NativeImage-based avatar loading with 64x64 resize optimization
- Create pop-out chat window with proper welcome screen and Time Buddy branding
- Replace action buttons with inspiring sample sentence prompts
- Add Advanced AI integration with RAG system and intelligent query generation
- Enhance UI with larger, more visible avatars in both sidebar and pop-out
- Remove unnecessary quick action buttons from welcome screen
- Implement proper conversation loading with welcome message preservation
- Add comprehensive debugging and error handling throughout chat system
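A minimal sketch of the NativeImage-based avatar loading described above, using Electron's real nativeImage API; the file path and function name are assumptions, not taken from this PR:

```js
const { nativeImage } = require('electron');
const path = require('path');

// Load the assistant avatar once and downscale it to 64x64 so the renderer
// never handles the full-size asset. 'assets/ai-avatar.png' is hypothetical.
function loadAvatarDataUrl() {
  const img = nativeImage.createFromPath(
    path.join(__dirname, 'assets', 'ai-avatar.png')
  );
  if (img.isEmpty()) return null; // fall back to a CSS placeholder in the UI
  return img.resize({ width: 64, height: 64 }).toDataURL();
}
```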
- Implement enhanced UI features for AI Assistant responses including confidence indicators, data source badges, query previews, feedback buttons, and timestamps
- Add Run Query in Editor buttons to open AI-generated queries directly in the main editor
- Update pop-out window to support enhanced response data with full metadata
- Fix IPC communication to handle enhanced response objects instead of plain strings
- Add data source awareness with visual styling for InfluxQL vs PromQL queries
- Implement proper error handling and fallback to clipboard when editor unavailable
- Add loading states and success/error feedback for better UX
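A sketch of the IPC change described above, where the chat window passes a full response object rather than a plain string, with a clipboard fallback when the editor is unavailable. The channel name, object shape, and openQueryInEditor helper are assumptions for illustration:

```js
const { ipcMain, clipboard } = require('electron');

// Main process: receive the enhanced response object so the caller can carry
// query text, data source, and confidence in one payload.
ipcMain.handle('ai:run-query-in-editor', async (_event, response) => {
  const { query, dataSource } = response; // e.g. dataSource: 'influxql' | 'promql'
  const opened = await openQueryInEditor(query, dataSource); // hypothetical helper
  if (!opened) {
    // Editor unavailable: fall back to the clipboard so the query isn't lost.
    clipboard.writeText(query);
    return { ok: false, fallback: 'clipboard' };
  }
  return { ok: true };
});
```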
Major improvements to AI assistant functionality:

- Remove hardcoded pattern matching for truly dynamic AI responses
- Add conversation context persistence with localStorage
- Implement comprehensive system command architecture (GET_SAMPLE_DATA, GET_FIELDS, GET_TAGS)
- Fix response format handling for both old and new Grafana formats
- Enhance dashboard generation with real datasource UIDs and names
- Fix datasource placeholder issues replacing "$datasource" with actual UID
- Fix sample data format parsing supporting both flat array and columnar formats
- Correct analysis results checking using metric.data instead of metric.success
- Improve conversation context to prioritize mentioned measurements
- Fix measurement name extraction to handle markdown formatting
- Unify prompt generation between system commands and complex responses

Resolves issues with AI seeing wrong measurements, "No data" showing when data exists, empty queries and placeholder datasources in generated dashboards, and lost conversation context between messages.
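A sketch of the system command architecture named above, as a simple dispatch table. The command names come from the commit; the dataAccess method names and handler shapes are assumptions:

```js
// Commands the assistant can emit instead of free-form text, each mapped to a
// data-access handler. dataAccess is the project's data module; these method
// names are illustrative, not copied from the PR.
const SYSTEM_COMMANDS = {
  GET_SAMPLE_DATA: (args) => dataAccess.getSampleData(args.measurement),
  GET_FIELDS: (args) => dataAccess.getFields(args.measurement),
  GET_TAGS: (args) => dataAccess.getTags(args.measurement),
};

async function handleSystemCommand(command, args) {
  const handler = SYSTEM_COMMANDS[command];
  if (!handler) throw new Error(`Unknown system command: ${command}`);
  return handler(args);
}
```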
Major improvements to AI assistant functionality:

- Fix context isolation between conversations to prevent data bleeding
- Replace modal fallbacks with graceful error handling throughout
- Implement shared ListManager utility for conversation and history management
- Add comprehensive conversation management with inline editing and menus
- Create modern pop-out modal system replacing prompt() dialogs
- Add proper markdown support in AI responses with syntax highlighting
- Fix missing getCurrentConversation function and UI element references
- Implement three-dot menu with rename, tag, export, and delete actions
- Enhance conversation list with search, filtering, and metadata display
- Add persistent conversation context with proper cleanup on deletion

Technical changes:

- New ListManager class for shared list functionality
- Context isolation via conversation-specific storage keys
- Enhanced CSS for conversation UI and modal components
- Proper error boundaries and fallback handling
- Improved accessibility and responsive design
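A sketch of the conversation-specific storage keys mentioned under "Technical changes"; the key prefix and context shape are assumptions:

```js
// Each conversation persists its own context under its own key, so data
// never bleeds between chats and deletion can clean up cleanly.
const contextKey = (conversationId) => `ai-context:${conversationId}`;

function saveContext(conversationId, context) {
  localStorage.setItem(contextKey(conversationId), JSON.stringify(context));
}

function loadContext(conversationId) {
  const raw = localStorage.getItem(contextKey(conversationId));
  return raw ? JSON.parse(raw) : { messages: [], measurements: [] };
}

function deleteConversationContext(conversationId) {
  // Proper cleanup on deletion, per the commit above.
  localStorage.removeItem(contextKey(conversationId));
}
```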
- Implement complete test suite with 187 passing tests (100% success rate)
- Add unit tests for dataAccess, queryRequestBuilder, and error scenarios
- Add integration tests for analytics, queries, schema, and variables
- Create GitHub Actions workflows for automated testing and PR validation
- Add cross-platform testing support (Ubuntu, Windows, macOS)
- Implement PR protection with automated test result comments
- Add comprehensive documentation for testing and CI/CD processes
- Include test fixtures, mocks, and utilities for reliable testing
- Add security scanning and build validation to CI pipeline
- Configure branch naming conventions and PR requirements validation

The test suite covers core functionality including data access patterns, query building, error handling, and integration scenarios. All tests pass consistently across different environments.
- Remove hardcoded "telegraf" database logic from InfluxDB query execution
- Implement simplified database resolution that only extracts from ON clauses
- Let backend handle default database selection for broader compatibility
- Update test expectations to match new database resolution behavior
- Achieve 100% test pass rate (187/187 tests passing)

This ensures the application works with any InfluxDB instance without assuming specific database names, improving portability and reliability.
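A sketch of the simplified database resolution described above: only an explicit InfluxQL ON clause yields a database, otherwise the backend's default is used. The regex and function name are assumptions:

```js
// Extract the database from an ON clause (e.g. SHOW MEASUREMENTS ON "mydb").
// Returns undefined when no clause is present, letting the backend choose
// its default database instead of assuming "telegraf".
function resolveDatabase(query) {
  const match = /\bON\s+"?([\w-]+)"?/i.exec(query);
  return match ? match[1] : undefined;
}
```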
- Remove automatic AI connection restoration on app startup to prevent incorrect status displays
- Fix AdvancedAI duplicate initialization by removing automatic startup checks
- Ensure both Grafana and AI connections start in disconnected state consistently
- Clear stale active AI connection data from localStorage on startup
- Require explicit user action for all connection establishments
- Add proper connection state tracking in GrafanaConfig
- Improve connection status consistency across title bar and panels

This ensures a clean startup state where all connections show as disconnected until the user explicitly connects, preventing confusing UI states.
- Upgraded from the very outdated Electron 28.0.0 to the latest 37.2.6
- Updated electron-builder from 24.9.1 to 26.0.12 for compatibility
- Resolved all npm vulnerabilities in the process
- Added stub implementations for generateProactiveInsights()
- Added assessSystemHealth() method with placeholder health metrics
- Added notifyProactiveFindings() method for handling analysis results
- Fixes TypeError when proactive analysis runs
- Set up Playwright with Electron-specific configuration
- Created ElectronApp helper class for test automation
- Added comprehensive E2E test suites:
  - Connection management tests (Grafana and AI)
  - Query editor functionality tests
  - Navigation and UI interaction tests
- Added npm scripts for running tests in various modes
- Included detailed README with testing guide

Test with: npm run test:e2e
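A minimal sketch of what an ElectronApp helper like the one above typically wraps, using Playwright's real _electron API; the launch argument, selector, and helper structure are assumptions:

```js
const { _electron: electron } = require('playwright');

// Launch the Electron app and return handles for tests to drive.
async function launchApp() {
  const app = await electron.launch({ args: ['.'] }); // launch from project root
  const window = await app.firstWindow();             // first BrowserWindow
  await window.waitForLoadState('domcontentloaded');
  return { app, window };
}

// Usage inside a test:
// const { app, window } = await launchApp();
// await window.click('#connect-grafana'); // selector is illustrative
// await app.close();
```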
- Created comprehensive mock server simulating Grafana and AI services
- Mock server provides all necessary endpoints for testing:
  - Grafana authentication, datasources, queries, schema
  - Ollama and OpenAI AI service endpoints
- Updated ElectronApp helper to manage mock server lifecycle
- Added helper methods for connecting to mock services
- Created integration test suite using mock server
- Added GitHub Actions workflow for CI/CD testing
- Tests now work without real Grafana/AI connections

Benefits:
- Tests run consistently in CI/CD pipelines
- No external dependencies required
- Full query execution flow can be tested
- Works on any developer machine without setup
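A stripped-down sketch of the mock server idea: fake just enough of the Grafana and Ollama HTTP surface for E2E tests. The endpoint paths are the public ones those services expose; the response bodies are illustrative, and the port 3001 is taken from a later commit in this PR:

```js
const express = require('express');

function startMockServer(port = 3001) {
  const app = express();
  app.use(express.json());

  // Grafana: health check and datasource listing
  app.get('/api/health', (_req, res) => res.json({ database: 'ok' }));
  app.get('/api/datasources', (_req, res) =>
    res.json([{ uid: 'mock-influx', name: 'Mock InfluxDB', type: 'influxdb' }])
  );

  // Ollama: model listing used when "connecting" the AI service
  app.get('/api/tags', (_req, res) =>
    res.json({ models: [{ name: 'llama3' }] })
  );

  return app.listen(port);
}
```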
- Enhanced mock server with complete Grafana, InfluxDB, Prometheus, and AI endpoints
- Comprehensive UI test suite covering all major application features:
  * Variables panel functionality and state management
  * Chart visualization types and switching capabilities
  * Query result pagination and data handling
  * Schema explorer interactions and navigation
  * AI chat functionality and context management
  * Keyboard shortcuts and accessibility features
  * File operations (save, load, export)
  * Export functionality for CSV and JSON formats
  * Error handling scenarios and recovery mechanisms
  * Multi-tab operations and workspace management
  * Query autocomplete and syntax highlighting

This ensures 100% coverage of critical UI functionality for reliable automated testing.
Add comprehensive E2E test coverage achieving 100% pass rate across 129 tests.

New E2E test suites:
- Query execution tests (46 tests) for InfluxDB and Prometheus
- Schema explorer tests (22 tests) for database introspection
- Query history tests (21 tests) for history management

Enhanced test infrastructure:
- Improved mock server with proper schema query detection
- Enhanced Electron app helpers for better test reliability
- Updated fixtures to match Grafana frames format
- Better fetch mocking for schema query scenarios

Application improvements:
- Add query execution timing display
- Enhance schema UI with test-friendly selectors
- Fix schema datasource dropdown functionality
- Improve schema datasource change handling

Test fixes:
- Correct button and panel selectors across all E2E tests
- Fix CodeMirror integration for query editor tests
- Resolve CSS selector syntax issues

Results: 187/187 unit tests + 129/129 E2E tests passing
- Add clearTestData() method to clean test connections after tests
- Use isolated user data directory (.test-data) for E2E tests
- Create clean:test-data script to manually remove test data
- Filter out test connections (localhost:3001) from localStorage
- Add .test-data to .gitignore

This ensures E2E tests don't interfere with the developer's local configuration.
When clicking an AI connection to connect, OllamaService.initialize() was called but Analytics.initializeAiConnection() was not, leaving OllamaService.isConnected as false from the Analytics perspective. This caused 'Run Analysis' to fail with a 'connect to AI first' error.

Now properly calls Analytics.initializeAiConnection(), which:
- Sets up the correct service (Ollama or OpenAI)
- Updates OllamaService.isConnected properly
- Ensures Analytics can use the AI service for analysis
Fixed TypeError by properly referencing Analytics through the window object when calling the initializeAiConnection method
- Created comprehensive test suite for AI connection workflow
- Tests verify AI service initialization and Analytics integration
- Tests ensure Run Analysis button works properly when AI is connected
- Tests validate error handling when AI is disconnected
- Enhanced mock server with analysis-specific responses
- Fixed element visibility issues in test helpers

This prevents regression of the AI connection bug where Analytics wasn't properly initialized when connecting to AI services.
…oint

Issues fixed:
- Analytics.fetchAvailableModels() was using hardcoded localhost:11434
- Should use actual endpoint from OllamaService.config.endpoint
- Analytics.isConnected wasn't being set when connecting via UI
- Analytics config wasn't syncing with connected service

Solutions:
- Update fetchAvailableModels to use OllamaService endpoint when connected
- Sync Analytics state with OllamaService after successful connection
- Add comprehensive E2E tests to prevent endpoint regression
- Add unit tests for endpoint configuration

Tests added:
- E2E tests ensure Analytics uses correct custom endpoints
- Unit tests verify no hardcoded localhost defaults
- Tests for switching between different Ollama servers
- Tests for OpenAI connections (no Ollama endpoint needed)
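A sketch of the endpoint fix described above. OllamaService.config.endpoint is named in the commit; the Analytics.config fallback and exact object shapes are assumptions. Ollama's /api/tags model-listing endpoint is real:

```js
// List available models from whatever endpoint the connected service uses,
// instead of a hardcoded http://localhost:11434.
async function fetchAvailableModels() {
  const endpoint = OllamaService.isConnected
    ? OllamaService.config.endpoint   // e.g. http://my-ollama-host:11434
    : Analytics.config.endpoint;      // last configured endpoint, no localhost default
  const res = await fetch(`${endpoint}/api/tags`);
  if (!res.ok) throw new Error(`Failed to list models: ${res.status}`);
  const body = await res.json();
  return body.models.map((m) => m.name);
}
```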
The Analytics module was resetting its connection state to false every time Analytics.initialize() was called (which happens when switching views). This caused the "Run Analysis" button to show "connect to AI first" error even when already connected to Ollama/OpenAI.

Changes:
- Modified Analytics.initialize() to preserve existing connection state
- Added logic to detect already-connected services on initialization
- Improved sync between OllamaService/OpenAIService and Analytics states
- Added E2E tests to prevent regression of this connection persistence issue
- Fixed test helper to handle malformed localStorage data gracefully

This ensures that once connected to an AI service, the connection persists across view switches and the Analytics functionality remains available.
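A rough sketch of the state-preservation logic described above, written as a standalone function over the modules the commit names; the internal structure of Analytics and the services is an assumption:

```js
// Called from Analytics.initialize(): detect an already-connected service and
// keep its state instead of resetting isConnected to false on every view switch.
function preserveAiConnectionState(Analytics, OllamaService, OpenAIService) {
  if (OllamaService.isConnected) {
    Analytics.isConnected = true;
    Analytics.config = { ...Analytics.config, ...OllamaService.config };
  } else if (OpenAIService.isConnected) {
    Analytics.isConnected = true;
    Analytics.provider = 'openai';
  }
  // If neither service is connected, leave Analytics in its disconnected
  // default so "Run Analysis" correctly prompts the user to connect first.
}
```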
This commit resolves multiple critical issues in the AI analytics system:

1. Visualization Error Fixes:
   - Add null checks for missing severity field in anomalies
   - Add 'unknown' severity level with gray color (#999999)
   - Handle missing fields gracefully (score, type, explanation)
   - Default severity to 'unknown' when field is missing

2. JSON Parsing Improvements:
   - Enhanced markdown JSON extraction (```json, ```, incomplete blocks)
   - Fix scope issue with cleanResponse variable
   - Add detection for incomplete responses (done: false)
   - Handle JSON wrapped in conversational text
   - Support nested JSON objects in markdown blocks
   - Remove hacks for partial/corrupted JSON consumption

3. Data Format Clarification:
   - Change data format from "timestamp: value" to "timestamp=ISO, value=number"
   - Add explicit instructions in prompts about data format
   - Prevent AI from misinterpreting timestamp parts as values
   - Update examples to use realistic value ranges

4. Response Validation:
   - Accept markdown-wrapped JSON responses as valid
   - Fix rejection of valid Gemma3 responses
   - Better error messages with context-specific guidance
   - Suggest reducing num_ctx when it's too high (>16384)

5. Testing:
   - Add comprehensive unit tests for JSON parsing (17 test cases)
   - Add E2E tests for markdown JSON handling (8 test scenarios)
   - Test various markdown formats from different models

The root cause of the chart showing wrong values was the AI misinterpreting the timestamp year (2025) as the data value due to the ambiguous format. Changed from "2025-07-15T00:00:00Z: 3.5" to "timestamp=2025-07-15T00:00:00Z, value=3.5".
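A sketch of the markdown JSON extraction described in item 2; the cleanResponse name comes from the commit, while the regexes and fallback order are illustrative rather than copied from the PR:

```js
// Extract a JSON payload from an AI response that may wrap it in a ```json
// fence, a bare ``` block, or conversational text.
function extractJson(responseText) {
  const cleanResponse = responseText.trim();

  // Prefer an explicit ```json fence, then any fenced block.
  const fenced =
    cleanResponse.match(/```json\s*([\s\S]*?)```/i) ||
    cleanResponse.match(/```\s*([\s\S]*?)```/);
  const candidate = fenced ? fenced[1] : cleanResponse;

  // Fall back to the outermost {...} when the JSON is embedded in prose.
  const braced = candidate.match(/\{[\s\S]*\}/);
  if (!braced) throw new Error('No JSON object found in AI response');
  return JSON.parse(braced[0]);
}
```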
- Fix missing query editor caused by unclosed div in HTML structure
- Add GitHub-style execute dropdown with AI analysis options (Anomaly Detection, Prediction, Trend Analysis)
- Simplify AI analysis to work directly with query results instead of field extraction
- Fix async execution flow for AI analysis from dropdown
- Improve data sampling algorithm in analytics visualizer to preserve min/max values
- Add comprehensive integration tests for AI analysis and dropdown functionality
- Fix test runner to properly categorize integration tests by file path
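A simple sketch of min/max-preserving downsampling like the sampling improvement mentioned above, assuming points of the form { time, value }; the bucketing scheme is illustrative, not the PR's exact algorithm:

```js
// Downsample a series for charting while keeping each bucket's min and max,
// so spikes survive sampling.
function downsamplePreservingExtremes(points, maxPoints) {
  if (points.length <= maxPoints) return points;
  // Two points (min and max) are kept per bucket.
  const bucketSize = Math.ceil(points.length / Math.floor(maxPoints / 2));
  const sampled = [];
  for (let i = 0; i < points.length; i += bucketSize) {
    const bucket = points.slice(i, i + bucketSize);
    const min = bucket.reduce((a, b) => (b.value < a.value ? b : a));
    const max = bucket.reduce((a, b) => (b.value > a.value ? b : a));
    if (min === max) {
      sampled.push(min);
    } else {
      // Keep both extremes in time order so the chart shape is preserved.
      sampled.push(...[min, max].sort((a, b) => a.time - b.time));
    }
  }
  return sampled;
}
```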