Sentinel is a distributed real-time vision system framework for local area networks (LAN).
♻️ It repurposes unused Android devices as network camera nodes, enabling:
- Distributed image acquisition
- Real-time PC-side video streaming
- AI-driven monitoring and analysis
It adopts a layered architecture of “mobile capture + PC processing + browser control”, supporting real-time image preview, local video recording, and structured event analysis, and can be extended to integrate multimodal AI models.
- The system consists of a PC Dashboard and an Android client CamFlow.
This project can be used both as a lightweight local monitoring system and as an engineering prototype platform for visual data acquisition and intelligent analysis.
🚀 Before first use, it is strongly recommended to read the chapters in order. Click to jump:
① Project Overview → ② Project Deployment → ③ Run the Project → ④ Dashboard Guide
1. Project Overview ⌃
Sentinel is a real-time monitoring system that runs on a LAN, and it can also serve as a tool for data acquisition and analysis. It consists of the following two parts:
- CamFlow (Android client): captures the phone camera feed and uploads it to the server as single-frame JPEG images.
- PC Dashboard (Flask + Web UI): receives image frames and provides live preview, video recording, screenshot saving, log viewing, and optional AI-triggered monitoring.
The system supports running in a local LAN environment without relying on cloud services. However, to run multimodal models, it is recommended to use an online model inference service.
1.1. Core Capabilities ⌃
Sentinel is not designed as a single-purpose monitoring tool. Instead, it aims to build an extensible real-time vision system framework of “mobile capture + PC processing + browser control”. Its core capabilities (real-time preview, configurable recording, and trigger-based AI analysis) are detailed in Section 2.
1.2. System Architecture ⌃
The system consists of three structural layers:
- Data Acquisition Layer (Android App)
- Service Processing Layer (PC Server)
- Presentation & Control Layer (Browser Dashboard)
┌─────────────────────────────────────────────────────────────────────────┐
│ Android Client (CamFlow)
│
│ Camera Capture → Single JPEG Frame → HTTP POST /upload
└─────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────┐
│ PC-side Flask Server
│
│ ① FrameBuffer (Latest Frame Cache)
│ ├── Provides MJPEG Stream (/stream)
│ ├── Provides Snapshot
│ └── Provides Recorder Access
│
│ ② Recorder Module
│ └── Writes Segmented Video Files by FPS
│
│ ③ AI Monitor (Optional Module)
│ ├── Motion Trigger (Traditional CV Detection)
│ ├── Vision Model Interface (Pluggable)
│ └── Event Logging / Real-time Feedback (Local / Web Access)
│
│ ④ Config & Log Management
│ ├── config.json
│ └── server.log
└─────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────┐
│ Browser Dashboard
│
│ Live Preview | Recording Control | Parameter Configuration | Log Viewer
└─────────────────────────────────────────────────────────────────────────┘
Architecture design highlights:
- All image data flows only within the local area network (LAN);
- FrameBuffer serves as the core shared data structure to avoid repeated decoding;
- Recording and AI analysis both read from FrameBuffer without interfering with each other;
- The AI module adopts an interface-based design, allowing flexible integration of different vision models;
- The Dashboard functions purely as the control and presentation layer, and does not participate in image processing.
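To make the shared-buffer idea concrete, here is a minimal, illustrative sketch of a thread-safe latest-frame cache. The class and method names (`FrameBuffer.put` / `get`) are assumptions for illustration, not the actual API of `app/core/frame_buffer.py`:

```python
import threading
import time

class FrameBuffer:
    """Latest-frame cache: the upload handler writes, while streaming,
    recording, and AI all read. Only the newest JPEG is kept, so slow
    readers never queue stale frames. Illustrative sketch only."""

    def __init__(self):
        self._lock = threading.Lock()
        self._jpeg = None   # raw JPEG bytes of the newest frame
        self._ts = 0.0      # wall-clock time of the last write

    def put(self, jpeg_bytes: bytes) -> None:
        with self._lock:
            self._jpeg = jpeg_bytes
            self._ts = time.time()

    def get(self):
        """Return (jpeg_bytes, age_seconds), or (None, None) if empty."""
        with self._lock:
            if self._jpeg is None:
                return None, None
            return self._jpeg, time.time() - self._ts
```

Because every consumer reads the same decoded-once buffer, recording and AI analysis can run concurrently without touching the network path.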
1.3. Practical Applications ⌃
Sentinel is not only a real-time monitoring tool, but also an extensible platform for visual data acquisition and analysis. Its LAN-based localized operation gives it practical value in the following scenarios:
🏠 Local Privacy-Oriented Monitoring Solution
🧠 AI Behavior Analysis Experimental Platform
📊 Data Acquisition & Analysis Prototype System
🧩 Distributed Vision System Architecture Example
2. Implemented Features ⌃
2.1. Real-time Video Preview ⌃
Sentinel provides browser-based real-time video preview capability. The camera feed captured by the Android device is continuously uploaded to the server as single JPEG frames, and the server pushes real-time images to the browser via an MJPEG stream.
This mechanism requires only a standard browser (MJPEG needs no plugins) and keeps preview latency low, since the server always serves the most recent frame.
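As an illustration of how an MJPEG stream is framed on the wire, the sketch below builds one multipart part for a JPEG frame; a Flask `/stream` route would yield such parts in a loop with mimetype `multipart/x-mixed-replace; boundary=frame`. The helper name and boundary value are illustrative assumptions:

```python
BOUNDARY = b"frame"

def mjpeg_part(jpeg_bytes: bytes) -> bytes:
    """Wrap one JPEG frame as a multipart/x-mixed-replace part,
    following the standard MJPEG-over-HTTP convention."""
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
            + jpeg_bytes + b"\r\n")
```

The browser treats each part as a replacement image, which is why an `<img src="/stream">` tag is enough for live preview.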
2.2. System Parameter Customization ⌃
Sentinel is not a fixed-behavior “black-box monitoring tool,” but a highly configurable real-time vision system. Users can finely control video streaming, recording strategies, and AI behavior through the Dashboard to adapt to different hardware environments and application scenarios.
2.3. AI Trigger-based Monitoring and Controllable Cognitive Output ⌃
With the support of multimodal vision models, Sentinel is capable not only of “seeing the scene,” but also of performing structured understanding and risk assessment. The system adopts a layered mechanism of “motion trigger + model analysis”:
- Uses “traditional computer vision algorithms + large model inference” for layered processing, reducing token consumption and API costs;
- When trigger conditions are met, the system calls the model and enters the OBSERVE state;
- Invokes the vision model for semantic analysis and outputs structured JSON results;
- Records observation results and displays them in real time on the Dashboard.
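The "motion trigger" stage above can be sketched as a simple frame-difference ratio plus threshold and rate-limit gating. This is an illustrative approximation, not the actual code in `app/ai/motion_trigger.py`; function names and defaults are assumptions:

```python
def motion_ratio(prev, curr, pixel_delta=25):
    """Fraction of pixels whose grayscale value changed by more than
    pixel_delta between two frames (given as flat lists of 0-255 ints)."""
    if len(prev) != len(curr) or not prev:
        raise ValueError("frames must be non-empty and the same size")
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > pixel_delta)
    return changed / len(prev)

def should_trigger(ratio, threshold=0.2, now=0.0, last_trigger=-1e9,
                   min_interval=2.0):
    """Gate by Motion Threshold and Motion Min Interval: only fire when
    enough pixels changed AND enough time has passed since the last fire."""
    return ratio > threshold and (now - last_trigger) >= min_interval
```

Only when `should_trigger` fires does the system move to OBSERVE and spend a model call, which is what keeps token consumption low.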
2.4. Mobile Camera Capture Application (CamFlow) ⌃
Sentinel does not rely on dedicated surveillance cameras. Instead, it provides a self-developed Android client CamFlow to handle image acquisition and data upload.
CamFlow transforms an ordinary smartphone into a real-time camera endpoint, handling capture, JPEG encoding, and upload.
📘 For detailed functionality and usage instructions, please refer to CamFlow User Guide.
3. Project Deployment ⌃
Please complete the basic project deployment in the following order:
- Obtain the project source code
- Deploy the PC-side server
- Install the Android client APK
Sentinel/
│
├── server.py # Program entry point (starts Flask, initializes runtime environment, launches threads)
├── README.md # Project documentation
├── CamFlow_UserGuide.md # Android user guide
├── requirements.txt # Dependency installation list
├── .gitignore # Git ignore rules
├── LICENSE # MIT License
│
├── app/ # Main application package
│ │
│ ├── __init__.py # Package initialization
│ │
│ ├── ai/ # AI-related modules (optional)
│ │ ├── ai_ark.py # Vision model interface implementation (replaceable)
│ │ ├── ai_monitor_worker.py # AI monitoring thread (SLEEP/OBSERVE state machine)
│ │ ├── ai_store.py # AI event writing and persistence
│ │ ├── motion_trigger.py # Low-cost motion detection module
│ │ └── __init__.py
│ │
│ ├── config/ # Configuration management module
│ │ ├── config.json # Automatically generated after saving in Dashboard
│ │ ├── config_store.py # Configuration validation and read/write logic
│ │ └── __init__.py
│ │
│ ├── core/ # Core runtime modules
│ │ ├── frame_buffer.py # Latest frame cache (system data sharing center)
│ │ ├── logger.py # Logging initialization
│ │ ├── runtime.py # Global runtime state management
│ │ ├── upload_stats.py # Upload statistics
│ │ └── __init__.py
│ │
│ ├── net/ # Networking modules
│ │ ├── net_discovery.py # UDP auto-discovery service
│ │ └── __init__.py
│ │
│ ├── recorder/ # Video recording module
│ │ ├── recorder.py # Video writing logic
│ │ ├── recorder_worker.py # Recording thread controller
│ │ └── __init__.py
│ │
│ └── web/ # Web interface and API layer
│ ├── webapp.py # Flask routes and APIs
│ ├── __init__.py
│ │
│ ├── static/ # Frontend static assets
│ │ ├── dashboard.js
│ │ └── style.css
│ │
│ └── templates/ # HTML templates
│ ├── dashboard.html
│ └── dashboard.txt
│
├── PhoneCamSender/ # Android client source code (Android Studio project)
│ ├── app/ # Android application module
│ ├── gradle/ # Gradle configuration
│ ├── build.gradle
│ ├── settings.gradle
│ └── ...
│ └── CamFlow-v1.0.0-beta.apk # Precompiled APK
│
├── assets/ # Images used in README
│ ├── app_main_page.jpg
│ ├── app_setting_page.jpg
│ └── app_failed_hint.jpg
│
├── log/ # Runtime log directory (empty at first run)
│ ├── server.log # System runtime log
│ └── ai_events.jsonl # AI event records
│
└── recordings/ # Video and snapshot output directory (empty at first run)
├── snapshots/
└── videos/
3.1. Environment Requirements ⌃
| PC Side | Android Side |
|---|---|
| Python 3.9 or above | Android 8.0 or above |
| Windows / macOS / Linux | Allow installation from unknown sources |
| Git installed (for cloning repository) | Connected to the same local area network as the PC |
3.2. Obtain the Project Source Code ⌃
Execute in a terminal:

```shell
git clone https://github.com/suzuran0y/Sentinel.git
cd Sentinel
```

3.3. PC-side Deployment ⌃
Windows System

```shell
python -m venv venv      # Create virtual environment
venv\Scripts\activate    # Activate virtual environment
```

macOS / Linux System

```shell
python3 -m venv venv
source venv/bin/activate
```

Then install dependencies:

```shell
pip install -r requirements.txt
```

The dependency list includes:

```
flask>=2.2              # Web service
numpy>=1.23             # Image data processing
opencv-python>=4.8      # Image decoding and video writing
volcenginesdkarkruntime # AI module dependency
```
3.4. Android-side Deployment ⌃
The Android application CamFlow is responsible for camera capture and image upload.
The application source code is located in the repository path:
PhoneCamSender/
The precompiled APK file is located at:
PhoneCamSender/CamFlow-v1.0.0-beta.apk
For general users, installing via the APK file is the most convenient method:

1. Use the APK file included in the cloned repository or download it from the project's Release page;
2. Open the APK file and install CamFlow on the Android device;
3. If prompted by the system, allow installation from unknown sources, then complete installation and launch the application.

For developers who want to debug or modify features, the app can be built from source:

1. Open Android Studio;
2. Open the project directory: `Sentinel/PhoneCamSender`;
3. Connect an Android device;
4. Wait for Gradle synchronization to complete, then run the project.
📘 For detailed Android-side functionality and usage instructions, please refer to CamFlow User Guide.
4. Run the Project ⌃
After completing project deployment, verify the entire pipeline in the order of Start PC → Connect Mobile → Dashboard Displays Video to ensure the full chain is functioning correctly.
You should observe the following success indicators:

- The terminal outputs something like `http://127.0.0.1:<PORT>/` and `http://<LAN_IP>:<PORT>/`
- The browser successfully opens the Dashboard page
- The mobile app shows a successful Test connection / Ping (or a connected status message)
Note: After the PC side starts, certain switches will reset to default safe states (for example, ingest / recording are OFF by default).
This is intentional design to prevent unintended recording or data reception.
4.1. Start the PC Side (Server + Dashboard) ⌃
Run the following command in the project root directory:

```shell
python server.py
```

After successful startup, you should see output similar to:

```
===========================================================
 PhoneCam Server Started
 Local: http://127.0.0.1:<PORT>/ for dashboard web   # Recommended address for browser
 LAN:   http://<LAN_IP>:<PORT>/  for CamFlow link    # Address for CamFlow device connection
 Default ingest: OFF (enable in dashboard)
===========================================================
 * Serving Flask app 'app.web.webapp'
 * Debug mode: off
```
⚠️ Important: The camera device must use the LAN address. If the server starts successfully but ingest is not enabled, the Live View being empty is normal behavior.
At this point, the PC-side service has started successfully.
4.2. Start CamFlow (Android App) ⌃
- The PC and the mobile device must be connected to the same local area network (same Wi-Fi);
- The PC-side service must already be running, and the Dashboard must be accessible in the browser.
- Automatic Discovery: CamFlow attempts to discover the server within the LAN. If discovery succeeds, the recognized Server Address is displayed on the app's settings page or main page.
- Manual Entry: if automatic discovery fails, enter the Server Address manually. In CamFlow settings, enter the LAN IP address printed when running server.py on the PC: `<LAN_IP>` or `<LAN_IP>:<PORT>` (extracted from `http://<LAN_IP>:<PORT>/`).
- Once the Server Address connection succeeds, the app automatically begins capturing and uploading camera data.
- Tap the upper-right corner of the app's main interface to enter the settings page for further configuration.
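If you prefer to derive the address programmatically, the standard library can extract the `<LAN_IP>:<PORT>` portion from the URL printed by server.py. This is a purely illustrative helper (CamFlow itself simply takes the address typed into its settings), and the IP/port in the usage note are example values:

```python
from urllib.parse import urlsplit

def server_address(url: str) -> str:
    """Extract '<LAN_IP>:<PORT>' from a printed 'http://<LAN_IP>:<PORT>/' line."""
    return urlsplit(url).netloc

# Example (hypothetical values): server_address("http://192.168.1.50:8000/")
# yields the host:port string CamFlow expects.
```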
- After confirming successful connection in CamFlow, open the browser on the PC and enter the Dashboard address printed by `server.py`.
- Since `Ingest` is OFF by default, the Live View section will initially display `Ingest OFF - enable in dashboard` instead of the camera feed.
- Click the `Enable Ingest` button to check whether the Live View displays video. If no video appears, expand the Logs section under Live View (collapsed by default) and check for messages such as `ingest disabled`, then adjust settings accordingly.
- Under normal conditions, once `Ingest` is switched ON, the Live View section in the Dashboard will immediately update with the camera feed from the device.
At this point, the CamFlow service and data transmission to the PC have been successfully started.
For detailed usage instructions of CamFlow, please refer to the repository documentation: CamFlow User Guide.
4.3. Dashboard Guide ⌃
The Dashboard page consists of `dashboard.html` + `dashboard.js` + `style.css`, and its core data comes from backend APIs.
The layout of the Dashboard is divided into three sections: top bar + left monitoring panel + right settings panel.
- Top Bar
  - Title: `Sentinel System Dashboard` & subtitle: `Multi-Device Vision Monitoring & Risk Detection System`
  - Button: `Shutdown`
- Left Panel
  - `Live View`: real-time video preview + quick control buttons
  - `Monitor Status`: system status summary
  - `Logs`: system logs
- Right Panel
  - `Settings`: all configurable parameter input fields
  - `AI Monitoring`: AI module switches and configuration
  - `Apply / Save / Load`: configuration application and persistence

The divider between panels is draggable, allowing adjustment of the display ratio. Click the `Collapse` button to hide the right-side Settings panel and enlarge the Live View area; later, click `Expanding Settings` to reopen the collapsed section.
The latest frame uploaded from the mobile device enters the PC-side FrameBuffer, and the browser continuously refreshes the display in Live View via /stream using MJPEG.
① Display Logic
The video shown in Live View depends on whether CamFlow is uploading frames and whether PC-side Ingest (allowing /upload to write into FrameBuffer) is enabled.
- When Ingest is OFF: displays `Ingest OFF - enable in dashboard`;
- When Ingest is ON but no frame has been received yet: displays `Waiting for frames...`.
② Core Button Area
Below Live View, there are three buttons, all OFF by default: `Enable Ingest`, `Start Recording`, `Snapshot`.

- Enable / Disable Ingest
  - Function: controls whether the server accepts image uploads from CamFlow;
  - Feature: when clicking `Disable Ingest`, the preview immediately returns to the placeholder state, allowing easy observation of data flow;
  - Usage: click to toggle `Ingest` ON/OFF.
- Start / Stop Recording
  - Function: controls whether the recording thread writes FrameBuffer data into segmented video files;
  - Feature: recording and preview are independent; recording only requires `Ingest` to be ON;
  - Output path structure: `recordings/videos/YYYYMMDD/<cam_name>_YYYYMMDD_HHMMSS.mp4`, configurable in Settings;
  - Usage: click to toggle `Recording` ON/OFF.
- Snapshot
  - Function: saves the current frame as an image;
  - Output path structure: `recordings/snapshots/YYYYMMDD/snapshot_YYYYMMDD_HHMMSS.jpg`, configurable in Settings;
  - Usage: click to trigger a screenshot.
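The documented output naming scheme can be reproduced with a small stdlib helper. This is a sketch that mirrors the path pattern above, not the project's actual recorder code (the function name is an assumption):

```python
from datetime import datetime
from pathlib import Path

def video_path(output_root: str, cam_name: str, now: datetime) -> Path:
    """Build recordings/videos/YYYYMMDD/<cam_name>_YYYYMMDD_HHMMSS.mp4
    under the configured Output Root, matching the documented layout."""
    day = now.strftime("%Y%m%d")            # per-day subdirectory
    stamp = now.strftime("%Y%m%d_%H%M%S")   # timestamp in the filename
    return Path(output_root) / "videos" / day / f"{cam_name}_{stamp}.mp4"
```

Keeping the Cam Name free of spaces and special characters (as recommended in Settings) is what keeps these generated filenames portable.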
Monitor Status is an area that aggregates system runtime information and refreshes dynamically according to current system settings.
The table below provides a structured explanation of core fields:
| Field | Format | Meaning | Usage & Troubleshooting |
|---|---|---|---|
| Ingest | Boolean (ON / OFF) | Whether camera uploads are being accepted | If OFF, /upload will be rejected. If no video appears, check this first |
| Recording | Boolean (ON / OFF) | Whether video recording is currently active | If ON but no file is generated → check Output Root, codec, or recording thread |
| Last frame age | Number (seconds) | Time since the last successfully written frame | Should remain at small fractions of a second. If continuously increasing → upload interrupted or rejected |
| Upload FPS | Number (frames/sec) | Estimated upload frame rate | Close to 0 → CamFlow not uploading or Ingest is OFF |
| Stream FPS | Number (frames/sec) | Browser MJPEG refresh rate | Reduce if browser lags; too high increases CPU usage |
| JPEG Quality | Number (0–100) | Preview image compression quality | Higher quality = clearer image but higher load; high quality + high FPS may cause performance pressure |
| Record FPS | Number (frames/sec) | Recording frame rate | Too high increases disk pressure; too low results in choppy video |
| Segment Seconds | Number (seconds) | Video segmentation duration | Reduce if files become too large; too small generates many files |
| Recording file | String (file path) | Current video file being written | If Recording = ON but empty → recording thread not functioning properly |
| Rec elapsed | Time string (HH:MM:SS) | Duration of current recording session | If not increasing → recording not actually started |
| Seg remaining | Number (seconds) | Remaining time for current segment | Abnormal jumps → time configuration or segmentation logic error |
| Upload counts | Structured counter fields | Upload result statistics | Increases when Ingest is ON; contains multiple subfields |
| Subfield | Meaning | Normal Trend | Abnormal Explanation |
|---|---|---|---|
| 200_ok | Successful upload and write count | Should continuously increase | If not increasing, no valid uploads |
| 400_missing_image | Request missing `image` field | Should remain 0 | Client field name error |
| 400_decode_failed | Data received but failed to decode | Should remain 0 | Data corruption or non-JPEG content |
| 503_ingest_disabled | Rejected uploads due to Ingest OFF | Increases when Ingest = OFF | Ingest not enabled |
Below Monitor Status, there is a collapsible Logs area used to view real-time server runtime logs.
- Collapsed by default; click the `Logs` row to expand;
- Logs are read from `log/server.log` (generated during project runtime);
- When the system encounters issues, open the Logs section to inspect runtime information.

Note: Logs only display the most recent 50 entries. To review earlier logs, open the `log/server.log` file directly to view the complete content.
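The "most recent 50 entries" behavior is easy to emulate when inspecting logs yourself. The sketch below keeps only the last n lines of any iterable; it is an illustrative helper, not the Dashboard's implementation:

```python
from collections import deque

def tail(lines, n=50):
    """Return only the most recent n entries from an iterable of lines,
    mirroring the Dashboard's 'last 50 log lines' view."""
    return list(deque(lines, maxlen=n))

# Usage: pass an open file object to tail a log, e.g.
#   with open("log/server.log", encoding="utf-8") as f:
#       recent = tail(f)
```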
The right-side Settings panel contains system parameter input fields and configuration buttons.
When modifying system configuration, parameters must first be adjusted in the Settings area and then applied to the system.
① Settings Fields
The table below explains configuration options available in the Dashboard.
It is recommended to understand the “purpose” and “recommended range rationale” before modifying parameters to avoid performance or stability issues.
| Configuration | Format | Purpose | Recommended Range |
|---|---|---|---|
| Stream FPS | Number (frames/sec) | Controls MJPEG refresh rate in browser | Recommended 8–15. Too high increases CPU and bandwidth usage; below 5 causes noticeable lag. |
| JPEG Quality | Number | Controls JPEG compression quality | Recommended 60–80. Too low causes blur; too high significantly increases CPU load at high FPS. |
| Record FPS | Number (frames/sec) | Controls recording frame rate | Recommended 10–15. Too high increases disk pressure and file size; below 8 reduces smoothness. |
| Segment Secs | Number (seconds) | Controls video segmentation duration | Recommended 60–3600. Too large creates huge files; too small creates many fragments. |
| Output Root | String (file path) | Video and snapshot output directory | Recommended default recordings; avoid system root directories or unauthorized paths. |
| Cam Name | String (identifier) | Camera identifier written into filenames | Recommended simple identifiers (e.g., phone1); avoid spaces or special characters. |
| Codec | String (FourCC) | Video encoding format, e.g., `avc1` / `mp4v` / `XVID` | Recommended `avc1` or `XVID` (better compatibility). Encoder support varies by system; switch if recording fails. |
| Autosave | Boolean (true / false) | Whether to automatically write to `config.json` after clicking `Apply` | Recommended enabled during development; disable during frequent experimentation to avoid overwriting configurations. |
Note: Parameter modifications do not take effect automatically.
Changes must be applied using the buttons below.
It is recommended to stop current system tasks before applying new parameters to ensure full effect.
② AI Monitoring Section
Under default configuration, the AI module is disabled.
To enable it, click the Enable button on the right side of the AI Monitoring panel.
Once enabled, AI settings and status sections will appear on the Dashboard.
For detailed information, refer to 4.4. AI Monitoring Features.
③ Application Buttons
Difference between Apply / Save / Load:

- `Apply`: submits the current input parameters to the backend `/api/config` and immediately updates the runtime configuration. If parameters are invalid (out of range / wrong type), the backend rejects them and returns an error message.
- `Save`: writes the current configuration to a local file (`app/config/config.json`).
- `Load`: loads `config.json` from disk and refreshes the page form.

Typical workflow:

- On first system startup, a pre-filled configuration set is used;
- To customize system configuration, modify parameters in the Settings panel and click `Apply`;
- (If `Autosave` is not enabled) click `Save` to persist the configuration locally;
- Later, click `Load` to reload the saved configuration into the form without re-entering values.
⚠️ The top `Shutdown` button is the "terminate service" button. It triggers the system shutdown procedure: stop recording, disable ingest, stop the Python service, and attempt to close the page.
4.4. AI Monitoring Features (Optional Feature) ⌃
The AI module is an optional enhancement module:
Even without configuring any API Key, the system can fully operate basic functions such as video upload / live preview / recording / snapshot / logs.
Sentinel’s AI module adopts an event-driven layered trigger-based visual cognition architecture.
Instead of performing inference on every frame, the system uses lightweight motion detection, a dual-state machine control mechanism, and structured event management to achieve lower-cost long-term intelligent monitoring.
The modular design supports pluggable models, structured output contracts, and system decoupling, ensuring scalability, interpretability, and maintainability.
┌─────────────────────────────────────────────────────────────────────┐
│ Data Input Layer
│ CamFlow → POST /upload → FrameBuffer (Latest Frame Cache)
└──────────────────────────────┬──────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ Lightweight Motion Trigger Layer (Cost Reduction)
│ - Compute frame difference ratio
│ - If motion_ratio > threshold
│ - Trigger frequency controlled by Motion Min Interval
└──────────────────────────────┬──────────────────────────────────────┘
│
▼ If triggered, STATE: SLEEP → OBSERVE
┌─────────────────────────────────────────────────────────────────────┐
│ AI State Machine Control Layer
│ STATE ∈ { SLEEP, OBSERVE }
│ SLEEP:
│ - Only runs motion trigger
│ - Does not call model
│ OBSERVE:
│ - Calls model every AI Interval
│ - Manages dwell timing
│ - Calculates event_duration
└──────────────────────────────┬──────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ Vision Model Interface Layer
│ - Construct Prompt (Role + Scene + Session + Extra)
│ - Compress JPEG (AI JPEG Quality)
│ - Call third-party vision model API
│ - Receive JSON output
│ - Perform Output Contract validation
└──────────────────────────────┬──────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ Event Lifecycle Management Layer
│ - dwell ≥ threshold → confirmed
│ - No target ≥ End Grace → end event
│ - Write to ai_events.jsonl
│ - Update AI runtime status
└──────────────────────────────┬──────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ Visualization & Display Layer
│ - /api/ai_status
│ - Display current AI Result
│ - Display Info: Times, Metrics
│ - Display History List
└─────────────────────────────────────────────────────────────────────┘
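The SLEEP/OBSERVE control flow above can be sketched as a small state machine. Parameter names mirror the Dashboard settings (OBSERVE Interval, End Grace), but the class and method names are illustrative assumptions, not Sentinel's actual `ai_monitor_worker.py`:

```python
SLEEP, OBSERVE = "SLEEP", "OBSERVE"

class AIMonitor:
    """Minimal sketch of the dual-state machine. Time is passed in as a
    plain number so the logic is deterministic and easy to test."""

    def __init__(self, ai_interval=2.0, end_grace=3.0):
        self.state = SLEEP
        self.ai_interval = ai_interval   # OBSERVE Interval (sec)
        self.end_grace = end_grace       # End Grace (sec)
        self.last_call = -1e9            # time of the last model call
        self.last_seen = -1e9            # time the target was last present

    def step(self, now, motion_triggered, target_present):
        """Advance one tick; return True if the model should be called."""
        if self.state == SLEEP:
            if motion_triggered:
                self.state = OBSERVE     # SLEEP -> OBSERVE on trigger
                self.last_seen = now
            else:
                return False             # SLEEP never calls the model
        if target_present:
            self.last_seen = now
        elif now - self.last_seen >= self.end_grace:
            self.state = SLEEP           # end event after End Grace
            return False
        if now - self.last_call >= self.ai_interval:
            self.last_call = now         # rate-limit model calls
            return True
        return False
```

Because SLEEP never calls the model and OBSERVE is rate-limited, total API cost scales with how often motion actually occurs rather than with frame rate.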
4.5. Dashboard AI Module Guide ⌃
When the Dashboard page is first opened, the AI module is disabled by default. Click the `Enable` button on the right side of the AI Monitoring panel to activate the AI module.
The AI module on the Dashboard consists of two parts:
- AI Settings (Configuration Area): determines whether AI is enabled, call frequency, thresholds, etc.;
- AI Status (Information Area): displays state machine status, health status, and AI monitoring history.
It is recommended to understand the purpose of each parameter before enabling the AI module, in order to control model call sensitivity and reasonably tune for different scenarios.
| Configuration | Format | Purpose | Recommended Value |
|---|---|---|---|
| Model / Endpoint | String (Model ID or Endpoint) | Specifies the vision model to call | Use a model that supports vision analysis, e.g., doubao-seed-2-0-mini-260215. |
| API Key | String (key) | Credential for third-party model service access | Must be obtained from the provider. See 4.6. Third-party Model Integration. |
| OBSERVE Interval (sec) | Number (seconds) | Minimum interval between model calls in OBSERVE state | Recommended 2–3 seconds. Too small increases token consumption. |
| Dwell Threshold (sec) | Number (seconds) | Duration a target must persist before confirming an event | Recommended 3–5 seconds. Increasing reduces occasional false positives. |
| End Grace (sec) | Number (seconds) | Delay before ending event after target disappears | Recommended 2–3 seconds to avoid interruption due to brief occlusion. |
| AI JPEG Quality | Number (50–95) | JPEG compression quality sent to model | Recommended 75–90. Too high increases bandwidth and encoding cost. |
| Motion Threshold (ratio) | Decimal (ratio) | Frame change ratio threshold for motion detection | Recommended 0.1–0.3. Lower values increase sensitivity; raise if AI triggers too frequently. |
| Motion Min Interval (sec) | Number (seconds) | Minimum time between motion triggers | Recommended 1–3 seconds to prevent excessive triggering. |
| Prompt Template (Role) | Long text (role definition) | Defines model role and output format specification | Recommend explicitly requiring “JSON-only output”. |
| Scene Profile (Long-term) | Long text (long-term context) | Provides fixed environmental background description | Describe camera position, scene type, etc. |
| Session Focus (Short-term) | Long text (temporary focus) | Temporary monitoring objective | Example: “Focus on whether strangers enter the area.” |
| Extra Prompt / Rules | Long text (additional rules) | Additional output format, language, or risk evaluation rules | Can specify output language or restrict fields. |
Parameters usually require tuning according to lighting conditions, camera angle, and target types.
After modification, observe 1–2 complete event cycles before evaluating effectiveness.
① Detailed Explanation of Text-type Parameters (Recommended Reading)
The following four fields belong to Prompt engineering parameters and directly affect model output structure and semantic understanding.
- Prompt Template (Role): defines the model’s role and output structure.
Example:
You are a video surveillance assistant. You will receive a monitoring image and background information. Only output a JSON object including whether a person is present, number of people, activity, risk level, and a brief summary.
- Scene Profile (Long-term): provides long-term fixed background context to help the model determine what is considered “abnormal”.
Example:
Home scenario: the camera faces the living room.
Office scenario: the camera faces the workspace area. Multiple people during working hours are normal.
- Session Focus (Short-term): temporary task configuration that can change according to short-term requirements.
Example:
Currently focus on whether children approach the balcony area.
- Extra Prompt / Rules: supplementary output and language constraints.
Example:
Use Chinese for the summary.
If uncertain, keep has_person=false.
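One plausible way to combine the four prompt fields, in the order the architecture diagram lists them (Role + Scene + Session + Extra), is simple labeled concatenation. This is a sketch; the real request layout depends on the model API and Sentinel's own prompt builder:

```python
def build_prompt(role, scene, session, extra):
    """Join the four Dashboard prompt fields into one system prompt,
    skipping any field left empty."""
    sections = [
        ("Role", role),
        ("Scene Profile", scene),
        ("Session Focus", session),
        ("Extra Rules", extra),
    ]
    return "\n\n".join(f"[{name}]\n{text}" for name, text in sections if text)
```

Keeping the role and long-term scene text stable while varying only Session Focus is what lets you retarget monitoring without rewriting the whole prompt.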
② Output Contract and Automatic Structure Constraint Mechanism
To ensure stable parsing of model outputs in the Dashboard, the backend implements an Output Contract mechanism.
Regardless of how the task is described in the Prompt Template, the model’s final output must conform to the predefined JSON structure required by the system.
This area visualizes the runtime status of the backend AI state machine in real time.
① Status Header
Located at the top of the AI Status panel, displaying the current state machine runtime status:
┌────────────────────────┬──────────────────┬─────────────────┬──────────────────────┐
| STATE: SLEEP / OBSERVE | Event: evt_xxxxx | Dwell: Yes / No | Health: Fine / Error |
└────────────────────────┴──────────────────┴─────────────────┴──────────────────────┘
| Field | Format | Meaning |
|---|---|---|
| STATE | SLEEP / OBSERVE | Indicates whether AI is currently active |
| Event | String (evt_xxxxx) | Current active event ID |
| Dwell | Yes / No | Whether dwell confirmation threshold has been reached |
| Health | Fine / Error | Whether the most recent model call succeeded |
- State Machine Description
🔹 SLEEP: only runs traditional vision algorithms; does not call the vision model; low resource consumption.
🔹 OBSERVE: periodically calls the vision model, analyzes structured fields, and performs event recording.
② Last AI Result
To stabilize output structure, the system enforces an Output Contract mechanism.
The model must output a JSON object containing at least the following fields:
{
"has_person": bool,
"person_count": int,
"activity": str,
"risk_level": "info|warn|critical",
"confidence": float,
"summary": str
}
After structured extraction, the Dashboard presents a human-readable format (example):
Person: No | Count: 0 | Activity: unknown | Risk: info | Conf: 99%
Summary: This indoor CCTV scene contains no people...
| Field | JSON Key | Format | Description |
|---|---|---|---|
| Person | has_person | Boolean (Yes / No) | Whether a person is detected |
| Count | person_count | Number | Number of people |
| Activity | activity | String | Behavior description |
| Risk | risk_level | info / warn / critical | Risk level |
| Conf | confidence | 0–100% | Model confidence |
| Summary | summary | String | Natural language summary |
The system includes Output Contract safeguards.
If the model response is incomplete, it will be processed to ensure the Dashboard always receives valid output.
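A safeguard like this can be sketched as a normalization pass that fills missing or mistyped fields with safe defaults. The field names follow the contract shown above; the function itself is illustrative, and the backend's actual validation logic may differ:

```python
# (expected_type, safe_default) for each contract field
CONTRACT = {
    "has_person": (bool, False),
    "person_count": (int, 0),
    "activity": (str, "unknown"),
    "risk_level": (str, "info"),
    "confidence": ((int, float), 0.0),
    "summary": (str, ""),
}

def enforce_contract(raw: dict) -> dict:
    """Return a dict guaranteed to contain every contract field with a
    sane type, so the Dashboard always receives a valid object."""
    out = {}
    for key, (typ, default) in CONTRACT.items():
        val = raw.get(key, default)
        out[key] = val if isinstance(val, typ) else default
    if out["risk_level"] not in ("info", "warn", "critical"):
        out["risk_level"] = "info"   # clamp unknown risk labels
    return out
```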
③ Time Information (Times)
Calculates runtime timing information for the event monitoring module (example):
Last trigger: 2026-02-24 01:48:37 (2m ago)
Last AI call: 2026-02-24 01:51:05 (6s ago)
Event start: 2026-02-24 01:48:37 (2m ago)
| Field | Meaning |
|---|---|
| Last trigger | Most recent motion trigger time |
| Last AI call | Most recent model call time |
| Event start | Start time of current event |
④ Event Metrics (Metrics) (Example)
Person present (acc): 115.0 s
Event duration: 153.9 s
| Metric | Meaning |
|---|---|
| Person present (acc) | Accumulated time a person was confirmed present during event |
| Event duration | Total duration of the current event |
Note: Only after dwell_confirmed (confirmed persistent presence) does the system begin accumulating effective person presence time.
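The dwell-gated accumulation described in the note above can be sketched roughly as follows. The class name, threshold value, and update scheme are illustrative assumptions, not the project's actual implementation:

```python
# Sketch of dwell-gated presence accumulation: effective "person
# present" time only counts once continuous presence has lasted
# long enough to be dwell-confirmed. (Illustrative assumption.)
class PresenceAccumulator:
    def __init__(self, dwell_threshold_s: float = 3.0):
        self.dwell_threshold_s = dwell_threshold_s
        self.dwell_s = 0.0        # continuous presence so far
        self.accumulated_s = 0.0  # effective (confirmed) presence time

    @property
    def dwell_confirmed(self) -> bool:
        return self.dwell_s >= self.dwell_threshold_s

    def update(self, has_person: bool, dt_s: float) -> None:
        if has_person:
            self.dwell_s += dt_s
            if self.dwell_confirmed:
                # Only confirmed presence counts toward the metric.
                self.accumulated_s += dt_s
        else:
            # Presence broken: require re-confirmation next time.
            self.dwell_s = 0.0
```

This is why `Person present (acc)` is typically shorter than `Event duration`: unconfirmed or interrupted presence does not contribute.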
⑤ Raw JSON Output
Below Metrics, there is a collapsible Raw JSON section used to view the latest original JSON output from the model.
- Collapsed by default; click `Raw JSON` to expand.
Useful for debugging Prompt configuration and checking field completeness.
⑥ History List
When the system runs for the first time, this section is empty.
After an event ends, it is written into log/ai_events.jsonl and displayed in chronological order on the page (example):
┌────────────────────────────────────────────────────────────┐
│ 2026-02-24 01:51:03 AI 🔴 critical
│ Person: Yes | Count: 1 | Activity: lingering
│ Risk Level: critical | Confidence: 99%
│ Summary: A single person is lying prone...
└────────────────────────────────────────────────────────────┘
┌────────────────────────────────────────────────────────────┐
│ 2026-02-24 01:50:48 AI 🟢 info
│ Person: Yes | Count: 1 | Activity: passing
│ Risk Level: info | Confidence: 98%
│ Summary: One person is walking through the indoor space…
└────────────────────────────────────────────────────────────┘
...
...
...
Note: The History List only displays the most recent 20 records.
To review earlier entries, open `log/ai_events.jsonl` directly to view the complete content.
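If you want to inspect the log programmatically, a small helper along these lines could read the most recent entries. The function is a hypothetical example; only the `log/ai_events.jsonl` path and the 20-record limit come from this document:

```python
import json
from pathlib import Path

def load_recent_events(path: str = "log/ai_events.jsonl",
                       limit: int = 20) -> list:
    """Return the most recent `limit` events from the JSONL log,
    newest first. Lines that fail to parse are skipped."""
    log_file = Path(path)
    if not log_file.exists():
        return []
    events = []
    for line in log_file.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            events.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # tolerate a truncated or corrupt line
    # Keep the tail of the file, then reverse to newest-first.
    return events[-limit:][::-1]
```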
4.6. Third-party Model Integration (Replaceable) ⌃
The current AI module is built around the vision model API provided by the Volcengine platform.
To use the AI functionality, you must first obtain model access permission and an API Key, then enter them into the Dashboard settings.
Support for additional platforms will be added in future updates.
If you wish to integrate other platforms or locally deployed vision models in the current version, modify the `analyze_frame()` function in `ai_ark.py`, and ensure the output structure complies with the Output Contract.
1. Visit the platform login page: https://console.volcengine.com/
   If the page language is not your preference, switch via the “中文 / EN” button in the upper-right corner;
2. Complete registration and log in to the console;
3. Enter the `Ark` page;
4. Go to `Config - Model activation`, and under the “Large Language Model” category, locate the vision model.
   It is recommended to select `Doubao-Seed-2.0-mini`, then click the corresponding “Activate” button;
   - Reminder: Also enable the “Free Credits Only Mode” service on the Model activation page.
     The model provides a free quota (500,000 tokens). After activation, only the free quota is consumed.
     Once exhausted, API access will automatically be disabled.
5. Go to the `Config - API keys` page in the console;
6. Click “Create API Key” and configure as needed;
7. After creation, copy and securely store the generated API Key.
As the platform UI and free-quota policies may change over time, the steps above are for reference only; the core process, however, remains the same:
- Register and log in to the platform console;
- Activate a vision-capable model;
- Create an API Key;
- Enter the Model ID and API Key into the Dashboard.
Based on development testing, Doubao-Seed-2.0-mini currently provides the best cost-performance balance.
- Doubao-Seed-2.0-mini:
- Lightweight model designed for low-latency and high-concurrency scenarios;
- Supports up to 256k context length;
- Four reasoning depth levels (minimal / low / medium / high);
- Supports multimodal image-text understanding and function calling;
- In non-reasoning mode, token consumption is only 1/10 of reasoning mode, offering excellent cost efficiency for simple scenarios.
After entering the model ID `doubao-seed-2-0-mini-260215` and your API Key into the Dashboard and applying the settings, the AI module will function properly.
If you wish to use other models (e.g., OpenAI Vision or a locally deployed multimodal model), modify the following files:
- `ai_ark.py`
  - Modify the `analyze_frame()` function
  - Replace the API call logic
  - Ensure the JSON output structure remains consistent
- `config_store.py` (optional)
  - Add new model parameters
  - Set default values

The Dashboard and AI Status panels rely on the Output Contract for rendering.
If field names change, you must also update:
- `ai_monitor_worker.py`
- `dashboard.js`
⚠️ Important: Regardless of the model used, the output structure must conform to the predefined Output Contract.
Otherwise, the Dashboard will not be able to correctly parse the results.
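A replacement `analyze_frame()` could follow the rough shape below, assuming your backend returns JSON text. Here `call_vision_backend` is a placeholder stub standing in for whatever API request or local inference call you integrate; the defaults table is likewise illustrative:

```python
import base64
import json

# Contract fields the Dashboard expects, with safe fallbacks.
# (Defaults here are illustrative, not the project's actual code.)
CONTRACT_DEFAULTS = {
    "has_person": False, "person_count": 0, "activity": "unknown",
    "risk_level": "info", "confidence": 0.0, "summary": "",
}

def call_vision_backend(image_b64: str) -> str:
    # Placeholder stub: replace with your real API call (OpenAI
    # Vision, a local multimodal model, etc.). It must return a
    # JSON string matching the Output Contract.
    return ('{"has_person": false, "person_count": 0, '
            '"activity": "unknown", "risk_level": "info", '
            '"confidence": 0.9, "summary": "empty scene"}')

def analyze_frame(jpeg_bytes: bytes) -> dict:
    """Sketch of a replacement for analyze_frame() in ai_ark.py."""
    image_b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    try:
        result = json.loads(call_vision_backend(image_b64))
    except json.JSONDecodeError:
        result = {}
    # Fill any missing contract fields so rendering never breaks.
    for key, default in CONTRACT_DEFAULTS.items():
        result.setdefault(key, default)
    return result
```

Whatever backend you swap in, keeping this final fill-in step preserves the Output Contract that the Dashboard and AI Status panels depend on.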
5. Version Information & Notes ⌃
5.1. System Version ⌃
The Sentinel system consists of PC-side service program + Web Dashboard + Android CamFlow client.
The current version information is as follows:
- Sentinel (PC + Dashboard) Version: v1.0.0-beta
- CamFlow (Android) Version: v1.0.0-beta
- Documentation Version: v1.0.0
- Last Updated: 2026-02-26
5.2. Test Environment ⌃
The system has been tested and validated under the following environments:
- PC Side (Server + Dashboard)
  - Operating System: Windows 10 / Windows 11
  - Python Version: Python 3.9+
  - Main Dependencies: Flask, OpenCV, NumPy, volcenginesdkarkruntime
  - Network Environment: Same local area network (LAN / same Wi-Fi)
- Android Side (CamFlow)
  - System Version: Android 8.0+
  - Test Device: nova 8 SE Vitality Edition (HarmonyOS 3.0.0)
  - Network Environment: Same Wi-Fi / same LAN as PC
  - Required Permission: Camera (requested at first launch)
5.3. Roadmap ⌃
Possible future improvements include:
- Security Mechanisms
  - Add Token / API Key authentication (for public-network deployment)
- Transmission Performance
  - Replace some HTTP polling or MJPEG mechanisms with WebSocket / WebRTC (reducing latency and resource consumption)
  - Adaptive frame rate / resolution control (dynamic tuning based on network and CPU conditions)
- AI Integration
  - Expand pluggable model interfaces (OpenAI / local vision models / other platforms)
  - More refined event aggregation and rule engines (reducing false positives / false negatives)
5.4. Usage & License ⌃
Sentinel is released under the MIT License.
Copyright © 2026 Suzuran0y
- This project is intended for learning, research, and technical validation purposes and is a prototype system.
- If deploying the current version in a real production or commercial environment, you must implement additional measures:
- Access authentication
- Transmission encryption
- Log desensitization and privacy compliance policies
- Stability and resource limitation mechanisms
Note: CamFlow involves image data capture and transmission.
Ensure compliance with local laws and obtain appropriate authorization before use.