A navigation-assistance system that combines LiDAR obstacle detection with spatial audio feedback for enhanced accessibility.
```
SenseNav/
├── SenseNav_frontend/          # React frontend application
│   ├── src/
│   │   ├── components/
│   │   │   ├── SpatialAudioVisualization.jsx
│   │   │   └── NavigationMusicBox.jsx
│   │   └── pages/
│   │       └── visualization.jsx
│   └── package.json
├── SenseNav_backend/           # Python backend API
│   ├── spatial_audio/
│   │   └── closest_obstacle_audio.py
│   ├── api/
│   │   └── app.py
│   ├── utils/
│   │   └── data_processing.py
│   └── requirements.txt
└── README.md
```
- 360° Obstacle Detection: Divides space into 6 sectors (Front-Left, Front-Right, Back-Left, Back-Right, Up, Down)
- Distance-Based Audio Cues: Closer obstacles trigger higher frequency tones and faster tremolo rates
- Directional Audio: Binaural panning with Interaural Level Difference (ILD) and Interaural Time Difference (ITD)
- Priority Targeting: Automatically prioritizes the most critical obstacles
- Sector-Specific Tones: Each sector has unique audio characteristics for easy identification
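As a rough illustration of the sector split, a point can be classified with a small helper like the one below. This is a sketch, not the backend's actual logic: the function name and the UP/DOWN elevation threshold are assumptions. It follows the coordinate convention noted in this README (x=forward, y=left, z=up).

```python
import math

def classify_sector(x, y, z):
    """Classify a 3D point into one of six sectors (x=forward, y=left, z=up).

    Hypothetical rule: points whose vertical offset exceeds their horizontal
    distance go to UP/DOWN; otherwise the horizontal quadrant decides
    FL/FR/BL/BR.
    """
    horiz = math.hypot(x, y)
    if z > horiz:
        return "UP"
    if -z > horiz:
        return "DOWN"
    if x >= 0:
        return "FL" if y >= 0 else "FR"
    return "BL" if y >= 0 else "BR"
```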
- Real-time Obstacle Display: Visual representation of detected obstacles
- Priority Target Highlighting: Shows the most important obstacles to avoid
- Audio Parameter Visualization: Displays frequency, intensity, and direction data
- Connection Status: Shows backend connectivity and data flow status
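The distance-based cues described above can be sketched as a simple mapping from obstacle distance to audio parameters. The constants here (range, frequency bounds, tremolo span) are hypothetical placeholders; the values actually used by the backend may differ.

```python
def audio_params(distance_m, max_range_m=10.0, base_freq=220.0, max_freq=880.0):
    """Map obstacle distance to tone frequency, tremolo rate, and intensity.

    Closer obstacles yield a higher frequency, faster tremolo, and louder
    intensity, consistent with the behavior described above.
    """
    closeness = max(0.0, min(1.0, 1.0 - distance_m / max_range_m))
    return {
        "frequency_hz": base_freq + (max_freq - base_freq) * closeness,
        "tremolo_hz": 1.0 + 9.0 * closeness,  # 1 Hz (far) to 10 Hz (close)
        "intensity": closeness,               # 0 = silent, 1 = loudest
    }
```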
Backend setup:

1. Navigate to the backend directory:

   ```bash
   cd SenseNav_backend
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Start the backend server:

   ```bash
   cd api
   python app.py
   ```

Frontend setup:

1. Navigate to the frontend directory:

   ```bash
   cd SenseNav_frontend
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Start the development server:

   ```bash
   npm run dev
   ```
- `GET /api/health` - Returns backend status
- `POST /api/spatial-audio/analyze` - Body: `{"points": [[x, y, z], ...]}`; returns obstacle detection data with audio parameters
- `GET /api/spatial-audio/sectors` - Returns information about the spatial audio sectors
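A quick way to exercise the analyze endpoint from Python is shown below. The host and port are assumptions; match them to however `app.py` is actually configured.

```python
import json
import urllib.request
from urllib.error import URLError

# Hypothetical local address; adjust host/port to your backend configuration.
payload = json.dumps({"points": [[1.2, 0.5, 0.0], [3.0, -1.0, 0.2]]}).encode()
req = urllib.request.Request(
    "http://localhost:5000/api/spatial-audio/analyze",
    data=payload,
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=2) as resp:
        print(json.loads(resp.read()))
except URLError:
    print("Backend not reachable; start it with `python app.py` first.")
```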
- Start both backend and frontend servers
- Navigate to the visualization page
- The system will automatically detect obstacles and generate spatial audio cues
- View real-time obstacle data in the "Spatial Audio Detection" section
| Sector | Audio Signature | Description |
|---|---|---|
| FL (Front-Left) | Warm sawtooth + vibrato | Gentle, musical tone |
| FR (Front-Right) | Metallic square + chorus | Sharp, digital sound |
| BL (Back-Left) | Dark warm + deep vibrato | Muffled, behind feeling |
| BR (Back-Right) | Very dark metallic + long chorus | Very muffled, distant |
| UP (Above) | Ascending chirp | Rising frequency sweep |
| DOWN (Below) | Descending pulse + sub-bass | Falling tone with bass |
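As a minimal illustration of how a sector tone with tremolo could be rendered, here is a sine-based sketch. The actual signatures in the table above (sawtooth, chorus, chirps, sub-bass) require a richer synthesis engine; this only shows the carrier-plus-tremolo idea.

```python
import math

def synthesize_tone(freq_hz, tremolo_hz, duration_s=0.5, sample_rate=44100):
    """Render a mono tone with amplitude tremolo as floats in [-1, 1].

    Sketch only: a sine carrier modulated by a slow sine envelope.
    """
    samples = []
    for n in range(int(duration_s * sample_rate)):
        t = n / sample_rate
        carrier = math.sin(2 * math.pi * freq_hz * t)
        tremolo = 0.5 * (1 + math.sin(2 * math.pi * tremolo_hz * t))
        samples.append(carrier * tremolo)
    return samples
```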
The system expects point cloud data in the following format:

```json
{
  "points": [
    [x, y, z],  // 3D coordinates in meters
    [x, y, z],
    ...
  ]
}
```

- The frontend includes mock data for testing when the backend is unavailable
- Audio parameters are automatically calculated based on obstacle distance
- The system supports real-time updates through polling
- All spatial calculations use a standard coordinate system (x=forward, y=left, z=up)
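Given that payload format, a minimal shape check for incoming request bodies might look like the helper below. This is a hypothetical validator for illustration, not part of the backend API.

```python
def validate_points(payload):
    """Return True if payload matches {"points": [[x, y, z], ...]}."""
    pts = payload.get("points")
    if not isinstance(pts, list):
        return False
    return all(
        isinstance(p, (list, tuple))
        and len(p) == 3
        and all(isinstance(c, (int, float)) for c in p)
        for p in pts
    )
```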
- WebSocket support for real-time streaming
- Audio playback integration in the browser
- 3D visualization of obstacle positions
- Machine learning-based obstacle classification
- Mobile app integration