This project is a real-time human fall detection system built using ROS 2, Python, OpenCV, and the YOLO11n-pose estimation model. It captures video from a camera, analyzes human poses to detect falls, and provides a simple Tkinter-based GUI for control and monitoring.
- Real-time Pose Estimation: Utilizes the YOLO11n-pose model to detect human keypoints in a video stream.
- Furniture Detection: Utilizes the YOLO11n model to detect furniture such as beds, couches, and benches.
- Fall Detection Logic: Implements a custom algorithm to classify a person's state (Standing, Sitting, Lying, Fallen) based on spine angle, body aspect ratio, and keypoint dispersion (a sketch of this heuristic follows this list).
- ROS 2 Integration: The system is modularized into several ROS 2 nodes for camera input, pose processing, and fall analysis.
- Graphical User Interface (GUI): A simple Tkinter GUI allows users to easily start and stop the detection process, view the annotated video feed, and see log messages.
- Event Logging: Logs fall detection events with relevant details.
- Screenshot on Fall: Automatically saves a screenshot when a fall is confirmed.
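As a concrete illustration of the classification heuristic above, here is a minimal sketch in Python. It is illustrative only: the keypoint indices follow the COCO layout used by YOLO pose models, and the thresholds are hypothetical stand-ins for the tuned values in the actual node.

```python
import numpy as np

# Hypothetical thresholds -- stand-ins for the tuned values in the real node.
SPINE_ANGLE_LYING_DEG = 60.0   # spine this far from vertical suggests lying
ASPECT_RATIO_LYING = 1.2       # keypoint box wider than tall suggests lying

def classify_state(keypoints: np.ndarray) -> str:
    """Classify a pose from a (17, 2) array of COCO-format keypoints.

    A simplified sketch of the spine-angle / aspect-ratio idea; the real
    fall_detection_node also uses keypoint dispersion and temporal checks
    before confirming a fall.
    """
    mid_shoulder = keypoints[[5, 6]].mean(axis=0)  # COCO 5/6 = shoulders
    mid_hip = keypoints[[11, 12]].mean(axis=0)     # COCO 11/12 = hips
    dx, dy = mid_shoulder - mid_hip

    # Spine angle measured from the vertical image axis (y grows downward).
    spine_angle = np.degrees(np.arctan2(abs(dx), abs(dy)))

    # Width/height ratio of the box enclosing all keypoints.
    width = np.ptp(keypoints[:, 0])
    height = np.ptp(keypoints[:, 1])
    aspect = width / max(height, 1e-6)

    if spine_angle > SPINE_ANGLE_LYING_DEG or aspect > ASPECT_RATIO_LYING:
        return "Lying"   # a rapid transition into Lying is escalated to Fallen
    if spine_angle < 30.0 and aspect < 0.8:
        return "Standing"
    return "Sitting"
```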
The GIF above demonstrates how the system detects a fall. The current FPS is displayed in the top-left corner. The subject is enclosed in a blue bounding box along with human body keypoints, while furniture (e.g., a bed) is enclosed in a green bounding box. When a fall is detected, a red banner appears at the top of the screen displaying the message "ALERT: FALL DETECTED!" accompanied by an alarm sound. The alarm sound file can be customized by modifying the corresponding entry in the params.yaml file of the alarm node.
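The alarm node's parameters follow the standard ROS 2 params.yaml layout. The entry below is only a hypothetical example (the key name `alarm_sound_path` is a placeholder; check `config/params.yaml` for the actual parameter name):

```yaml
alarm_node:
  ros__parameters:
    alarm_sound_path: "/path/to/custom_alarm.wav"  # placeholder key name
```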
The system consists of several ROS 2 nodes that communicate via topics. The node sources live in `fall_detection_ws/src/human_fall_detection/human_fall_detection/`, and their parameters can be adjusted in `fall_detection_ws/src/human_fall_detection/config/`.
The launch file is in `fall_detection_ws/src/human_fall_detection/launch/`.
The GUI script is `fall_detection_ws/main_gui_tkinter.py`.
- `camera_node`: Captures frames from a webcam and publishes them to the `camera/image_raw` topic.
- `pose_detection_node`: Subscribes to `camera/image_raw`, runs the YOLO pose model on the frames, and publishes the detected keypoints to `pose/keypoints` and an annotated image to `pose/annotated_image`.
- `object_detection_node`: Subscribes to `camera/image_raw`, runs the YOLO11n model on the frames, and publishes the detected furniture to `furniture/detected`. Detection is restricted to three COCO classes: bed, couch, and bench.
- `fall_detection_node`: Subscribes to `pose/keypoints`, analyzes the pose data to determine the person's state, and publishes fall alerts.
- `visualization_node`: Subscribes to `pose/annotated_image`, `furniture/detected`, and `human/fall_detected`, and publishes `/fall_detection/image` for visualization.
- `alarm_node`: Subscribes to `human/fall_detected` to play the alert sound.
- `main_gui_tkinter.py`: A standalone application that:
  - Provides "Start" and "Stop" buttons to launch and terminate the ROS 2 nodes.
  - Subscribes to `pose/annotated_image` to display the video feed.
  - Displays log output from the ROS nodes.
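If you want to consume these topics from your own code, a minimal rclpy listener for the fall alert topic might look like the sketch below. It assumes `human/fall_detected` carries a `std_msgs/Bool`; the actual message type is defined by `fall_detection_node`, so adjust accordingly.

```python
# Minimal external listener for the fall alert topic (sketch; assumes the
# topic publishes std_msgs/Bool -- check fall_detection_node for the real type).
import rclpy
from rclpy.node import Node
from std_msgs.msg import Bool

class FallAlertListener(Node):
    def __init__(self):
        super().__init__('fall_alert_listener')
        self.create_subscription(Bool, 'human/fall_detected', self.on_alert, 10)

    def on_alert(self, msg: Bool):
        if msg.data:
            self.get_logger().warn('Fall detected!')

def main():
    rclpy.init()
    rclpy.spin(FallAlertListener())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```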
This fall detection system was tested in the following environment:
- MacBook Pro with M3 Pro chip
- VMware Fusion Professional Version 13.6.3 (24585314)
- Ubuntu Server 24.04 (64-bit Arm) with GNOME Shell 46.0
- 5 processor cores, 4608 MB memory, 64 GB disk (optional)
- ROS 2 Jazzy Jalisco
- Virtual environment with Python 3.12
1. Clone the Workspace: If you haven't already, place the project into a ROS 2 workspace directory.

   ```bash
   mkdir -p ~/fall_detection_ws/src
   mv /path/to/your/human_fall_detection ~/fall_detection_ws/src/
   ```
2. Create the Virtual Environment: The venv is too large to upload to GitHub, so you need to create one yourself (e.g., `python3 -m venv venv` in the workspace root, then `source venv/bin/activate`).
3. Install Python Dependencies: Navigate to your workspace root and install the required Python packages using the `requirements.txt` file.

   ```bash
   cd ~/fall_detection_ws
   pip install -r requirements.txt
   ```
4. Build the ROS 2 Workspace: Source your main ROS 2 environment and build the package using `colcon`.

   ```bash
   source /opt/ros/jazzy/setup.bash
   colcon build
   ```
The easiest way to run the system is through the provided GUI.
1. Source the Workspace: Before running, you must source the local setup file of your workspace. Open a new terminal for this. (Optional: this is done automatically after step 2.)

   ```bash
   cd ~/fall_detection_ws
   source install/setup.bash
   ```
2. Launch the GUI: Run the main Tkinter application script.

   ```bash
   python3 main_gui_tkinter.py
   ```
3. Using the GUI:
   - Start: Click the "Start" button to launch all the necessary ROS 2 nodes. The video feed and logs will appear in the window.
   - Stop: Click the "Stop" button to terminate the detection process.
   - Screenshots: Click this button to open the `~/fall_detection_ws/screenshots` folder where images of detected falls are saved.
You can also run the ROS 2 launch file directly without the GUI.
1. Uncomment lines 167-170 in `visualization_node.py`:

   ```python
   # Display the visualization
   if self.display_output:
       cv2.imshow('Fall Detection Visualization', display_image)
       cv2.waitKey(1)
   ```
2. Source the Workspace:

   ```bash
   cd ~/fall_detection_ws
   source install/setup.bash
   ```
3. Run the Launch File:

   ```bash
   ros2 launch human_fall_detection fall_detection.launch.py
   ```
Video Guide: "Installing the Ubuntu Operating System on a MacBook Using UTM"
After installing the Ubuntu Server, run:

```bash
sudo apt-get update
sudo apt-get install python3
sudo apt-get install xinit
sudo apt-get install ubuntu-desktop
sudo apt update
sudo apt install open-vm-tools open-vm-tools-desktop
sudo reboot
```
Add your user to the `video` group so the camera device can be accessed:

```bash
sudo usermod -aG video $USER
```

3. If you use a virtual environment, ROS 2 can't automatically recognize the site packages in the venv. To solve this:

```bash
export PYTHONPATH=$(pwd)/venv/lib/python3.12/site-packages:$PYTHONPATH
```

4. If the alarm sound does not play (an audio issue in the VM), the following steps solved this problem for me:
```bash
systemctl --user stop pipewire.socket
systemctl --user stop pipewire.service
systemctl --user disable pipewire.socket
systemctl --user disable pipewire.service
systemctl --user mask pipewire
systemctl --user mask pipewire.socket
sudo apt install pulseaudio
systemctl --user unmask pulseaudio
systemctl --user unmask pulseaudio.socket
systemctl --user start pulseaudio.service
systemctl --user enable pulseaudio.service
```

Check if PulseAudio is running:

```bash
pactl info
```

Then reboot the system.
We categorized falls into four distinct types \cite{Alam}: forward, backward, left-side, and right-side. Fall simulations were conducted with a single subject. A camera was positioned approximately 180 cm above the ground at an angle of about 60 degrees to capture the floor area, simulating a surveillance viewpoint. For the left-side and right-side falls, the subject stood near one side of the frame facing the camera and performed simulated side falls onto a black mat. For the forward and backward falls, the subject stood with their side facing the camera. For normal daily activities, we imitated the four most frequent activities: standing still, walking around, sitting on a chair, and lying on a bed.
The results are shown in Table IV. Fig. 5 visualizes the fall detection results, while Fig. 6 showcases the normal activities.
Table V shows that the fall detection system achieved a high accuracy of 95.4%.
For future work, we plan to explore additional cues for fall detection by monitoring changes in the subject's bounding box ratio. If the ratio changes significantly over a sequence of video frames, the system will classify the event as a fall. We also propose to track variations in the subject's center of gravity to further assist in fall detection. A rapid shift beyond a predefined threshold would similarly be interpreted as a fall. Furthermore, we aim to extend the system's visualization capabilities by labeling and displaying the classified human postures (e.g., standing, sitting, lying, fallen) directly on the video output, providing intuitive real-time feedback. In addition, we intend to experiment with alternative models, such as OpenPose, and leverage open-source datasets for training to enhance the model's robustness and generalization capabilities.
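As a rough sketch of the proposed bounding-box-ratio cue (not part of the current system), the snippet below flags a fall when the width/height ratio of the subject's box rises sharply within a short window of frames; the window length and threshold are illustrative placeholders.

```python
from collections import deque

class BBoxRatioFallCue:
    """Flags a fall when the bbox width/height ratio jumps within a window."""

    def __init__(self, window: int = 15, ratio_jump: float = 0.8):
        self.history = deque(maxlen=window)  # recent width/height ratios
        self.ratio_jump = ratio_jump         # rise treated as a fall

    def update(self, width: float, height: float) -> bool:
        self.history.append(width / max(height, 1e-6))
        if len(self.history) < self.history.maxlen:
            return False  # not enough frames observed yet
        # A large increase across the window suggests the body went from
        # upright (tall box) to horizontal (wide box).
        return self.history[-1] - self.history[0] > self.ratio_jump
```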
This project is a Smart Home Project developed for COMP9991 and COMP9992 at UNSW. It was completed in 2025, spanning Term 1 to Term 2, by student Yiming Peng under the supervision of Professor Claude Sammut.