Developing the Model #16
DDoS Detection Model Challenge
Issue Description
🚀 Challenge: Compete to create the most accurate model for detecting DDoS attacks! Participants are tasked with developing a machine learning model that assigns labels of 1 for DDoS attacks and 0 for normal traffic based on packet features. Additionally, participants are required to maintain a list of source IPs associated with detected DDoS packets.
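As a starting point, here is a minimal sketch of one possible approach. It assumes a tabular dataset with hypothetical column names (`src_ip`, `label`, and the listed feature columns) and a placeholder file path; adjust both to match the provided datasets.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature columns -- adjust to the actual dataset schema.
FEATURES = ["src_port", "dst_port", "protocol", "pkt_len", "tcp_flags"]

df = pd.read_csv("packets.csv")  # placeholder path for one of the provided datasets
X_train, X_test, y_train, y_test = train_test_split(
    df[FEATURES], df["label"], test_size=0.2, random_state=42
)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

# Predict 1 (DDoS) / 0 (normal) and keep the source IPs of packets flagged as DDoS.
preds = clf.predict(X_test)
ddos_source_ips = sorted(set(df.loc[X_test.index[preds == 1], "src_ip"]))
```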
Evaluation Criteria
- Model Accuracy: the fraction of packets the model classifies correctly.
- False Positive Rate: Minimize false positives to enhance precision.
- List of Source IPs: Maintain an accurate list of source IPs for all packets classified as DDoS attacks.
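For reference, the first two criteria can be computed as in the sketch below, which reuses `y_test` and `preds` from the training sketch above. This is one possible convention, not the project's official evaluation code (see /scripts/model_evaluation.ipynb for that).

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# `y_test` and `preds` are the true and predicted 0/1 labels
# from the training sketch above.
accuracy = accuracy_score(y_test, preds)

# False positive rate = FP / (FP + TN), with label order [0 (normal), 1 (DDoS)].
tn, fp, fn, tp = confusion_matrix(y_test, preds, labels=[0, 1]).ravel()
fpr = fp / (fp + tn)

print(f"accuracy={accuracy:.4f}  false_positive_rate={fpr:.4f}")
```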
Data Details
- Three datasets are provided for training and evaluation.
- If you wish to add another dataset, contribute it via the other open-for-all issue and it can be added.
- Features include packet metadata such as IP addresses, TCP/UDP ports, and flags.
- Labels should be binary, with 1 denoting DDoS attacks and 0 denoting normal traffic.
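Raw IP addresses and flag strings are not directly usable by most models, so some preprocessing is usually needed. One common option is sketched below, again with hypothetical column names and a placeholder path.

```python
import ipaddress
import pandas as pd

df = pd.read_csv("packets.csv")  # placeholder path

# Convert dotted-quad IP addresses to integers so models can consume them
# (column names are assumptions).
df["src_ip_int"] = df["src_ip"].map(lambda ip: int(ipaddress.ip_address(ip)))
df["dst_ip_int"] = df["dst_ip"].map(lambda ip: int(ipaddress.ip_address(ip)))

# Ensure labels are binary integers: 1 = DDoS attack, 0 = normal traffic.
df["label"] = df["label"].astype(int)
assert set(df["label"].unique()) <= {0, 1}
```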
Submission Guidelines
- Participants are encouraged to use a variety of machine learning algorithms and techniques.
- Submissions should include a Jupyter notebook or Python script containing the model implementation.
- Clearly document and comment your code for transparency and understanding.
Reward
- The participant with the most accurate model and a well-maintained list of source IPs will be recognized, and only their PR will be merged.
Additional Information
- Participants can discuss their approaches, findings, and seek clarifications in the project's discussions.
- Please adhere to project coding standards and guidelines during implementation.
Note
- Refer to /data/CONTRIBUTING.md for information on dataset usage and /scripts/model_evaluation.ipynb for existing evaluation conventions.