A production-ready, scalable exam management engine powered by Groq's Llama 3 for automated evaluation of MCQs, short answers, and complex essay questions.
The Albab AI-Powered Exam System is a sophisticated platform designed to streamline the assessment process. By leveraging state-of-the-art LLMs via Groq's high-speed API, it provides near-instantaneous, objective grading for both structured and unstructured responses.
- Blazing Fast: Powered by Groq for sub-second AI responses.
- Resilient: Built-in crash recovery and background evaluation queues.
- Admin Control: Full dashboard for exam monitoring and analytics.
- DevOps Ready: Includes specialized Docker configurations and CI/CD pipelines.
- AI-Powered Grading: Automated scoring using Groq's Llama 3 70B model.
- Hybrid Question Support:
- MCQs: Single and Multi-select support with partial grading.
- Subjective: Short answers and Essays with contextual feedback.
- Feedback Engine: Provides candidates with constructive, AI-generated reasoning for their scores.
- Anti-Cheat Measures: Server-side time enforcement and session locking.
- Resume Capability: Candidates can resume exams after network failures or browser crashes.
- Data Integrity: SQLite with Write-Ahead Logging (WAL) ensures zero data loss during concurrent access.
- Flexible Exam Design: Create exams via AI generation or manual input.
- Multi-Lingual: Native support for English and Bengali assessments.
- Live Monitoring: Track active sessions, time remaining, and submission status in real-time.
- Negative Marking: Granular control over penalty scoring per section.
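To make partial grading and negative marking concrete, here is a minimal sketch of how a multi-select MCQ might be scored. The function name, signature, and penalty scheme are illustrative assumptions, not the engine's actual API:

```python
# Illustrative sketch of partial-credit MCQ scoring with per-question
# negative marking. `score_mcq` and its penalty scheme are assumptions,
# not the actual app code.

def score_mcq(selected: set, correct: set, max_marks: float = 1.0,
              penalty: float = 0.25) -> float:
    """Award a share of max_marks per correct option selected and
    deduct penalty * max_marks for each incorrect selection."""
    if not correct:
        return 0.0
    per_option = max_marks / len(correct)
    earned = per_option * len(selected & correct)     # partial credit
    lost = penalty * max_marks * len(selected - correct)  # negative marking
    return round(earned - lost, 2)

print(score_mcq({"A", "B"}, {"A", "B", "C"}))  # partial credit, no penalty
```

Tuning `penalty` per section would give the granular control described above.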
```mermaid
graph TD
    User((Candidate)) -->|FastAPI| WebServer[App Server]
    Admin((Admin)) -->|Dashboard| WebServer
    WebServer -->|Requests| Queue[Background Priority Queue]
    Queue -->|Process| Groq[Groq AI API]
    WebServer <-->|Persistence| DB[(SQLite WAL)]
    Queue <-->|State| DB
```
- Core (`app.py`): FastAPI engine handling high-concurrency requests.
- Logic (`groq_analyzer.py`): Intelligent prompt engineering for grading and generation.
- Concurrency (`evaluation_queue.py`): Custom-built priority queue with exponential backoff and retry logic.
- Data (`db.py`): Optimized thread-safe database layer.
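The priority-queue-with-backoff pattern can be sketched as follows. This is a simplified stand-in, assuming `evaluation_queue.py` behaves roughly like this; class and method names are illustrative:

```python
# Minimal sketch of a priority queue with exponential-backoff retry
# delays — a stand-in for evaluation_queue.py, not its actual code.
import heapq

class EvaluationQueue:
    def __init__(self, base_delay: float = 1.0, max_delay: float = 30.0):
        self._heap = []   # entries: (priority, sequence, task)
        self._seq = 0     # tie-breaker to keep FIFO order within a priority
        self.base_delay = base_delay
        self.max_delay = max_delay

    def put(self, task, priority: int = 10) -> None:
        heapq.heappush(self._heap, (priority, self._seq, task))
        self._seq += 1

    def pop(self):
        """Return the highest-priority (lowest number) task, or None."""
        return heapq.heappop(self._heap)[2] if self._heap else None

    def retry_delay(self, attempt: int) -> float:
        """Exponential backoff: 1s, 2s, 4s, ... capped at max_delay."""
        return min(self.base_delay * 2 ** attempt, self.max_delay)
```

A failed Groq call would be re-enqueued and retried after `retry_delay(attempt)` seconds, so transient API errors never block the queue.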
The fastest way to get started in a production-like environment:
```bash
docker-compose up -d
```
Access the app at: http://localhost:7894
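For reference, a compose service along these lines would match the setup above. This is an illustrative sketch; the repository's actual `docker-compose.yml` may differ:

```yaml
# Illustrative sketch — the repo's real docker-compose.yml may differ.
services:
  exam-engine:
    build: .
    ports:
      - "7894:8000"        # host port 7894 -> container PORT (default 8000)
    env_file: .env         # API_KEY, ADMIN_SECRET_KEY, PORT
    volumes:
      - ./data:/app/data   # persist the SQLite database across restarts
    restart: unless-stopped
```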
Clone the repo and initialize your environment:
```bash
git clone https://github.com/rashedulalbab253/GenAI-Assessment-Engine.git
cd ai-exam-system
python -m venv venv
source venv/bin/activate  # Or `venv\Scripts\activate` on Windows
pip install -r requirements.txt
```
Create a `.env` file from the template:
```bash
cp .env.example .env
```
Update these critical values:
- `API_KEY`: Your Groq API key.
- `ADMIN_SECRET_KEY`: A secure password for the dashboard.
- `PORT`: Default is `8000`.
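A minimal `.env` might look like this (placeholder values only; just the three keys listed above are assumed):

```env
API_KEY=gsk_xxxxxxxxxxxxxxxx   # placeholder — use your real Groq API key
ADMIN_SECRET_KEY=change-me     # pick a strong secret
PORT=8000
```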
```bash
python main.py
```
This repository is configured with GitHub Actions (`.github/workflows/docker-publish.yml`) to automatically build and push Docker images to Docker Hub.
Ensure your GitHub Repository has the following secrets:
- `DOCKER_USERNAME`: Your Docker Hub handle.
- `DOCKER_PASSWORD`: Your Docker Hub Access Token.
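A publish workflow using those secrets typically looks like the sketch below. This is an assumption about the workflow's shape, not the contents of the repository's actual `docker-publish.yml`:

```yaml
# Sketch of a Docker publish workflow — the repo's actual file may differ.
name: docker-publish
on:
  push:
    branches: [main]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ${{ secrets.DOCKER_USERNAME }}/genai-assessment-engine:latest
```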
- Login: Access `/admin/login` using your `ADMIN_SECRET_KEY`.
- Design: Use the "Create Exam" tool. Define sections (e.g., "General Knowledge", "Coding").
- Deploy: Copy the generated link and share it with candidates.
- Monitor: Keep the dashboard open to watch candidates take the exam in real-time.
- Entry: Join via the unique exam link.
- Assessment: Complete questions; answers are auto-saved to prevent loss.
- Completion: Submit the exam. View status at the evaluation tracker.
- Secrets: Never commit your `.env` file.
- API Keys: Use Groq personal access tokens with limited scope.
- Database: Periodically back up the `exam_system.db` file.
- Production: Set `RELOAD=false` in your `.env`.
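Because the database runs in WAL mode, backups should use SQLite's online backup API rather than a raw file copy. A minimal sketch using only the Python standard library (the helper name and timestamped layout are illustrative):

```python
# Online backup of exam_system.db via Python's stdlib sqlite3 backup API.
# Safe to run while the server is live in WAL mode. The helper name and
# file layout here are illustrative, not part of the project.
import sqlite3
from datetime import datetime

def backup_db(src: str = "exam_system.db", dest_dir: str = ".") -> str:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = f"{dest_dir}/exam_system-{stamp}.db"
    src_conn = sqlite3.connect(src)
    dst_conn = sqlite3.connect(dest)
    try:
        src_conn.backup(dst_conn)  # consistent snapshot, even mid-write
    finally:
        src_conn.close()
        dst_conn.close()
    return dest
```

Running this from a cron job (or a scheduled container) covers the periodic-backup recommendation above.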
We welcome contributions!
- Fork the project.
- Clone your fork.
- Create a Feature Branch.
- Submit a Pull Request.
Distributed under the MIT License. See LICENSE for more information.
Developed with precision by Rashedul Albab