
Hi there, I'm Amr Hassan


AI Engineer specializing in LLMs, Transformer architectures, and production ML systems

Cairo, Egypt

LinkedIn Medium GitHub Email


About Me

I build production-grade AI systems with a focus on Large Language Models and deep learning. From implementing GPT architectures from scratch to deploying RAG pipelines that handle thousands of documents, I'm passionate about turning complex AI research into practical, scalable solutions.

Currently sharing what I learn through technical deep-dives on Medium and always excited to collaborate on challenging ML projects.


Featured Projects

GPT from Scratch

GitHub Repo Medium Article

Built a 164M parameter GPT model implementing the complete Transformer decoder architecture in PyTorch. Includes Multi-Head Self-Attention, Causal Masking, and Sinusoidal Positional Encoding.

Tech Stack:

PyTorch Python Transformers
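Two of the components named above — causal masking and sinusoidal positional encoding — can be sketched in a few lines. This is a minimal, framework-free illustration of the math (the actual project uses PyTorch tensors), not the repo's implementation:

```python
import math

def sinusoidal_positional_encoding(seq_len, d_model):
    """Fixed sin/cos position embeddings from "Attention Is All You Need":
    PE(pos, 2i) = sin(pos / 10000^(2i/d)), PE(pos, 2i+1) = cos(...)."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

def causal_mask(seq_len):
    """Lower-triangular mask: position t may attend only to positions <= t,
    which is what makes the decoder autoregressive."""
    return [[j <= i for j in range(seq_len)] for i in range(seq_len)]
```

In practice the mask is added as `-inf` to the attention logits before the softmax, so masked positions receive zero attention weight.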

TransformerQA

GitHub Repo

Production RAG system for document Q&A with automatic source attribution. Improved retrieval precision by 15% and serves 10,000+ embeddings with sub-second query latency.

Tech Stack:

LangChain ChromaDB FastAPI
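The core retrieval step behind a RAG system like this — rank stored chunk embeddings by cosine similarity and keep their source ids for attribution — can be sketched in plain Python. The actual pipeline uses LangChain and ChromaDB; the `store` entries and source names below are hypothetical toy data:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def retrieve(query_vec, doc_store, top_k=2):
    """Rank chunks by similarity to the query; return (source, score) pairs
    so answers can cite where each passage came from."""
    scored = [(d["source"], cosine(query_vec, d["embedding"])) for d in doc_store]
    scored.sort(key=lambda s: s[1], reverse=True)
    return scored[:top_k]

# Toy 3-dimensional embeddings; real systems use 384+ dimensions.
store = [
    {"source": "doc1.pdf#p2", "embedding": [0.9, 0.1, 0.0]},
    {"source": "doc2.pdf#p5", "embedding": [0.1, 0.9, 0.2]},
    {"source": "doc1.pdf#p7", "embedding": [0.8, 0.3, 0.1]},
]
top = retrieve([1.0, 0.0, 0.0], store, top_k=2)
```

A vector database like ChromaDB does the same ranking with approximate nearest-neighbor indexes, which is what keeps 10,000+ embeddings at sub-second latency.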

Athlete Injury Prevention

GitHub Repo Live Demo

ML ensemble model achieving 92.9% accuracy on 10,000+ athlete records. Deployed as a Flask API with an interactive dashboard for real-time risk assessment.

Tech Stack:

Scikit-learn XGBoost Flask
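The ensemble idea — combine several classifiers so that their errors cancel — reduces, in its simplest hard-voting form, to a majority vote per record. A minimal sketch (the model names in the comments are illustrative; the actual project uses Scikit-learn and XGBoost estimators):

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Hard-voting ensemble: for each record, output the class label
    predicted by the most base models."""
    n_records = len(predictions_per_model[0])
    combined = []
    for i in range(n_records):
        counts = Counter(model[i] for model in predictions_per_model)
        combined.append(counts.most_common(1)[0][0])
    return combined

# Per-model injury-risk predictions (1 = at risk) for four athlete records.
preds = [
    [1, 0, 1, 1],  # e.g. gradient-boosted trees
    [1, 0, 0, 1],  # e.g. random forest
    [0, 0, 1, 1],  # e.g. logistic regression
]
combined = majority_vote(preds)
```

Scikit-learn packages this pattern as `VotingClassifier`; soft voting (averaging predicted probabilities) is a common variant when the base models are well calibrated.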

More Projects

Check out my repositories for more ML experiments, NLP projects, and AI implementations.

Explore Repos


Tech Stack

Languages

Python JavaScript SQL

ML & AI Frameworks

PyTorch TensorFlow Hugging Face LangChain Scikit-learn XGBoost

Tools & Infrastructure

Docker FastAPI Flask Apache Spark Git Linux

Specialization

LLM Fine-tuning (LoRA, PEFT) • RAG Systems • Vector Databases • Model Deployment • Transformer Architectures
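The LoRA technique listed above replaces a full weight update with a low-rank one: the frozen weight W is augmented by (alpha/r) * B @ A, where only the small matrices A (r x d_in) and B (d_out x r) are trained. A toy pure-Python sketch with made-up numbers (real fine-tuning uses the Hugging Face PEFT library):

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for the toy example."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_delta(A, B, alpha, r):
    """Low-rank weight update (alpha/r) * B @ A.
    Only A and B hold trainable parameters; the base weight stays frozen."""
    scale = alpha / r
    return [[scale * v for v in row] for row in matmul(B, A)]

# Frozen 2x2 base weight plus a rank-1 adapter (hypothetical toy values).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]           # r x d_in, with r = 1
B = [[0.5], [0.5]]         # d_out x r
delta = lora_delta(A, B, alpha=2.0, r=1)
W_eff = [[w + d for w, d in zip(rw, rd)] for rw, rd in zip(W, delta)]
```

The payoff: for a d x d layer, LoRA trains 2*d*r parameters instead of d*d, which is why a large model can be fine-tuned on modest hardware.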


Current Focus

Building: Advanced RAG architectures and LLM-powered applications

Learning: Scaling transformer models and optimizing inference pipelines

Writing: Technical deep-dives on GPT architecture

Pinned

  1. GPT-Implementation

     Research code implementing the "Attention Is All You Need" architecture. Implements a stable training loop for a 163M-parameter LLM using reduced-precision techniques on free-tier compute.

     Jupyter Notebook

  2. TransformerQA

     A RAG-based chatbot that provides precise, context-aware answers from the "Attention Is All You Need" paper using vector retrieval over segmented content.

     Python

  3. NeuraLang

     An NLP framework showcasing embeddings, transformers, and intelligent agents.

     Jupyter Notebook

  4. FabulaGPT

     An implementation of a GPT-2 architecture optimized for emergent storytelling. Trained on the TinyStories dataset, this project focuses on achieving linguistic coherence and narrati…

     Python

  5. injury-prediction-prevention-ml

     A machine learning system for predicting and preventing athlete injuries using data analysis, risk assessment, and tailored recommendations.

     HTML