I am an AI Research Scientist at Lunit, bridging the gap between Academic Research and Industrial-Scale Engineering.
I build AI that doesn't just "see" medical images but reasons about them. My work focuses on Foundation Models, Agentic AI, and the Distributed Infrastructure required to train them on Petabyte-scale biomedical data.
- Agentic Clinical Intelligence — Moving beyond standard supervised fine-tuning (SFT) to build multi-agent systems that mine clinical reasoning traces from raw data.
- Foundation Models (ViT-H / DINOv2 / DINOv3) — Adapting self-supervised learning to high-resolution digital histopathology in search of robust biomarkers.
- Distributed Systems at Scale — Architecting fault-tolerant orchestration pipelines to process hundreds of Terabytes of WSI data without breaking a sweat.
While my current work on Foundation Models is proprietary, here is some of my open research:
| Project | What It’s About |
|---|---|
| Foundation Model for Cell Painting | Large-scale vision model learning rich, generalizable features from cell morphology images (KAIST Master's Thesis). |
| Multimodal Phenotype–Compound Translation | Bidirectional model translating between cell painting phenotypes and compound SMILES. |
| PanNuke Segmentation (Educational) | Implementing U-Net & U-Net++ from scratch in PyTorch for semantic segmentation on histopathology data. |
| Neural Style Transfer | A clean reimplementation of the classic algorithm — because sometimes you just want Van Gogh to paint your cat. |
My goal is to build AI that is robust, interpretable, and capable of end-to-end clinical reasoning. I am interested in systems that connect the dots between pixels, biology, and text.
The best place to find me (and see my latest updates) is on LinkedIn.