Build your own AI-powered VTuber assistant ✨
YAE-AI is your gateway to self-hosting powerful AI VTubers with style. Whether you're streaming, coding, or just vibing with your AI waifu, this project lets you run large language models locally with vLLM (a high-throughput LLM inference engine) in Docker, while keeping the flexibility to switch between providers like ChatGPT, Claude, and Gemini.
No cap, this is the ultimate foundation for your AI companion dreams. 💜✨
🎭 Pro tip: Perfect for VTuber streams, dev assistants, or just having a based AI friend who gets your references.
- 🐳 Docker-powered vLLM hosting — Clean, isolated, professional setup
- 🔄 Multi-provider support — ChatGPT, Claude, Gemini? We got you
- 🎯 Character roleplay ready — Built for immersive VTuber interactions
- ⚡ UV package manager — Lightning-fast Python dependency handling
- 🎨 Fully customizable — Make your AI assistant uniquely yours
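One way to sketch the multi-provider switching: since the local vLLM server speaks the OpenAI API, a tiny provider registry can hand the right connection kwargs to the OpenAI client. Heads up — only the `local-vllm` entry reflects this project's setup; hosted-provider entries are placeholders you'd fill in from each provider's own docs.

```python
# Hypothetical provider registry — only "local-vllm" comes from this project's setup.
PROVIDERS = {
    "local-vllm": {"base_url": "http://localhost:8000/v1", "api_key": "mykey"},
    # "openai": {...},  # fill in per the provider's own documentation
    # "gemini": {...},
}

def client_kwargs(name: str) -> dict:
    """Return the kwargs you'd pass to openai.OpenAI() for a given provider."""
    if name not in PROVIDERS:
        raise KeyError(f"unknown provider: {name!r}")
    return dict(PROVIDERS[name])

# Usage: client = OpenAI(**client_kwargs("local-vllm"))
```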
Before we pop off, make sure you've got these installed:
| Tool | Description | Installation |
|---|---|---|
| Python 3.10+ | The language of the gods 🐍 | Download |
| UV | Next-gen Python package manager | Install UV |
| Docker | Container platform for vLLMs 🐳 | Get Docker |
| Git | For cloning the repo | Install Git |
⚠️ Important: Make sure Docker Desktop is running before you start!
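Not sure everything's installed? A quick shell loop can check for you (a sketch — swap `python3` for `python` on Windows):

```shell
# Check that each prerequisite is on your PATH
for tool in python3 uv docker git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "OK: $tool"
  else
    echo "MISSING: $tool"
  fi
done
```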
Let's get this bread with these quick commands:
```bash
git clone https://github.com/OuOSama/YAE-AI.git yae-ai
cd yae-ai
```

```bash
uv sync
```

This command pulls all the Python packages you need. UV is built different — it's fast AF. ⚡

```bash
uv run src/utils/save.py
```

This script fetches your AI model from Hugging Face and saves it locally. Patience is key here, bestie — models can be thicc. 📦

```bash
docker compose up -d
```

Your vLLM server is now running in the background. We're so back. 🎯
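Want to confirm the server is actually up? vLLM exposes an OpenAI-compatible API, so a quick sketch like this (assuming the compose file maps the server to `localhost:8000`) lists what it's serving:

```python
import json
import urllib.request

def parse_model_ids(payload: dict) -> list:
    """Extract model IDs from an OpenAI-style /v1/models response."""
    return [m["id"] for m in payload.get("data", [])]

def list_models(base_url: str = "http://localhost:8000/v1") -> list:
    """Ask the running vLLM server which models it serves."""
    with urllib.request.urlopen(f"{base_url}/models", timeout=5) as resp:
        return parse_model_ids(json.load(resp))

# Usage (with the container running): list_models() should include "/model"
```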
Time to bring your AI to life! Here's how you interact with your model:
```python
from openai import OpenAI

# Initialize the client
client = OpenAI(
    api_key="mykey",  # Can be any string for local vLLM
    base_url="http://localhost:8000/v1"
)

# Create a character-driven response
response = client.chat.completions.create(
    model="/model",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a charming, witty, and slightly mischievous kitsune "
                "named Yae Miko, the Guuji of the Grand Narukami Shrine from "
                "Genshin Impact. Your responses must be entirely in character. "
                "Never break character or reveal that you are an AI. "
                "Respond naturally in English with elegance and playful wit."
            )
        },
        {
            "role": "user",
            "content": "Where is Raiden?"
        }
    ],
    temperature=0.2,
    max_tokens=256
)

print(response.choices[0].message.content)
```

| Parameter | Purpose | Recommended Values |
|---|---|---|
| `model` | Model path in Docker | `"/model"` |
| `temperature` | Response creativity | 0.2-0.4 (focused), 0.7-1.0 (creative) |
| `max_tokens` | Response length limit | 256-1024 |
| `top_p` | Nucleus sampling | 0.9-1.0 |
| `frequency_penalty` | Reduce repetition | 0.0-0.5 |
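The table above maps nicely onto presets. Here's a sketch — the preset names and exact values are my own picks from within the recommended ranges, not anything the project ships:

```python
# Hypothetical sampling presets built from the recommended ranges above
SAMPLING_PRESETS = {
    "focused":  {"temperature": 0.3, "top_p": 0.9, "frequency_penalty": 0.0},
    "creative": {"temperature": 0.9, "top_p": 1.0, "frequency_penalty": 0.3},
}

def sampling_params(mode: str, max_tokens: int = 512) -> dict:
    """Build kwargs for client.chat.completions.create from a named preset."""
    params = dict(SAMPLING_PRESETS[mode])
    params["max_tokens"] = max_tokens
    return params

# Usage:
# client.chat.completions.create(model="/model", messages=msgs, **sampling_params("creative"))
```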
```python
system_prompt = """
You are Kitsune-chan, a playful fox VTuber who loves coding streams and anime.
You speak with Gen Z energy, use developer slang naturally, and occasionally
reference anime or gaming culture. You're supportive but also tease viewers
with wit. Never break character or mention being an AI.
"""
```

```python
system_prompt = """
You are a senior dev with VTuber aesthetics. Explain code concepts clearly,
use analogies from anime/games when helpful, and keep responses encouraging.
You're the cool senpai who makes programming feel less scary.
"""
```

```bash
# Check if Docker is running
docker ps

# Restart containers if needed
docker compose restart

# View logs for debugging
docker compose logs -f
```

- Issue: Model download fails
  - Fix: Check your internet connection and Hugging Face access
  - Fix: Verify you have enough disk space (models are large!)
- Issue: vLLM server won't start
  - Fix: Ensure ports 8000-8001 aren't already in use
  - Fix: Check Docker has enough RAM allocated (8GB+ recommended)
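For the port clash case, a tiny standard-library check tells you whether something is already listening before you blame Docker:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0

# Usage: if port_in_use(8000), something else has grabbed the vLLM port
```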
Want to make YAE-AI even more bussin'? We'd love your contributions!
- 🐛 Report bugs via GitHub Issues
- 💡 Suggest features or improvements
- 🔧 Submit pull requests with enhancements
- 🎨 Share your character system prompts!
```bash
# Fork the repo, then clone your fork
git clone https://github.com/YOUR-USERNAME/YAE-AI.git
cd YAE-AI

# Create a feature branch
git checkout -b feature/your-cool-feature

# Make changes, commit, and push
git add .
git commit -m "feat: add cool feature"
git push origin feature/your-cool-feature
```

```
yae-ai/
├── src/
│   ├── utils/
│   │   └── save.py        # Model downloader
│   └── main.py            # Main application
├── docker-compose.yml     # vLLM container config
├── pyproject.toml         # UV dependencies
├── README.md              # You are here!
└── LICENSE                # MIT License
```
- Voice synthesis integration (TTS)
- WebUI for easier configuration
- Multi-language support
- Fine-tuning scripts for custom models
- Live2D integration for visual VTuber avatar
- Streaming platform integrations (Twitch, YouTube)
Shoutout to the legends who made this possible:
- vLLM Team for the amazing inference engine
- Astral for creating UV
- The VTuber community for endless inspiration
- All contributors and supporters ✨
This project is licensed under the MIT License — see the LICENSE file for details.
You're free to use, modify, and distribute this project. Just don't be cringe about it. 💜
Created with 💜 by @OuOSama
Part of the YAE ecosystem:
- YAE-AI — You are here!
- YAE-BACKEND — Backend services
- YAE-BOT — Yae Discord Bot
If this project helped you, consider:
- ⭐ Starring the repo
- 🔄 Sharing with friends
- ☕ Buying me a coffee (if you're feeling generous!)
Made with 💜 for the VTuber and developer community
Stay based, stay creative ✨