PromptMasterPro makes AI accessible in Ethiopian languages by turning natural, rough prompts (text or voice) into clear, AI-ready prompts in English and the selected local language.
Key idea: Speak or write in Amharic, Tigrigna, Afan Oromo, or English → the app improves your prompt and returns a stronger version in both English and your language.
Why it exists: Many users know what they want but struggle to express it clearly in English. PromptMasterPro removes that friction so students, creators, and professionals can get better results from AI tools.
Supported input: text and voice recordings
Languages: Amharic, English, Tigrigna, Afan Oromo
## Demo
Below are inline demo assets from the project so you can preview the app flow directly in the README.
Screenshots:
## Project structure (high level)
- Backend: backend — Node.js + TypeScript, Prisma ORM, audio upload + AI services
- Mobile: mobile — Expo React Native app, voice recording, UI for prompt editing and results
## Highlights & features
- Multilingual input (text + voice)
- Automatic transcription & translation
- Prompt improvement: returns optimized prompt in English and the user’s language
- Lightweight mobile UI designed for quick prompt iteration
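As a sketch of the core idea, the improvement flow can be modeled as a small typed result. All names below are illustrative assumptions, not the backend's actual API; the ISO 639-1 codes match the supported languages:

```typescript
// Hypothetical types sketching the prompt-improvement result — names are
// illustrative, not the project's actual API surface.
type SupportedLanguage = "am" | "en" | "ti" | "om"; // Amharic, English, Tigrigna, Afan Oromo

interface ImprovedPrompt {
  original: string;            // raw user input (typed or transcribed from voice)
  language: SupportedLanguage; // the user's selected language
  english: string;             // optimized prompt in English
  localized: string;           // optimized prompt in the user's language
}

const SUPPORTED_LANGUAGES: SupportedLanguage[] = ["am", "en", "ti", "om"];

// Guard that could run before routing input to transcription/improvement services.
function isSupportedLanguage(code: string): code is SupportedLanguage {
  return (SUPPORTED_LANGUAGES as string[]).includes(code);
}
```

Returning both `english` and `localized` in one payload is what lets the mobile UI show the two versions side by side for quick iteration.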
## Quick Start — Prerequisites
- Node.js (LTS)
- npm or yarn
- For mobile: Expo CLI (`npm install -g expo-cli`) or use `npx expo` if you prefer
## Backend — Quick start
- Open a terminal and install:

  ```bash
  cd backend
  npm install
  ```

- Set environment variables (copy `.env.example` if present) and run migrations:

  ```bash
  # set env variables (DB URL, API keys)
  npm run prisma:migrate  # or the project-specific migration command
  ```

- Start the development server:

  ```bash
  npm run dev
  ```

Check the API documentation and routes in `backend/API_DOCUMENTATION.txt`.
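The exact variables come from the project's `.env.example`; if one isn't present, a typical set for a Prisma backend that calls transcription/AI services might look like the sketch below. Every key name and value here is an illustrative placeholder, not the project's actual configuration:

```env
# Prisma database connection (used by the migration command) — placeholder URL
DATABASE_URL="postgresql://user:password@localhost:5432/promptmasterpro"

# Keys for third-party AI / speech-to-text services — placeholder names
AI_API_KEY="..."
SPEECH_TO_TEXT_API_KEY="..."

# Port for the development server — assumed default
PORT=3000
```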
## Mobile — Quick start
- Install dependencies and start Expo:

  ```bash
  cd mobile
  npm install
  npx expo start
  ```

- Open on a simulator or physical device using the Expo client. The app entry is in `app/`.
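If you want to skip the interactive Expo menu, the CLI accepts platform flags (standard `npx expo start` options, not project-specific):

```bash
npx expo start --android   # open on a connected Android device/emulator
npx expo start --ios       # open in an iOS simulator (macOS only)
npx expo start --tunnel    # tunnel connection when the device and PC are on different networks
```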
## Notable source locations
- Backend controllers: backend/src/controllers
- Backend services: backend/src/services
- Mobile app entry & screens: mobile/app
- Mobile native components: mobile/components
## Design & UX
- Focus on simple flows: record → preview → improve → copy/export
- Demo screens and flows are in the included video and screenshots (linked above)
## Contributing
- Fork the repo and open a branch for your work: `feature/<short-desc>`
- Create small, focused PRs with a clear description, plus screenshots or video when the UI changes
- If adding third-party services (speech-to-text, TTS, AI APIs), add configuration notes to `backend/README.md` or the project docs
## Play Store / Release notes
- The mobile app is prepared for Play Store packaging via Expo. Before release, verify API keys, privacy policy, and audio permissions. Consider integrating CI to automate builds.
## Privacy
- Audio and prompt data may be sent to third-party AI/transcription services—document the services used and the retention policy before publishing.
## License
- Choose a license (MIT, Apache-2.0, etc.) and add a `LICENSE` file to the repo.
## Next steps
- Add short `backend/README.md` and `mobile/README.md` files with expanded environment variables and local dev tips
- Write a polished GitHub release description
- Prepare a minimal `CONTRIBUTING.md` and a `LICENSE` file


