This project implements a secure OpenAI API proxy using Netlify Functions with rate limiting (5 requests per IP per hour).
- ✅ Securely proxies requests to the OpenAI API (see the sketch after this list)
- 🔒 Keeps your API key secure on the server side
- ⏱️ Rate limiting: 5 requests per IP address per hour
- 📊 Returns rate limit information in response headers
- 🛡️ Error handling and validation
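Under the hood, the browser calls the Netlify function rather than OpenAI, and the function attaches the API key server-side. The sketch below shows that call shape, assuming Netlify's default `/.netlify/functions/openai-proxy` route, an OpenAI-style request body, and a 429 response when the hourly budget is spent; the real contract is whatever `netlify/functions/openai-proxy.ts` accepts.

```typescript
// Sketch only: a raw fetch to the proxy endpoint. The body mirrors OpenAI's
// chat completions API; the exact shape is defined by openai-proxy.ts.
async function askProxy(prompt: string): Promise<unknown> {
  const res = await fetch('/.netlify/functions/openai-proxy', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: prompt }],
    }),
  });

  if (res.status === 429) {
    // Assumed: the function signals an exhausted hourly budget with HTTP 429
    throw new Error(`Rate limited until ${res.headers.get('X-RateLimit-Reset')}`);
  }

  return res.json();
}
```

In practice you would use the `OpenAIService` utility described below instead of a raw fetch; this is just to make the request flow concrete.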
```
├── netlify/
│   └── functions/
│       ├── openai-proxy.ts    # Netlify function that proxies OpenAI requests
│       └── README.md          # Detailed function documentation
├── src/
│   ├── components/
│   │   └── OpenAIExample.tsx  # Example React component using the proxy
│   └── utils/
│       └── openaiService.ts   # Client utility for interacting with the proxy
├── netlify.toml               # Netlify configuration
└── .env.example               # Template for environment variables
```
Create a `.env` file based on the `.env.example` template:

```
OPENAI_API_KEY=your_openai_api_key_here
```
For production deployment, add your OpenAI API key to your Netlify environment variables:
- Go to your Netlify site dashboard
- Navigate to Site settings > Environment variables
- Add a new variable:
  - Key: `OPENAI_API_KEY`
  - Value: Your OpenAI API key
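Alternatively (assuming the Netlify CLI is installed and your site is linked), the same variable can be set from the terminal:

```
netlify env:set OPENAI_API_KEY your_openai_api_key_here
```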
Install dependencies:

```
npm install
```

Start the development server:

```
npm run dev
```

This will start both the Vite development server and the Netlify Functions development server.

Deploy to Netlify:

```
npm run build
netlify deploy --prod
```

Import the `OpenAIService` utility in your components:
```typescript
import OpenAIService from '../utils/openaiService';

// Simple completion
const response = await OpenAIService.getCompletion('Your prompt here');

// Advanced usage
const { data, rateLimit } = await OpenAIService.createChatCompletion({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Your prompt here' }],
});
```

The project includes an `OpenAIExample` component that demonstrates how to use the OpenAI proxy. You can import and use this component in your application:
```tsx
import OpenAIExample from './components/OpenAIExample';

function App() {
  return (
    <div className="App">
      <OpenAIExample />
    </div>
  );
}
```

The proxy implements rate limiting of 5 requests per IP address per hour. This is done using an in-memory store in the Netlify function.
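The real implementation lives in `openai-proxy.ts`; as a rough sketch of the pattern (identifier names here are illustrative, not the function's actual ones), an in-memory limiter can look like this:

```typescript
// Illustrative sketch of an in-memory, per-IP rate limiter.
type Entry = { count: number; windowStart: number };

const RATE_LIMIT = 5;                      // requests allowed per window
const RATE_LIMIT_WINDOW = 60 * 60 * 1000;  // 1 hour in milliseconds
const hits = new Map<string, Entry>();     // keyed by client IP

function isRateLimited(ip: string, now = Date.now()): boolean {
  const entry = hits.get(ip);

  // First request from this IP, or the previous window has expired
  if (!entry || now - entry.windowStart >= RATE_LIMIT_WINDOW) {
    hits.set(ip, { count: 1, windowStart: now });
    return false;
  }

  entry.count += 1;
  return entry.count > RATE_LIMIT;
}
```

The trade-off is that the map lives in function memory, so counts are per instance and vanish on cold starts; that is why the customization section below points to Redis or DynamoDB for heavier traffic.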
Rate limit information is returned in the response headers:
- `X-RateLimit-Limit`: Maximum number of requests allowed per hour (5)
- `X-RateLimit-Remaining`: Number of requests remaining in the current window
- `X-RateLimit-Reset`: ISO timestamp when the rate limit will reset
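On the client, these values can be read straight off the response; a minimal sketch (the header names match the list above, everything else is illustrative):

```typescript
// Sketch: surfacing rate-limit headers from a proxy response.
function readRateLimit(res: Response) {
  return {
    limit: Number(res.headers.get('X-RateLimit-Limit')),
    remaining: Number(res.headers.get('X-RateLimit-Remaining')),
    resetsAt: res.headers.get('X-RateLimit-Reset'), // ISO timestamp
  };
}
```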
For more details, see the function documentation.
- Your OpenAI API key is stored securely as an environment variable and never exposed to the client
- All requests are validated before being forwarded to OpenAI (see the sketch after this list)
- Rate limiting helps prevent abuse and excessive costs
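The exact validation rules live in `openai-proxy.ts`; purely as an illustration of the idea (the helper name and checks here are hypothetical), a request might be vetted like this before anything is forwarded:

```typescript
// Illustrative validation sketch — the real checks are in openai-proxy.ts.
type ChatMessage = { role: string; content: string };

function validateBody(raw: string | null): { model: string; messages: ChatMessage[] } {
  if (!raw) throw new Error('Missing request body');

  const body = JSON.parse(raw);
  if (typeof body.model !== 'string' || !Array.isArray(body.messages) || body.messages.length === 0) {
    throw new Error('Request must include a model and a non-empty messages array');
  }
  return body;
}
```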
To modify the rate limit settings, edit the constants in `netlify/functions/openai-proxy.ts`:

```typescript
const RATE_LIMIT = 5; // Number of requests per window
const RATE_LIMIT_WINDOW = 60 * 60 * 1000; // Window size in milliseconds (1 hour)
```

For production use with high traffic, consider implementing a more persistent rate-limiting solution using Redis or DynamoDB.
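As one possible shape for that, a Redis-backed counter shares state across function instances. The sketch below uses the `ioredis` client and a `REDIS_URL` environment variable, neither of which this project currently includes:

```typescript
import Redis from 'ioredis';

// Illustrative Redis-backed rate limiter; ioredis and REDIS_URL are
// assumed additions, not part of this project today.
const redis = new Redis(process.env.REDIS_URL ?? 'redis://127.0.0.1:6379');

const RATE_LIMIT = 5;
const RATE_LIMIT_WINDOW_SECONDS = 60 * 60; // 1 hour

async function isRateLimited(ip: string): Promise<boolean> {
  const key = `rate:${ip}`;
  const count = await redis.incr(key); // atomic per-IP counter

  if (count === 1) {
    // First hit in this window: start the expiry clock
    await redis.expire(key, RATE_LIMIT_WINDOW_SECONDS);
  }

  return count > RATE_LIMIT;
}
```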
This project is developed and maintained by Opace Digital Agency, a Birmingham-based web design and development agency specializing in modern web solutions.
- Web Design & Development - Professional, responsive websites
- Next.js & React Development - Modern web applications
- Frontend Development - Cutting-edge user interfaces
- WordPress Development - Custom themes and plugins
- E-commerce Solutions - Scalable online stores
- 🌐 Website: opace.agency
- 📧 Services: Web Design & Development
- 💼 GitHub: @OpaceDigitalAgency
- 📍 Location: Birmingham, UK