# 🧪 Mock OpenAI Chat API Server

This project simulates the OpenAI `/v1/chat/completions` API using a local FastAPI server.

It logs every incoming request and returns a dummy `"Lorem Ipsum..."` response, for debugging and development purposes.

---

## ✅ Features

- Local server that mimics the OpenAI ChatCompletion endpoint
- Logs the full request (headers + body) to `requests_log.json`
- Returns a dummy OpenAI-style response
- Drop-in replacement for `openai.ChatCompletion.create(...)`

---

## ⚙️ Requirements

- Python 3.7+
- FastAPI
- Uvicorn

---

## 📦 Installation & Setup

### 1. Clone or copy this project into a local folder

### 2. Create and activate a virtual environment

```bash
python -m venv venv
```

Then activate it:

- **Windows:**

```bash
venv\Scripts\activate
```

- **macOS/Linux:**

```bash
source venv/bin/activate
```

### 3. Install dependencies

```bash
pip install fastapi uvicorn
```

---

## 🚀 How to Start the Server

Use this command to launch the mock API server:

```bash
python -m uvicorn mock_openai_server:app --reload --port 8000
```

It will be available at:

```
http://localhost:8000/v1/chat/completions
```

---

## 🧪 How to Use with OpenAI Python SDK

Update your client to point at the mock server. This uses the legacy `openai` 0.x SDK interface; `openai.api_base` and `openai.ChatCompletion` were removed in v1.x of the SDK:

```python
import openai

openai.api_key = "sk-dummy"
# Include the /v1 prefix: the SDK appends /chat/completions to api_base.
openai.api_base = "http://localhost:8000/v1"

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response)
```

---

## 📄 Example Dummy Response

```text
ID: chatcmpl-mock123
Model: gpt-4
Created: <current timestamp>
Role: assistant
Message: Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Finish reason: stop
Tokens used: prompt=10, completion=12, total=22
```

---

## 📝 Output Logs

All requests and responses are logged to `requests_log.json`. Each log entry contains:

- Timestamp
- Headers (sent by the client)
- JSON body (messages, model, etc.)
- Dummy response

You can use this log to debug what your app sends to OpenAI.
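
Assuming the server writes one JSON object per line (an assumption; adjust the parsing to match the actual format of `requests_log.json`), the log can be inspected with a few lines of Python:

```python
import json

def load_log_entries(path: str = "requests_log.json") -> list:
    """Parse a log file with one JSON object per line, skipping blank lines."""
    entries = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                entries.append(json.loads(line))
    return entries

# Example: print the model and message count of each logged request.
# for entry in load_log_entries():
#     body = entry.get("body", {})
#     print(body.get("model"), len(body.get("messages", [])))
```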

---

## 📂 Project Structure

```
.
├── mock_openai_server.py   # FastAPI mock server
├── requests_log.json       # Saved request/response logs
└── README.md               # Documentation
```

---

## 🧰 Optional: Save Installed Packages

To track your environment:

```bash
pip freeze > requirements.txt
```

Later, you can restore it with:

```bash
pip install -r requirements.txt
```

---

## 📌 Notes

- No actual API calls are made to OpenAI
- Great for debugging payload formats, headers, and SDK integrations
- For local development only

---

## 🪪 License

MIT License