A modern web interface for interacting with the AutoGLM API (z1 AutoGLM). This project provides both a web-based UI and a command-line interface for sending tasks to AutoGLM and receiving real-time responses.
## Quick Start

Before running, set your AutoGLM API token.

Windows:

```bash
set AUTOGLM_AUTOGLM_API_TOKEN=your_token
```

Linux/Mac:

```bash
export AUTOGLM_AUTOGLM_API_TOKEN=your_token
```

Start the web interface:

```bash
python run.py
```

or

```bash
python main.py
```

Then open your browser and visit http://localhost:8000.

Run the CLI client:

```bash
python auto.py
```

### Web Interface

- Open http://localhost:8000 in your browser
- Enter your task in the input box
- Click the "Send" button
- View the returned results

### CLI

- Enter your task directly after running
- Press Enter to send
- Type `quit` to exit
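The CLI flow above (read a line, send on Enter, type `quit` to exit) can be sketched as a small input loop. This is an illustrative sketch, not the project's actual `auto.py`; `send_task` is a hypothetical stand-in for the real WebSocket send:

```python
def handle_line(line, send_task):
    """Process one line of CLI input; return False to exit the loop."""
    text = line.strip()
    if text == "quit":
        return False            # "quit" exits the client
    if text:
        send_task(text)         # forward non-empty input as a task
    return True


def run_cli(lines, send_task):
    """Read tasks from an iterable of lines until EOF or "quit"."""
    for line in lines:
        if not handle_line(line, send_task):
            break
```

In the real client, `send_task` would forward the text over the WebSocket connection and print the streamed responses.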
## Troubleshooting

**Connection Failed**

- Check that the API token is set correctly
- Check your network connection
- Review the error messages in the console output

**Port Already in Use**

- Change the port number:

```bash
set AUTOGLM_PORT=8001
```

**Need Debug Information**

- Enable debug mode:

```bash
set AUTOGLM_DEBUG=true
```
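When diagnosing a port conflict, a quick way to check whether something is already listening on the default port is a short stdlib helper like this (an illustrative snippet, not part of this project):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the TCP connection succeeds,
        # i.e. a server is accepting connections on that port
        return s.connect_ex((host, port)) == 0
```

For example, `port_in_use(8000)` returning `True` before you start the server explains an "address already in use" error.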
## Features

- Web Interface: Clean, responsive web UI for interacting with AutoGLM
- CLI Client: Command-line interface for direct WebSocket communication
- WebSocket Support: Real-time bidirectional communication with AutoGLM API
- Configuration Management: Environment-based configuration with `.env` support
- Logging: Structured logging for debugging and monitoring
- Auto-Reconnection: Automatic reconnection with exponential backoff
- Error Handling: Comprehensive error handling and graceful degradation
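The auto-reconnection feature mentions exponential backoff; the general technique looks roughly like this (a sketch of the common pattern with jitter, not the project's actual `websocket_client.py` logic; parameter names are illustrative):

```python
import random

def backoff_delays(max_attempts=5, base=1.0, cap=30.0):
    """Yield one wait time per reconnect attempt: base * 2^attempt,
    capped at `cap`, with random jitter to avoid thundering herds."""
    for attempt in range(max_attempts):
        delay = min(cap, base * (2 ** attempt))
        yield delay * random.uniform(0.5, 1.0)  # jitter in [50%, 100%]
```

A reconnect loop would sleep for each yielded delay between attempts, giving up after `max_attempts` (which is what `AUTOGLM_MAX_RECONNECT_ATTEMPTS` presumably controls).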
## Requirements

- Python 3.8 or higher
- An AutoGLM API token from Zhipu AI
## Installation

```bash
git clone https://github.com/yourusername/AutoGLMUI.git
cd AutoGLMUI
pip install -e .
```

For development dependencies:

```bash
pip install -e ".[dev]"
```

Create a `.env` file in the project root:

```bash
cp .env.example .env
```

Edit `.env` with your configuration:
```env
# AutoGLM API Configuration
AUTOGLM_AUTOGLM_API_TOKEN=your_autoglm_api_token_here

# Server Configuration
AUTOGLM_HOST=127.0.0.1
AUTOGLM_PORT=8000
AUTOGLM_DEBUG=false

# WebSocket Configuration
AUTOGLM_WEBSOCKET_TIMEOUT=30
AUTOGLM_MAX_RECONNECT_ATTEMPTS=5

# Logging Configuration
AUTOGLM_LOG_LEVEL=INFO
```

Start the web server:

```bash
python main.py
```

Or run the development server with hot reload:

```bash
AUTOGLM_DEBUG=true python main.py
```

Open your browser and navigate to http://localhost:8000.
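At startup, `src/config.py` presumably reads the `AUTOGLM_`-prefixed variables shown earlier. A minimal stdlib sketch of that kind of env-based loading (illustrative only; the keys, defaults, and casts here are assumptions, not the project's actual implementation):

```python
import os

PREFIX = "AUTOGLM_"

def load_config(env=os.environ):
    """Collect AUTOGLM_* variables into a dict, falling back to typed defaults."""
    def get(name, default, cast=str):
        raw = env.get(PREFIX + name)
        return cast(raw) if raw is not None else default

    return {
        "host": get("HOST", "127.0.0.1"),
        "port": get("PORT", 8000, int),
        "debug": get("DEBUG", "false").lower() == "true",
        "websocket_timeout": get("WEBSOCKET_TIMEOUT", 30, int),
    }
```

The advantage of this pattern is that `.env`-sourced values (strings) and hard defaults pass through one place where each setting gets its final type.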
Run the CLI client:

```bash
python auto.py
```

Or with a custom token:

```bash
python auto.py --token your_api_token
```

You can also configure the application using environment variables:

```bash
export AUTOGLM_AUTOGLM_API_TOKEN=your_token
export AUTOGLM_HOST=0.0.0.0
export AUTOGLM_PORT=8080
python main.py
```

## API Endpoints

- `GET /` - Main web interface
- `POST /api/send-task` - Send a task to AutoGLM
- `GET /api/status` - Get WebSocket connection status
- `GET /api/responses` - Get recent responses
- `GET /health` - Health check endpoint
## Development

The project uses Black for code formatting and isort for import sorting:

```bash
black .
isort .
```

Type checking and linting:

```bash
mypy src/
flake8 src/
```

Run the tests:

```bash
pytest
```

Install pre-commit hooks:
```bash
pre-commit install
```

## Project Structure

```
AutoGLMUI/
├── src/
│   ├── __init__.py
│   ├── app.py                # FastAPI application
│   ├── config.py             # Configuration management
│   ├── logging_config.py     # Logging setup
│   ├── websocket_client.py   # WebSocket client
│   └── dependencies.py       # Dependency injection
├── static/
│   └── style.css             # CSS styles
├── templates/
│   └── index.html            # Web interface template
├── main.py                   # Web server entry point
├── auto.py                   # CLI client
├── pyproject.toml            # Project configuration
├── .env.example              # Environment template
└── README.md                 # This file
```
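`logging_config.py` provides the structured-logging feature listed above; a common stdlib approach is a JSON formatter along these lines (an illustrative sketch, not the project's actual setup):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

def setup_logging(level="INFO"):
    """Route root logging through the JSON formatter at the given level."""
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    root = logging.getLogger()
    root.handlers[:] = [handler]     # replace any existing handlers
    root.setLevel(level)
```

One-object-per-line output keeps logs grep-able while staying machine-parseable for monitoring, which is presumably what `AUTOGLM_LOG_LEVEL` feeds into.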
## Security Notes

- Never commit your API token to version control
- Use environment variables or a `.env` file for sensitive configuration
- The web interface is intended for local development use only
- Consider adding authentication for production deployments
## Contributing

- Fork the repository
- Create a feature branch: `git checkout -b feature-name`
- Make your changes and ensure they pass tests
- Format your code: `black . && isort .`
- Commit your changes: `git commit -am 'Add some feature'`
- Push to the branch: `git push origin feature-name`
- Submit a pull request
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Support

For issues and questions:
- Create an issue on GitHub
- Check the documentation
- Join our community discussions
## Acknowledgments

- Zhipu AI for the AutoGLM API
- FastAPI for the web framework
- WebSocket-Client for WebSocket support