DevOps Automation Project

An automation project for DevOps processes built on LLM-based agents, currently focused on local LLM deployment with Ollama.

Features

  • 🤖 LLM Integration for intelligent automation via Ollama (a request sketch follows this list)
  • 📨 Notifications support via Telegram and Slack
  • 🔄 Asynchronous task processing
  • 📝 Logging and monitoring
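
The Ollama call itself lives in core/llm.py, which isn't reproduced on this page. As a point of reference, a minimal non-streaming request against Ollama's local REST API might look like the sketch below; the host, model name, and prompt are placeholders, not values taken from this repository.

import requests

OLLAMA_HOST = "http://localhost:11434"  # Ollama's default local endpoint

def generate(prompt: str, model: str = "mistral") -> str:
    """Send a single non-streaming completion request to Ollama."""
    resp = requests.post(
        f"{OLLAMA_HOST}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]  # full completion text when stream=False

print(generate("Summarize the current state of the nginx deployment."))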

Requirements

  • Python 3.8+
  • Ollama for local LLM

Installation

  1. Install Ollama:
# For macOS or Linux
curl -fsSL https://ollama.ai/install.sh | sh
  2. Clone the repository:
git clone git@github.com:whoekage/devops_agent.git
cd devops_agent
  3. Create a virtual environment:
python -m venv venv
source venv/bin/activate  # For Linux/macOS
# or
.\venv\Scripts\activate  # For Windows
  4. Install dependencies:
pip install -r requirements.txt
  5. Copy the example configuration file:
cp config.yaml.example config.yaml
  6. Edit config.yaml and set the required parameters:
  • Ollama model configuration
  • Telegram and Slack tokens
  • Other parameters as needed

Running

  1. Start Ollama and pull the required model (a readiness-check sketch follows these steps):
# Start Ollama service
ollama serve

# In another terminal, pull the model (e.g., mistral)
ollama pull mistral
  2. Activate the virtual environment (if not already active):
source venv/bin/activate  # For Linux/macOS
# or
.\venv\Scripts\activate  # For Windows
  3. Run the main application:
python main.py
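
Before step 3, you can optionally confirm that Ollama is reachable and the model has been pulled. A small standalone check (not part of the repository) against Ollama's /api/tags endpoint, which lists locally available models:

import requests

def ollama_ready(host: str = "http://localhost:11434", model: str = "mistral") -> bool:
    """Return True if Ollama responds and the given model is pulled locally."""
    try:
        resp = requests.get(f"{host}/api/tags", timeout=5)
        resp.raise_for_status()
    except requests.RequestException:
        return False
    names = (m["name"] for m in resp.json().get("models", []))
    return any(n.split(":")[0] == model for n in names)  # names look like "mistral:latest"

print("Ollama ready:", ollama_ready())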

Project Structure

.
├── agents/                    # Agents for various tasks
│   └── k8s_agent.py           # Kubernetes management agent
├── core/                      # Application core
│   ├── llm.py                 # LLM integration
│   ├── database.py            # Database operations
│   └── notifications.py       # Notification system
├── workers/                   # Worker processes
│   ├── llm_worker.py          # LLM task processor
│   └── notification_worker.py # Notification processor
├── utils/                     # Utility functions
│   └── logger.py              # Logging
├── config.yaml                # Application configuration
├── main.py                    # Application entry point
└── requirements.txt           # Project dependencies
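
The worker code itself isn't reproduced on this page, but the layout suggests a queue-based flow: workers consume tasks, call the LLM, and hand results to the notification system. The sketch below illustrates that pattern with a Telegram example; the queue wiring and function names are hypothetical, though the sendMessage call uses the public Telegram Bot API.

import asyncio
import requests

def send_telegram(token: str, chat_id: str, text: str) -> None:
    """Deliver a message via the public Telegram Bot API."""
    requests.post(
        f"https://api.telegram.org/bot{token}/sendMessage",
        json={"chat_id": chat_id, "text": text},
        timeout=10,
    ).raise_for_status()

async def notification_worker(queue: asyncio.Queue, token: str, chat_id: str) -> None:
    """Consume results from the queue and forward each one as a notification."""
    loop = asyncio.get_running_loop()
    while True:
        message = await queue.get()
        # requests is blocking, so run the HTTP call off the event loop
        # (run_in_executor keeps this compatible with Python 3.8).
        await loop.run_in_executor(None, send_telegram, token, chat_id, message)
        queue.task_done()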

Configuration

Main settings are stored in config.yaml. Configuration example (a loading sketch follows):

llm:
  type: "ollama"
  model: "mistral"
  host: "http://localhost:11434"
  temperature: 0.7
  max_tokens: 2000

notifications:
  telegram:
    token: "your_telegram_token"
    chat_id: "your_chat_id"
  slack:
    token: "your_slack_token"
    channel: "#notifications"  # keep the quotes: an unquoted # starts a YAML comment

database:
  path: "history.db"
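
How main.py actually loads this file isn't shown on this page; a minimal loading sketch with PyYAML, using the key names from the example above:

import yaml  # PyYAML

# Sketch only: assumes config.yaml sits in the working directory and uses
# the key layout from the example above.
with open("config.yaml") as fh:
    config = yaml.safe_load(fh)

llm_cfg = config["llm"]
print(llm_cfg["model"], llm_cfg["host"])  # e.g. mistral http://localhost:11434
telegram = config["notifications"]["telegram"]
print("Telegram chat:", telegram["chat_id"])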

Development

To contribute:

  1. Fork the repository
  2. Create a branch for your changes
  3. Make your changes and open a pull request

TODO

  • Implement core notification functionality:
    • Add Telegram bot commands and interactions
    • Implement Slack app integration and slash commands
    • Add message templates and formatting
    • Implement notification priorities and rules
  • Add support for multiple LLM providers (OpenAI, Anthropic)
  • Implement model switching functionality
  • Add Kubernetes integration
  • Create comprehensive test suite
  • Add metrics collection and monitoring
  • Implement rate limiting and queue management
  • Add support for more notification channels (Discord, Email)
  • Create web interface for management
  • Add documentation for API endpoints
  • Implement backup and restore functionality
  • Add support for custom agent development
  • Create examples and use cases
  • Add CI/CD pipeline configuration

License

This project is distributed under the MIT License. See the LICENSE file for details.
