
TelemetryStack: AWS + Azure Hybrid Deployment (Beginner Tutorial)

This README walks you through building a sim-racing telemetry solution that spans AWS EC2 and Azure Docker Swarm, using Ansible for automation and Ansible Vault for secure credential storage. It is written for beginners, with every step spelled out.

Table of Contents

  • Overview
  • Local Environment Setup (Fedora)
  • Ansible Vault Basics
  • AWS EC2 Deployment
  • Azure Deployment (Docker Swarm + Blob Storage)
  • IAM and Networking Considerations
  • Repository Structure
  • Architecture Diagram
  • Final Steps and Demonstration

Overview

We will:

  • Create two VMs on AWS:
    1. Telemetry Collector (collects sim-racing data and pushes to Azure Blob).
    2. Dashboard VM (placeholder or other tasks you might want on AWS).
  • Create a Docker Swarm on Azure with 2 nodes (1 manager, 1 worker) to deploy Prometheus and Grafana.
  • Use Ansible from Fedora to automate creation, configuration, and management of these resources.
  • Use Ansible Vault to securely store AWS credentials (and optionally Azure secrets).

When finished, you’ll have:

  • Two AWS EC2 instances running your collector or other supporting components.
  • A Docker Swarm on Azure hosting the monitoring stack (Prometheus + Grafana).
  • Telemetry data stored in Azure Blob Storage.

Local Environment Setup (Fedora)

  1. System Updates:

    sudo dnf update -y
  2. Install Ansible:

    sudo dnf install ansible -y
  3. Install AWS CLI:

    sudo dnf install awscli -y

    Afterwards:

    aws configure

    Provide your AWS Access Key, Secret Key, default region, and output format.

  4. Install Azure CLI:

    sudo rpm -Uvh https://packages.microsoft.com/config/centos/8/packages-microsoft-prod.rpm
    sudo dnf install azure-cli -y

    Then login:

    az login

    Follow the on-screen prompts to authenticate your Azure account.

  5. (Optional) Git for version control:

    sudo dnf install git -y

Ansible Vault Basics

We’ll use Ansible Vault to securely store AWS credentials or other secrets. Let’s create a file named vault.yml (untracked in Git or carefully managed) and encrypt it.

  1. Create vault.yml:

    aws_access_key_id: "AKIAxxxxxx"
    aws_secret_access_key: "xxxxxxxxxxxxxxxxxxxxxxxxxx"

    Replace with your real credentials (or use environment variables if you prefer).

  2. Encrypt vault.yml:

    ansible-vault encrypt vault.yml

    You’ll be asked for a vault password. Remember it.

  3. Using vault.yml in your playbooks:

    vars_files:
      - vault.yml

    or

    ansible-playbook myplaybook.yml --ask-vault-pass

    Each time you run the playbook, you’ll be prompted for the vault password to decrypt vault.yml.
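
Typing the vault password on every run gets tedious; Ansible can read it from a file instead. A minimal sketch, assuming you keep the password in `~/.vault_pass.txt` (chmod 600, and listed in .gitignore so it never reaches the repo):

```ini
# ansible.cfg in the project root
[defaults]
vault_password_file = ~/.vault_pass.txt
```

With this in place, ansible-playbook decrypts vault.yml automatically and --ask-vault-pass is no longer needed.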


AWS EC2 Deployment

Create an AWS Key Pair (Optional)

If you need a new key pair to SSH into your future EC2 instances:

aws ec2 create-key-pair --key-name MyKeyPair --query 'KeyMaterial' --output text > MyKeyPair.pem
chmod 400 MyKeyPair.pem
mv MyKeyPair.pem ~/.ssh/   # the inventory file later in this guide expects the key here

(If you already have a key pair in AWS, skip this step.)

Securely Store AWS Credentials

In your vault.yml (already encrypted — open it with ansible-vault edit vault.yml), add:

aws_access_key_id: "AKIAxxxxxx"
aws_secret_access_key: "xxxxxxxxxxxxxxxxxxxx"
aws_region: "us-east-1"
key_name: "MyKeyPair"
instance_type: "t3.micro"
ami_id: "ami-0a91cd140a1fc148a"

(Add any variable you want to keep secret or simply manage in one place. Double-check that ami_id is a current AMI for your region; the example ID above may be outdated.)

Ansible Playbook to Launch EC2 Instances

Create a file named create_ec2.yml:

---
- name: Create Telemetry Collector and Dashboard on AWS
  hosts: localhost
  gather_facts: false

  vars_files:
    - vault.yml # Encrypted. Make sure to run with --ask-vault-pass

  tasks:
    # Requires the amazon.aws collection: ansible-galaxy collection install amazon.aws
    - name: Launch Telemetry Collector VM
      amazon.aws.ec2_instance:
        access_key: "{{ aws_access_key_id }}"
        secret_key: "{{ aws_secret_access_key }}"
        region: "{{ aws_region }}"
        key_name: "{{ key_name }}"
        instance_type: "{{ instance_type }}"
        image_id: "{{ ami_id }}"
        wait: true
        name: "TelemetryCollector"
      register: collector_ec2

    - name: Launch Telemetry Dashboard VM
      amazon.aws.ec2_instance:
        access_key: "{{ aws_access_key_id }}"
        secret_key: "{{ aws_secret_access_key }}"
        region: "{{ aws_region }}"
        key_name: "{{ key_name }}"
        instance_type: "{{ instance_type }}"
        image_id: "{{ ami_id }}"
        wait: true
        name: "TelemetryDashboard"
      register: dashboard_ec2

    - name: Print collector public IP
      ansible.builtin.debug:
        msg: "{{ collector_ec2.instances[0].public_ip_address }}"

    - name: Print dashboard public IP
      ansible.builtin.debug:
        msg: "{{ dashboard_ec2.instances[0].public_ip_address }}"

Run it:

ansible-playbook create_ec2.yml --ask-vault-pass

Because the play targets hosts: localhost, Ansible's implicit local inventory is enough — no -i flag is needed. You’ll see the newly created EC2 instances’ public IP addresses in the debug output.

Inventory Updates

After creation, create or update your inventory file (aws_inventory.ini) with the IP addresses:

[collector_aws]
collector.aws ansible_host=<Collector_Public_IP> ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/MyKeyPair.pem

[dashboard_aws]
dashboard.aws ansible_host=<Dashboard_Public_IP> ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/MyKeyPair.pem
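
Alternatively, create_ec2.yml can write this file for you. A sketch of an extra task to append after the debug tasks (it assumes the registered collector_ec2 / dashboard_ec2 results; check whether your module version exposes public_ip or public_ip_address and match whatever the debug output used):

```yaml
    - name: Write AWS inventory from registered results
      ansible.builtin.copy:
        dest: ./aws_inventory.ini
        content: |
          [collector_aws]
          collector.aws ansible_host={{ collector_ec2.instances[0].public_ip_address }} ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/MyKeyPair.pem

          [dashboard_aws]
          dashboard.aws ansible_host={{ dashboard_ec2.instances[0].public_ip_address }} ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/MyKeyPair.pem
```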

Configure Telemetry Collector & Dashboard on EC2

  1. Example Playbook (ec2_configure.yml):

    ---
    - name: Configure Collector & Dashboard on AWS
      hosts: collector_aws
      become: yes
      tasks:
        - name: Install Docker
          apt:
            name: ["docker.io", "docker-compose"]
            state: present
            update_cache: yes
    
        - name: Start Docker
          service:
            name: docker
            state: started
            enabled: yes
    
        - name: Pull and run telemetry collector
          shell: |
            # Remove any stale container so reruns of the playbook don't fail on the name
            docker rm -f telemetry_collector 2>/dev/null || true
            docker run -d \
              --restart unless-stopped \
              --name telemetry_collector \
              -p 5000:5000 \
              myregistry/telemetry-collector:latest
    
    - name: Configure Dashboard
      hosts: dashboard_aws
      become: yes
      tasks:
        - name: Install Docker
          apt:
            name: ["docker.io", "docker-compose"]
            state: present
            update_cache: yes
    
        - name: Start Docker
          service:
            name: docker
            state: started
            enabled: yes
    
        - name: Placeholder container
          shell: |
            docker rm -f dashboard_placeholder 2>/dev/null || true
            docker run -d \
              --restart unless-stopped \
              --name dashboard_placeholder \
              -p 8080:80 \
              nginx:alpine
  2. Run:

    ansible-playbook -i aws_inventory.ini ec2_configure.yml

Azure Deployment (Docker Swarm + Blob Storage)

Create Azure VMs for Swarm

# Resource group
az group create --name TelemetryStackRG --location eastus

# Manager VM
az vm create \
  --resource-group TelemetryStackRG \
  --name swarm-manager \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys

# Worker VM
az vm create \
  --resource-group TelemetryStackRG \
  --name swarm-worker \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys

Initialize Docker Swarm

  1. SSH into the manager:

    az vm list-ip-addresses --resource-group TelemetryStackRG --output table
    # copy manager's Public IP
    ssh azureuser@<ManagerPublicIP>
  2. Install Docker:

    sudo apt-get update
    sudo apt-get install -y docker.io
    sudo systemctl enable docker
    sudo systemctl start docker
    # Advertise the manager's *private* IP so swarm traffic stays inside the Azure VNet
    sudo docker swarm init --advertise-addr <ManagerPrivateIP>

    Copy the docker swarm join command (with its token) that appears. You can reprint it later with sudo docker swarm join-token worker.

  3. Worker:

    ssh azureuser@<WorkerPublicIP>
    sudo apt-get update
    sudo apt-get install -y docker.io
    sudo systemctl enable docker
    sudo systemctl start docker
    # Use the join command from the manager
    sudo docker swarm join --token <token> <ManagerPrivateIP>:2377

Deploy Prometheus + Grafana via Stack

  1. On manager node (swarm-manager):

    cat <<EOF > prometheus.yml
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: telemetry_collector
        static_configs:
          - targets: ['<Collector_Public_IP>:5000']
    EOF
  2. Create telemetry-stack.yml:

    cat <<EOF > telemetry-stack.yml
    version: '3.7'
    services:
      prometheus:
        image: prom/prometheus:latest
        ports:
          - '9090:9090'
        configs:
          - source: prometheus_config
            target: /etc/prometheus/prometheus.yml
        deploy:
          replicas: 1
          placement:
            constraints: [node.role == manager]
    
      grafana:
        image: grafana/grafana:latest
        ports:
          - '3000:3000'
        deploy:
          replicas: 1
    configs:
      prometheus_config:
        file: ./prometheus.yml
    EOF
  3. Deploy Stack:

    docker stack deploy -c telemetry-stack.yml telemetrystack
    docker service ls
    docker stack ps telemetrystack
    • Confirm prometheus and grafana are running.

Azure Blob Storage Setup

  1. Create Storage Account (if not done — names must be globally unique, so change telemetrydatastorage if it is taken):

    az storage account create \
      --name telemetrydatastorage \
      --resource-group TelemetryStackRG \
      --location eastus \
      --sku Standard_LRS
  2. Create Container:

    az storage container create \
      --name telemetrydata \
      --account-name telemetrydatastorage
  3. Generate SAS Token (Optional):

    az storage container generate-sas \
      --name telemetrydata \
      --account-name telemetrydatastorage \
      --permissions rwdl \
      --expiry 2025-01-01 \
      --output tsv

    This token can be used in your Telemetry Collector for direct upload to Blob.
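
As a sketch of how the collector might use that SAS token, here is a minimal upload against the Blob Storage REST API using only the Python standard library. The account, container, and blob names match this tutorial; put_blob is an illustrative helper, and a production collector would more likely use the azure-storage-blob SDK:

```python
import urllib.request

def build_blob_url(account: str, container: str, blob_name: str, sas_token: str) -> str:
    """Compose the URL for one blob, authenticated by the SAS token's query string."""
    return f"https://{account}.blob.core.windows.net/{container}/{blob_name}?{sas_token}"

def put_blob(url: str, payload: bytes) -> int:
    """Upload bytes as a block blob. Azure requires the x-ms-blob-type header on PUT."""
    req = urllib.request.Request(url, data=payload, method="PUT")
    req.add_header("x-ms-blob-type", "BlockBlob")
    with urllib.request.urlopen(req) as resp:
        return resp.status  # Azure returns 201 Created on success

if __name__ == "__main__":
    # Build (but don't send) an example URL; a real run needs a valid SAS token.
    print(build_blob_url("telemetrydatastorage", "telemetrydata",
                         "laps/lap-0001.json", "<SAS_TOKEN>"))
```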


IAM and Networking Considerations

  1. AWS:

    • Use a dedicated IAM user or role for EC2 creation.
    • Grant a minimal policy: EC2 permissions only, plus S3 access only if you actually use S3.
  2. Azure:

    • Lock down your resource group and VMs with network security group rules (az network nsg rule create).
    • The Docker Swarm manager needs inbound 2377 (ideally from the worker only), plus 3000 and 9090 if you expose Grafana and Prometheus publicly.
    • Use SAS tokens or service principals with minimal privileges for Blob Storage.
  3. Security Groups & NSGs:

    • On AWS, open only essential ports (22 for SSH, 5000 if your collector needs inbound traffic, etc.).
    • On Azure, open 22 (SSH), 3000 (Grafana), 9090 (Prometheus).

Repository Structure

TelemetryStack/
├── ansible/
│   ├── create_ec2.yml
│   ├── ec2_configure.yml
│   ├── aws_inventory.ini
│   └── vault.yml  # Encrypted with ansible-vault
├── docker/
│   ├── telemetry_collector/
│   │   ├── Dockerfile
│   │   └── collector.py
│   └── swarm/
│       ├── telemetry-stack.yml
│       └── prometheus.yml
├── diagrams/
│   └── architecture.png
└── README.md
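
For reference, the collector.py listed above can start as little more than a loop that samples telemetry and emits JSON lines. A hedged sketch — read_sample is a stand-in for your sim's real telemetry API, and the field names are illustrative:

```python
import json
import random
import time

def read_sample() -> dict:
    """Stand-in for the real sim-racing telemetry source."""
    return {
        "ts": time.time(),
        "speed_kph": round(random.uniform(0.0, 320.0), 1),
        "rpm": random.randint(800, 9000),
    }

def to_json_line(sample: dict) -> str:
    """One JSON object per line: easy to append to a spool file or a blob."""
    return json.dumps(sample, sort_keys=True)

if __name__ == "__main__":
    for _ in range(3):
        print(to_json_line(read_sample()))
        time.sleep(0.1)
```

From here, batches of lines can be pushed to the Blob container created earlier.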

Architecture Diagram

Fedora Laptop (AWS + Azure CLI, Ansible, Vault)
             |
             |--------------------- (AWS) 2 x EC2 Instances
             |                         ├── Telemetry Collector VM
             |                         └── Dashboard VM (placeholder)
             |
             |--------------------- (Azure) 2 x VMs in Docker Swarm
                                     ├── Manager Node (Prometheus + Grafana)
                                     └── Worker Node
Azure Blob Storage <---- Telemetry Data Pushed from Collector

Final Steps and Demonstration

  1. Validate:

    • Check AWS console for your 2 EC2 VMs.
    • Check Azure portal for your manager and worker VMs, verify Docker Swarm status.
  2. Telemetry Flow:

    • Confirm your collector container can push data to Azure Blob (test with a simple script).
    • Confirm Prometheus and Grafana are accessible from the manager node’s public IP on port 3000 (Grafana) and 9090 (Prometheus).
    • Adjust firewall rules as needed.
  3. Live Demo:

    • Show how a single ansible-playbook run can spin resources up or tear them down, and how quickly the collector and dashboard come back online.
    • Emphasize your best practices for secrets (Ansible Vault) and minimal open ports.

Congratulations! You now have a multi-cloud, multi-node sim-racing telemetry solution:

  • AWS handles the core collector VMs.
  • Azure hosts a Docker Swarm for your monitoring stack.
  • Telemetry data is secured in Azure Blob Storage.
  • All orchestrated by Ansible from your Fedora laptop with secure credential storage in Ansible Vault.

Feel free to adapt or extend for your particular environment, or swap services to suit your needs. Happy sim-racing telemetry!
