# TelemetryStack: AWS + Azure Hybrid Deployment (Beginner Tutorial)

This README walks you through building a sim-racing telemetry solution that spans AWS EC2 and an Azure Docker Swarm, using Ansible for automation and Ansible Vault for secure credential storage. The guide is written for beginners, so no steps are skipped.
We will:
- Create two VMs on AWS:
- Telemetry Collector (collects sim-racing data and pushes to Azure Blob).
- Dashboard VM (placeholder or other tasks you might want on AWS).
- Create a Docker Swarm on Azure with 2 nodes (1 manager, 1 worker) to deploy Prometheus and Grafana.
- Use Ansible from Fedora to automate creation, configuration, and management of these resources.
- Use Ansible Vault to securely store AWS credentials (and optionally Azure secrets).
When finished, you’ll have:
- Two AWS EC2 instances running your collector or other supporting components.
- A Docker Swarm on Azure hosting the monitoring stack (Prometheus + Grafana).
- Telemetry data stored in Azure Blob Storage.
- System updates:

  ```bash
  sudo dnf update -y
  ```

- Install Ansible:

  ```bash
  sudo dnf install ansible -y
  ```

- Install the AWS CLI:

  ```bash
  sudo dnf install awscli -y
  ```

  Afterwards:

  ```bash
  aws configure
  ```

  Provide your AWS Access Key, Secret Key, default region, and output format.

- Install the Azure CLI:

  ```bash
  sudo rpm -Uvh https://packages.microsoft.com/config/centos/8/packages-microsoft-prod.rpm
  sudo dnf install azure-cli -y
  ```

  Then log in:

  ```bash
  az login
  ```

  Follow the on-screen prompts to authenticate your Azure account.

- (Optional) Git for version control:

  ```bash
  sudo dnf install git -y
  ```
We’ll use Ansible Vault to securely store AWS credentials or other secrets. Let’s create a file named vault.yml (untracked in Git or carefully managed) and encrypt it.
- Create vault.yml:

  ```yaml
  aws_access_key_id: "AKIAxxxxxx"
  aws_secret_access_key: "xxxxxxxxxxxxxxxxxxxxxxxxxx"
  ```

  Replace these with real credentials (or use environment variables if you prefer).

- Encrypt vault.yml:

  ```bash
  ansible-vault encrypt vault.yml
  ```

  You'll be asked for a vault password. Remember it.

- Use vault.yml in your playbooks:

  ```yaml
  vars_files:
    - vault.yml
  ```

  Then run with:

  ```bash
  ansible-playbook myplaybook.yml --ask-vault-pass
  ```

  Each time you run the playbook, you'll be prompted for the vault password to decrypt vault.yml.
If you need a new key pair to SSH into your future EC2 instances (skip this step if you already have one in AWS):

```bash
aws ec2 create-key-pair --key-name MyKeyPair --query 'KeyMaterial' --output text > MyKeyPair.pem
chmod 400 MyKeyPair.pem
```

In your vault.yml (already encrypted), place:

```yaml
aws_access_key_id: "AKIAxxxxxx"
aws_secret_access_key: "xxxxxxxxxxxxxxxxxxxx"
aws_region: "us-east-1"
key_name: "MyKeyPair"
instance_type: "t3.micro"
ami_id: "ami-0a91cd140a1fc148a"
```

(Add any variable you want to keep hidden, or simply want to store in a single place.)
Create a file named create_ec2.yml:

```yaml
---
- name: Create Telemetry Collector and Dashboard on AWS
  hosts: localhost
  gather_facts: false
  vars_files:
    - vault.yml   # Encrypted. Make sure to run with --ask-vault-pass
  tasks:
    - name: Launch Telemetry Collector VM
      amazon.aws.ec2:
        aws_access_key: "{{ aws_access_key_id }}"
        aws_secret_key: "{{ aws_secret_access_key }}"
        region: "{{ aws_region }}"
        key_name: "{{ key_name }}"
        instance_type: "{{ instance_type }}"
        image: "{{ ami_id }}"
        wait: true
        instance_tags:
          Name: "TelemetryCollector"
      register: collector_ec2

    - name: Launch Telemetry Dashboard VM
      amazon.aws.ec2:
        aws_access_key: "{{ aws_access_key_id }}"
        aws_secret_key: "{{ aws_secret_access_key }}"
        region: "{{ aws_region }}"
        key_name: "{{ key_name }}"
        instance_type: "{{ instance_type }}"
        image: "{{ ami_id }}"
        wait: true
        instance_tags:
          Name: "TelemetryDashboard"
      register: dashboard_ec2

    - name: Print collector public IP
      debug:
        msg: "{{ collector_ec2.instances[0].public_ip }}"

    - name: Print dashboard public IP
      debug:
        msg: "{{ dashboard_ec2.instances[0].public_ip }}"
```

Run it:

```bash
ansible-playbook create_ec2.yml --ask-vault-pass
```

You'll see the newly created EC2 instances' public IP addresses in the debug output.
After creation, create or update your inventory file (aws_inventory.ini) with the IP addresses:

```ini
[collector_aws]
collector.aws ansible_host=<Collector_Public_IP> ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/MyKeyPair.pem

[dashboard_aws]
dashboard.aws ansible_host=<Dashboard_Public_IP> ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/MyKeyPair.pem
```
Example playbook (ec2_configure.yml):

```yaml
---
- name: Configure Collector on AWS
  hosts: collector_aws
  become: yes
  tasks:
    - name: Install Docker
      apt:
        name: ["docker.io", "docker-compose"]
        state: present
        update_cache: yes
    - name: Start Docker
      service:
        name: docker
        state: started
        enabled: yes
    - name: Pull and run telemetry collector
      shell: |
        docker run -d \
          --name telemetry_collector \
          -p 5000:5000 \
          myregistry/telemetry-collector:latest

- name: Configure Dashboard on AWS
  hosts: dashboard_aws
  become: yes
  tasks:
    - name: Install Docker
      apt:
        name: ["docker.io", "docker-compose"]
        state: present
        update_cache: yes
    - name: Start Docker
      service:
        name: docker
        state: started
        enabled: yes
    - name: Placeholder container
      shell: |
        docker run -d \
          --name dashboard_placeholder \
          -p 8080:80 \
          nginx:alpine
```

Run it:

```bash
ansible-playbook -i aws_inventory.ini ec2_configure.yml
```
Next, create the Azure resources for the Swarm:

```bash
# Resource group
az group create --name TelemetryStackRG --location eastus

# Manager VM
az vm create \
  --resource-group TelemetryStackRG \
  --name swarm-manager \
  --image UbuntuLTS \
  --admin-username azureuser \
  --generate-ssh-keys

# Worker VM
az vm create \
  --resource-group TelemetryStackRG \
  --name swarm-worker \
  --image UbuntuLTS \
  --admin-username azureuser \
  --generate-ssh-keys
```
- SSH into the manager:

  ```bash
  az vm list-ip-addresses --resource-group TelemetryStackRG --output table
  # copy the manager's public IP
  ssh azureuser@<ManagerPublicIP>
  ```

- Install Docker and initialize the Swarm:

  ```bash
  sudo apt-get update
  sudo apt-get install -y docker.io
  sudo systemctl enable docker
  sudo systemctl start docker
  sudo docker swarm init --advertise-addr <ManagerPublicIP>
  ```

  Copy the `docker swarm join` token that appears.

- On the worker:

  ```bash
  ssh azureuser@<WorkerPublicIP>
  sudo apt-get update
  sudo apt-get install -y docker.io
  sudo systemctl enable docker
  sudo systemctl start docker
  # Use the join token from the manager
  sudo docker swarm join --token <token> <ManagerPublicIP>:2377
  ```
- On the manager node (swarm-manager), create prometheus.yml:

  ```bash
  cat <<EOF > prometheus.yml
  global:
    scrape_interval: 15s
  scrape_configs:
    - job_name: telemetry_collector
      static_configs:
        - targets: ['<Collector_Public_IP>:5000']
  EOF
  ```
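If you script the deployment end to end, the scrape config can be rendered from the collector's public IP rather than edited by hand. A minimal sketch (the helper name is an assumption, not part of the stack):

```python
# Template mirrors the prometheus.yml heredoc above.
PROM_TEMPLATE = """global:
  scrape_interval: 15s
scrape_configs:
  - job_name: telemetry_collector
    static_configs:
      - targets: ['{ip}:{port}']
"""


def render_prometheus_config(collector_ip: str, port: int = 5000) -> str:
    """Fill the collector's address into the Prometheus scrape config."""
    return PROM_TEMPLATE.format(ip=collector_ip, port=port)


if __name__ == "__main__":
    # Example IP; substitute the collector VM's real public IP.
    print(render_prometheus_config("3.91.0.10"))
```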
- Create telemetry-stack.yml:

  ```bash
  cat <<EOF > telemetry-stack.yml
  version: '3.7'
  services:
    prometheus:
      image: prom/prometheus:latest
      ports:
        - '9090:9090'
      configs:
        - source: prometheus_config
          target: /etc/prometheus/prometheus.yml
      deploy:
        replicas: 1
        placement:
          constraints: [node.role == manager]
    grafana:
      image: grafana/grafana:latest
      ports:
        - '3000:3000'
      deploy:
        replicas: 1
  configs:
    prometheus_config:
      file: ./prometheus.yml
  EOF
  ```
- Deploy the stack:

  ```bash
  docker stack deploy -c telemetry-stack.yml telemetrystack
  docker service ls
  docker stack ps telemetrystack
  ```

- Confirm the prometheus and grafana services are running.
- Create a storage account (if not done):

  ```bash
  az storage account create \
    --name telemetrydatastorage \
    --resource-group TelemetryStackRG \
    --location eastus \
    --sku Standard_LRS
  ```

- Create a container:

  ```bash
  az storage container create \
    --name telemetrydata \
    --account-name telemetrydatastorage
  ```

- Generate a SAS token (optional):

  ```bash
  az storage container generate-sas \
    --name telemetrydata \
    --account-name telemetrydatastorage \
    --permissions rwdl \
    --expiry 2025-01-01 \
    --output tsv
  ```

  This token can be used by your Telemetry Collector for direct upload to Blob Storage.
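As an illustration of what the collector's upload path might look like, here is a sketch using the Azure Blob Storage REST API (Put Blob) with the SAS token and nothing but the standard library. The function names and the sample blob name are assumptions; in production you'd more likely use the `azure-storage-blob` SDK:

```python
import json
import urllib.request


def blob_url(account: str, container: str, blob_name: str, sas_token: str) -> str:
    """Build the SAS-authenticated URL for a block blob."""
    return f"https://{account}.blob.core.windows.net/{container}/{blob_name}?{sas_token}"


def upload_telemetry(account: str, container: str, blob_name: str,
                     sas_token: str, payload: dict) -> int:
    """PUT one JSON telemetry sample as a block blob; Azure returns HTTP 201 on success."""
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        blob_url(account, container, blob_name, sas_token),
        data=data,
        method="PUT",
        headers={
            "x-ms-blob-type": "BlockBlob",   # required header for Put Blob
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

With the storage account and container created above, a call might look like `upload_telemetry("telemetrydatastorage", "telemetrydata", "lap-001.json", sas, {"speed_kph": 212})`, where `sas` is the token from `az storage container generate-sas`.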
- AWS:
  - Use a dedicated IAM user or role for EC2 creation.
  - Use a minimal policy: S3 access only if you need it, otherwise just the EC2 permissions.
- Azure:
  - Keep your resource group and VMs locked down with network security groups (az network nsg).
  - The Docker Swarm manager may require inbound traffic on ports 2377, 3000, and 9090.
  - Use SAS tokens or service principals for Blob Storage with minimal privileges.
- Security groups & NSGs:
  - On AWS, open only essential ports (22 for SSH, 5000 if your collector needs inbound traffic, etc.).
  - On Azure, open 22 (SSH), 3000 (Grafana), and 9090 (Prometheus).
Project layout:

```text
TelemetryStack/
├── ansible/
│   ├── create_ec2.yml
│   ├── ec2_configure.yml
│   ├── aws_inventory.ini
│   └── vault.yml            # Encrypted with ansible-vault
├── docker/
│   ├── telemetry_collector/
│   │   ├── Dockerfile
│   │   └── collector.py
│   └── swarm/
│       ├── telemetry-stack.yml
│       └── prometheus.yml
├── diagrams/
│   └── architecture.png
└── README.md
```

Architecture overview:

```text
Fedora Laptop (AWS + Azure CLI, Ansible, Vault)
 |
 |--------------------- (AWS) 2 x EC2 Instances
 |                       ├── Telemetry Collector VM
 |                       └── Dashboard VM (placeholder)
 |
 |--------------------- (Azure) 2 x VMs in Docker Swarm
                         ├── Manager Node (Prometheus + Grafana)
                         └── Worker Node

Azure Blob Storage <---- Telemetry Data Pushed from Collector
```
- Validate:
  - Check the AWS console for your two EC2 VMs.
  - Check the Azure portal for your manager and worker VMs; verify Docker Swarm status.
- Telemetry flow:
  - Confirm your collector container can push data to Azure Blob (test with a simple script).
  - Confirm Prometheus and Grafana are accessible from the manager node's public IP on ports 3000 (Grafana) and 9090 (Prometheus).
  - Adjust firewall rules as needed.
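The reachability checks above can be scripted. Prometheus exposes a `/-/healthy` endpoint and Grafana an `/api/health` endpoint; the helper functions themselves are illustrative:

```python
import urllib.request


def stack_endpoints(manager_ip: str) -> dict:
    """Health-check URLs for the monitoring stack on the Swarm manager."""
    return {
        "prometheus": f"http://{manager_ip}:9090/-/healthy",
        "grafana": f"http://{manager_ip}:3000/api/health",
    }


def probe(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:
        # Connection refused, timeout, DNS failure, or HTTP error.
        return False
```

Running `all(probe(u) for u in stack_endpoints("<ManagerPublicIP>").values())` gives a quick pass/fail after each deployment; any `False` usually points at an NSG rule that still needs opening.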
- Live demo:
  - Show how an ansible-playbook run can quickly spin up or tear down resources, and how the collector and dashboards update right away.
  - Emphasize your best practices for secrets (Ansible Vault) and minimal open ports.
- Ansible Documentation
- amazon.aws.ec2 Module Docs
- Azure CLI Docs
- Docker Swarm Overview
- Ansible Vault Usage
Congratulations! You now have a multi-cloud, multi-node sim-racing telemetry solution:
- AWS handles the core collector VMs.
- Azure hosts a Docker Swarm for your monitoring stack.
- Telemetry data is secured in Azure Blob Storage.
- All orchestrated by Ansible from your Fedora laptop with secure credential storage in Ansible Vault.
Feel free to adapt or extend for your particular environment, or swap services to suit your needs. Happy sim-racing telemetry!