Comprehensive E2E Encrypted Multi-Tenant Network Architecture
This document synthesizes our complete plan for building a secure, end-to-end encrypted, multi-tenant overlay network using VyOS, WireGuard, VXLAN, OSPF, L3VPN, and related technologies. The architecture follows a Unix-philosophy approach: modular components that can be composed together while maintaining separation of concerns.
Architecture Overview
graph TB
subgraph Physical["Physical Infrastructure"]
direction TB
DC1["Datacenter 1<br>5.254.54.0/26"]
DC2["Datacenter 2<br>5.254.43.160/27"]
CloudExt["Cloud Extensions<br>Dynamic"]
end
subgraph Hypervisor["Hypervisor Layer"]
direction TB
ArchLinux["Arch Linux OS"]
OVS["Open vSwitch<br>Hardware Offload"]
SRIOV["SR-IOV<br>Virtual Functions"]
SystemdVMSpawn["systemd-vmspawn"]
end
subgraph Router["Virtual Router Layer"]
direction TB
VyOSVMs["VyOS VMs"]
WireGuard["WireGuard Mesh<br>172.27.0.0/20"]
VXLAN["VXLAN Tunnels"]
OSPF["OSPF Areas"]
BGP["BGP EVPN"]
L3VPN["L3VPN (VRF)"]
end
subgraph Tenant["Tenant Layer"]
direction TB
TenantVMs["Tenant VMs"]
ManagedServices["Managed Services"]
K8S["Kubernetes Clusters"]
Backups["Backup Systems"]
end
Physical --> Hypervisor
Hypervisor --> Router
Router --> Tenant
Network Addressing Schema
graph LR
subgraph PublicSpace["Public Address Space"]
DC1Public["DC1: 5.254.54.0/26"]
DC2Public["DC2: 5.254.43.160/27"]
DC2Additional["DC2 Additional: 5.254.43.208/29"]
end
subgraph ManagementSpace["Management Networks"]
ControlPlane["Control Plane: 172.27.0.0/20"]
BackboneNetwork["Backbone: 172.16.0.0/20"]
end
subgraph TenantSpace["Tenant Address Space"]
CGNATBase["Base: 100.64.0.0/10"]
WireGuardOverlay["WireGuard: 100.64.0.0/16"]
TenantNetworks["Tenant Networks: 100.65.0.0/16"]
TenantServices["Services: 100.80.0.0/16"]
MigrationSpace["Migration: 100.96.0.0/16"]
end
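As a concrete illustration of this schema, per-tenant blocks can be derived mechanically from a numeric tenant ID. The /24-per-tenant carving shown here is an assumed convention for illustration, not something the schema itself mandates:

```shell
#!/bin/bash
# Derive per-tenant subnets from a numeric tenant ID (0-255).
# Assumption: each /16 block above is carved into one /24 per tenant.
tenant_wireguard_subnet() { echo "100.64.${1}.0/24"; }
tenant_network_subnet()   { echo "100.65.${1}.0/24"; }
tenant_services_subnet()  { echo "100.80.${1}.0/24"; }

tenant_wireguard_subnet 7   # prints 100.64.7.0/24
tenant_network_subnet 7     # prints 100.65.7.0/24
```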
Implementation Plan
1. Physical Infrastructure Setup
The physical infrastructure consists of:
- Datacenter 1:
  - Public Block: 5.254.54.0/26 (62 usable IPs)
  - Networking: 4x Intel X710 (10G) + 2x Mellanox CX4 (25G)
  - Management: IPMI via dedicated 1GbE NIC
- Datacenter 2:
  - Public Block: 5.254.43.160/27 (30 usable IPs)
  - Additional Block: 5.254.43.208/29 (6 usable IPs)
  - Networking: 4x Intel X710 (10G) + 2x Mellanox CX4 (25G)
  - Management: IPMI via dedicated 1GbE NIC
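The usable-host counts quoted above follow directly from prefix length: an IPv4 /n block holds 2^(32-n) addresses, minus the network and broadcast addresses. A quick sanity check in shell:

```shell
#!/bin/bash
# Usable hosts in an IPv4 /n block: 2^(32-n) minus network and broadcast.
usable_hosts() { echo $(( (1 << (32 - $1)) - 2 )); }

usable_hosts 26   # prints 62  (DC1 public block)
usable_hosts 27   # prints 30  (DC2 public block)
usable_hosts 29   # prints 6   (DC2 additional block)
```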
2. Hypervisor Layer Configuration
Each bare metal server runs:
- Arch Linux operating system
- Open vSwitch with hardware offloading
- SR-IOV configuration for network cards
- systemd-vmspawn for VM deployment
NIC Configuration:
#!/bin/bash
# Configure Intel X710 NICs with SR-IOV (interface names vary by system; adjust to match your hardware)
for i in {0..3}; do
echo 7 > /sys/class/net/enp${i}s0/device/sriov_numvfs
ip link set enp${i}s0 up
done
# Configure Mellanox CX4 NIC with SR-IOV
for i in {4..7}; do
echo 7 > /sys/class/net/enp${i}s0/device/sriov_numvfs
ip link set enp${i}s0 up
done
# Configure LACP Bond for Intel NICs
cat > /etc/systemd/network/10-bond0.netdev << EOF
[NetDev]
Name=bond0
Kind=bond
[Bond]
Mode=802.3ad
LACPTransmitRate=fast
MIIMonitorSec=1s
UpDelaySec=2s
DownDelaySec=2s
EOF
# Configure LACP Bond for Mellanox NICs
cat > /etc/systemd/network/20-bond1.netdev << EOF
[NetDev]
Name=bond1
Kind=bond
[Bond]
Mode=802.3ad
LACPTransmitRate=fast
MIIMonitorSec=1s
UpDelaySec=2s
DownDelaySec=2s
EOF
# Configure OVS with hardware offload
cat > /etc/openvswitch/ovs-setup.sh << 'EOF'
#!/bin/bash
ovs-vsctl --may-exist add-br br0
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
ovs-vsctl add-port br0 bond0
ovs-vsctl add-port br0 bond1
EOF
chmod +x /etc/openvswitch/ovs-setup.sh
3. VyOS VM Deployment Using mkosi and systemd-vmspawn
Create a base VyOS image using mkosi:
#!/bin/bash
# Create mkosi configuration
cat > mkosi.default << EOF
[Distribution]
Distribution=vyos
Release=current
[Output]
Format=disk
Output=vyos-base.img
Size=2G
[Partitions]
RootSize=2G
EOF
# Build the image
mkosi
# Create systemd-vmspawn service template
cat > /etc/systemd/system/vyos@.service << EOF
[Unit]
Description=VyOS VM %i
After=network.target
[Service]
Type=notify
ExecStart=/usr/bin/systemd-vmspawn --image=/var/lib/machines/vyos-base.img --network-tap --machine=vyos-%i
ExecStop=/usr/bin/machinectl poweroff vyos-%i
KillMode=mixed
Restart=on-failure
TimeoutStartSec=180
[Install]
WantedBy=multi-user.target
EOF
4. WireGuard Control Plane Configuration
The secure management and control plane runs over WireGuard:
# VyOS WireGuard Configuration Template
cat > vyos-wireguard-template.config << EOF
# WireGuard Management Interface
set interfaces wireguard wg0 address '172.27.X.Y/32'
set interfaces wireguard wg0 description 'Secure Control Plane'
# each peer requires its public key (${PEER_PUBLIC_KEY} is a placeholder)
set interfaces wireguard wg0 peer ${PEER_ID} public-key '${PEER_PUBLIC_KEY}'
set interfaces wireguard wg0 peer ${PEER_ID} allowed-ips '172.27.0.0/20'
set interfaces wireguard wg0 peer ${PEER_ID} persistent-keepalive '25'
set interfaces wireguard wg0 port '51820'
set interfaces wireguard wg0 private-key '${PRIVATE_KEY}'
EOF
5. BGP EVPN and L3VPN Configuration
The backbone network runs BGP EVPN for control plane and VXLAN for data plane:
# BGP EVPN Configuration Template
cat > vyos-bgp-evpn-template.config << EOF
# BGP System Configuration
set protocols bgp system-as '65000'
set protocols bgp parameters router-id '${ROUTER_ID}'
# EVPN Configuration
set protocols bgp neighbor ${PEER_IP} remote-as '65000'
set protocols bgp neighbor ${PEER_IP} update-source 'lo'
set protocols bgp neighbor ${PEER_IP} address-family l2vpn-evpn
set protocols bgp l2vpn-evpn advertise-all-vni
# L3VPN Configuration
set vrf name ${TENANT_VRF} table '${VRF_TABLE_ID}'
set vrf name ${TENANT_VRF} protocols bgp address-family ipv4-unicast route-target vpn export '65000:${TENANT_ID}'
set vrf name ${TENANT_VRF} protocols bgp address-family ipv4-unicast route-target vpn import '65000:${TENANT_ID}'
EOF
6. VXLAN Tunnel Configuration
VXLAN provides the data plane for multi-tenant isolation:
# VXLAN Configuration Template
cat > vyos-vxlan-template.config << EOF
# VXLAN Interface
set interfaces vxlan vxlan${VNI} vni '${VNI}'
set interfaces vxlan vxlan${VNI} remote '${REMOTE_VTEP}'
set interfaces vxlan vxlan${VNI} source-address '${LOCAL_VTEP}'
set interfaces vxlan vxlan${VNI} mtu '9000'
# Associate VXLAN with VRF
set interfaces vxlan vxlan${VNI} vrf '${TENANT_VRF}'
EOF
7. High Availability Configuration with VRRP
Implement HA gateways using VRRP:
# VRRP Configuration Template
cat > vyos-vrrp-template.config << EOF
# VRRP Instance
set high-availability vrrp group ${GROUP_ID} interface '${INTERFACE}'
set high-availability vrrp group ${GROUP_ID} virtual-address '${VIRTUAL_IP}'
set high-availability vrrp group ${GROUP_ID} vrid '${VRID}'
set high-availability vrrp group ${GROUP_ID} priority '${PRIORITY}'
EOF
8. Tenant Provisioning Automation
Automate tenant onboarding and provisioning with cloud-init:
# cloud-init Template for Tenant Provisioning
#cloud-config
vyos_config_commands:
# Create Tenant VRF
- set vrf name ${TENANT_VRF} table '${VRF_TABLE_ID}'
# Configure VXLAN for Tenant
- set interfaces vxlan vxlan${VNI} vni '${VNI}'
- set interfaces vxlan vxlan${VNI} vrf '${TENANT_VRF}'
# Configure BGP for Tenant
- set vrf name ${TENANT_VRF} protocols bgp address-family ipv4-unicast route-target vpn export '65000:${TENANT_ID}'
- set vrf name ${TENANT_VRF} protocols bgp address-family ipv4-unicast route-target vpn import '65000:${TENANT_ID}'
# Configure WireGuard for Tenant
- set interfaces wireguard wg${TENANT_ID} address '100.64.${TENANT_ID}.1/24'
- set interfaces wireguard wg${TENANT_ID} vrf '${TENANT_VRF}'
Deployment Workflow
The deployment of this network architecture follows these stages:
- Infrastructure Initialization
  - Deploy bare metal servers
  - Configure SR-IOV and OVS
  - Set up management network
- Control Plane Deployment
  - Deploy VyOS VMs using systemd-vmspawn
  - Configure WireGuard mesh
  - Establish BGP sessions
- Tenant Network Provisioning
  - Create tenant VRFs
  - Configure VXLAN tunnels
  - Set up L3VPN isolation
- Service Integration
  - Deploy tenant VMs
  - Configure managed services
  - Implement backup systems
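The tenant-provisioning stage lends itself to simple template rendering: bash heredoc expansion can fill in the ${...} placeholders used in the templates from the previous sections. The ID-to-table and ID-to-VNI offsets below are assumed conventions for illustration, not mandated by the design:

```shell
#!/bin/bash
# Render a per-tenant VyOS config fragment from a numeric tenant ID.
# Assumed conventions: VRF table = 1000 + ID, VNI = 10000 + ID.
render_tenant_config() {
  local TENANT_ID=$1
  local TENANT_VRF="customer-${TENANT_ID}"
  local VRF_TABLE_ID=$((1000 + TENANT_ID))
  local VNI=$((10000 + TENANT_ID))
  cat << EOF
set vrf name ${TENANT_VRF} table '${VRF_TABLE_ID}'
set interfaces vxlan vxlan${VNI} vni '${VNI}'
set interfaces vxlan vxlan${VNI} vrf '${TENANT_VRF}'
EOF
}

render_tenant_config 7
```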
API Integration
VyOS provides a rich API for automation:
#!/bin/bash
# VyOS HTTP API authentication: the API expects the key and the JSON payload
# as multipart form fields (key= and data=), not headers
API_KEY="your-api-key"
VYOS_HOST="10.0.0.1"
# Create Tenant VRF
curl -k -X POST "https://${VYOS_HOST}/configure" \
  -F key="${API_KEY}" \
  -F data='{"op": "set", "path": ["vrf", "name", "customer-1", "table", "1000"]}'
# Configure VXLAN
curl -k -X POST "https://${VYOS_HOST}/configure" \
  -F key="${API_KEY}" \
  -F data='{"op": "set", "path": ["interfaces", "vxlan", "vxlan10000", "vni", "10000"]}'
# Note: each successful /configure call commits automatically; no separate
# commit request is needed
Real-time Monitoring
The network includes comprehensive monitoring using VyOS's built-in capabilities:
#!/bin/bash
# Operational "show" commands go through POST /show with a JSON op
# Monitor BGP Sessions
curl -k -X POST "https://${VYOS_HOST}/show" \
  -F key="${API_KEY}" \
  -F data='{"op": "show", "path": ["ip", "bgp", "summary"]}'
# Monitor VXLAN Status
curl -k -X POST "https://${VYOS_HOST}/show" \
  -F key="${API_KEY}" \
  -F data='{"op": "show", "path": ["interfaces", "vxlan"]}'
# Monitor VRF Routing Tables
curl -k -X POST "https://${VYOS_HOST}/show" \
  -F key="${API_KEY}" \
  -F data='{"op": "show", "path": ["ip", "route", "vrf", "all"]}'
Key Resources and References
- VyOS L3VPN Documentation
- WireGuard Configuration
- VRF and Routing
- Automation and API
Next Steps and Enhancements
- Implement CI/CD Pipeline
  - Develop GitOps workflows for network configuration
  - Implement configuration validation
  - Create automated testing framework
- Extend Cloud Provider Integration
  - Add AWS VPC integration
  - Add Azure VNET integration
  - Add GCP VPC integration
- Enhance Security Features
  - Implement key rotation automation
  - Deploy IDS/IPS capabilities
  - Implement traffic analysis
- Improve Tenant Self-Service
  - Develop tenant portal
  - Implement API for tenant management
  - Create documentation system
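For the configuration-validation item, one minimal first gate is a lint pass over rendered config fragments before they ever reach a router: any line that is not a set/delete command or a comment is rejected. This is only an illustrative pre-check, not a substitute for loading the candidate config into a real VyOS instance:

```shell
#!/bin/bash
# Reject any rendered line that is not a VyOS set/delete command or a comment.
validate_config() {
  local rc=0 line
  while IFS= read -r line; do
    [ -z "$line" ] && continue
    case "$line" in
      set\ *|delete\ *|"#"*) ;;                  # allowed forms
      *) echo "invalid line: $line" >&2; rc=1 ;;
    esac
  done
  return $rc
}

printf "set vrf name customer-1 table '1000'\n" | validate_config && echo "config OK"
```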
Conclusion
This architecture provides a robust, secure, and scalable network overlay that:
- Follows Unix philosophy principles of modular, composable components
- Implements end-to-end encryption with WireGuard
- Enables secure multi-tenancy through VRF isolation
- Supports dynamic scaling to cloud providers
- Leverages automation for deployment and management
By combining the strengths of VyOS, WireGuard, EVPN, and L3VPN technologies, this design creates a network infrastructure that balances security, performance, and operational simplicity.