The Application Layer of a full-lifecycle DevOps portfolio. A Feature Flags API (Flask/Mongo) featuring multi-stage Docker builds, decoupled CI/CD, and GitOps-driven deployment automation.

Feature Flags App

A lightweight Feature Flags App built with Python (Flask), MongoDB and a frontend using HTML, CSS, and JavaScript.

This app lets you create, update, toggle, and delete feature flags across multiple environments (development, staging, production).

Repository Structure

This project is split into three repositories, each with a specific role in the deployment and delivery workflow:

  1. Application Repository (feature-flags-app) ← current repo

    • Contains the Feature Flags API & UI
    • Contains the GitHub Actions workflows to build the feature-flags-app image and sync the frontend S3 bucket
  2. Infrastructure Repository (feature-flags-infrastructure)

    • Contains Terraform code for provisioning the required AWS resources (VPC, EKS, S3, CloudFront, Route53, etc.).
    • Contains the GitHub Actions workflow to apply the Terraform configuration
  3. Resources Repository (feature-flags-resources)

    • Holds Helm charts and Argo CD Applications that define the Kubernetes manifests.
    • Implements GitOps: Argo CD watches this repo and syncs changes to the EKS cluster.

Architecture

Service Architecture

API:

  • Acts as the core API server for managing feature flags.
  • Provides endpoints for creating, updating, toggling, and deleting feature flags.
  • Includes environment-specific configurations for dev, staging, prod.

MongoDB:

  • Serves as the persistent storage for feature flags.

Frontend:

  • Static assets, stored in S3 and served via CloudFront.

Full Flow Architecture

feature-flags-full-architecture

VPC Architecture - High Availability

feature-flags-full-architecture

Branching Strategy

The project follows a GitOps-based branching strategy where branches map directly to deployment environments with specific versioning rules:

  • dev branch → Deploys dev-<SHA> tags automatically to the Development environment via direct commit.

  • staging branch → Deploys rc-<SHA> tags automatically to the Staging environment via direct commit.

  • main branch → Promotes semantic versions (e.g., v1.0.0) to Production by triggering a Pull Request for manual approval.

Each environment updates its respective helm configuration in the feature-flags-resources repository, ensuring continuous integration and delivery.

branches-diagram
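As a sketch, the branch-to-tag mapping above can be expressed as follows. This is illustrative only; `image_tag` and its arguments are hypothetical names, and the actual logic lives in this repo's GitHub Actions workflows.

```python
# Illustrative sketch of the branch-to-tag mapping described above.
def image_tag(branch: str, short_sha: str, release_version: str = "v1.0.0") -> str:
    if branch == "dev":
        return f"dev-{short_sha}"       # auto-deployed to Development
    if branch == "staging":
        return f"rc-{short_sha}"        # auto-deployed to Staging
    if branch == "main":
        return release_version          # promoted to Production via a PR
    raise ValueError(f"no deployment mapping for branch {branch!r}")

print(image_tag("staging", "ab12cd3"))  # rc-ab12cd3
```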

User Interface (UI) Demo

feature-flags-ui

GitHub Actions

ci.yaml — automates testing and building of the Feature Flags API Docker image.

cd.yaml — automates testing, versioning, and publishing of the Feature Flags API Docker image to AWS Elastic Container Registry (ECR) and GitHub Container Registry (GHCR).

s3-frontend-sync.yaml — detects changes in the frontend directory (on push to /frontend), syncs them to the S3 bucket that holds the static files, and invalidates the CloudFront cache.

The reusable test workflow provides automated testing for the Feature Flags API. It includes:

  • Unit Test: Runs the Python unit test suite located in api/tests, validating the API logic and mocking database interactions.
  • E2E Test: Runs the full Docker Compose stack (docker-compose.local.yaml) with MongoDB and Nginx, then validates that the API returns properly structured feature flag data with required fields.

This workflow is designed to be called from other workflows using the workflow_call trigger.
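A `workflow_call` hookup looks roughly like the fragment below. The file names here are hypothetical; the repo's actual workflow file names may differ.

```yaml
# Reusable workflow (e.g. .github/workflows/test.yaml) declares the trigger:
on:
  workflow_call:

# A caller workflow (e.g. ci.yaml) then invokes it as a job:
jobs:
  test:
    uses: ./.github/workflows/test.yaml
```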

Required Secrets & Variables

To run these workflows, configure the following secrets and variables in your GitHub repository settings.

Required Secrets

Navigate to Settings → Secrets and variables → Actions → Repository secrets:

| Secret Name | Description | Used In |
| --- | --- | --- |
| OIDC_AWS_ROLE_ARN | AWS IAM role ARN for OIDC authentication | ci.yaml, cd.yaml, s3-frontend-sync.yaml |

Note: GITHUB_TOKEN is automatically provided by GitHub Actions and doesn't need manual configuration.

Required Variables

Navigate to Settings → Secrets and variables → Actions → Repository variables:

| Variable Name | Description | Example Value | Used In |
| --- | --- | --- | --- |
| AWS_REGION | AWS region where resources are deployed | ap-south-1 | ci.yaml, cd.yaml, s3-frontend-sync.yaml |
| ECR_REGISTRY | AWS ECR registry URL | 888432181118.dkr.ecr.ap-south-1.amazonaws.com | ci.yaml, cd.yaml |
| ECR_REPO | ECR repository name | feature-flags-api | ci.yaml, cd.yaml |
| S3_FRONTEND_BUCKET_URL | S3 bucket URL for frontend files | s3://your-bucket-name | s3-frontend-sync.yaml |
| CLOUDFRONT_DISTRIBUTION_ID | CloudFront Distribution ID for cache invalidation | E1XXXXXX | s3-frontend-sync.yaml |

AWS OIDC Configuration

Both workflows rely on AWS OIDC for secure, keyless authentication. You have two options to set this up:

  1. Via Terraform:
    Run the Terraform code located in the /oidc directory of the feature-flags-infrastructure repository. This will automatically provision the necessary AWS Identity Provider and IAM Roles.

  2. Manually:
    Follow the official GitHub Actions & AWS OIDC Guide
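For reference, a minimal job fragment using OIDC might look like the following. This is illustrative, not the repo's exact workflow; it reuses the secret and variable names from the tables above.

```yaml
# Illustrative fragment: the job must be allowed to mint an OIDC token,
# then assume the IAM role instead of using long-lived access keys.
permissions:
  id-token: write
  contents: read

steps:
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: ${{ secrets.OIDC_AWS_ROLE_ARN }}
      aws-region: ${{ vars.AWS_REGION }}
```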

Observability

The application comes with pre-configured dashboards (available in the feature-flags-resources Repository) to provide immediate insight into the application health.

Monitoring

The Grafana dashboard monitors the Feature Flags API by combining Flask application metrics with Kubernetes resource statistics to visualize traffic, performance, and system health. It tracks critical indicators such as HTTP request rates, 5xx error spikes, and p95 response latency, alongside pod CPU and memory usage to ensure optimal application stability.

grafana-dashboard-demo

Logging

The Kibana dashboard provides a centralized view of log data to track high-level log volume, service activity, and error trends. It highlights 5xx errors and application exceptions, offering breakdowns by service to help identify noisy components and analyze error distributions across the infrastructure.

kibana-dashboard-demo

Running locally

Docker Compose

Docker Compose Architecture

The docker-compose.local.yaml file orchestrates the following services:

  • app (Flask API):

    • Runs the Flask application inside a Python-based Docker container.
    • Exposes port 5000 internally for communication with Nginx.
    • Connects to MongoDB for persistence.
  • MongoDB:

    • Stores feature flag configurations.
    • Runs on port 27017 inside the container.
    • Persists data on a mounted Docker volume (db-data).
  • Nginx:

    • Acts as a reverse proxy for the Flask API.
    • Listens on port 80 of the host for external access.
    • Routes incoming requests to the Flask backend on port 5000.
    • Serves the static UI assets (index.html, app.js)
  • Networks:

    • app-network: Connects Nginx ↔ API.
    • db-network: Connects API ↔ MongoDB.
  • Volumes:

    • db-data: Ensures MongoDB data is persisted across container restarts.
    • Config mounts (./nginx.conf, ./static, ./templates) are shared with the Nginx service for configuration and static asset serving.

docker-compose-architecture

Running Instructions

```bash
git clone https://github.com/shaarron/feature-flags-app.git
cd feature-flags-app

# Create environment file (optional - defaults will be used if not provided)
cat > .env << EOF
MONGO_INITDB_ROOT_USERNAME=
MONGO_INITDB_ROOT_PASSWORD=
EOF

# Start all services
docker compose -f docker-compose.local.yaml up -d
```

The application will be available at http://localhost (Nginx listens on port 80 of the host).

Running app manually (Python)

Note: The application requires a running MongoDB instance.

```bash
git clone https://github.com/shaarron/feature-flags-app.git
cd feature-flags-app

# Create and activate virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Start MongoDB (if not running elsewhere)
docker run -d -p 27017:27017 mongo:6.0

# Run the application
cd api
python app.py
```

The application will be available at http://localhost:5000

API Documentation

Endpoints

1. Create a Feature Flag

POST /flags

Request Body:

```json
{
  "name": "dark-mode",
  "description": "Enable dark theme for all users",
  "environments": {
    "development": true,
    "staging": true,
    "production": false
  }
}
```

Response (201):

```json
{
  "_id": "abc123",
  "name": "dark-mode",
  "description": "Enable dark theme for all users",
  "environments": {...}
}
```

2. Get All Flags

GET /flags?environment=staging

Retrieves all feature flags, with an enabled field for the selected environment.

Response

```json
[
  {
    "_id": "abc123",
    "name": "dark-mode",
    "description": "Enable dark theme for all users",
    "environments": {...},
    "enabled": true
  }
]
```
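The `enabled` field is a projection of the flag's per-environment map onto the requested environment. A minimal sketch of that projection (hypothetical helper, not the repo's code):

```python
# Hypothetical sketch: project each flag's per-environment state into an
# `enabled` field for the requested environment (False if the environment
# is absent from the flag's map).
def with_enabled(flags: list[dict], environment: str) -> list[dict]:
    return [
        {**flag, "enabled": flag["environments"].get(environment, False)}
        for flag in flags
    ]

flags = [{"_id": "abc123", "name": "dark-mode",
          "environments": {"development": True, "staging": True, "production": False}}]
print(with_enabled(flags, "staging")[0]["enabled"])  # True
```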

3. Get a Single Flag

GET /flags/<id>

Fetches a specific feature flag by ID.

Response

```json
{
  "_id": "abc123",
  "name": "dark-mode",
  "description": "Enable dark theme for all users",
  "environments": {...}
}
```

4. Update a Flag

PUT /flags/<id>

Updates name, description, or environment states.

Request Body (partial update allowed):

```json
{
  "description": "Enable dark mode toggle for users"
}
```

Response

```json
{
  "_id": "abc123",
  "name": "dark-mode",
  "description": "Enable dark mode toggle for users",
  "environments": {...}
}
```
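A partial update overwrites only the fields present in the request body and leaves everything else untouched. Sketched below under assumed semantics (hypothetical helper, not the repo's implementation):

```python
# Hypothetical sketch of the partial-update semantics: only fields present
# in the request body are replaced; unknown fields are rejected.
ALLOWED_FIELDS = {"name", "description", "environments"}

def update_flag(flag: dict, patch: dict) -> dict:
    unknown = set(patch) - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"unsupported fields: {sorted(unknown)}")
    return {**flag, **patch}

flag = {"_id": "abc123", "name": "dark-mode",
        "description": "Enable dark theme for all users",
        "environments": {"development": True}}
updated = update_flag(flag, {"description": "Enable dark mode toggle for users"})
print(updated["description"])  # Enable dark mode toggle for users
```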

5. Delete a Flag

DELETE /flags/<id>

Deletes a feature flag.

Response (204):

```json
{
  "message": "Feature flag deleted"
}
```

6. Toggle a Flag

POST /flags/<id>/toggle

Toggles a flag’s enabled state in a given environment.

Request Body:

```json
{
  "environment": "production"
}
```

Response (200)

```json
{
  "_id": "abc123",
  "name": "dark-mode",
  "enabled": true
}
```
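Semantically, the toggle endpoint flips the flag's state for the given environment and returns the summary shape shown above. A minimal sketch of that behavior (assumed semantics, not the repo's implementation):

```python
# Hypothetical sketch of the toggle semantics: flip one environment's
# boolean state and return the summary shape from the response above.
def toggle_flag(flag: dict, environment: str) -> dict:
    if environment not in flag["environments"]:
        raise KeyError(f"unknown environment: {environment}")
    flag["environments"][environment] = not flag["environments"][environment]
    return {"_id": flag["_id"], "name": flag["name"],
            "enabled": flag["environments"][environment]}

flag = {"_id": "abc123", "name": "dark-mode",
        "environments": {"development": True, "staging": True, "production": False}}
print(toggle_flag(flag, "production"))  # {'_id': 'abc123', 'name': 'dark-mode', 'enabled': True}
```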
