This project demonstrates the implementation of a robust CI/CD pipeline for the Demo-CRM application, a ticketing system written in Node.js. The pipeline integrates:
- ArgoCD for automated deployment
- Docker for containerization
- Helm charts for Kubernetes deployment
The application is deployed onto an AWS EKS cluster, with infrastructure provisioned and managed using Terraform, ensuring scalability, high availability, and automated infrastructure management.
The diagram below shows our infrastructure:
- Everything runs on the AWS cloud.
- Inside my AWS account I created a VPC in the Mumbai region (ap-south-1), with CIDR 10.0.0.0/16.
- The VPC contains one public subnet in AZ ap-south-1a with direct internet access (so no NAT gateway or other egress solution is needed).
- At the region level (alongside the VPC) I created an ECR repository to hold the application image; the image is pushed when the CI/CD pipeline runs in the application repo.
- Inside the subnet I created an EKS cluster using a Terraform module from my infrastructure repo.
- The cluster has one worker node, an EC2 t3a.large instance.
- The node runs 2 demo-crm pods and 3 MongoDB pods.
- An NGINX ingress controller, installed with a Helm chart, routes requests received from the ALB to the application pods.
- The ALB receives requests from users and forwards them to the NGINX ingress controller.
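The CI side described above (build the image in the application repo and push it to ECR) could be sketched as a GitHub Actions workflow. This is a hypothetical sketch, not the actual workflow from the repo: the workflow name, repository name, and secret names are assumptions.

```yaml
# Hypothetical CI workflow: builds the Demo-CRM image and pushes it to ECR.
# Repository name, region, and secret names are placeholders.
name: ci-build-and-push
on:
  push:
    branches: [main]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-south-1
      - id: ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push image
        run: |
          docker build -t ${{ steps.ecr.outputs.registry }}/demo-crm:${{ github.sha }} .
          docker push ${{ steps.ecr.outputs.registry }}/demo-crm:${{ github.sha }}
```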
| Category | Technologies |
|---|---|
| Infrastructure | EKS, AWS |
| Containerization | Docker and Docker Compose |
| CI/CD | GitHub Actions |
| Security | Kubernetes Secrets and separate networks |
| Application | Node.js |
| Database | MongoDB |
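For local development, the Docker Compose setup referenced in the table above might look like the following sketch; the service names, ports, and the Mongo URI variable are assumptions, not taken from the actual repo.

```yaml
# Hypothetical docker-compose.yml for running Demo-CRM with MongoDB locally.
services:
  demo-crm:
    build: .
    ports:
      - "3000:3000"
    environment:
      - MONGODB_URI=mongodb://mongo:27017
    depends_on:
      - mongo
  mongo:
    image: mongo:7
    ports:
      - "27017:27017"
```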
Requirements for building and running the project:
- Demo-CRM application
- An AWS account with permissions to create ECR and EKS
- Terraform module to create the EKS cluster
- Docker and Docker Compose installed
- Helm to install Helm charts
Follow these instructions to set up the project locally and deploy it to your cloud environment.
- After creating the cluster, run this command to set up `kubeconfig` with AWS: `aws eks update-kubeconfig --region ap-south-1 --name idan-raviv-cluster`
- Pull the MongoDB Helm chart from Artifact Hub: `helm pull oci://registry-1.docker.io/bitnamicharts/mongodb --version 16.4.5`
- Extract the Helm chart: `tar -xvzf mongodb-16.4.5.tgz`
- Apply the PV and PVC: `kubectl apply -f storageclass.yaml`
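The `storageclass.yaml` referenced here is not shown in this README; a minimal sketch of what it might contain, assuming the EBS CSI driver with dynamic provisioning (so the PV is created automatically for the claim), could be the following. The names and size are placeholders.

```yaml
# Hypothetical storageclass.yaml: StorageClass backed by the AWS EBS CSI
# driver, plus a PVC for the MongoDB data. Names and sizes are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongodb-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: mongodb-sc
  resources:
    requests:
      storage: 8Gi
```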
- Add the Bitnami Helm repo: `helm repo add bitnami https://charts.bitnami.com/bitnami`
- Install the chart locally after editing `values.yaml`: `helm install my-mongodb ./mongodb`
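The edits to `values.yaml` are not spelled out here. Given the replica-set hostnames used later and the three MongoDB pods in the diagram, the relevant Bitnami chart overrides might look like this sketch (two data members plus the chart's arbiter give three pods); the exact values are assumptions.

```yaml
# Hypothetical values.yaml overrides for the Bitnami MongoDB chart:
# run as a replica set (2 data members + arbiter = 3 pods),
# using the custom StorageClass from the previous step.
architecture: replicaset
replicaCount: 2
persistence:
  storageClass: mongodb-sc
  size: 8Gi
auth:
  enabled: true
```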
- Retrieve the root password and store it in the `MONGODB_ROOT_PASSWORD` environment variable: `export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default my-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 -d)`
- Connect to MongoDB: `kubectl run --namespace default my-mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:7.0.4-debian-11-r2 -- bash`
- Connect to both replica set members so the MongoDB pods can discover each other: `mongosh admin --host "my-mongodb-0.my-mongodb-headless.default.svc.cluster.local:27017,my-mongodb-1.my-mongodb-headless.default.svc.cluster.local:27017" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD`
- Print the MongoDB URI (for updating `secret.yaml`): `echo -n "mongodb://root:${MONGODB_ROOT_PASSWORD}@my-mongodb-0.my-mongodb-headless.default.svc.cluster.local:27017,my-mongodb-1.my-mongodb-headless.default.svc.cluster.local:27017" | base64`
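The base64 string printed by the command above is what goes into `secret.yaml`. A sketch of that manifest, assuming hypothetical secret and key names, could be:

```yaml
# Hypothetical secret.yaml: holds the base64-encoded MongoDB URI.
# Secret name and key name are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: demo-crm-secret
type: Opaque
data:
  # Paste the output of the base64 command from the previous step here.
  mongodb-uri: <base64-encoded-uri>
```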
- Apply the Kubernetes YAML files: `kubectl apply -f config.yaml`, `kubectl apply -f deployment.yaml`, `kubectl apply -f secret.yaml`, `kubectl apply -f service.yaml`
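The `deployment.yaml` and `service.yaml` applied here are not included in this README. A minimal sketch wiring the secret into the two demo-crm replicas might look like the following; the image URL, container port, and environment variable name are assumptions.

```yaml
# Hypothetical deployment.yaml/service.yaml for demo-crm.
# Image URL, port, and env var name are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-crm
spec:
  replicas: 2
  selector:
    matchLabels: {app: demo-crm}
  template:
    metadata:
      labels: {app: demo-crm}
    spec:
      containers:
        - name: demo-crm
          image: <account>.dkr.ecr.ap-south-1.amazonaws.com/demo-crm:latest
          ports:
            - containerPort: 3000
          env:
            - name: MONGODB_URI
              valueFrom:
                secretKeyRef:
                  name: demo-crm-secret
                  key: mongodb-uri
---
apiVersion: v1
kind: Service
metadata:
  name: demo-crm
spec:
  selector:
    app: demo-crm
  ports:
    - port: 80
      targetPort: 3000
```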
- Add the ingress-nginx Helm repository: `helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx`
- Install the ingress-nginx Helm chart (creates the ingress controller): `helm install -n ingress --create-namespace controller ingress-nginx/ingress-nginx --set controller.ingressClassResource.default=true`
- Check the service to get the external IP: `kubectl get service -n ingress`
- Create and apply `ingress.yaml`: `kubectl apply -f ingress.yaml`
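The `ingress.yaml` itself is not shown in this README; a sketch routing all traffic through the NGINX ingress class to the demo-crm service could look like this (the Ingress name and backend service name/port are assumptions).

```yaml
# Hypothetical ingress.yaml: routes traffic from the NGINX ingress
# controller to the demo-crm service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-crm
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-crm
                port:
                  number: 80
```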
- Check the ingress external IP or DNS: `kubectl get ingress`
- Access the service via the DNS/IP on port 80.
- Add the ArgoCD Helm repository: `helm repo add argo https://argoproj.github.io/argo-helm`
- Install the ArgoCD Helm chart: `helm install my-argo-cd argo/argo-cd --namespace argocd --create-namespace`
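Beyond the UI steps below, ArgoCD applications can also be defined declaratively. A hedged sketch of an Application manifest pointing at the manifests repo, assuming the path, branch, and sync options (none of which are given in this README):

```yaml
# Hypothetical ArgoCD Application: watches the manifests repo and
# syncs changes into the default namespace automatically.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-crm
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@github.com:idrr1993/mydemo-example-apps.git
    targetRevision: main
    path: .
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```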
- Port-forward the ArgoCD service to port 8080 (in the background): `kubectl port-forward service/my-argo-cd-argocd-server -n argocd 8080:443 &`
- Port-forward helper commands: `fg` to move the job to the foreground, `kill %1` to stop the job
- Get the ArgoCD admin password: `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo`
- Your GitHub repo URL: `git@github.com:idrr1993/mydemo-example-apps.git`
- To get your SSH private key: `cat ~/.ssh/id_rsa`
- If SSH fails, connect the repo via HTTPS using a GitHub token:
  - Go to GitHub → Settings → Developer settings → Personal access tokens
  - Generate a classic token with `repo` and `read:packages` permissions
  - In the ArgoCD UI:
    - Go to Settings > Repositories
    - Choose Connect repo via HTTPS
    - Fill in:
      - Project: `default`
      - Repository URL: `https://github.com/<user name>/<repo name>.git`
      - Username: your GitHub username
      - Password: your GitHub token
    - Click Connect and check if the status is Synced or OutOfSync
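The same HTTPS connection can alternatively be registered declaratively as a labeled Secret in the `argocd` namespace instead of through the UI; this is a sketch, and the secret name and placeholder values are assumptions.

```yaml
# Hypothetical repository Secret: registers the HTTPS repo with ArgoCD.
# The argocd.argoproj.io/secret-type label is what ArgoCD looks for.
apiVersion: v1
kind: Secret
metadata:
  name: demo-crm-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/<user name>/<repo name>.git
  username: <github-username>
  password: <github-token>
```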
git init
git remote add origin git@github.com:<user>/<mydemo-example-apps>.git
git remote -v
git add .
git commit -m "Initial commit"
git branch -M main # Ensure branch is named 'main'
git push -u origin main

Project Link: https://github.com/idrr1993/GitOps
