- Instance ID: i-00d8a7ab13c96d8c2 (aws-ec2-sudhir)
- Public IP: 18.60.184.7
- Private IP: 172.31.11.186
- OS: Ubuntu
- SSH User: ubuntu
- AWS EC2 private key file (aws-ec2-key-sk.pem)
- SSH client installed on your local machine
Before connecting, ensure your private key has the correct permissions:
chmod 400 aws-ec2-key-sk.pem
Use the following command to connect to your EC2 instance:
ssh -i aws-ec2-key-sk.pem ubuntu@18.60.184.7
Alternatively, if you're connecting from within the same VPC or have VPN access:
ssh -i aws-ec2-key-sk.pem ubuntu@172.31.11.186
To use GUI applications like gedit, connect with X11 forwarding enabled:
ssh -X -i aws-ec2-key-sk.pem ubuntu@18.60.184.7
Note: You'll need an X11 server installed on your local machine:
- On macOS: Install XQuartz using Homebrew:
brew install --cask xquartz
Or download it from https://www.xquartz.org/
- On Windows: Install an X server like Xming or use WSL2 with WSLg
After installing XQuartz on macOS, restart your Mac or log out and back in, then start XQuartz from your Applications folder.
After connecting with X11 forwarding, you can install and use GUI applications:
# Install GUI applications
sudo apt update
sudo apt install gedit
# Launch GUI applications
gedit filename.txt
- Never share your private key file
- Never commit private keys to version control
- Regularly rotate your SSH keys
- Consider using AWS Systems Manager Session Manager for keyless access
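If you connect often, you can avoid retyping the key path and IP by adding a host entry to your local ~/.ssh/config; this is a minimal sketch, and the alias and key location below are just examples (adjust them to wherever you keep aws-ec2-key-sk.pem):
Host aws-ec2-sudhir
    HostName 18.60.184.7
    User ubuntu
    IdentityFile ~/aws-ec2-key-sk.pem
With that in place, ssh aws-ec2-sudhir (or ssh -X aws-ec2-sudhir for X11 forwarding) is enough.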
If you're seeing the message "Unit nginx.service could not be found", it means Nginx is NOT installed on your EC2 instance.
Your website cannot load because no web server is running.
You need to install a web server such as Nginx or Apache (Nginx recommended).
Below are the steps.
Run:
sudo apt update
sudo apt install nginx -y
sudo systemctl start nginx
sudo systemctl enable nginx
Check status:
sudo systemctl status nginx
It should show: active (running)
Go to:
EC2 Console → Instances → Select your instance → Security → Click Security Group → Inbound rules
Add/verify these rules:
| Type | Port | Source |
|---|---|---|
| HTTP | 80 | 0.0.0.0/0 |
| HTTPS | 443 | 0.0.0.0/0 |
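If you prefer the CLI and your IAM user has the ec2:AuthorizeSecurityGroupIngress permission, the same rules can be added with commands like these (sg-xxxxxxxx is a placeholder for your instance's security group ID):
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 443 --cidr 0.0.0.0/0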
Since your instance uses an auto-assigned public IP, check:
EC2 → Instances → Public IPv4
It will look something like: 18.xx.xx.xx
Open this in browser:
http://YOUR_PUBLIC_IP
You should see:
Congratulations! If you're seeing the "Welcome to Nginx!" page at http://18.60.184.7, your web server is successfully installed and running.
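You can also confirm this from the command line with a quick header check (run it from your local machine or from the instance itself):
# Expect an HTTP 200 response with a "Server: nginx" header
curl -I http://18.60.184.7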
Perfect! You already have your new HTML file at:
/home/ubuntu/sudhir/sudhirkumar.html
To make Nginx serve this page instead of the default, follow these steps:
sudo mv /var/www/html/index.nginx-debian.html /var/www/html/index.nginx-debian.html.bak
This keeps a backup in case you want the default page later.
sudo cp /home/ubuntu/sudhir/sudhirkumar.html /var/www/html/index.html
- Nginx looks for /var/www/html/index.html by default.
- We copied your file to index.html so Nginx serves it automatically.
sudo chown www-data:www-data /var/www/html/index.html
sudo chmod 644 /var/www/html/index.html
This ensures Nginx can read the file.
sudo systemctl restart nginx
Open:
http://YOUR_PUBLIC_IP
You should now see your sudhirkumar.html page served by Nginx.
- Place your website files in the default directory:
sudo nano /var/www/html/index.html
- Create a custom server block for your domain:
sudo nano /etc/nginx/sites-available/your-domain
Add configuration:
server {
    listen 80;
    server_name your-domain.com;
    root /var/www/your-domain;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}
- Enable your site:
sudo ln -s /etc/nginx/sites-available/your-domain /etc/nginx/sites-enabled/
sudo nginx -t                # Test configuration
sudo systemctl reload nginx  # Reload Nginx
- Host Your Portfolio Website (sudhirkumar.in): To replace the default Nginx welcome page with your portfolio:
a. Create a directory for your website:
sudo mkdir -p /var/www/sudhirkumar.in/html
sudo chown -R $USER:$USER /var/www/sudhirkumar.in/html
sudo chmod -R 755 /var/www/sudhirkumar.in
b. Create a simple index.html or copy your portfolio files:
nano /var/www/sudhirkumar.in/html/index.html
c. Create a server block for your domain:
sudo nano /etc/nginx/sites-available/sudhirkumar.in
Add the following configuration:
server {
    listen 80;
    listen [::]:80;

    root /var/www/sudhirkumar.in/html;
    index index.html index.htm index.nginx-debian.html;

    server_name sudhirkumar.in www.sudhirkumar.in;

    location / {
        try_files $uri $uri/ =404;
    }
}
d. Enable the server block:
sudo ln -s /etc/nginx/sites-available/sudhirkumar.in /etc/nginx/sites-enabled/
e. Test and reload Nginx:
sudo nginx -t
sudo systemctl reload nginx
- Set up SSL (recommended): Use Let's Encrypt with Certbot for free SSL certificates.
SSL encryption is essential for securing your website and protecting user data. Let's Encrypt provides free SSL certificates that are trusted by all major browsers.
Before setting up SSL, ensure you have:
- Your domain name (sudhirkumar.in) pointing to your EC2 instance's public IP (18.60.184.7)
- Nginx properly configured with your domain
- Port 443 open in your security group
- Install Certbot and the Nginx plugin:
sudo apt update
sudo apt install certbot python3-certbot-nginx -y
- Obtain and install the certificate:
sudo certbot --nginx -d sudhirkumar.in -d www.sudhirkumar.in
- Follow the prompts:
- Enter your email for important notifications
- Agree to the terms of service
- Choose whether to redirect HTTP traffic to HTTPS (recommended: Yes)
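Once Certbot finishes, a quick way to confirm HTTPS is working is to request the site over port 443 (this assumes DNS for sudhirkumar.in already points at 18.60.184.7):
# Should return a 2xx/3xx response without certificate warnings
curl -I https://sudhirkumar.in
# Optionally inspect the certificate validity dates
echo | openssl s_client -connect sudhirkumar.in:443 -servername sudhirkumar.in 2>/dev/null | openssl x509 -noout -dates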
Let's Encrypt certificates expire every 90 days. Set up automatic renewal:
- Test automatic renewal:
sudo certbot renew --dry-run
- The certbot package automatically sets up renewal (via a systemd timer on Ubuntu); you can verify it exists:
sudo systemctl status certbot.timer
- List certificates:
sudo certbot certificates
- Revoke a certificate:
sudo certbot revoke --cert-path /etc/letsencrypt/live/sudhirkumar.in/fullchain.pem
- Delete a certificate:
sudo certbot delete --cert-name sudhirkumar.in
If you encounter issues:
- Ensure your domain (sudhirkumar.in) points to the correct IP address (18.60.184.7)
- Verify ports 80 and 443 are accessible (required for HTTP challenge)
- Check Nginx configuration:
sudo nginx -t
- Review Certbot logs: `sudo tail -f /var/log/letsencrypt/letsencrypt.log`
Before you can use AWS services like S3 and RDS, you need to configure your AWS CLI with your credentials.
- Configure AWS CLI using your CSV credentials:
aws configure
Enter the following information from your CSV file:
- AWS Access Key ID: (from your CSV)
- AWS Secret Access Key: (from your CSV)
- Default region name: us-east-1 (or your preferred region)
- Default output format: json
- Verify your configuration:
aws sts get-caller-identity
This should return information about your AWS account.
- Check available regions (optional):
aws ec2 describe-regions --output table
You can also manually configure your credentials file:
- Create the AWS credentials directory:
mkdir -p ~/.aws
- Create/edit the credentials file:
nano ~/.aws/credentials
Add your credentials:
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
- Create/edit the config file:
nano ~/.aws/config
Add your configuration:
[default]
region = us-east-1
output = json
- List your S3 buckets:
aws s3 ls
- List your RDS instances:
aws rds describe-db-instances
- List your EC2 instances:
aws ec2 describe-instances
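If you work with more than one set of credentials, the AWS CLI also supports named profiles; this is optional, and the profile name myprofile below is only an example:
# Store a second set of credentials under a named profile
aws configure --profile myprofile
# Use the profile for a single command
aws s3 ls --profile myprofile
# Or select it for the whole shell session
export AWS_PROFILE=myprofile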
Even without extensive AWS permissions, you can set up SSL certificates directly on your EC2 instance using Let's Encrypt.
- Update package list and install Certbot:
sudo apt update
sudo apt install certbot python3-certbot-nginx -y
- Stop Nginx temporarily (required for standalone verification):
sudo systemctl stop nginx
- Obtain SSL certificate:
sudo certbot certonly --standalone -d your-domain.com -d www.your-domain.com
Replace your-domain.com with your actual domain name.
- Start Nginx again:
sudo systemctl start nginx
- Edit your Nginx configuration:
sudo nano /etc/nginx/sites-available/default
- Add SSL server block:
server {
    listen 80;
    server_name your-domain.com www.your-domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name your-domain.com www.your-domain.com;

    ssl_certificate /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
- Test and reload Nginx:
sudo nginx -t
sudo systemctl reload nginx
- Test automatic renewal:
sudo certbot renew --dry-run
- Create a cron job for automatic renewal:
sudo crontab -e
Add this line to run renewal once a day at noon:
0 12 * * * /usr/bin/certbot renew --quiet
- Check certificate expiration:
echo | openssl s_client -connect your-domain.com:443 2>/dev/null | openssl x509 -noout -dates
- View certificate details:
sudo openssl x509 -in /etc/letsencrypt/live/your-domain.com/cert.pem -text -noout
- Check Certbot logs:
sudo tail -f /var/log/letsencrypt/letsencrypt.log
Amazon S3 (Simple Storage Service) is an object storage service that offers industry-leading scalability, data availability, security, and performance. You can use S3 to store static assets for your website, backups, or as a data lake for analytics.
- Create an S3 bucket:
# Install AWS CLI if not already installed
sudo apt update
sudo apt install awscli -y
# Configure AWS CLI with your credentials
aws configure
# Enter your AWS Access Key ID, Secret Access Key, region (us-east-1), and output format (json)
# Create S3 bucket (replace 'your-unique-bucket-name' with a globally unique name)
aws s3 mb s3://your-unique-bucket-name
# Enable static website hosting
aws s3 website s3://your-unique-bucket-name --index-document index.html --error-document error.html
- Upload your website files:
# Upload files to your S3 bucket
aws s3 cp /var/www/html/ s3://your-unique-bucket-name --recursive
# Set public read permissions
aws s3api put-bucket-policy --bucket your-unique-bucket-name --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-unique-bucket-name/*"
    }
  ]
}'
- Access your website: Your website will be available at:
http://your-unique-bucket-name.s3-website-us-east-1.amazonaws.com
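When you update the site later, aws s3 sync uploads only the files that changed, which is usually more convenient than cp --recursive (same placeholder bucket name as above):
aws s3 sync /var/www/html/ s3://your-unique-bucket-name --delete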
You can configure Nginx to serve media assets directly from S3 while keeping your main application on EC2:
- Modify your Nginx configuration:
sudo nano /etc/nginx/sites-available/default
Add a location block for media assets:
location /media/ {
    proxy_pass https://your-s3-bucket.s3.amazonaws.com/;
    proxy_set_header Host your-s3-bucket.s3.amazonaws.com;
}
- Test and reload Nginx:
sudo nginx -t
sudo systemctl reload nginx
Set up automated backups of your EC2 instance data to S3:
- Create a backup script:
nano ~/backup-to-s3.sh
Add the following content:
#!/bin/bash
DATE=$(date +%Y-%m-%d-%H-%M-%S)
BACKUP_NAME="backup-$DATE.tar.gz"

# Create archive of important directories
tar -czf /tmp/$BACKUP_NAME /var/www/html /etc/nginx

# Upload to S3
aws s3 cp /tmp/$BACKUP_NAME s3://your-backup-bucket/backups/

# Remove local backup file
rm /tmp/$BACKUP_NAME

echo "Backup completed: $BACKUP_NAME"
- Make the script executable and schedule it:
chmod +x ~/backup-to-s3.sh
# Add to crontab to run daily at 2 AM
(crontab -l 2>/dev/null; echo "0 2 * * * /home/ubuntu/backup-to-s3.sh") | crontab -
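Restoring from one of these backups is roughly the reverse of the script; the file name below is illustrative, so list the bucket first to find the archive you want (tar stored the paths without the leading /, so extracting with -C / puts the files back in place):
# Find the backup you want
aws s3 ls s3://your-backup-bucket/backups/
# Download and extract it
aws s3 cp s3://your-backup-bucket/backups/backup-2025-01-01-02-00-00.tar.gz /tmp/
sudo tar -xzf /tmp/backup-2025-01-01-02-00-00.tar.gz -C /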
- Block public access unless absolutely necessary:
aws s3api put-public-access-block --bucket your-bucket-name --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
- Enable server-side encryption:
aws s3api put-bucket-encryption --bucket your-bucket-name --server-side-encryption-configuration '{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }
  ]
}'
- Enable versioning for data protection:
aws s3api put-bucket-versioning --bucket your-bucket-name --versioning-configuration Status=Enabled
- Enable logging:
aws s3api put-bucket-logging --bucket your-bucket-name --bucket-logging-status '{
  "LoggingEnabled": {
    "TargetBucket": "your-log-bucket",
    "TargetPrefix": "logs/"
  }
}'
Your MySQL database is hosted on Amazon RDS (Relational Database Service) in the AWS cloud. Here's how to view and manage it in the AWS Console:
- Sign in to the AWS Console:
- Go to https://aws.amazon.com/console/
- Sign in with your AWS credentials
- Navigate to RDS:
- In the AWS Console, click on "Services" in the top navigation bar
- Under "Database", click on "RDS"
- Find Your Database Instance:
- In the left sidebar, click on "Databases"
- You should see your database instance named "mydbinstance" in the list
- The status should show as "Available"
- View Database Details:
- Click on your database instance name ("mydbinstance") to view its details
- Here you can see:
- Endpoint: The DNS name you use to connect to your database
- Port: 3306 (MySQL default)
- Engine version: MySQL 8.0.43
- Status: Available
- Storage: 20 GB
- Instance class: db.t3.micro
- Availability zone: us-east-1a
- Security groups
- Parameter groups
- Backup information
Your database is physically hosted on AWS infrastructure in the US East (N. Virginia) region:
- Region: us-east-1 (US East - N. Virginia)
- Availability Zone: us-east-1a
- Endpoint: mydbinstance.cuz2ueeq2r1m.us-east-1.rds.amazonaws.com
- Port: 3306
- Your EC2 instance has IP addresses 18.60.184.7 (public) and 172.31.11.186 (private)
- Your RDS instance is in VPC vpc-01830a3ec6f193c0c in us-east-1a
If your EC2 instance is in a different region or VPC, you have several options:
- Use the Public Endpoint (Recommended for learning):
- Your RDS instance is configured as PubliclyAccessible: true
- You can connect using the public endpoint from anywhere on the internet
- This is less secure but simpler for development/testing
- VPC Peering (Recommended for production):
- Create a VPC peering connection between your EC2 VPC and RDS VPC
- Update route tables in both VPCs
- Configure security groups appropriately
- VPN Connection:
- Set up a VPN connection between your networks
- Route traffic through the VPN tunnel
- AWS Transit Gateway:
- For complex network topologies with multiple VPCs
To connect from your EC2 instance to your RDS instance across different regions/VPCs:
- Using Public Endpoint (simplest approach):
mysql -h mydbinstance.cuz2ueeq2r1m.us-east-1.rds.amazonaws.com -P 3306 -u adminuser -p
- Security Group Configuration:
- Your RDS security group (sg-01e6aa0f0210f8cdc) needs to allow inbound connections on port 3306
- From either:
- Your EC2 instance's public IP address (18.60.184.7/32)
- A wider IP range if needed for development
- Testing Connectivity:
- First, test basic network connectivity:
telnet mydbinstance.cuz2ueeq2r1m.us-east-1.rds.amazonaws.com 3306
- If telnet is not available:
nc -zv mydbinstance.cuz2ueeq2r1m.us-east-1.rds.amazonaws.com 3306
To connect to your database from outside AWS (like from your local computer), you'll need:
- The endpoint: mydbinstance.cuz2ueeq2r1m.us-east-1.rds.amazonaws.com
- The port: 3306
- The master username: adminuser
- The password you specified when creating the instance
From your EC2 instance, you can connect using:
mysql -h mydbinstance.cuz2ueeq2r1m.us-east-1.rds.amazonaws.com -P 3306 -u adminuser -p
Your RDS instance is currently using the default security group (sg-01e6aa0f0210f8cdc) for VPC vpc-01830a3ec6f193c0c. The current configuration only allows:
- Inbound Traffic:
- All protocols (-1) from other instances in the same security group
- No explicit rules allowing external connections on port 3306
- Outbound Traffic:
- All traffic (0.0.0.0/0) - unrestricted outbound access
To connect from your EC2 instance at 18.60.184.7, you need to add an inbound rule to allow MySQL traffic:
- In the AWS Console (EC2 > Security Groups):
- Find security group sg-01e6aa0f0210f8cdc
- Add inbound rule:
- Type: MySQL/Aurora (or Custom TCP)
- Protocol: TCP
- Port Range: 3306
- Source: 18.60.184.7/32
- Description: "MySQL access from EC2 instance"
- Alternatively, using AWS CLI (if you have permissions):
aws ec2 authorize-security-group-ingress \
  --group-id sg-01e6aa0f0210f8cdc \
  --protocol tcp \
  --port 3306 \
  --cidr 18.60.184.7/32
⚠️ Note: If you receive an "UnauthorizedOperation" error, it means your AWS user doesn't have the necessary permissions (ec2:AuthorizeSecurityGroupIngress). In this case, you'll need to ask your AWS administrator to add this rule for you.
- Using CloudWatch Metrics:
- In the RDS console, select your database instance
- Go to the "Monitoring" tab
- View metrics like CPU utilization, memory usage, disk I/O, and network throughput
- Set up alarms for critical metrics (a CLI sketch follows this list)
- Enhanced Monitoring:
- Your instance has enhanced monitoring enabled (visible in the "Monitoring" tab)
- Provides more granular metrics at the operating system level
- Performance Insights (if enabled):
- Helps identify performance bottlenecks
- Shows SQL statements consuming the most resources
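For the alarms mentioned above, a minimal CLI sketch for a CPU alarm on this instance could look like the following; the 80% threshold and the SNS topic ARN are placeholders you would adjust:
aws cloudwatch put-metric-alarm \
  --alarm-name mydbinstance-high-cpu \
  --namespace AWS/RDS \
  --metric-name CPUUtilization \
  --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts-topic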
- Automated Backups:
- Your instance currently has backup retention set to 0 days (disabled)
- To enable automated backups:
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --backup-retention-period 7 \
  --apply-immediately
- Manual Snapshots:
- Create a snapshot of your database at any time:
aws rds create-db-snapshot \
  --db-instance-identifier mydbinstance \
  --db-snapshot-identifier mydbinstance-snapshot-$(date +%Y%m%d)
- Restoring from Snapshots:
- Restore your database from a snapshot:
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier mydbinstance-restored \
  --db-snapshot-identifier mydbinstance-snapshot-20251215
- Scaling Storage:
- Increase allocated storage:
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --allocated-storage 30 \
  --apply-immediately
- Scaling Compute:
- Change the instance class:
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --db-instance-class db.t3.small \
  --apply-immediately
- Read Replicas (for read-heavy workloads):
- Create a read replica:
aws rds create-db-instance-read-replica \
  --db-instance-identifier mydbinstance-replica \
  --source-db-instance-identifier mydbinstance
- Regular Updates:
- Enable auto minor version upgrade:
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --auto-minor-version-upgrade \
  --apply-immediately
- Enable Encryption (for new instances):
- When creating a new encrypted instance:
aws rds create-db-instance \
  --db-instance-identifier myencryptedinstance \
  --db-instance-class db.t3.micro \
  --engine mysql \
  --master-username admin \
  --master-user-password yourpassword \
  --allocated-storage 20 \
  --storage-encrypted
- Enable Deletion Protection:
- Protect against accidental deletion:
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --deletion-protection \
  --apply-immediately
Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks.
You've now created a new RDS instance in the same region and VPC as your EC2 instance, which will simplify connectivity.
Your new RDS instance has the following configuration:
- Region: ap-south-2 (Asia Pacific - Hyderabad)
- Instance Name: mydbinstance
- Engine: MySQL 8.0.43
- Instance Class: db.t3.micro
- Storage: 20 GB GP2
- Master Username: adminuser
- VPC: vpc-0df0eef60eb4881c6 (same as your EC2 instance)
- Security Group: sg-033a5beaa1e43ca65 (same as your EC2 instance)
- Subnet Group: mydb-subnet-group (created for this purpose)
- Publicly Accessible: Yes
- Backup Retention: 1 day (automatically enabled)
Your RDS instance is currently being created. You can monitor its status with:
aws rds describe-db-instances --db-instance-identifier mydbinstance
Look for the DBInstanceStatus field in the response:
- creating: The instance is being provisioned
- available: The instance is ready for use
- modifying: The instance is being modified
- backing-up: The instance is being backed up
✅ Your RDS instance is now fully available and operational! The initial backup is being performed, which is normal for a new instance.
You can check the status with:
aws rds describe-db-instances --db-instance-identifier mydbinstance --query 'DBInstances[*].[DBInstanceIdentifier,DBInstanceStatus]' --output table
Once your instance status shows as available, you can get the endpoint:
aws rds describe-db-instances --db-instance-identifier mydbinstance --query 'DBInstances[0].Endpoint.Address' --output text
✅ Your RDS instance endpoint is now available!
- Endpoint: mydbinstance.c3y8ucwsg7fo.ap-south-2.rds.amazonaws.com
- Port: 3306 (default MySQL port)
Since your RDS instance and EC2 instance are in the same region and VPC, connecting to the database should be straightforward:
- SSH into your EC2 instance:
ssh -i aws-ec2-key-sk.pem ubuntu@18.60.184.7
- Install MySQL client on your EC2 instance (if not already installed):
sudo apt update
sudo apt install mysql-client -y
- Connect to your RDS instance:
mysql -h mydbinstance.c3y8ucwsg7fo.ap-south-2.rds.amazonaws.com -P 3306 -u adminuser -p
When prompted, enter the password you specified when creating the RDS instance: YourSecurePassword123!
- Test the connection with a simple query:
SHOW DATABASES;
CREATE DATABASE myapp;
USE myapp;
CREATE TABLE users (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100), email VARCHAR(100));
INSERT INTO users (name, email) VALUES ('John Doe', 'john@example.com');
SELECT * FROM users;
Now that you have a working EC2-RDS setup, let's explore scaling options to handle increased traffic and ensure high availability.
A load balancer distributes incoming traffic across multiple instances, improving availability and fault tolerance.
- Create a Security Group for the Load Balancer:
aws ec2 create-security-group \
  --group-name my-alb-sg \
  --description "Security group for ALB" \
  --vpc-id vpc-0df0eef60eb4881c6
✅ Security group created successfully with ID: sg-0278713dad1019abc
Then add inbound rules:
aws ec2 authorize-security-group-ingress \
  --group-id sg-0278713dad1019abc \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0
✅ Inbound rule added successfully to allow HTTP traffic
- Create the Application Load Balancer:
aws elbv2 create-load-balancer \
  --name my-alb \
  --subnets subnet-0321c1fd1b18323a2 subnet-0d1e2e6d2208874a0 \
  --security-groups sg-0278713dad1019abc \
  --type application
✅ Load balancer created successfully with name: my-alb
- DNS Name: my-alb-464169252.ap-south-2.elb.amazonaws.com
- Status: provisioning
- Create a Target Group:
aws elbv2 create-target-group \
  --name my-targets \
  --protocol HTTP \
  --port 80 \
  --vpc-id vpc-0df0eef60eb4881c6 \
  --health-check-path /
✅ Target group created successfully with name: my-targets
- ARN: arn:aws:elasticloadbalancing:ap-south-2:238323584764:targetgroup/my-targets/feca944debb1eba3
- Register Instances with the Target Group:
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:ap-south-2:238323584764:targetgroup/my-targets/feca944debb1eba3 \
  --targets Id=i-02fd0cbd8638764ee Id=i-0e298b4fbac6540e7
✅ Instances registered successfully with the target group
- Create a Listener:
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:ap-south-2:238323584764:loadbalancer/app/my-alb/20dc71c10e9c90eb \
  --protocol HTTP \
  --port 80 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:ap-south-2:238323584764:targetgroup/my-targets/feca944debb1eba3
✅ Listener created successfully
- ARN: arn:aws:elasticloadbalancing:ap-south-2:238323584764:listener/app/my-alb/20dc71c10e9c90eb/9c9cfc2637c017b1
✅ Load balancer is now active and ready to distribute traffic!
- DNS Name: my-alb-464169252.ap-south-2.elb.amazonaws.com
You can check the load balancer status with:
aws elbv2 describe-load-balancers --names my-alb
And check target health with:
aws elbv2 describe-target-health --target-group-arn arn:aws:elasticloadbalancing:ap-south-2:238323584764:targetgroup/my-targets/feca944debb1eba3
- Install a web server on your instances:
# SSH into each instance and install Nginx
sudo apt update
sudo apt install nginx -y
sudo systemctl start nginx
sudo systemctl enable nginx
- Or modify your launch template to automatically install a web server when new instances are launched.
Once a web server is running on port 80, the targets should become healthy and the load balancer will be able to distribute traffic to your instances.
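Once the targets report healthy, you can send a test request to the load balancer's DNS name to confirm traffic is being forwarded:
curl -I http://my-alb-464169252.ap-south-2.elb.amazonaws.com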
- High Availability: Distributes traffic across multiple instances
- Fault Tolerance: Automatically routes traffic away from unhealthy instances
- Scalability: Seamlessly integrates with Auto Scaling groups
- Security: Centralized SSL termination and security policies
Auto Scaling groups automatically adjust the number of EC2 instances based on demand, ensuring your application has the right amount of compute capacity.
- Create a Launch Template (defines how instances are launched):
aws ec2 create-launch-template \
  --launch-template-name my-ec2-template \
  --launch-template-data '{
    "ImageId": "ami-0e7938ad51d883574",
    "InstanceType": "t3.micro",
    "KeyName": "aws-ec2-key-sk",
    "SecurityGroupIds": ["sg-033a5beaa1e43ca65"],
    "UserData": "IyEvYmluL2Jhc2gKdnVtYW4gLXkgeHVhcnQ="
  }'
✅ Launch template created successfully with ID: lt-0acfad935d9a11f6a
- Create an Auto Scaling Group:
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-template LaunchTemplateName=my-ec2-template \
  --min-size 1 \
  --max-size 3 \
  --desired-capacity 2 \
  --availability-zones ap-south-2a ap-south-2b \
  --vpc-zone-identifier "subnet-0321c1fd1b18323a2,subnet-0d1e2e6d2208874a0"
✅ Auto Scaling group created successfully with name: my-asg
- Verify Auto Scaling Group:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names my-asg
✅ Auto Scaling group verified successfully
- Desired capacity: 2 instances
- Launched instances:
- i-02fd0cbd8638764ee (running, Public IP: 40.192.121.224)
- i-0e298b4fbac6540e7 (running, Public IP: 18.60.148.223)
- Cost Efficiency: Automatically scale down during low demand
- High Availability: Maintain desired capacity even if instances fail
- Performance: Scale up to meet increased demand
- Automation: No manual intervention required
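Two optional follow-up commands tie the Auto Scaling group to the load balancer and make it scale on load; the 50% CPU target below is just an example value:
# Register instances launched by the ASG with the ALB target group
aws autoscaling attach-load-balancer-target-groups \
  --auto-scaling-group-name my-asg \
  --target-group-arns arn:aws:elasticloadbalancing:ap-south-2:238323584764:targetgroup/my-targets/feca944debb1eba3
# Scale out/in to keep average CPU around 50%
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'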
Read replicas help distribute read traffic across multiple database instances, improving performance and availability.
- Create a Read Replica of your existing RDS instance:
aws rds create-db-instance-read-replica \
  --db-instance-identifier mydbinstance-replica \
  --source-db-instance-identifier mydbinstance \
  --db-instance-class db.t3.micro \
  --availability-zone ap-south-2b
✅ Read replica creation initiated successfully with identifier: mydbinstance-replica
- Status: creating (may take 5-10 minutes to complete)
- Source: mydbinstance
- Availability Zone: ap-south-2b
- Monitor Replica Status:
aws rds describe-db-instances --db-instance-identifier mydbinstance-replica
⏳ Read replica is still being created. This process typically takes 5-10 minutes to complete.
Once the replica status shows as "available", you can get its endpoint:
aws rds describe-db-instances --db-instance-identifier mydbinstance-replica --query 'DBInstances[0].Endpoint.Address' --output text
- Direct Read Traffic: Once available, route read queries to the replica:
mysql -h mydbinstance-replica.c3y8ucwsg7fo.ap-south-2.rds.amazonaws.com -P 3306 -u adminuser -p
- Application-Level Routing: Configure your application to:
- Send write operations to the primary instance (mydbinstance)
- Send read operations to the replica (mydbinstance-replica)
- Improved Performance: Distribute read load across multiple instances
- Better Availability: Read operations can continue even if primary is unavailable
- Geographic Distribution: Create replicas in different regions for global access
- Analytics Workloads: Offload reporting and analytics queries from the primary
- Plan for Scaling:
- Monitor metrics to understand scaling triggers
- Test scaling operations before implementing in production
- Set appropriate minimum and maximum limits
- Database Connection Management:
- Use connection pooling to efficiently manage database connections
- Implement retry logic for transient failures during scaling events
- Health Checks:
- Configure proper health checks for Auto Scaling
- Monitor replica lag for RDS read replicas
- Cost Optimization:
- Use Spot Instances in Auto Scaling groups for non-critical workloads
- Schedule scaling based on predictable traffic patterns
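For the scheduled-scaling point above, a minimal example using scheduled actions looks like this (times and capacities are illustrative):
# Scale out every weekday morning at 08:00 UTC
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-asg \
  --scheduled-action-name scale-out-morning \
  --recurrence "0 8 * * 1-5" \
  --desired-capacity 3
# Scale back in every weekday evening at 20:00 UTC
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-asg \
  --scheduled-action-name scale-in-evening \
  --recurrence "0 20 * * 1-5" \
  --desired-capacity 1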
Once your RDS instance status shows as available, you can get the endpoint to connect to it:
aws rds describe-db-instances --db-instance-identifier mydbinstance --query 'DBInstances[0].Endpoint.Address' --output text
This will return the endpoint address, which will look something like:
mydbinstance.abcdefg1234567.us-east-1.rds.amazonaws.com
In your case, the endpoint is:
mydbinstance.cuz2ueeq2r1m.us-east-1.rds.amazonaws.com
- Install MySQL client on your EC2 instance:
sudo apt update
sudo apt install mysql-client -y
- Connect to your RDS instance:
mysql -h mydbinstance.cuz2ueeq2r1m.us-east-1.rds.amazonaws.com -P 3306 -u adminuser -p
When prompted, enter the password you used when creating the RDS instance: YourSecurePassword123!
- Test the connection with a simple query:
SHOW DATABASES;
CREATE DATABASE myapp;
USE myapp;
CREATE TABLE users (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100), email VARCHAR(100));
INSERT INTO users (name, email) VALUES ('John Doe', 'john@example.com');
SELECT * FROM users;
To connect to your RDS instance from your EC2 instance, you may need to modify the security group associated with your RDS instance:
- Get your EC2 instance's security group ID:
aws ec2 describe-instances --instance-ids i-00d8a7ab13c96d8c2 --query 'Reservations[0].Instances[0].SecurityGroups[0].GroupId' --output text
Note: If you encounter permission errors, you may need to ask your AWS administrator for the security group ID or find it in the EC2 Console.
- Add inbound rule to RDS security group:
aws ec2 authorize-security-group-ingress \
  --group-id sg-01e6aa0f0210f8cdc \
  --protocol tcp \
  --port 3306 \
  --source-group YOUR_EC2_SECURITY_GROUP_ID
Replace YOUR_EC2_SECURITY_GROUP_ID with the security group ID of your EC2 instance.
Note: If you don't have permissions to modify security groups, ask your AWS administrator to add an inbound rule allowing TCP traffic on port 3306 from your EC2 instance's security group.
If you're unable to connect to your RDS instance, check these common issues:
- Security Group Configuration:
- Ensure the RDS security group allows inbound connections on port 3306 from your EC2 instance's security group
- Verify that your EC2 instance's security group allows outbound connections on port 3306
- Network Connectivity:
- Confirm that both your EC2 instance and RDS instance are in the same VPC or have proper routing between them
- Check that the RDS instance is not in a private subnet without proper NAT configuration (unless connecting from within the VPC)
- RDS Instance Status:
- Verify that the RDS instance status is "available" before attempting to connect
- Check the RDS console for any maintenance or backup activities that might affect connectivity
- Connection String:
- Double-check the endpoint address, port, username, and password
- Ensure you're using the correct port (3306 for MySQL)
- MySQL Client Installation:
- Confirm that the MySQL client is properly installed on your EC2 instance:
mysql --version
- Firewall Rules:
- Check if there are any firewall rules on your EC2 instance blocking outbound connections on port 3306:
sudo iptables -L
If you encounter permission errors when trying to describe your RDS instances, it may be due to insufficient IAM permissions. The user who creates an RDS instance may not automatically have permissions to describe or modify it.
To resolve this issue:
- Check your current permissions:
aws sts get-caller-identity
- Request additional RDS permissions from your AWS administrator:
- rds:DescribeDBInstances
- rds:ModifyDBInstance
- rds:DeleteDBInstance
- rds:RebootDBInstance
- Alternative: Use the AWS Console to monitor your RDS instance status and get endpoint information if CLI access is restricted.
- Wait for the instance to be fully available before attempting connections. You can check the AWS RDS Console for status updates.
- Install MySQL client on your EC2 instance:
sudo apt update
sudo apt install mysql-client -y
- Connect to your RDS instance:
mysql -h your-rds-endpoint.region.rds.amazonaws.com -P 3306 -u admin -p
- Test the connection with a simple query:
SHOW DATABASES;
CREATE DATABASE myapp;
USE myapp;
CREATE TABLE users (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100), email VARCHAR(100));
INSERT INTO users (name, email) VALUES ('John Doe', 'john@example.com');
SELECT * FROM users;
- Update your application configuration to use RDS:
# Example for a Node.js application
nano ~/myapp/config/database.js
module.exports = {
  host: 'your-rds-endpoint.region.rds.amazonaws.com',
  user: 'admin',
  password: 'your-secure-password',
  database: 'myapp',
  port: 3306
};
- Set up environment variables for security:
nano ~/.bashrc
Add these lines:
export DB_HOST=your-rds-endpoint.region.rds.amazonaws.com
export DB_USER=admin
export DB_PASSWORD=your-secure-password
export DB_NAME=myapp
export DB_PORT=3306
Reload your shell:
source ~/.bashrc
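With the variables loaded, you can sanity-check the connection without hard-coding credentials in the command (this assumes the variables above are set in your current shell):
mysql -h "$DB_HOST" -P "$DB_PORT" -u "$DB_USER" -p"$DB_PASSWORD" "$DB_NAME" -e "SELECT 1;"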
- Create a dedicated security group for RDS:
aws ec2 create-security-group \
  --group-name RDS-Security-Group \
  --description "Security group for RDS" \
  --vpc-id vpc-your-vpc-id
# Allow access only from your EC2 security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-rds-security-group-id \
  --protocol tcp \
  --port 3306 \
  --source-group sg-ec2-security-group-id
- Enable automatic backups:
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --backup-retention-period 7 \
  --apply-immediately
- Enable Multi-AZ deployment for high availability:
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --multi-az \
  --apply-immediately
- Enable Enhanced Monitoring:
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --monitoring-interval 60 \
  --monitoring-role-arn arn:aws:iam::account-id:role/rds-monitoring-role \
  --apply-immediately
- View logs:
aws rds describe-db-log-files --db-instance-identifier mydbinstance
# Download log files
aws rds download-db-log-file-portion \
  --db-instance-identifier mydbinstance \
  --log-file-name error/mysql-error-running.log \
  --output text
- Performance Insights:
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --enable-performance-insights \
  --apply-immediately
- Use IAM authentication instead of passwords when possible
- Regularly update your database engine to the latest version
- Implement read replicas for read-heavy workloads
- Use parameter groups to customize database configuration
- Enable encryption at rest and in transit
- Set up CloudWatch alarms for monitoring key metrics