Integration of Terraform with AWS Cloud

Apache web-server Automation with AWS Using Terraform Tool

Shreyas Basutkar

--

Hello World! In this article, I will give an overview of Terraform and show how to create an EC2 instance, and how to use S3 and CloudFront for storing static objects, all using Terraform. We will see how to deploy a static webpage with Terraform in just one click. First, we should answer: What is Terraform? Why do we use Terraform? How can we launch our entire webpage?

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

This article builds on one of my previous Terraform articles, Task_1:

Instead of EBS, we are using EFS (Elastic File System) here.

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Why do we use Terraform?

In AWS, launching an EC2 instance from the console takes many manual steps: first we select the required AMI, then the instance type, then we configure the instance, select the Elastic Block Store (EBS) volume, name the OS by adding tags to it, configure some security protocols, create a key pair for the OS, and only then launch the instance.

Instead of this, we can launch the entire setup through Terraform code.
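As a minimal sketch of the idea (the AMI ID and key name here are placeholders, not the final task code), the whole console workflow above collapses into a single resource block:

### Minimal sketch: one resource block replaces the console wizard ###

resource "aws_instance" "demo" {
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
key_name = "MyTaskKey"

tags = {
Name = "demo"
}
}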

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

How can we launch our entire webpage using Terraform?

The steps to launch our setup are listed below:

0. First, configure a profile for your AWS account.

  1. Create a key pair and a security group that allows ports 80 and 22.
  2. Launch an EC2 instance with the key pair and security group created in step 1.
  3. Create an Elastic File System (EFS) volume.
  4. Attach the EFS volume to your instance.
  5. Create an S3 bucket, copy/deploy the images from the GitHub repo into the bucket, and change the permission to public-readable.
  6. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
  7. Mount the EFS volume and download the code from the GitHub repository.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Step 1: Configuring the AWS Profile

### Configuring the information of provider ###

provider "aws" {
region = "ap-south-1"
profile = "awsShreyas"
}

Here we declare the provider "aws", the region "ap-south-1", and the profile name "awsShreyas".

Step 2: Creation of a new profile using the AWS CLI

$ aws configure --profile awsShreyas

This command configures the new profile.

The Output of Step2:

Step 3: Creation of EC2 key-pair using terraform

This is the code for an EC2 key pair. Running it creates the key successfully.

resource "aws_key_pair" "task_key" {
key_name = "MyTaskKey"
public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDzIc4J8R/kk2VegmZ+Hh7Ax2RGapK/7DtSX5FgNkrFXAUq1AwPhNd5/d7YdQCg/WlmJJuzwPfmsEDKGlsUpPUOopxiFwNBqYoYV5L1XHgLOZ8Q4dAVSaQM1Xm/tynkRhKt"
}

The Output of Step3:
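As an alternative sketch (an assumption, not the code used in this task), the key pair can also be generated entirely inside Terraform with the hashicorp/tls provider, so no public key needs to be pasted in:

### Alternative sketch: generate the key pair inside Terraform ###

resource "tls_private_key" "task_key" {
algorithm = "RSA"
rsa_bits = 4096
}

resource "aws_key_pair" "generated_key" {
key_name = "MyTaskKey"
public_key = tls_private_key.task_key.public_key_openssh
}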

Step 4: Creation of a security group allowing ports 80 (HTTP), 22 (SSH), and 443 (HTTPS)

### Security groups for task ###

resource "aws_security_group" "my_security_group" {
name = "shrisecurity"
description = "Allow TCP"
vpc_id = "vpc-1ef7ea76"

ingress {
description = "Launch-Wizard-Created"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
description = "Launch-Wizard-Created"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
description = "Launch-Wizard-Created"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

tags = {
Name = "Allow TCP"
}
}

We have created a new AWS security group tagged Allow TCP. It allows the HTTP, HTTPS, and SSH protocols.

The Output of Step 4:-

Step 5: Creation of EC2 instance

### Creating the EC2-Instance ###

resource "aws_instance" "web1" {
key_name = "abcd"
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"

security_groups = [ "shrisecurity" ]

connection {
type = "ssh"
user = "ec2-user"
private_key = file("/home/shreyas/Downloads/abcd.pem")
host = aws_instance.web1.public_ip
}

provisioner "remote-exec" {
inline = [
"sudo yum install httpd php git -y",
"sudo systemctl restart httpd",
"sudo systemctl enable httpd",
]
}

tags = {
Name = "MyTerraformOS"
}
}

We have launched the EC2 instance named MyTerraformOS.

The Output of Step 5:-

Step 6: Creation of EFS volume for persistent storage

### creating EFS ###

resource "aws_efs_file_system" "myefs" {
creation_token = "myefs"
performance_mode = "generalPurpose"

tags = {
Name = "myefs1"
}
}

### Attach EFS Volume to the instance ###

resource "aws_efs_mount_target" "myefs-mount" {
file_system_id = aws_efs_file_system.myefs.id
subnet_id = "subnet-0e22a775"
security_groups = [ aws_security_group.my_security_group.id ]
}

So, now we have created the new EFS volume named myefs1 and attached it to the EC2 instance through a mount target.

The Output of step6:

Step 7: Creation of an S3 bucket, uploading data (an image) from the GitHub repository, and updating the S3 bucket policy to grant access to CloudFront

### Creating S3 bucket and uploading a pic to the bucket from GitHub ###

resource "aws_s3_bucket" "mytaskbucketshreyas24" {
bucket = "mytaskbucketshreyas24"
acl = "public-read"

tags = {
Name = "My-task-bucket"
Environment = "Dev"
}
}

### Bucket object and uploading an Image ###

resource "aws_s3_bucket_object" "object" {
bucket = aws_s3_bucket.mytaskbucketshreyas24.bucket
acl = "public-read"
key = "Shreyas.jpeg"
source = "images/Shreyas.jpeg"
depends_on = [ aws_s3_bucket.mytaskbucketshreyas24, null_resource.image ]
}

We have successfully created the S3 bucket named mytaskbucketshreyas24 and uploaded the image Shreyas.jpeg from the GitHub repo.

The Output of step7:
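Note that the depends_on above references null_resource.image, which is not shown in this article. A hypothetical sketch of what such a resource might look like, assuming it simply clones the GitHub repo into the local images/ folder before the upload:

### Hypothetical sketch of null_resource.image (not shown in the original code) ###

resource "null_resource" "image" {
provisioner "local-exec" {
command = "git clone https://github.com/B-Shreyas/multicloud.git images"
}
}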

Step 8: Creation of a CloudFront distribution for faster access to images and reduced latency

### make cloudfront distribution ###

resource "aws_cloudfront_distribution" "my_distributiontask" {
origin {
domain_name = aws_s3_bucket.mytaskbucketshreyas24.bucket_regional_domain_name
origin_id = aws_s3_bucket.mytaskbucketshreyas24.id

custom_origin_config {
http_port = 80
https_port = 443
origin_protocol_policy = "match-viewer"
origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"]
}
}

default_root_object = "Shreyas.jpeg"
enabled = true

custom_error_response {
error_caching_min_ttl = 3000
error_code = 404
response_code = 200
response_page_path = "/index.php"
}

default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = aws_s3_bucket.mytaskbucketshreyas24.id

forwarded_values {
query_string = false

cookies {
forward = "none"
}
}

viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}

restrictions {
geo_restriction {
restriction_type = "whitelist"
locations = ["US", "CA", "GB", "DE", "IN"]
}
}

# viewer_certificate is a required block; here we use the default CloudFront certificate
viewer_certificate {
cloudfront_default_certificate = true
}
}

In this step, we have created a CloudFront distribution using the S3 bucket.

The Output of Step8:-
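As a small addition (not part of the original code), an output block can expose the distribution's domain name, so the CDN URL is printed right after terraform apply:

### Sketch: print the CloudFront domain after apply ###

output "cloudfront_domain" {
value = aws_cloudfront_distribution.my_distributiontask.domain_name
}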

Step 9: Mounting the EFS volume to the /var/www/html folder

### Mounting to html folder ###

resource "null_resource" "nullremote3" {

depends_on = [
aws_efs_mount_target.myefs-mount
]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("/home/shreyas/Downloads/abcd.pem")
host = aws_instance.web1.public_ip
}

# EFS is a network file system, so it is mounted over NFS (no mkfs of a block device is needed)
provisioner "remote-exec" {
inline = [
"sudo mount -t nfs4 -o nfsvers=4.1 ${aws_efs_file_system.myefs.dns_name}:/ /var/www/html",
"sudo rm -rf /var/www/html/*",
"sudo git clone https://github.com/B-Shreyas/multicloud.git /var/www/html/",
"sudo sh -c 'echo ${aws_cloudfront_distribution.my_distributiontask.domain_name} >> /var/www/html/myinstanceip.txt'",
"sudo systemctl restart httpd"
]
}
}

This script mounts the EFS volume to /var/www/html, clones the website code (the index.php file) from GitHub into the document root, and writes the CloudFront domain name into it.

The Output for Step9:-

Step 10: Opening the website by its IP address in the Google Chrome browser

### open the website in chrome ###

resource "null_resource" "nulllocal1" {
depends_on = [
null_resource.nullremote3,
]
provisioner "local-exec" {
command = "google-chrome ${aws_instance.web1.public_ip}"
}
}

Therefore, our website and its code are finally ready to launch publicly in just one click.

Step 11: Terraform init

$ terraform init

This command downloads all the required provider plugins.

Step 12: Terraform apply -auto-approve

$ terraform apply -auto-approve

This command executes the Terraform code. The -auto-approve flag skips the interactive "yes" confirmation.

So finally, our website is deployed successfully.

$ terraform destroy -auto-approve

With the terraform destroy -auto-approve command, we can also tear down the whole setup in one click without any user input.
