Multi-Cloud Automation Coding with Terraform

  1. Introduction

     

Terraform is an open-source Infrastructure as Code (IaC) tool developed by HashiCorp. It lets you define and provision complete infrastructure in human-readable, declarative configuration files that are easy to learn, and it manages your infrastructure's lifecycle for you.

As an infrastructure provisioning tool, it stores your cloud infrastructure setup as code. It is similar to tools such as CloudFormation, which automates AWS infrastructure but works only on AWS; Terraform can be used on other cloud platforms as well. Using Terraform has several advantages over managing your infrastructure manually:

  • Terraform can manage infrastructure on multiple cloud platforms.
  • The human-readable configuration language helps you write infrastructure code quickly.
  • Terraform’s state allows you to track resource changes throughout your deployments.
  • You can commit your configurations to version control to safely collaborate on infrastructure.

 

  2. Why Terraform?

 

  • Manage any infrastructure

Find providers for many of the platforms and services you already use in the Terraform Registry. You can also write your own. Terraform takes an immutable approach to infrastructure, reducing the complexity of upgrading or modifying your services and infrastructure.

  • Track your infrastructure

Terraform generates a plan and prompts you for your approval before modifying your infrastructure. It also keeps track of your real infrastructure in a state file, which acts as a source of truth for your environment. Terraform uses the state file to determine the changes to make to your infrastructure so that it will match your configuration.

  • Automate changes

Terraform configuration files are declarative, meaning that they describe the end state of your infrastructure. You do not need to write step-by-step instructions to create resources because Terraform handles the underlying logic. Terraform builds a resource graph to determine resource dependencies and creates or modifies non-dependent resources in parallel. This allows Terraform to provision resources efficiently.
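As a sketch of how the resource graph works, consider two resources where one references the other (the resource names and CIDR ranges here are illustrative, not from this article's later examples): Terraform sees the reference and orders creation accordingly.

```hcl
# Illustrative only: the subnet references aws_vpc.main.id, so Terraform
# adds a dependency edge and creates the VPC before the subnet, while
# unrelated resources can still be created in parallel.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "app" {
  vpc_id     = aws_vpc.main.id # implicit dependency on the VPC
  cidr_block = "10.0.1.0/24"
}
```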

  • Standardize configurations

Terraform supports reusable configuration components called modules that define configurable collections of infrastructure, saving time and encouraging best practices. You can use publicly available modules from the Terraform Registry, or write your own.

  • Collaborate

Since your configuration is written in a file, you can commit it to a Version Control System (VCS) and use Terraform Cloud to efficiently manage Terraform workflows across teams. Terraform Cloud runs Terraform in a consistent, reliable environment and provides secure access to shared state and secret data, role-based access controls, a private registry for sharing both modules and providers, and more.

 

  3. Core Terraform Components

     

HashiCorp and the Terraform community have already written more than 1700 providers to manage thousands of different types of resources and services, and this number continues to grow. You can find all publicly available providers on the Terraform Registry, including Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), Kubernetes, Helm, GitHub, Splunk, DataDog, and many more.

The core Terraform workflow consists of three stages:

  • Write: You define resources, which may be across multiple cloud providers and services. For example, you might create a configuration to deploy an application on virtual machines in a Virtual Private Cloud (VPC) network with security groups and a load balancer.
  • Plan: Terraform creates an execution plan describing the infrastructure it will create, update, or destroy based on the existing infrastructure and your configuration.
  • Apply: On approval, Terraform performs the proposed operations in the correct order, respecting any resource dependencies. For example, if you update the properties of a VPC and change the number of virtual machines in that VPC, Terraform will recreate the VPC before scaling the virtual machines.

 

  4. Fundamental Concepts

     

  • Variables: Also called input variables, these are key-value pairs used by Terraform modules to allow customization.
  • Provider: A plugin used to interact with the APIs of a service and access its related resources.
  • Module: A folder of Terraform templates where all the configurations are defined.
  • State: Cached information about the infrastructure managed by Terraform and the related configurations.
  • Resources: Blocks of one or more infrastructure objects (compute instances, virtual networks, etc.) used to configure and manage the infrastructure.
  • Data Source: Implemented by providers to return information about external objects to Terraform.
  • Output Values: Return values of a Terraform module that can be used by other configurations.
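These concepts can be seen working together in one small configuration. The sketch below is illustrative: the region, AMI filter, and resource names are assumptions for demonstration, not part of this article's later examples.

```hcl
# Minimal sketch tying the concepts together (all names are hypothetical).
provider "aws" {            # Provider: plugin for the AWS API
  region = "us-east-1"
}

variable "instance_type" {  # Input variable: allows customization
  default = "t2.micro"
}

data "aws_ami" "ubuntu" {   # Data source: returns info on an external object
  most_recent = true
  owners      = ["099720109477"]
  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}

resource "aws_instance" "web" {  # Resource: an infrastructure object
  ami           = data.aws_ami.ubuntu.id
  instance_type = var.instance_type
}

output "public_ip" {             # Output value: usable by other configurations
  value = aws_instance.web.public_ip
}
```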
  5. Terraform syntax, internals, and patterns

     

The syntax of Terraform configurations is called HashiCorp Configuration Language (HCL). It is meant to strike a balance between human readable and editable as well as being machine-friendly. For machine-friendliness, Terraform can also read JSON configurations. For general Terraform configurations, however, we recommend using the HCL Terraform syntax.

 

# An AMI
variable "ami" {
  description = "the AMI to use"
}

/* A multi
   line comment. */
resource "aws_instance" "web" {
  ami               = "${var.ami}"
  count             = 2
  source_dest_check = false

  connection {
    user = "root"
  }
}

 

Basic bullet point reference:

  • Single-line comments start with #
  • Multi-line comments are wrapped with /* and */
  • Values are assigned with the syntax key = value (whitespace doesn't matter). The value can be any primitive (string, number, boolean), a list, or a map.
  • Strings are in double quotes.
  • Strings can interpolate other values using syntax wrapped in ${}, such as ${var.foo}. The full syntax for interpolation is covered in the Terraform documentation.
  • Multiline strings can use shell-style "here doc" syntax, with the string starting with a marker like <<EOF and ending with EOF on a line of its own. The lines of the string and the end marker must not be indented.
  • Numbers are assumed to be base 10. If you prefix a number with 0x, it is treated as a hexadecimal number.
  • Boolean values: true, false.
  • Lists of primitive types can be made with square brackets ([]). Example: ["foo", "bar", "baz"].
  • Maps can be made with braces ({}) and colons (:): { "foo": "bar", "bar": "baz" }. Quotes may be omitted on keys unless the key starts with a number, in which case quotes are required. Commas are required between key/value pairs in single-line maps; a newline between key/value pairs is sufficient in multi-line maps.
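The syntax rules above can be illustrated in one snippet; every name and value here is made up for demonstration.

```hcl
locals {
  ports   = [22, 80, 443]   # list of base-10 numbers
  flags   = 0x10            # hexadecimal literal (decimal 16)
  enabled = true            # boolean
  tags = {                  # multi-line map: newlines separate pairs
    team = "platform"
    env  = "dev"
  }
  # "here doc" multiline string; the end marker sits on its own, unindented line
  motd = <<EOF
Port ${local.ports[0]} is open for SSH.
EOF
}
```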

A resource address is a string that identifies zero or more resource instances in your overall configuration.

An address is made up of two parts:

  • [module path][resource spec]

In some contexts Terraform might allow for an incomplete resource address that only refers to a module as a whole, or that omits the index for a multi-instance resource. In those cases, the meaning depends on the context, so you’ll need to refer to the documentation for the specific feature you are using which parses resource addresses.

Module path

A module path addresses a module within the tree of modules. It takes the form:

  • module.module_name[module index]
  1. module - The module keyword indicating a child module (non-root). Multiple module keywords in a path indicate nesting.
  2. module_name - The user-defined name of the module.
  3. [module index] - (Optional) An index to select an instance from a module call that has multiple instances, surrounded by square bracket characters ([ and ]).

An address without a resource spec, e.g. module.foo, applies to every resource within the module if it is a single module, or to all instances of the module if it has multiple instances. To address all resources of a particular module instance, include the module index in the address, such as module.foo[0].

If the module path is omitted, the address applies to the root module.
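For example, assuming a hypothetical resource aws_instance.web and a child module named foo, addresses take forms like these:

```hcl
# Hypothetical address forms (not tied to this article's configuration):
# aws_instance.web                 -> the resource in the root module
# aws_instance.web[2]              -> one instance of a counted resource
# module.foo                       -> every resource in child module "foo"
# module.foo[0].aws_instance.web   -> the resource inside the first instance of "foo"
```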

Terraform Configuration Files

Configuration files are a set of files used to describe infrastructure in Terraform and have the file extensions .tf and .tf.json. Terraform uses a declarative model for defining infrastructure. Configuration files let you write a configuration that declares your desired state. Configuration files are made up of resources with settings and values representing the desired state of your infrastructure.

A Terraform configuration is made up of one or more files in a directory, provider binaries, plan files, and state files once Terraform has run the configuration.

  1. Configuration file (*.tf files): Here we declare the provider and the resources to be deployed, along with the type of each resource and all resource-specific settings.
  2. Variable declaration file (variables.tf or variables.tf.json): Here we declare the input variables required to provision resources.
  3. Variable definition file (terraform.tfvars): Here we assign values to the input variables.
  4. State file (terraform.tfstate): A state file is created once Terraform has run. It stores the state of our managed infrastructure.

  6. Provisioning resources with Terraform on AWS

With Terraform installed, you're ready to create your first infrastructure.

In this tutorial, you will provision an EC2 instance from an Amazon Machine Image (AMI) on Amazon Web Services (AWS), since EC2 instances are widely used.


Prerequisites

To follow these steps you will need:

  1. An AWS account
  2. The AWS CLI installed
  3. Your AWS credentials configured locally

With your account created and the CLI installed, configure the AWS CLI:

$ aws configure

 

Write configuration

The set of files used to describe infrastructure in Terraform is known as a Terraform configuration. You’ll write your first configuration now to launch a single AWS EC2 instance.

Each configuration should be in its own directory. Create a directory for the new configuration.

$ mkdir learn-terraform-aws-instance

Change into the directory.


$ cd learn-terraform-aws-instance

Create a file for the configuration code.


$ touch example.tf

Paste the configuration below into example.tf and save it. Terraform loads all files in the working directory that end in .tf.

provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}

This is a complete configuration that Terraform is ready to apply.

Initialize the directory

When you create a new configuration — or check out an existing configuration from version control — you need to initialize the directory with terraform init.

Terraform uses a plugin-based architecture to support hundreds of infrastructure and service providers. Initializing a configuration directory downloads and installs providers used in the configuration, which in this case is the aws provider. Subsequent commands will use local settings and data during initialization.

Initialize the directory.

$ terraform init

Format and validate the configuration

We recommend using consistent formatting in files and modules written by different teams. The terraform fmt command automatically updates configurations in the current directory for easy readability and consistency.

Format your configuration. Terraform will return the names of the files it formatted. In this case, your configuration file was already formatted correctly, so Terraform won’t return any file names.


$ terraform fmt

If you are copying configuration snippets or just want to make sure your configuration is syntactically valid and internally consistent, the built-in terraform validate command will check and report errors within modules, attribute names, and value types.

Validate your configuration. If your configuration is valid, Terraform will return a success message.

Create infrastructure

In the same directory as the example.tf file you created, run terraform apply. Terraform will show the execution plan and, after you approve it, create the instance.

 

  7. Creating AWS compute instances

 

# AWS Instance
resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  tags = {
    Name = "HelloWorld"
  }
}

Argument Reference

The following arguments are supported:

 

  • ami - (Optional) AMI to use for the instance. Required unless launch_template is specified and the Launch Template specifies an AMI. If an AMI is specified in the Launch Template, setting ami will override it.
  • associate_public_ip_address - (Optional) Whether to associate a public IP address with an instance in a VPC.
  • availability_zone - (Optional) AZ to start the instance in.
  • capacity_reservation_specification - (Optional) Describes an instance's Capacity Reservation targeting option.

Import

Instances can be imported using the id, e.g.,

$ terraform import aws_instance.web i-12345678

 

  8. Creating AWS databases

     

Provides an RDS instance resource. A DB instance is an isolated database environment in the cloud. A DB instance can contain multiple user-created databases.

Changes to a DB instance can occur when you manually change a parameter, such as allocated_storage, and are reflected in the next maintenance window. Because of this, Terraform may report a difference in its planning phase because a modification has not yet taken place. You can use the apply_immediately flag to instruct the service to apply the change immediately.

 

resource "aws_db_instance" "default" {
  allocated_storage    = 10
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t3.micro"
  name                 = "mydb"
  username             = "foo"
  password             = "foobarbaz"
  parameter_group_name = "default.mysql5.7"
  skip_final_snapshot  = true
}

Storage Autoscaling

To enable Storage Autoscaling with instances that support the feature, define the max_allocated_storage argument higher than the allocated_storage argument. Terraform will automatically hide differences with the allocated_storage argument value if autoscaling occurs.

resource "aws_db_instance" "example" {
  # ... other configuration ...

  allocated_storage     = 50
  max_allocated_storage = 100
}

 

  9. Creating Elastic IPs

     

An Elastic IP address is a reserved public IP address that you can assign to any EC2 instance in a particular region until you choose to release it. You first allocate an Elastic IP address to your account in that region; the examples below show common association patterns.

 

Single EIP associated with an instance

resource "aws_eip" "lb" {
  instance = aws_instance.web.id
  vpc      = true
}

Multiple EIPs associated with a single network interface

resource "aws_network_interface" "multi-ip" {
  subnet_id   = aws_subnet.main.id
  private_ips = ["10.0.0.10", "10.0.0.11"]
}

resource "aws_eip" "one" {
  vpc                       = true
  network_interface         = aws_network_interface.multi-ip.id
  associate_with_private_ip = "10.0.0.10"
}

resource "aws_eip" "two" {
  vpc                       = true
  network_interface         = aws_network_interface.multi-ip.id
  associate_with_private_ip = "10.0.0.11"
}

  10. Attaching Elastic IPs to Instances

     

Attaching an EIP to an instance with a pre-assigned private IP (VPC only):

resource "aws_vpc" "default" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}

resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.default.id
}

resource "aws_subnet" "tf_test_subnet" {
  vpc_id                  = aws_vpc.default.id
  cidr_block              = "10.0.0.0/24"
  map_public_ip_on_launch = true

  depends_on = [aws_internet_gateway.gw]
}

resource "aws_instance" "foo" {
  # us-west-2
  ami           = "ami-5189a661"
  instance_type = "t2.micro"

  private_ip = "10.0.0.12"
  subnet_id  = aws_subnet.tf_test_subnet.id
}

resource "aws_eip" "bar" {
  vpc = true

  instance                  = aws_instance.foo.id
  associate_with_private_ip = "10.0.0.12"
  depends_on                = [aws_internet_gateway.gw]
}

 

  11. Variables and Resource References

     

Reusability is one of the major benefits of Infrastructure as Code. In Terraform, we can use variables to make our configurations more dynamic. This means we are no longer hard coding every value into the configuration.

Input Variables

Terraform input variables are used as parameters to pass values at run time to customize our deployments. Input variables can be defined in the main.tf configuration file, but it is a best practice to define them in a separate variables.tf file for better readability and organization.

A variable is defined by using a variable block with a label. The label is the name of the variable and must be unique among all the variables in the same configuration.

The variable declaration can optionally include three arguments:

  • description: briefly explains the purpose of the variable and what kind of value is expected.
  • type: specifies the type of value, such as string, number, bool, map, or list.
  • default: if present, the variable is considered optional, and if no value is set, the default value is used.
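A declaration using all three optional arguments might look like this; the variable name and values below are illustrative.

```hcl
variable "instance_count" {
  description = "Number of EC2 instances to launch"  # purpose and expected value
  type        = number                               # value type
  default     = 2                                    # makes the variable optional
}
```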

<RESOURCE TYPE>.<NAME> represents a managed resource of the given type and name.

The value of a resource reference can vary, depending on whether the resource uses count or for_each:

  • If the resource doesn’t use count or for_each, the reference’s value is an object. The resource’s attributes are elements of the object, and you can access them using dot or square bracket notation.
  • If the resource has the count argument set, the reference’s value is a list of objects representing its instances.
  • If the resource has the for_each argument set, the reference’s value is a map of objects representing its instances.
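As a sketch, assuming three hypothetical aws_instance resources named single, counted, and keyed, the three reference shapes look like this:

```hcl
output "single_id" {
  value = aws_instance.single.id       # object: no count or for_each
}

output "first_counted_id" {
  value = aws_instance.counted[0].id   # list of objects: count is set
}

output "keyed_id" {
  value = aws_instance.keyed["a"].id   # map of objects: for_each is set
}
```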

Any named value that does not match another pattern listed below will be interpreted by Terraform as a reference to a managed resource.

  12. Creating Security Groups

     

An AWS security group acts as a virtual firewall for your EC2 instances, controlling incoming and outgoing traffic. Inbound and outbound rules control the flow of traffic to and from your instance, respectively.

 

resource "aws_security_group" "allow_tls" {
  name        = "allow_tls"
  description = "Allow TLS inbound traffic"
  vpc_id      = aws_vpc.main.id

  ingress {
    description      = "TLS from VPC"
    from_port        = 443
    to_port          = 443
    protocol         = "tcp"
    cidr_blocks      = [aws_vpc.main.cidr_block]
    ipv6_cidr_blocks = [aws_vpc.main.ipv6_cidr_block]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "allow_tls"
  }
}

 

  13. Attaching Security Groups to AWS Instances

     

This resource attaches a security group to an Elastic Network Interface (ENI). It can be used to attach a security group to any existing ENI, be it a secondary ENI or one attached as the primary interface on an instance.

 

The following provides a very basic example of setting up an instance (provided by instance) in the default security group, creating a security group (provided by sg) and then attaching the security group to the instance’s primary network interface via the aws_network_interface_sg_attachment resource, named sg_attachment:

data "aws_ami" "ami" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn-ami-hvm-*"]
  }

  owners = ["amazon"]
}

resource "aws_instance" "instance" {
  instance_type = "t2.micro"
  ami           = data.aws_ami.ami.id

  tags = {
    type = "terraform-test-instance"
  }
}

resource "aws_security_group" "sg" {
  tags = {
    type = "terraform-test-security-group"
  }
}

resource "aws_network_interface_sg_attachment" "sg_attachment" {
  security_group_id    = aws_security_group.sg.id
  network_interface_id = aws_instance.instance.primary_network_interface_id
}

 

In this example, instance is provided by the aws_instance data source, fetching an external instance, possibly not managed by Terraform. sg_attachment then attaches to the output instance’s network_interface_id:

data "aws_instance" "instance" {
  instance_id = "i-1234567890abcdef0"
}

resource "aws_security_group" "sg" {
  tags = {
    type = "terraform-test-security-group"
  }
}

resource "aws_network_interface_sg_attachment" "sg_attachment" {
  security_group_id    = aws_security_group.sg.id
  network_interface_id = data.aws_instance.instance.network_interface_id
}

 

  14. Remote Exec

     

The remote-exec provisioner invokes a script on a remote resource after it is created. This can be used to run a configuration management tool, bootstrap into a cluster, etc. To invoke a local process, see the local-exec provisioner instead. The remote-exec provisioner requires a connection and supports both ssh and winrm.

 

resource "aws_instance" "web" {
  # ...

  # Establishes connection to be used by all
  # generic remote provisioners (i.e. file/remote-exec)
  connection {
    type     = "ssh"
    user     = "root"
    password = var.root_password
    host     = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "puppet apply",
      "consul join ${aws_instance.web.private_ip}",
    ]
  }
}

Argument Reference

The following arguments are supported:

  • inline - A list of command strings, executed in the order they are provided. This cannot be provided with script or scripts.
  • script - A path (relative or absolute) to a local script that will be copied to the remote resource and then executed. This cannot be provided with inline or scripts.
  • scripts - A list of paths (relative or absolute) to local scripts that will be copied to the remote resource and then executed, in the order they are provided. This cannot be provided with inline or script.
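For instance, the script variant of the example above might look like this; the script path is a hypothetical placeholder.

```hcl
provisioner "remote-exec" {
  # Copies the local script to the remote resource, then executes it there.
  script = "./scripts/bootstrap.sh"
}
```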
  15. End to End Infrastructure and Configuration Automation with Terraform

 

  • Create the key and a security group which allows port 80.
  • Launch an EC2 instance.
  • In this EC2 instance, use the key and security group we created.
  • Launch one EBS volume and mount it on /var/www/html.
  • The developer has uploaded the code to a GitHub repository, which also contains some images.
  • Copy the GitHub repository code into /var/www/html.
  • Create an S3 bucket, copy/deploy the images from the GitHub repository into the S3 bucket, and change the permission to public readable.
  • Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
  • Create a snapshot of the EBS volume.

Let's start the Terraform cloud infrastructure automation!

  • First we write the website code and push it to a remote GitHub repository, then we write the .tf code that clones the website code onto the remote AWS instance, and more...
  • The website code which we want to deploy
  • The Terraform code we will write to create the cloud infrastructure

Website Code

<!DOCTYPE html>
<html>
<head>
<title>
Slack Invitation Link – hrshmistry
</title>

<!-- Style to create button -->
<style>
.SLK {
  background-color: white;
  border: 3px solid black;
  color: green;
  text-align: center;
  display: inline-block;
  font-size: 80px;
  cursor: pointer;
}
</style>
</head>
<body>
<center style="font-size:70px;color:red;font-family:'Courier New'">Hybrid Multi-Cloud</center>
<p align="center"><img src="https://s3hrsh.s3.ap-south-1.amazonaws.com/cloud.jpg" width="300" height="300"></p>
<center style="font-size:40px;color:blue;font-family:'Courier New'">Your Personal Slack Invitation Link</center>

<!-- Adding link to the button on the onclick event -->
<center>
<button class="SLK"
onclick="window.location.href = 'https://join.slack.com/t/hybridmulti-cloud/shared_invite/zt-etnyk2vm-gngGCm2hnk1VbOPR9nGpnw';">
JOIN NOW
</button>
</center>
</body>
</html>

Terraform Code

  • Provider

# Describing provider
provider "aws" {
  region  = "ap-south-1"
  profile = "harsh"
}

  • Variables

# Creating variable for AMI ID
variable "ami_id" {
  type    = string
  default = "ami-0447a12f28fddb066"
}

# Creating variable for instance type
variable "ami_type" {
  type    = string
  default = "t2.micro"
}

# Creating variable for key
variable "EC2_Key" {
  type    = string
  default = "Task1Key"
}

  • Key-Pair

# Creating tls_private_key using the RSA algorithm
resource "tls_private_key" "tls_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Generating key pair
resource "aws_key_pair" "generated_key" {
  depends_on = [
    tls_private_key.tls_key
  ]

  key_name   = var.EC2_Key
  public_key = tls_private_key.tls_key.public_key_openssh
}

# Saving private key PEM file
resource "local_file" "key-file" {
  depends_on = [
    tls_private_key.tls_key
  ]

  content  = tls_private_key.tls_key.private_key_pem
  filename = var.EC2_Key
}

  • Security-group

resource "aws_security_group" "firewall" {
  depends_on = [
    aws_key_pair.generated_key
  ]

  name        = "firewall"
  description = "allows ssh and httpd protocol"

  # Adding rules to security group
  ingress {
    description = "SSH Port"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTPD Port"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "security-group-1"
  }
}

  • Launch EC2 instance

resource "aws_instance" "autos" {
  depends_on = [
    aws_security_group.firewall
  ]

  ami             = var.ami_id
  instance_type   = var.ami_type
  key_name        = var.EC2_Key
  security_groups = [aws_security_group.firewall.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.tls_key.private_key_pem
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd"
    ]
  }

  tags = {
    Name = "autos"
    env  = "Production"
  }
}

  • EBS Volume

# Creating EBS volume and attaching it to the EC2 instance
resource "aws_ebs_volume" "ebs" {
  availability_zone = aws_instance.autos.availability_zone
  size              = 1

  tags = {
    Name = "autos_ebs"
  }
}

/*
variable "volume_name" {
  type    = string
  default = "dh"
}
*/

resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.ebs.id
  instance_id  = aws_instance.autos.id
  force_detach = true
}

output "autos_public_ip" {
  value = aws_instance.autos.public_ip
}

resource "null_resource" "print_public_ip" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.autos.public_ip} > autos_public_ip.txt"
  }
}

  • Mounting the Volume in EC2 Instance and Cloning GitHub

resource "null_resource" "mount_ebs_volume" {
  depends_on = [
    aws_volume_attachment.ebs_att
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.tls_key.private_key_pem
    host        = aws_instance.autos.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/hrshmistry/Code-Cloud.git /var/www/html/"
    ]
  }
}

  • Creating S3 bucket

resource "aws_s3_bucket" "S3" {
  bucket = "autos-s3-bucket"
  acl    = "public-read"
}

# Putting objects in the S3 bucket
resource "aws_s3_bucket_object" "S3_Object" {
  depends_on = [
    aws_s3_bucket.S3
  ]

  bucket = aws_s3_bucket.S3.bucket
  key    = "Cloud.JPG"
  source = "D:/LW/Hybrid-Multi-Cloud/Terraform/tera/task/Cloud.JPG"
  acl    = "public-read"
}

  • Creating CloudFront with S3 Bucket Origin

locals {
  S3_Origin_Id = aws_s3_bucket.S3.id
}

resource "aws_cloudfront_distribution" "CloudFront" {
  depends_on = [
    aws_s3_bucket_object.S3_Object
  ]

  origin {
    domain_name = aws_s3_bucket.S3.bucket_regional_domain_name
    origin_id   = aws_s3_bucket.S3.id
    # OR origin_id = local.S3_Origin_Id
  }

  enabled         = true
  is_ipv6_enabled = true
  comment         = "S3 Web Distribution"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = aws_s3_bucket.S3.id
    # OR target_origin_id = local.S3_Origin_Id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = aws_s3_bucket.S3.id
    # OR target_origin_id = local.S3_Origin_Id

    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = aws_s3_bucket.S3.id
    # OR target_origin_id = local.S3_Origin_Id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["IN"]
    }
  }

  tags = {
    Name        = "CF Distribution"
    Environment = "Production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  retain_on_delete = true
}

  • Changing the HTML code and adding the image URL to it

resource "null_resource" "CF_URL" {
  depends_on = [
    aws_cloudfront_distribution.CloudFront
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.tls_key.private_key_pem
    host        = aws_instance.autos.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "echo \"<p align='center'><img src='https://${aws_cloudfront_distribution.CloudFront.domain_name}/Cloud.JPG' width='100' height='100'></p>\" | sudo tee -a /var/www/html/Slack.html"
    ]
  }
}

  • Creating an EBS volume snapshot

resource "aws_ebs_snapshot" "ebs_snapshot" {
  depends_on = [
    null_resource.CF_URL
  ]

  volume_id = aws_ebs_volume.ebs.id

  tags = {
    Name = "ebs_snap"
  }
}

  • Accessing the infrastructure

resource "null_resource" "web-server-site-on-browser" {
  depends_on = [
    null_resource.CF_URL
  ]

  provisioner "local-exec" {
    command = "brave ${aws_instance.autos.public_ip}/Slack.html"
  }
}

  • After completing the Terraform code, deploying the whole cloud infrastructure takes only a single command:

terraform apply -auto-approve

  • To destroy the entire cloud infrastructure:

terraform destroy -auto-approve

And with that, we have completed end-to-end AWS cloud infrastructure automation with Terraform by HashiCorp!

 

 

If you're searching for a challenging and rewarding career, the Terraform Deep Dive training is what you need to learn how to succeed, whether you've already worked with Terraform or are new to the field. We cover everything from the basics to the most advanced techniques. Are you ready to deploy applications and infrastructure the Terraform way? Then look no further!
