The Need for Automating Cloud Infrastructure Using HashiCorp
What is cloud infrastructure automation?
Automatically provisioning instances with their operating system, hardware resources, and region; creating storage; setting up code pipelines, Route 53 domains, load balancers, and auto-scaling; and automatically creating Kubernetes environments, database instances, and so on — all of this is known as cloud infrastructure automation.
By automating the provisioning, configuration, and management of your cloud-based infrastructure, your organization can free up time and resources for mission-critical innovation instead of routine maintenance.
Infrastructure automation is the process of scripting environments. By scripting environments, you can apply the same configuration to a single node or to thousands.
Infrastructure automation brings agility to both development and operations because any authorized team member can modify the scripts while applying good development practices — such as automated testing and versioning — to your infrastructure.
What are the Need and Advantages?
Cloud automation is needed because today’s fast-paced business environment often requires complex IT systems and infrastructure for enterprises to stay competitive and productive. But while IT has evolved to streamline and simplify a variety of enterprise tasks, managing those systems remains a challenge. That’s why many companies are turning to cloud automation tools to help them optimize and manage their IT infrastructure.
The advantage is that it is a smart way to ensure that enterprises can respond quickly in a rapidly changing environment while reaping the benefits of advances in IT technology.
What are the tools?
There are many cloud infrastructure automation tools available to help speed the process. In addition, a number of related tools may also be helpful. While no single tool is right for every situation, this guide can help you consider the various benefits offered and choose the ones that best serve your cloud infrastructure needs, whether your migration targets public, private, or hybrid cloud architectures.
- AWS CloudFormation
- HashiCorp Terraform
- Ansible, Chef, and Puppet
HashiCorp helps you embrace the challenge of transitioning from a single-cloud to a multi-cloud architecture, providing greater flexibility and lower risk.
It offers streamlined processes to provision, secure, connect, and run any infrastructure for any application. Dynamic and distributed infrastructure enables better security options, as multiple components can be hosted across a wide array of clouds.
Many of the world’s well-known brands and companies take advantage of HashiCorp’s infrastructure automation services. HashiCorp products enable collaboration and automation in IT. They let users define infrastructure as code, from development, testing, and security through to deployment logic and infrastructure configurations for operations. The products support numerous cloud and private infrastructure types.
The HashiCorp product suite includes Nomad, Vagrant, Packer, Terraform, Vault, Consul, Sentinel and Serf.
Today, I am going to talk about Terraform.
Terraform is an infrastructure as code tool that automates infrastructure provisioning on cloud platforms, including AWS, Google Compute Engine, OpenStack, and Azure.
Terraform enables users to collaborate and share configurations and modules, monitor infrastructure history and reuse configurations. Configurations are stored in version control, or can be packaged as a module where it can be shared and collaborated on. The same configurations can be used in multiple environments, such as for staging and production.
Terraform enables users to define a data center infrastructure in a configuration language and then create an execution plan for a cloud-based infrastructure. It codifies cloud APIs into declarative configuration files that can be shared among a team.
Prerequisites for Installing Terraform
The prerequisites for installing Terraform are not complicated. You can download it from here.
After downloading Terraform, unzip the package. Terraform runs as a single binary named terraform, which you can see in the image below.
The final step is to make sure that the terraform binary is available on the PATH. Set the PATH environment variable accordingly, whether you are using Windows, Linux, or macOS.
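On Linux or macOS, for example, putting the binary on the PATH might look like this (the archive name and destination directory are illustrative; substitute the package you actually downloaded):

```shell
# Unzip the downloaded package (filename varies by version and platform)
unzip terraform_*_linux_amd64.zip
# Move the single binary to a directory that is already on the PATH
sudo mv terraform /usr/local/bin/
```

On Windows, instead add the folder containing terraform.exe to the PATH via System Properties → Environment Variables.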
Verifying the Installation
After installing Terraform, verify that the installation worked by opening a new terminal session and checking that terraform is available. Executing terraform should produce help output similar to this:
If you get an error that terraform could not be found, your PATH environment variable was not set up properly. Please go back and ensure that your PATH variable contains the directory where Terraform was installed.
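For example, either of these commands confirms that the binary is reachable:

```shell
terraform -version   # prints the installed Terraform version
terraform -help      # lists the available subcommands
```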
Automating the Build of Infrastructure
With Terraform installed, let’s dive right into it and start creating some infrastructure. We’ll build infrastructure on AWS for the getting started guide since it is popular and generally understood, but Terraform can manage many providers, including multiple providers in a single configuration.
To create the configuration, create a file and save it with a .tf extension (for example, example.tf) in a directory that contains no other *.tf files, since Terraform loads all of them. Terraform configuration is written in the HashiCorp Configuration Language (HCL); a JSON variant (.tf.json) also exists for machine-generated configurations.
Let’s take an example: creating an EC2 instance. There are two ways, one manual and the other automated.
If I choose to build the infrastructure manually, here is what I will do:
- Access the AWS Management Console with my web browser.
- Provide credentials.
- After login, go to the EC2 service.
- Select the image, network (VPC), instance type, security group, and PEM key.
- And finally, an instance will launch.
Whereas if I choose to automate the infrastructure, I just need to write the configuration in HCL, save the file as yourfilename.tf in a separate directory, and run Terraform.
Here is the Terraform configuration I have created:
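Reconstructed from the details discussed below, the configuration would look roughly like this; the region and AMI ID are illustrative placeholders, so substitute an Ubuntu AMI valid in your region:

```hcl
provider "aws" {
  region = "us-east-1"                     # assumed region
}

resource "aws_instance" "webserver" {
  ami           = "ami-0abcdef1234567890"  # placeholder Ubuntu AMI
  instance_type = "t2.micro"               # free-tier eligible
  key_name      = "My-key"                 # existing EC2 key pair
}
```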
In this code, provider is the cloud vendor name, such as aws or google.
The resource block defines a resource that exists within the infrastructure. A resource might be a physical component such as an EC2 instance. The resource block has two strings before the opening of the block: the resource type and the resource name. In our example, the resource type is “aws_instance” and the name is “webserver”. The prefix of the type maps to the provider: “aws_instance” automatically tells Terraform that the resource is managed by the “aws” provider.
Within the resource block itself is the configuration for that resource. This is dependent on each resource provider and is fully documented in the provider reference. For our EC2 instance, we specify an AMI for Ubuntu and request a “t2.micro” instance so we qualify for the free tier.
The first command to run for a new configuration is terraform init, which initializes various local settings and data that will be used by subsequent commands. Terraform uses a plugin-based architecture to support the numerous infrastructure and service providers available. The terraform init command automatically downloads and installs the provider binaries for the providers in use within the configuration, which in this case is just the aws provider:
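That is, from the configuration directory:

```shell
# Run once per configuration directory; safe to run again later
terraform init
```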
The terraform plan command is a convenient way to check whether the execution plan for a set of changes matches your expectations, without making any changes to real resources or to the state. For example, terraform plan might be run before committing a change to version control, to build confidence that it will behave as expected.
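For example (the -out flag is optional; it saves the plan to a file that a later apply can consume):

```shell
# Preview the actions Terraform would take, without applying them
terraform plan -out=tfplan
```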
In the same directory as the yourfilename.tf file you created, run terraform apply. You should see output similar to the example below, though some of it has been truncated to save space:
This output shows the execution plan, describing which actions Terraform will take in order to change real infrastructure to match the configuration.
If terraform apply failed with an error, read the error message and fix the error that occurred. At this stage, it is likely to be a syntax error in the configuration.
If the plan was created successfully, Terraform will now pause and wait for approval before proceeding. If anything in the plan seems incorrect or dangerous, it is safe to abort here with no changes made to your infrastructure. In this case the plan looks acceptable, so type yes at the confirmation prompt to proceed.
Executing the plan will take a few minutes since Terraform waits for the EC2 instance to become available:
After this, Terraform is all done! You can go to the EC2 console to see the created EC2 instance. You can also see all the other details, including the public IP address, by using the terraform show command.
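For example:

```shell
# Inspect the state recorded by the last apply, including instance details
terraform show
```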
Now you can access this EC2 instance with the IP address and the key you mentioned in the Terraform code.
For example, in my code you can see that I have mentioned:
key_name = "My-key"
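Assuming the matching private-key file My-key.pem is on your machine and the AMI is Ubuntu (so the default login user is ubuntu), connecting might look like this; the IP address is a placeholder:

```shell
# Replace 203.0.113.10 with the instance's actual public IP
ssh -i My-key.pem ubuntu@203.0.113.10
```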
Resources can be destroyed using the terraform destroy command, which is similar to terraform apply but it behaves as if all of the resources have been removed from the configuration.
Answer yes to execute this plan and destroy the infrastructure:
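That is:

```shell
# Shows a destroy plan and prompts for confirmation before deleting anything
terraform destroy
```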
Just like with apply, Terraform determines the order in which things must be destroyed. In this case there was only one resource, so no ordering was necessary. In more complicated cases with multiple resources, Terraform will destroy them in a suitable order to respect dependencies.
As we have seen, by writing Terraform code I have auto-provisioned an EC2 instance.
Similarly, we can auto-provision many more EC2 instances, as well as other services such as Route 53, S3, load balancers, auto-scaling, relational databases, Docker, Kubernetes, CodePipeline to support CI/CD, and more.
Terraform concentrates more on server provisioning. When the whole cloud infrastructure is treated as a code and all the parameters are combined in a configuration file, all the members of the team can easily collaborate on them as they would do on any other code.
Terraform supports multi-cloud orchestration across providers such as AWS, Azure, and OpenStack, as well as on-premises deployments. This is really helpful when we use resources from two different cloud providers at the same time.
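As an illustrative sketch, two providers can coexist in a single configuration; the blocks below are minimal and the settings shown are assumptions, not a complete setup:

```hcl
# AWS resources in this configuration use this provider
provider "aws" {
  region = "us-east-1"
}

# Azure resources use this one; azurerm requires an (often empty) features block
provider "azurerm" {
  features {}
}
```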
When Chef, Salt, Puppet, or Ansible runs software updates on servers, it can quite often lead to configuration drift, where differences in configuration cause bugs that can lead to security breaches. Terraform addresses this issue with an immutable infrastructure approach, where every configuration change leads to a separate configuration snapshot, meaning deployment of a new server and de-provisioning of the old one. This way, updating the development environment goes smoothly and bug-free, and even returning to an old configuration is as easy as rolling back to a specific version.
Being an open-source tool, Terraform has a strong community of developers all around the world. It will not mean the downfall of other tools like Ansible, Chef, or Puppet, but Terraform will take its rightful place in the DevOps toolkit.
Author : Rishabh Gupta