

2021-07-02

Packer and package installs

Problem:
When using Packer to generate an Ubuntu AWS AMI, I use a shell script provisioner to install and update packages via apt-get. When executing apt-get commands in the script, I was getting errors such as:
"Package has no installation candidate"
"Unable to locate package"

Solution:
The shell script was executing apt-get commands before the AWS cloud-init run had completed.

To make sure cloud-init has finished before running apt-get, add a wait loop to the script:

# cloud-init creates this file once it has finished running
while [ ! -f /var/lib/cloud/instance/boot-finished ];
do
    echo "Waiting for cloud-init to finish ..."
    sleep 2
done

apt-get update -y
apt-get install -y package
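For context, a minimal Packer HCL2 template that runs such a script as a shell provisioner could look like the sketch below (the AMI filter, region, AMI name, and script name are assumptions, not taken from my original setup):

```hcl
packer {
  required_plugins {
    amazon = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

locals {
  # AMI names cannot contain colons, so reformat the timestamp
  timestamp = formatdate("YYYYMMDDhhmmss", timestamp())
}

# Latest Ubuntu 20.04 base image (filter and owner are the usual
# Canonical values; verify them for your region)
source "amazon-ebs" "ubuntu" {
  region        = "us-east-1"
  instance_type = "t2.micro"
  ssh_username  = "ubuntu"
  ami_name      = "ubuntu-example-${local.timestamp}"

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
      virtualization-type = "hvm"
      root-device-type    = "ebs"
    }
    owners      = ["099720109477"] # Canonical
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.ubuntu"]

  # The script containing the cloud-init wait loop and apt-get commands
  provisioner "shell" {
    script = "install-packages.sh"
  }
}
```

On images with a reasonably recent cloud-init, running cloud-init status --wait inside the script is a simpler alternative to polling for the boot-finished file.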

2017-07-09

AWS terraform: Use DynamoDB locking

Please see the previous post on how to set up terraform to use a remote AWS S3 bucket to store the terraform.tfstate file (http://www.javajirawat.com/2017/07/aws-terraform-use-s3-remote-tfstate-file.html). This example continues from that S3 bucket article.

A remote terraform state file allows multiple terraform servers to manage the same resources. However, if two people try to modify the same terraform state at the same time, it may lead to corruption and errors. Terraform can be configured to use AWS DynamoDB to lock the state file and prevent concurrent edits.

AWS DynamoDB is a cloud NoSQL key-value database.

Setup:

Create an AWS DynamoDB table with terraform to lock the terraform.tfstate.
1.) Create terraform main.tf for the AWS DynamoDB table.

provider "aws" {
  region = "us-east-1"
}

resource "aws_dynamodb_table" "dynamodb-terraform-lock-example" {
  name = "terraform-lock-example"
  hash_key = "LockID"
  read_capacity = 5
  write_capacity = 5

  attribute {
    name = "LockID"
    type = "S"
  }

  tags {
    Name = "Terraform Lock Table Example"
    Org = "JavaJirawat"
  }
}

You can name the DynamoDB table anything you wish. The hash_key must be a String attribute named LockID.


2.) Execute main.tf to create the DynamoDB table on AWS
      Run the command

      terraform apply

      The AWS account that executes terraform needs the AmazonDynamoDBFullAccess permission in the region where you are creating the database table:
https://console.aws.amazon.com/iam/home?region=us-east-1#/policies/arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess

Usage:

Here is an example of using the DynamoDB table we just created to lock a terraform.tfstate for an AWS EC2 resource.

1.) Create terraform main.tf for AWS EC2 server with a S3 backend to store the terraform.tfstate file and a DynamoDB table to lock it.

terraform {
  backend "s3" {
    bucket = "terraform-s3-tfstate-example"
    region = "us-east-1"
    key = "example/ec2-with-locking/terraform.tfstate"
    dynamodb_table = "terraform-lock-example"
    encrypt = true
  }
}

provider "aws" {
  region = "us-east-1"
}

# Amazon Linux AMI
resource "aws_instance" "ec2-with-locking-example" {
    count = 1
    ami = "ami-a4c7edb2"
    instance_type = "t2.micro"
    
    lifecycle {
      create_before_destroy = true
    }

    tags {
      Name = "Example for DynamoDB lock"
      Org = "JavaJirawat"
    }      
}

The dynamodb_table value must match the name of the DynamoDB table we created.
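AmazonDynamoDBFullAccess is convenient for creating the table, but for day-to-day locking the Terraform S3 backend documentation lists only a handful of DynamoDB actions. A narrower policy could be sketched like this (the policy name is an assumption, and the account ID in the ARN is a placeholder):

```hcl
# Minimal permissions for state locking only; the account
# still needs broader rights to create the table itself
resource "aws_iam_policy" "terraform-lock-policy" {
  name = "terraform-lock-policy"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/terraform-lock-example"
    }
  ]
}
EOF
}
```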

2.) Initialize the terraform S3 backend
     Run the command

     terraform init

     Type in "yes" for any prompt.

3.) Execute main.tf to create the EC2 server on AWS
      Run the command

      terraform apply

      The AWS account that executes terraform needs the AmazonEC2FullAccess permission in the region where you are creating the EC2 server:
https://console.aws.amazon.com/iam/home?region=us-east-1#/policies/arn:aws:iam::aws:policy/AmazonEC2FullAccess
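With the lock table in place, a second terraform apply that starts while another is still running fails with an "Error acquiring the state lock" message instead of corrupting state. If a crashed run ever leaves a stale lock behind, it can be cleared manually (the lock ID below is a placeholder; use the one printed in the error message):

```shell
# Release a stale state lock; terraform prints the real lock ID
# in the "Error acquiring the state lock" message
terraform force-unlock 00000000-0000-0000-0000-000000000000
```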

AWS Terraform: Use S3 remote tfstate file

Terraform uses a text file called terraform.tfstate to store the state of the infrastructure it manages (https://www.terraform.io/docs/state/). If multiple terraform servers manage the same resources, this file needs to be remotely accessible. To prevent accidental deletion or corruption, terraform.tfstate should be versioned.

Amazon S3 (Simple Storage Service) fulfills the above requirements. S3 is a cloud file storage service; basically the AWS version of Dropbox.

Setup:

Create an AWS S3 bucket with terraform to store terraform.tfstate
1.) Create terraform main.tf for AWS S3 bucket.
     See https://github.com/juttayaya/devops/blob/master/hashicorp/terraform/s3-tfstate-example/s3/main.tf

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "s3-tfstate-example" {
    bucket = "terraform-s3-tfstate-example"
    acl = "private"

    versioning {
      enabled = true
    }

    lifecycle {
      prevent_destroy = true
    }

    tags {
      Name = "Terraform S3 tfstate Example"
      Org = "JavaJirawat"
    }      
}

The above configuration turns on S3 versioning so you can query the history of infrastructure changes. The prevent_destroy = true lifecycle rule guards against accidental deletion of the bucket.
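With versioning on, each state write becomes a new object version, which can be listed with the AWS CLI (assuming credentials for the account that owns the bucket; the key here matches the usage example later in this post):

```shell
# List stored versions of the state file
aws s3api list-object-versions \
  --bucket terraform-s3-tfstate-example \
  --prefix example/ec2/terraform.tfstate
```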

2.) Execute main.tf to create the S3 bucket on AWS
      Run the command

      terraform apply

      The AWS account that executes terraform needs the AmazonS3FullAccess permission in the region where you are creating the S3 bucket:
https://console.aws.amazon.com/iam/home?region=us-east-1#/policies/arn:aws:iam::aws:policy/AmazonS3FullAccess

Usage:

Here is an example of using the S3 bucket we just created to store a terraform.tfstate for an AWS EC2 resource.

1.) Create terraform main.tf for AWS EC2 server with a S3 backend to store the terraform.tfstate file.
     See https://github.com/juttayaya/devops/blob/master/hashicorp/terraform/s3-tfstate-example/ec2/main.tf

terraform {
  backend "s3" {
    bucket = "terraform-s3-tfstate-example"
    region = "us-east-1"
    key = "example/ec2/terraform.tfstate"
    encrypt = true
  }
}

provider "aws" {
  region = "us-east-1"
}

# Amazon Linux AMI
resource "aws_instance" "ec2-example" {
    count = 1
    ami = "ami-a4c7edb2"
    instance_type = "t2.micro"
    
    lifecycle {
      create_before_destroy = true
    }

    tags {
      Name = "Example for S3 tfstate"
      Org = "JavaJirawat"
    }      
}

The terraform backend bucket name and region must match the S3 bucket name and region we created. The key is the full folder path and filename used to store the terraform.tfstate file.
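For running day-to-day plans and applies, the backend itself needs far less than AmazonS3FullAccess; the Terraform S3 backend documentation lists roughly the following S3 actions, scoped to the bucket and key above (the policy name is an assumption):

```hcl
# Minimal permissions for reading and writing the remote state;
# creating the bucket itself still requires broader rights
resource "aws_iam_policy" "terraform-tfstate-policy" {
  name = "terraform-tfstate-policy"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::terraform-s3-tfstate-example"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::terraform-s3-tfstate-example/example/ec2/terraform.tfstate"
    }
  ]
}
EOF
}
```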

2.) Initialize the terraform S3 backend
     Run the command

     terraform init

     Type in "yes" for any prompt.

3.) Execute main.tf to create the EC2 server on AWS
      Run the command

      terraform apply

      The AWS account that executes terraform needs the AmazonEC2FullAccess permission in the region where you are creating the EC2 server:
https://console.aws.amazon.com/iam/home?region=us-east-1#/policies/arn:aws:iam::aws:policy/AmazonEC2FullAccess