Introduction
Astro is a static site generator that has been gaining popularity; it is designed to be fast, flexible, and developer-friendly. In this article, I will show you how to host an Astro blog on AWS using Terraform.
Why Terraform?
Terraform is an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services. It codifies APIs into declarative configuration files, which can be shared among team members, treated as code, edited, reviewed, and versioned.
Terraform is a great choice for managing infrastructure in AWS because it allows you to define and provision infrastructure using a simple, human-readable language. It also allows you to manage infrastructure as code, which means you can version your infrastructure and apply changes in a consistent and repeatable manner.
Prerequisites
Before we get started, you will need to have the following prerequisites:
- An AWS account and the AWS CLI installed on your local machine
- Terraform installed on your local machine
- An Astro blog (we’ll generate one using the Astro CLI)
- A domain name
Step 1: Create an Astro Blog
First, let’s create an Astro blog. If you don’t already have one, you can use the Astro CLI to generate a new blog. To do this, run the following command in your terminal:
npm create astro@latest
The CLI will walk you through scaffolding a new Astro project into a directory of your choosing. You can then navigate into that directory and start the development server by running:
npm run dev
You can find a full guide on how to create an Astro blog in the official Astro documentation.
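Since the blog will eventually be served from a custom domain, it is worth setting Astro's `site` option early so canonical URLs and sitemap links come out right in the static build. A minimal `astro.config.mjs` sketch, with `example.com` standing in for your own domain:

```javascript
// astro.config.mjs -- minimal sketch; "example.com" is a placeholder domain.
import { defineConfig } from 'astro/config';

export default defineConfig({
  // Used by Astro to generate absolute/canonical URLs for the built pages.
  site: 'https://example.com',
  // 'static' is the default output mode; it emits plain HTML/CSS/JS into dist/.
  output: 'static',
});
```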
Step 2: Initialize a Terraform Configuration
The Terraform portion of our project will create the following resources:
- An S3 bucket to store the static assets of the blog with appropriate configurations
- A CloudFront distribution to serve the static assets from the S3 bucket
- An ACM certificate to enable HTTPS for the CloudFront distribution
- A Route 53 hosted zone to manage the domain name
- A Route 53 record set to point the domain name to the CloudFront distribution
We will not cover standing up a Terraform backend; for this example, we will use the local backend. In a production environment, you would want a remote backend such as S3 or GCS.
Create a new directory for the Terraform configuration and create a new file called main.tf with the following content:
locals {
  bucket_name = "my-astro-blog"
  domain_name = "example.com"
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.30"
    }
  }
}

# CloudFront requires its ACM certificate to live in us-east-1,
# so we pin the provider to that region.
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "main" {
  bucket = local.bucket_name
}

resource "aws_s3_bucket_ownership_controls" "main" {
  bucket = aws_s3_bucket.main.id

  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

resource "aws_s3_bucket_public_access_block" "main" {
  bucket = aws_s3_bucket.main.id

  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}

resource "aws_s3_bucket_acl" "main" {
  bucket = aws_s3_bucket.main.id
  acl    = "public-read"

  depends_on = [
    aws_s3_bucket_ownership_controls.main,
    aws_s3_bucket_public_access_block.main,
  ]
}

resource "aws_s3_bucket_website_configuration" "main" {
  bucket = aws_s3_bucket.main.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "index.html"
  }
}

resource "aws_s3_bucket_cors_configuration" "main" {
  bucket = aws_s3_bucket.main.id

  cors_rule {
    allowed_methods = ["GET"]
    allowed_origins = ["*"]
  }
}

resource "aws_s3_bucket_policy" "open_read" {
  bucket = aws_s3_bucket.main.id
  policy = data.aws_iam_policy_document.main.json
}

data "aws_iam_policy_document" "main" {
  statement {
    sid       = "PublicReadGetObject"
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.main.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }
  }
}

resource "aws_cloudfront_distribution" "main" {
  origin {
    # Use the S3 *website* endpoint so the index/error document settings
    # above take effect. Website endpoints only speak HTTP, hence "http-only".
    domain_name = aws_s3_bucket_website_configuration.main.website_endpoint
    origin_id   = aws_s3_bucket.main.id

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "CDN for my website"
  default_root_object = "index.html"
  aliases             = [local.domain_name]

  default_cache_behavior {
    # CloudFront requires HEAD alongside GET.
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = aws_s3_bucket.main.id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    function_association {
      event_type   = "viewer-request"
      function_arn = aws_cloudfront_function.main.arn
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate.main.arn
    ssl_support_method  = "sni-only"
  }
}

# Rewrites pretty URLs ("/about", "/blog/") to the underlying index.html objects.
resource "aws_cloudfront_function" "main" {
  name    = "main"
  runtime = "cloudfront-js-2.0"
  publish = true
  code    = <<EOF
function handler(event) {
  var request = event.request;
  var uri = request.uri;
  if (uri.endsWith('/')) {
    request.uri += 'index.html';
  } else if (!uri.includes('.')) {
    request.uri += '/index.html';
  }
  return request;
}
EOF
}

# DNS and CERT
resource "aws_route53_zone" "main" {
  name = local.domain_name
}

resource "aws_route53_record" "main" {
  zone_id = aws_route53_zone.main.zone_id
  name    = local.domain_name
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.main.domain_name
    zone_id                = aws_cloudfront_distribution.main.hosted_zone_id
    evaluate_target_health = false
  }
}

resource "aws_acm_certificate" "main" {
  domain_name               = local.domain_name
  subject_alternative_names = ["*.${local.domain_name}"]
  validation_method         = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.main.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = aws_route53_zone.main.zone_id
}

resource "aws_acm_certificate_validation" "main" {
  certificate_arn         = aws_acm_certificate.main.arn
  validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}

output "cloudfront_distribution_id" {
  value = aws_cloudfront_distribution.main.id
}

output "bucket_name" {
  value = aws_s3_bucket.main.id
}
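Before shipping the URI-rewrite function, you can sanity-check its logic locally. CloudFront Functions use a restricted JavaScript runtime, but this particular handler is plain JavaScript, so it runs unchanged under Node:

```javascript
// Local sanity check for the CloudFront URI-rewrite function defined above.
function handler(event) {
  var request = event.request;
  var uri = request.uri;
  if (uri.endsWith('/')) {
    request.uri += 'index.html';
  } else if (!uri.includes('.')) {
    request.uri += '/index.html';
  }
  return request;
}

// A trailing slash maps to that directory's index document:
console.log(handler({ request: { uri: '/blog/' } }).uri);           // "/blog/index.html"
// An extensionless route gets "/index.html" appended:
console.log(handler({ request: { uri: '/about' } }).uri);           // "/about/index.html"
// Asset requests with a file extension pass through untouched:
console.log(handler({ request: { uri: '/styles/main.css' } }).uri); // "/styles/main.css"
```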
With the file created, you can initialize the Terraform configuration by running the following command in your terminal:
terraform init
Before you run the next command, make sure your AWS credentials are set up. You can do this by setting the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables or by using the aws configure command.
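For reference, the environment-variable approach might look like the following; the values shown are placeholders, not real credentials:

```shell
# Placeholder values -- substitute your own access key pair.
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export AWS_DEFAULT_REGION="us-east-1"

# Sanity check: confirm the credentials resolve to the account you expect.
aws sts get-caller-identity
```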
Note: this Terraform configuration stands up a Route 53 hosted zone, but you will need to manually point your domain's nameservers at it. In the AWS console, find the nameservers listed on the new hosted zone, then enter them at your domain registrar; the exact steps depend on the registrar, but the setting is usually found under DNS settings. Once the NS records propagate, the certificate validation will complete and the CloudFront distribution will be created. If propagation is too slow, the terraform apply will fail; just rerun the command and it should work.
To apply the terraform configuration, run the following command:
terraform apply
This will create the S3 bucket, CloudFront distribution, ACM certificate, Route 53 hosted zone, and Route 53 record set. Once the configuration is applied, you will see the CloudFront distribution ID and the S3 bucket name in the output. You will use these values to deploy the Astro blog to the S3 bucket in the next step.
Step 3: Deploy the Astro Blog
Now that the Terraform configuration is applied, you can deploy the Astro blog to the S3 bucket. To do this, run the following command in your terminal at the root of your Astro blog:
# build the static site files
npm run build
# sync the files to the s3 bucket; the --delete flag removes any files deleted since the last deployment
aws s3 sync dist/ s3://<bucket_name> --delete
# Invalidate the CloudFront cache to ensure your changes get published throughout the CDN
aws cloudfront create-invalidation --distribution-id <cloudfront_distribution_id> --paths "/*"
Replace <bucket_name> with the name of the S3 bucket and <cloudfront_distribution_id> with the ID of the CloudFront distribution (both appear in the Terraform outputs). This will build the static site files, sync them to the S3 bucket, and invalidate the CloudFront cache to ensure that the latest version of the site is served.
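If you deploy often, those three commands can be wrapped in a small script that reads the values straight from the Terraform outputs instead of copy-pasting them. This is a sketch; it assumes the Terraform configuration lives in an infra/ subdirectory next to the blog, so adjust the -chdir path to match your layout:

```shell
#!/usr/bin/env sh
# Hypothetical deploy script -- assumes Terraform config in ./infra (adjust -chdir).
set -eu

# Pull the bucket name and distribution ID from the Terraform outputs.
BUCKET_NAME=$(terraform -chdir=infra output -raw bucket_name)
DISTRIBUTION_ID=$(terraform -chdir=infra output -raw cloudfront_distribution_id)

# Build, upload, and invalidate.
npm run build
aws s3 sync dist/ "s3://$BUCKET_NAME" --delete
aws cloudfront create-invalidation --distribution-id "$DISTRIBUTION_ID" --paths "/*"
```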
Challenge
If you want to take this a step further, you can automate the deployment of the Astro blog using a CI/CD pipeline. You can use a service like GitHub Actions or AWS CodePipeline to automatically build and deploy the blog whenever changes are pushed to the repository.
- Learn how to set up a Terraform backend and migrate your state to it.
- Tweak the Terraform to create an AWS IAM user and role for the CI/CD pipeline to use when deploying the blog.
- Set up a CI/CD pipeline to automatically build and deploy the blog whenever changes are pushed to the repository.
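As a starting point for the GitHub Actions route, a workflow might look roughly like this. Everything here is an assumption about your setup: the secret names, the Node version, and the branch name are placeholders to adapt:

```yaml
# .github/workflows/deploy.yml -- sketch; secret names are placeholders.
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build
      - name: Deploy to S3 and invalidate CloudFront
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: us-east-1
        run: |
          aws s3 sync dist/ "s3://${{ secrets.BUCKET_NAME }}" --delete
          aws cloudfront create-invalidation \
            --distribution-id "${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }}" \
            --paths "/*"
```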
Conclusion
That’s it! You have now hosted an Astro blog in AWS using Terraform. You can now access your blog at the domain name you specified in the Terraform configuration.
This was a pretty quick dive into Astro, Terraform and AWS. I hope you found it helpful, and I recommend you check out the official documentation for each of these tools to learn more about what they can do.