I create a lot of little static websites for friends and family and want them to have some level of reliability and ease of updating. Back in 2015 [I moved this personal blog]({{< relref "/content/ops/2015-10-20-cloudfront-and-chills.md" >}}) to [AWS](https://aws.amazon.com/) using a mixture of [S3](https://aws.amazon.com/s3/), [CloudFront](https://aws.amazon.com/cloudfront/), and [Route53](https://aws.amazon.com/route53/). At the time there weren't many players in the easy static site hosting space, but these days I'd recommend purpose-built services like [Netlify](https://www.netlify.com/). I mostly stick with my current setup to avoid the hassle of migrating so many projects or having two separate pipelines running. If you are using [Hugo](https://gohugo.io/) (or any static site generator) with AWS and want to deploy-on-commit through [GitLab](https://about.gitlab.com/) CI, read on.

## AWS Configuration

If you've already configured your AWS resources, skip to the next section, or feel free to review it to make sure we're on the same page.

### S3 Bucket

1. Create an S3 bucket and give it a unique name.
2. Untick "Block *all* public access" and tick the acknowledgement that is required.
3. Create the bucket.
4. Go to the bucket, then Properties, then click the Edit button next to Static website hosting.
5. Enable static website hosting and set the Index and Error documents based on your Hugo setup.
6. Save the changes.
7. Go to Permissions and [add the following as a Bucket policy](https://docs.aws.amazon.com/AmazonS3/latest/userguide/HostingWebsiteOnS3Setup.html#step4-add-bucket-policy-make-content-public) (you'll need to select Edit), making sure to save the changes:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::REPLACE-WITH-BUCKET-NAME/*"
            ]
        }
    ]
}
```
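If you prefer the command line, the S3 steps above can be sketched with the AWS CLI. This is a rough equivalent of the console clicks, not a definitive script — the bucket name and region are placeholders, and the error document should match your Hugo setup:

```shell
# Assumptions: awscli is installed and configured with an admin-level profile.
# Bucket name and region are placeholders - change them to suit.
BUCKET=my-new-website-hugo
REGION=ap-southeast-2

# Create the bucket (LocationConstraint is required outside us-east-1)
aws s3api create-bucket --bucket "$BUCKET" --region "$REGION" \
    --create-bucket-configuration LocationConstraint="$REGION"

# The CLI equivalent of unticking "Block all public access" for bucket policies
aws s3api put-public-access-block --bucket "$BUCKET" \
    --public-access-block-configuration BlockPublicPolicy=false,RestrictPublicBuckets=false

# Enable static website hosting; Hugo generates 404.html by default
aws s3 website "s3://$BUCKET/" --index-document index.html --error-document 404.html
```

You'd still attach the bucket policy above (via `aws s3api put-bucket-policy`) to make the objects publicly readable.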
### CloudFront

1. Go to [CloudFront](https://console.aws.amazon.com/cloudfront/v3/home?region=ap-southeast-2#/distributions).
2. Create a new distribution.
3. Set the S3 bucket as the origin.
4. Set `index.html` as the root object.
5. Save the distribution.
6. Copy the distribution ID (you'll need it for the CI configuration later).

### Route53

1. Set up your domains in Route53. You'll have to create a new Hosted Zone for each separate domain you have (not subdomains though!).
2. You may need to update the NS records with your domain registrar if the domain isn't registered through AWS.
3. Create an A record (and an AAAA record if you want IPv6 too), flip the "alias" toggle, and point it at your CloudFront distribution.

### SSL/TLS Certificates

1. Go to AWS Certificate Manager. Note that certificates used with CloudFront must be requested in the us-east-1 (N. Virginia) region.
2. Request a certificate.
3. Add all the domain names you wish to use.
4. Select DNS validation.
5. While the certificate is pending validation, you should see all the requested DNS entries.
6. Click the button to "Create DNS records in Amazon Route53".
7. Create the records.

### Linking Certificate And Distribution

1. Once the certificate is issued, go back to the CloudFront distribution you created and edit it.
2. Add all the alternate domain names you're using.
3. Add the certificate you created.
4. Save the changes.

### IAM

We'll need to add an [IAM (Identity and Access Management)](https://console.aws.amazon.com/iamv2/home?#/home) user so that GitLab can upload files to the S3 bucket and invalidate the existing CloudFront caches.

1. Create a new Policy.
2. Select the JSON editor and add the following:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "writes3accessgitlabci",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::my-new-website-hugo/*"
        },
        {
            "Sid": "lists3accessgitlabci",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-new-website-hugo"
        },
        {
            "Sid": "cloudfrontsaccessgitlabci",
            "Effect": "Allow",
            "Action": [
                "cloudfront:CreateInvalidation",
                "cloudfront:GetInvalidation",
                "cloudfront:ListInvalidations"
            ],
            "Resource": "*"
        }
    ]
}
```

3. Save the policy.
4. Create a new User.
5. Give it a name and select "Access key - Programmatic access" for the access type.
6. Attach the created policy to the user and save.
7. **Copy the Access key ID and the Secret access key!**

## GitLab CI Configuration

You'll need to make sure you initialise a local repository for your Hugo site. Don't commit the `public/` directory; in fact, you should probably use a `.gitignore` file to avoid committing junk. You can start with something simple like the following:

```
/public
Thumbs.db
.DS_Store
```

Next up, you'll also need to create a `.gitlab-ci.yml` file; this will contain the configuration that tells GitLab how to build and deploy your site. An example is below:

```yaml
stages:
  - build_html
  - deploy_html

variables:
  AWS_DEFAULT_REGION: ap-southeast-2
  BUCKET_NAME: your.bucket.name.here
  CLOUDFRONT_DIST_ID: ASDASDASDASD

buildHugoSite:
  image: monachus/hugo
  stage: build_html
  script:
    - hugo
  artifacts:
    paths:
      - public
  only:
    - master

deployToAws:
  image: garland/aws-cli-docker
  stage: deploy_html
  dependencies:
    - buildHugoSite
  script:
    - aws configure set preview.cloudfront true
    - aws s3 sync ./public s3://$BUCKET_NAME --delete
    - aws cloudfront create-invalidation --distribution-id $CLOUDFRONT_DIST_ID --paths "/*"
  only:
    - master
```

You'll need to replace the variable values with ones that pertain to your setup. Then commit the result.
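Before handing the keys over to GitLab, it's worth sanity-checking the IAM user locally. This sketch assumes you've configured the new user's keys under a profile named `gitlab-ci` (the profile and bucket names are placeholders), and uses `--dryrun` so nothing is actually changed:

```shell
# Assumptions: awscli installed; the IAM user's keys saved under a "gitlab-ci" profile.
export AWS_PROFILE=gitlab-ci

# The policy should allow listing the bucket...
aws s3 ls s3://my-new-website-hugo

# ...and writing to it. --dryrun prints what would be uploaded without touching anything.
aws s3 sync ./public s3://my-new-website-hugo --dryrun
```

If either command fails with an access denied error, revisit the policy's `Resource` ARNs — a common slip is a mismatch between the bucket name in the policy and the real bucket.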
Create your repo in the GitLab web UI, add the remote as it suggests, and push your local work up. To finish the CI/CD setup:

1. In the GitLab web UI for your repository, go to Settings -> CI/CD -> Variables.
2. Add a variable called `AWS_ACCESS_KEY_ID` with the access key you got from the IAM user setup.
3. Add a variable called `AWS_SECRET_ACCESS_KEY` with the secret key you got from the IAM user setup.
4. Run the CI/CD pipeline.

Your site should now be up!
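Once the pipeline has run, a quick way to confirm CloudFront is actually serving your site is to inspect the response headers (the domain below is a placeholder for one of yours):

```shell
# -s silences progress, -I fetches headers only. The x-cache header indicates
# whether CloudFront served the response from its cache or fetched it from S3.
curl -sI https://www.example.com | grep -iE 'HTTP|x-cache|server'
```

A `Hit from cloudfront` or `Miss from cloudfront` value in `x-cache` means requests are flowing through the distribution rather than hitting the bucket directly.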