There are many articles on how to host a static website using S3. Many more explain how to enable HTTPS, and plenty of them lay out how to serve it all from your own domain. AWS's documentation alone covers all of it; in fact, that documentation is the basis for this article.

What makes this article worth writing (and hopefully reading) is the attention to the rationale behind every step, from bucket configuration to A record creation. The idea is to learn the concepts rather than to follow steps by rote. Details on which buttons to click and where to find them can be found in the documentation.

Disclaimer: some steps can or will incur costs. Many of them are optional when hosting a website, such as logging, using a custom domain, or using a CDN.

01. Register custom domain

Route 53 offers both domain registration and DNS hosting services. Domain registration means you pay (usually a yearly fee) for the domain name, e.g. domain-name.com. AWS will use a partner registrar to register your information as the owner of that domain. The other thing it will do is keep track of the authoritative name servers for that domain, i.e. the servers responsible for answering which IP address your domain name translates to.

That is where their DNS hosting service comes in. For a monthly fee, Route 53 provides servers that you can designate as your domain's authoritative name servers. Usually those are assigned automatically by Route 53 and configured into your domain registration. The container for your domain's records is called a hosted zone, and each hosted zone is served by a set of four name servers.
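
To make this concrete, here is a minimal boto3 (Python) sketch that looks up the hosted zone Route 53 created for a domain and prints the four name servers it was assigned. The domain name is a placeholder; adapt it to your own.

```python
import boto3

route53 = boto3.client("route53")

# Placeholder domain: look up the hosted zone created for it.
zones = route53.list_hosted_zones_by_name(DNSName="mydomain.com")
zone_id = zones["HostedZones"][0]["Id"]  # e.g. "/hostedzone/Z0123456789ABCDEFGHIJ"

# Each hosted zone comes with a delegation set of four name servers.
zone = route53.get_hosted_zone(Id=zone_id)
for ns in zone["DelegationSet"]["NameServers"]:
    print(ns)  # e.g. ns-123.awsdns-45.com
```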

02. Create and validate certificate

You can do this step with the AWS Certificate Manager (ACM) service. When you create your certificate, it is a good idea to add both your apex domain name and a wildcard for all subdomains, e.g. mydomain.com and *.mydomain.com, so that all your pages are covered by the certificate. The certificate is nothing more than proof that the content is being served by whoever controls the domain. In order to earn the requested certificate from the certificate authority (CA), you need to prove that you control the domain in question.

The way to do that with ACM is easy: for each domain name in the certificate (usually the apex and the wildcard names mentioned above), ACM provides a subdomain name (e.g. _e7a8d1a9c01ec67ee57e0941f2b43a39) and an AWS-controlled target that the subdomain should point to (e.g. _e87d0bf50c515881ed3a703a6216e409.wrnxprcrrv.acm-validations.aws.). All that is needed is to create the corresponding CNAME records. If your DNS is hosted in Route 53, ACM even has a button that creates the records for you. Shortly afterwards, your certificate should be issued and ready to be used.
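
If you prefer the API over the console, here is a hedged boto3 sketch of the same flow: request a certificate for the apex and wildcard names with DNS validation, then read back the CNAME records ACM wants you to create. The domain names are placeholders.

```python
import boto3

# Certificates used by CloudFront must live in us-east-1.
acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="mydomain.com",
    SubjectAlternativeNames=["*.mydomain.com"],
    ValidationMethod="DNS",
)
cert_arn = response["CertificateArn"]

# After a short wait, the validation CNAMEs become available.
cert = acm.describe_certificate(CertificateArn=cert_arn)["Certificate"]
for option in cert["DomainValidationOptions"]:
    record = option.get("ResourceRecord", {})
    print(record.get("Name"), "CNAME", record.get("Value"))
```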

03. Create and configure hosting bucket

It is strongly encouraged that the hosting bucket have the same name as the domain, especially if you're not using CloudFront. The reason is that S3 buckets don't have unique IP addresses; their IPs aren't even static. Requests are simply served from a vast pool of shared AWS IP addresses. Therefore, when a request is made to a bucket, it arrives at one of those many public AWS IP addresses, the Host header (the domain name in the URL) is extracted, and AWS uses that value to locate the bucket. There is no extra logic behind that. So if AWS fields a request to a bucket website addressed to jenkins.mydomain.com/status, it will look for a bucket named jenkins.mydomain.com. If the request is for my-example.s3-website.us-east-1.amazonaws.com/, it will look for a bucket named my-example.

On the other hand, using CloudFront works because it rewrites the Host header. The CloudFront distribution has an origins setting where the bucket, ELB, or website serving the content can be listed. Another workaround for mismatching bucket and domain names is to set up a same-region EC2 reverse proxy. One workaround that does not work is to create a CNAME record from mydomain.com to my-example.s3-website.us-east-1.amazonaws.com/. A CNAME only changes how the name resolves; the Host header stays mydomain.com all the way, so when the request reaches the servers behind my-example.s3-website.us-east-1.amazonaws.com/, S3 has nothing to serve because no bucket answers to requests for mydomain.com.
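
Putting the naming rule into practice, here is a minimal boto3 sketch of creating a hosting bucket named after the domain and enabling static website hosting (the no-CloudFront case). The domain name, region, and error document are placeholders.

```python
import boto3

region = "us-east-1"  # placeholder region
s3 = boto3.client("s3", region_name=region)

# The bucket name must match the domain that requests will carry in their Host header.
bucket = "mydomain.com"
if region == "us-east-1":
    s3.create_bucket(Bucket=bucket)  # us-east-1 rejects a LocationConstraint
else:
    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={"LocationConstraint": region},
    )

# Enable the website endpoint with index and error documents.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```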

Important: when using CloudFront, website users contact the CDN and the CDN contacts the bucket for the files, so the bucket should block all public access. CloudFront has a feature called origin access identities that can be created and referred to in the bucket policy to give the service read access. If CloudFront is not being used, then public access needs to be allowed in the bucket permissions.
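
For the CloudFront case, here is a hedged sketch of locking the bucket down and granting read access to an already-created origin access identity. The bucket name and OAI ID are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "mydomain.com"   # placeholder hosting bucket
oai_id = "E2EXAMPLE1234"  # placeholder OAI ID from CloudFront

# Block all public access: only CloudFront should read from the bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Bucket policy that lets the OAI (and therefore CloudFront) read objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```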

04. Create and configure logging bucket

The bucket should be in the same region as the hosting bucket for cost reasons. The main detail for this step is the permission configuration. By default, new buckets have ACLs disabled (Object Ownership is set to bucket owner enforced), so only the account that owns the bucket can write to it. CloudFront delivers standard logs by granting its log delivery account access through an ACL rather than a bucket policy, so you must manually enable ACLs on the logging bucket for logging to work.
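
A hedged boto3 sketch of that configuration, assuming a logging bucket in the same region as the hosting bucket; the bucket name and region are placeholders.

```python
import boto3

region = "us-east-1"              # placeholder, same region as the hosting bucket
s3 = boto3.client("s3", region_name=region)
log_bucket = "mydomain.com-logs"  # placeholder logging bucket

s3.create_bucket(Bucket=log_bucket)  # add CreateBucketConfiguration outside us-east-1

# Relax Object Ownership so ACLs work again, allowing CloudFront's log
# delivery account to be granted access when logging is enabled.
s3.put_bucket_ownership_controls(
    Bucket=log_bucket,
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerPreferred"}]},
)
```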

05. Add content to hosting bucket

This part highly depends on personal preferences and objectives. You can add a personal project using CSS and JavaScript, you can add a whole Jekyll blog, or you can simply add an index.html page for testing.
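
For example, uploading a test index.html with boto3, setting the content type so browsers render it instead of downloading it; the bucket name and body are placeholders.

```python
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="mydomain.com",  # placeholder hosting bucket
    Key="index.html",
    Body=b"<html><body><h1>It works!</h1></body></html>",
    ContentType="text/html",  # without this, S3 defaults to binary/octet-stream
)
```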

If you’re going the Jekyll route, or any other route where you manipulate your links, be very wary. In my case, changing the permalink of pages caused all requested pages other than the root page to not be found. But the worst part was that I did not get a Page Not Found error. Instead, I got an Access Denied error (S3 returns Access Denied rather than Not Found when the requester isn't allowed to list the bucket), making it much harder to find the problem. To summarize, I had to create a CloudFront Function that appended index.html to all request URIs ending in a /. You can read more about that here.

06. Create and configure CDN distribution

By default, CloudFront only knows how to serve requests addressed to its own domain name, which should be something like https://xxxxxxxxxxxxxx.cloudfront.net. In order to let it know it is okay to serve requests that were originally sent to your custom domain (mydomain.com), you need to list that domain under the distribution's alternate domain names (CNAMEs). Step 3 dives a bit deeper into this topic. Step 7 will not work if your alternate domain names are not properly set.

This step is also where the certificate created in step 2 comes into play. Attach the certificate covering your alternate domain names to the distribution so it can terminate TLS for them, and you can then redirect all requests to HTTPS.

A quick note on the origin domain: it is best practice to use the domain name that includes your bucket's region. The bucket's global domain works for most regions, but not all.
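
Tying steps 2, 3, and 6 together, here is a hedged boto3 sketch of a distribution config with alternate domain names, the ACM certificate, and a regional S3 origin behind an OAI. All identifiers are placeholders, many optional settings are omitted, and the cache policy ID is AWS's managed CachingOptimized policy.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

cert_arn = "arn:aws:acm:us-east-1:123456789012:certificate/placeholder"  # must be in us-east-1
oai_id = "E2EXAMPLE1234"                                                 # placeholder OAI

config = {
    "CallerReference": str(time.time()),
    "Comment": "Static site for mydomain.com",
    "Enabled": True,
    "DefaultRootObject": "index.html",
    # Alternate domain names: without these, step 7 will fail.
    "Aliases": {"Quantity": 2, "Items": ["mydomain.com", "*.mydomain.com"]},
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "s3-hosting-bucket",
            # Regional bucket endpoint, as recommended above.
            "DomainName": "mydomain.com.s3.us-east-1.amazonaws.com",
            "S3OriginConfig": {
                "OriginAccessIdentity": f"origin-access-identity/cloudfront/{oai_id}"
            },
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-hosting-bucket",
        "ViewerProtocolPolicy": "redirect-to-https",  # force HTTPS
        # AWS managed "CachingOptimized" cache policy.
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
    },
    "ViewerCertificate": {
        "ACMCertificateArn": cert_arn,
        "SSLSupportMethod": "sni-only",
        "MinimumProtocolVersion": "TLSv1.2_2021",
    },
}

distribution = cloudfront.create_distribution(DistributionConfig=config)["Distribution"]
print(distribution["DomainName"])  # e.g. dxxxxxxxxxxxxxx.cloudfront.net
```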

07. Create alias records

If you followed the previous steps correctly, this step should be simple. Go to the hosted zone of your custom domain and create an alias A record pointing to your CloudFront distribution. If your CloudFront distribution doesn't show up as an option when you create the record in Route 53, chances are that you did not configure your distribution's alternate domain names to include your custom domain name. For the reasons touched upon in steps 3 and 6, Route 53 will not let you create an alias to a CloudFront distribution that does not list the domain (or subdomain) as an alternate domain name.
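
A hedged boto3 sketch of that record, assuming the hosted zone and distribution domain from the earlier steps; the hosted zone ID and distribution domain are placeholders, while Z2FDTNDATAQYW2 is the fixed alias hosted zone ID AWS documents for CloudFront targets.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # placeholder: your domain's hosted zone
    ChangeBatch={
        "Comment": "Point the apex at the CloudFront distribution",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "mydomain.com",
                "Type": "A",
                "AliasTarget": {
                    # Fixed hosted zone ID used for all CloudFront alias targets.
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": "dxxxxxxxxxxxxxx.cloudfront.net",  # placeholder
                    "EvaluateTargetHealth": False,
                },
            },
        }],
    },
)
```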