Using Azure Static Websites
Over the years I've chronicled the technical changes to this blog and its hosting provider. Years ago, I moved it from Drupal to Middleman to cut down on the maintenance. Later, I containerized my Ruby environment to get rid of RVM madness.
For a fun weekend project, I moved from S3 to the brand new (still in preview) Static Websites service from Azure.
Essentially, I moved the hosted portion of my blog, but didn't actually change the underlying blog software. I'm still using Middleman to generate the blog.
In this post, I'll cover the first two steps of my move:
- Move my blog's files from S3 to Azure Storage
- Use Azure Static Websites to serve the site
In the future, I want to add a couple new features, though:
- Add Azure CDN to speed things up
- Finally start using SSL for this site
So I will cover those in a follow-up post.
Step 1: Setting Up Azure Storage
Really, the process of "migration" is more about uploading a bunch of static files, then redirecting DNS. Because Middleman can generate the entire site from my source code (which lives in a private BitBucket repo), there's no data migration necessary. I don't have to get data out of S3. I can just upload a fresh copy.
So to kick things off, I logged into the Azure web portal and then created a new storage account. And I turned on the Static Website feature.
Rather than repeat the exact steps I did, I'll point you straight to the (frequently updated) official documentation. That team does a stellar job of keeping up with changes, and for a preview service that is important. The entire process documented there seriously took me only a couple of minutes.
A few quick notes:
- When the docs tell you to enter `index.html` (which seems to override the default of `index.html`), you do actually need to do that. I suspect that will be fixed in the future.
- I did not create a `$web` container at this point. I let the tooling do it for me later.
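
If you'd rather script this step than click through the portal, the Azure CLI can do the same thing. Here's a rough sketch; the resource group, storage account name, and location are made-up placeholders, so substitute your own:

```bash
# Create a resource group and a general-purpose v2 (StorageV2) account,
# which is what the Static Websites feature requires.
az group create --name blog-rg --location eastus
az storage account create \
  --name myblogstorage \
  --resource-group blog-rg \
  --kind StorageV2 \
  --sku Standard_LRS

# Turn on the Static Website feature and set the index document.
az storage blob service-properties update \
  --account-name myblogstorage \
  --static-website \
  --index-document index.html
```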
Step 2: Upload My Site
I have a handy `Makefile` that I use to do various blog tasks. It looks like this:
```makefile
APPROOT=/usr/src/myapp
UPLOAD_FLAGS ?= -o table

.PHONY: post
post: TITLE ?= Untitled
post: COMMAND = bundle exec middleman article "$(TITLE)"
post: dockerize

.PHONY: build
build: COMMAND = bundle exec middleman build
build: dockerize

.PHONY: serve
serve: EFLAGS = -p 4567:4567
serve: COMMAND = bundle exec middleman serve
serve: dockerize

.PHONY: docker-build
docker-build:
	docker build -t $(IMAGE) .

.PHONY: dockerize
dockerize:
	docker run -it --rm --name $(NAME) -v "$(CURDIR)":$(APPROOT) -w $(APPROOT) $(EFLAGS) $(IMAGE) $(COMMAND)

.PHONY: dist
dist:
	# Code to send this to AWS
```
In a nutshell:
- `post` creates a new post
- `serve` starts a local testing server
- `build` generates a static version of the site
- `dist` sends the static site to S3
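
For a sense of how those targets get used day to day, a typical session looks something like this (the post title is just an example):

```bash
# Draft a new post, preview it locally, then generate the static site.
make post TITLE="Using Azure Static Websites"
make serve    # local preview on http://localhost:4567
make build    # static output lands in build/
```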
To this `Makefile` I just added a new target:
```makefile
.PHONY: upload
upload:
	az storage blob upload-batch -d '$$web' -s build/ $(UPLOAD_FLAGS)
```
(`UPLOAD_FLAGS` just gives me a way to override the flags from the command line, like `$ UPLOAD_FLAGS="--dry-run" make upload`.)
The new command does a bulk upload of my static site (`az storage blob upload-batch`), sending it to the destination (`-d`) container named `$web` (note that we escape this for Make by writing `$$`). It reads the sources from the `build/` folder.
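
To make the escaping concrete: once Make expands `$$` back to `$` and fills in the default `UPLOAD_FLAGS`, the command the shell actually runs is just:

```bash
az storage blob upload-batch -d '$web' -s build/ -o table
```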
To authenticate `az` to my account, I set the env var `AZURE_STORAGE_CONNECTION_STRING`.
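
Rather than copying the connection string out of the portal, you can ask `az` for it. This assumes the same hypothetical account and resource group names from earlier:

```bash
# Fetch the connection string and export it for later az commands.
export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string \
  --name myblogstorage \
  --resource-group blog-rg \
  --output tsv)
```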
Now running `make build upload` builds my site from source, then uploads it to my new Azure static website.
The first time I ran the upload, it created the `$web` container, but it seemed to take about three minutes to get everything synced. In particular, mapping the `index.html` file to the document root seemed to take a bit. But from there, everything worked as expected.
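
If you want to double-check things from the command line, you can list what landed in the container and look up the website endpoint Azure assigned (again, the account and group names are my hypothetical placeholders):

```bash
# Confirm the files made it into the $web container.
az storage blob list --container-name '$web' --output table

# Look up the public URL of the static website.
az storage account show \
  --name myblogstorage \
  --resource-group blog-rg \
  --query "primaryEndpoints.web" \
  --output tsv
```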
Where Next?
At this point, I can hit my Technosophos blog at the URL provided by Azure. There are two possible routes to go from here:
- I could set up Azure's DNS service to point directly to this endpoint. This process is actually pretty easy. But that's not what I want to do.
- I would like to set up Azure's CDN service to cache my blog, then add an SSL certificate on the CDN service (something Static Websites doesn't support yet) so that the blog is served entirely over TLS.
That second option is what I am exploring now, and will document in a future post.