Static Website in Azure via Infrastructure as Code
THIS IS AN UNSUPPORTED DEMO. No support for this is provided, and it may vanish at any time.
This is a work in progress. Not all steps are IaC yet, and potentially there will be steps where IaC is infeasible or impossible.
I track TODO items in this document below, but in a production Azure DevOps environment, these would be tracked as Work Items on an Azure Board instead. I use this document so that the work plans are publicly visible without the need for an Azure DevOps account.
URGENT TODOs
NONE.
Regular TODOs
Add multiple environments (e.g. prod at a minimum) to show a multi-stage release.
Add a second repo to the release to show how multiple artifacts from multiple build pipelines get combined into a release.
Set up tests that certain milestones have been reached. Make these configurable so that others following this doc can use them. Divide work into roughly 2 hour chunks with tests at those intervals.
Document the pipeline creation process.
Plan for the 2027 removal of the Microsoft CDN
- SSL cert is provided by the CDN in this existing method
- SSL certs for custom domains are not supported by bare storage accounts
- Microsoft Front Door is the recommended CDN replacement but it's $35/month flat rate vs. very small charges for small CDN sites
- Microsoft Static Web Apps free tier may work well
Change from the legacy Service Principal shared secret-based authentication into Azure to GitHub-to-Azure OIDC-based authentication.
Move the images to an "images" directory instead of the same level as README.md. This makes it easier for me to find just the Markdown files when editing.
Current checkpoints
These are also visible in the contents to the right and should be labelled "Checkpoint" followed by a description.
- Checkpoint 1: Test software is installed (included in the Pre-Work document.)
- Checkpoint 2: Local Git repo and Visual Studio Code
- Checkpoint 3: Azure account created
- Checkpoint 4: Azure DevOps organization
- Checkpoint 5: Code is in your Azure DevOps repo
- TODO: Checkpoint 6 and 7: Find good spots in the mess of IaC implementations
- Checkpoint 8: Document site is visible on the Internet
- Checkpoint 9: Your document is visible on your custom URL
Prerequisites
This is intended for an audience with at least a passing familiarity with Azure and its concepts. An AZ-900 would be a very good idea, but covers more than is necessary to follow this process.
Overall, intermediate technical skills and a willingness to explore the unknown rather than becoming frustrated by it would be very useful here.
Goals
Start with nothing
This project is started from a PC with minimal tools installed and no pre-made code. There are examples here, but the intent is to show you how to build up from nothing, not to copy a completed example. This, by necessity, limits our modularity and we end up with an inflexible single example rather than an exploration of options.
You are encouraged to branch out from here and explore options as needed, but only after completing this linear, monolithic example. I know it will be hard. It always is for me.
End up production-ready
We'll try not to gloss over the details necessary to make a website robust enough and secure enough to run as an actual corporate website. The limitation is that this will be 100% static code which eliminates a lot of complexity involved with authentication, authorization, and data persistence.
TODOs to get to this:
- Create limited-purpose users instead of doing everything as Owner
- Owner needed for Service Principal creation
- Many other steps can be done as Contributor
Preferences on Automation
Not everything can be automated and not everything that can be automated can be done so with Bicep-based declarative syntax. Here is the order of preference for performing specific tasks:
- First choice: Bicep-based declarative automation via pipeline
- Second choice: Az CLI based procedural automation via pipeline
- Third choice: PowerShell based procedural automation via pipeline
- Fourth choice: Above done without a pipeline, same ordering
- Last choice: Manual steps via Azure Portal (non-automated)
Pre-Work: Install software we'll be using
Checkpoint 1: Test software is installed
This step is included in the above Pre-Work document.
Special Windows Git Bash Note
If you see an error containing a path like C:/Program Files/Git/ with a local path substituted for a resource ID in Git for Windows Bash, this is not something you did wrong. This is Git for Windows trying to be "helpful" and failing spectacularly. This does not happen in Bash on Linux.
Prefix your command with MSYS_NO_PATHCONV=1 to make Windows Bash not try to help for the next command.
For example, if your command was:
az ad sp create-for-rbac --name "sc_${AZUREIAC_BASENAME}_dev" --role "Contributor" --scopes /subscriptions/${SUB_ID}
You would prefix it with MSYS_NO_PATHCONV=1
which changes it to:
MSYS_NO_PATHCONV=1 az ad sp create-for-rbac --name "sc_${AZUREIAC_BASENAME}_dev" --role "Contributor" --scopes /subscriptions/${SUB_ID}
Decide on a short name to describe your environment
I lack creativity, so I'm using sbonds2023azureiac. This will be part of the name of your DevOps organization and your Azure subscription, and will be publicly visible.
It's helpful for this to be globally unique. You may need to tweak some names in various places if it's not globally unique.
It should contain only alphanumeric (ASCII) characters and no spaces. It should be under 20 characters. Characters should be lowercase. This name will be case-sensitive in some contexts and case-insensitive in others. Why bother with the complexity of remembering which are which?
Since this will ultimately be your project, you can code around those restrictions if desired, but I'm putting these restrictions in place to keep things simple at the start. This is the least-common-denominator of naming in Azure.
Review naming requirements:
- globally unique across all Azure users anywhere
- under 20 characters. You might be able to get away with exactly 20 but that can cause problems on certain resource names.
- must start with a lowercase alphabetic (ASCII a-z) character
- must consist of lowercase alphabet characters or numbers. No dashes, spaces, Unicode, symbols, etc. a-z or 0-9.
Embed this base name into your Git Bash shell
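A sketch using the author's base name; substitute your own:

```bash
# Set the base name for this shell session (re-run after opening a new shell).
export AZUREIAC_BASENAME="sbonds2023azureiac"
```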
There will be no output from the above. To check it you can use:
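```bash
echo $AZUREIAC_BASENAME
```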
Create a place for the code
You can't have IaC without C, so the first thing we do is set up a place to put it. A README.md file with my notes was the first thing created. You are reading the end result of those notes now.
From your Git Bash shell:
cd $HOME
mkdir git-${AZUREIAC_BASENAME}
cd git-${AZUREIAC_BASENAME}
mkdir infrastructure
cd infrastructure
git init --initial-branch=main
Set up the internal Git name and E-mail if they differ from your global defaults set earlier. (Note the absence of --global in these commands compared with earlier.) The email should match the one used when signing up in Azure and the one associated with your Owner account.
If the E-mail address associated with your Azure login differs from your git global default, then do this from inside the git repo (e.g. with the "infrastructure" directory as your current working directory.)
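A sketch of those per-repo settings (the name and e-mail values are placeholders; use your own):

```bash
cd $HOME/git-${AZUREIAC_BASENAME}/infrastructure
git config user.name "Your Name"
git config user.email "yourname@example.com"  # match your Azure Owner e-mail
```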
The file you're reading now was created and saved as $HOME/git-sbonds2023azureiac/infrastructure/README.md. Our first bit of "code" is in place! You are welcome and encouraged to do the same with your own notes about building your own environment as you proceed.
Open the "infrastructure" folder in Visual Studio Code
File... Open Folder. Navigate to the above just-created git repository. You probably trust the author because it was you.
(Optional) Take notes in your README.md about how you set this up
That's what you're reading now-- my notes. You can (arguably should) create your own notes, but for the purposes of publishing "something" any Markdown content will work just fine.
Checkpoint 2: Local Git repo and Visual Studio Code
If you have Visual Studio Code running with your README.md in it, and know how to get back here after a potential multi-day delay, then you're ready to proceed.
Create a new Azure account
Microsoft offers a pretty generous free allowance of $200 for the first 30 days. Because of this temptation to create a bunch of free accounts, their identity verification on Azure accounts can be a bit rough. For example, once you use a given phone number for a free account once, you can never use it to create a new free tier account later. You might be able to re-use a phone number or card to go directly to a paid account, but I have not tried that.
Open a new Azure account from https://azure.microsoft.com/en-us/free/.
Note that you WILL be upgrading your account to direct pay in order to proceed with this work. If you have used this credit card or your phone number before, ever, then you won't get the free trial and will need to immediately upgrade to direct pay. This is fine-- none of this work uses a significant amount of Azure resources and one of the first things we'll do is set up cost alerts in case something unexpected happens. However, you are expected to know enough about Azure to avoid purchasing expensive stuff. While Microsoft generally puts up roadblocks and warnings prior to expensive purchases, this is NOT GUARANTEED. In particular, if you ever find yourself needing to adjust a quota, pause and explore before doing so.
If costs seem to be getting out of control and you can't solve them any other way, you can always delete all your subscriptions and get back to this starting point. This un-does a lot of work, but it's nice to know you have a "do over" option and can limit the costs of your unexpected deep-dive into Azure resource costs.
Creating your account ultimately ends up with an Individual account of a Microsoft Customer Agreement type.
We have seen unusual behavior when re-using an account that was previously part of a free trial, such as an inability to access the Cost Management - Budgets area necessary to set spending alerts. The best approach for this exercise is to create a completely new Azure account that has never been used before. This helps ensure that you're truly starting from nothing.
Upgrade your account to a paid account
You won't be able to create an additional subscription or access Management Groups until you change to a billed account instead of a free account. (Thanks, Uziel, for finding that out for me!)
When asked what support level you want, choose the zero cost option which is currently called "Basic."
Set up a budget to alert you on any charges
Home - Cost Management - Budgets - Add New
Give your budget a name like "ZeroCost"
BE SURE TO ENTER A CORRECT E-MAIL ADDRESS as if your alerts go to nowhere, Microsoft will not care and you could be charged a surprising amount.
Create Azure Subscription
We could use the default subscription, but that doesn't reflect the reality of a "production ready" environment.
We have a Microsoft Customer Agreement type, so these are the instructions we will follow. Don't do them now, keep reading.
Log in via Az CLI
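A minimal sketch of the device-code login:

```bash
# Prints a device login URL and a one-time code instead of opening a browser.
az login --use-device-code
```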
Open a browser where you can log in to Microsoft using your Azure Owner account (probably the same one you used to create your account above) and paste in the device login URL. Provide the code when requested.
The az login command should return some JSON containing your account E-mail address.
Identify the billing account this subscription will be associated to
Chances are you only have one billing account, but this makes sure. From your local Git Bash SCM command line while logged in as your Owner ID:
You'll need the "name" field. Here's one way to get it:
AZUREIAC_BILLING_ACCOUNT_NAME=$(az billing account list --query [].name -o tsv)
echo $AZUREIAC_BILLING_ACCOUNT_NAME
Identify the "invoice section" this subscription will be billed under
This is overkill for our currently simple config, but for a large company with hundreds or thousands of departments, this can be an important part of financial management.
Part of this exercise is to give you a chance to see ALL the details. Hopefully this extra complexity is worth the chance to learn how to set up everything.
In our case we only have one invoice section, so pick that as the first one in the list of values using --query [].invoiceSections.value[0].id:
AZUREIAC_BILLING_INVOICE_SECTION_ID=$(az billing profile list --account-name "$AZUREIAC_BILLING_ACCOUNT_NAME" --expand "InvoiceSections" \
-o tsv \
--query [].invoiceSections.value[0].id \
)
echo $AZUREIAC_BILLING_INVOICE_SECTION_ID
AZUREIAC_BILLING_INVOICE_SECTION_ID should start with /providers/Microsoft.Billing/billingAccounts/.
Create a new management group the subscription will be created under (takes 15 minutes)
New accounts don't start out with management groups. But to create a subscription programmatically, this is the scope that's needed. This process takes about 15 minutes to complete.
Make sure your $AZUREIAC_BASENAME is still set.
Create a management group named after your basename:
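A sketch; the MG- prefix matches the --management-group-id used when deploying the subscription below:

```bash
az account management-group create --name "MG-${AZUREIAC_BASENAME}"
```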
This takes a long time (up to 15min) to complete.
Create the Subscription using a Bicep file
The file is a verbatim copy of the one from Microsoft in the document we're following (https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement?tabs=azure-cli)
Make sure your $AZUREIAC_BASENAME is still set.
az deployment mg create \
--name "${AZUREIAC_BASENAME}-deploy" \
--location westus \
--management-group-id "MG-${AZUREIAC_BASENAME}" \
--template-file create-new-subscription.bicep \
--parameters subscriptionAliasName="${AZUREIAC_BASENAME}" billingScope="$AZUREIAC_BILLING_INVOICE_SECTION_ID"
TODO: Investigate odd behavior where this subscription gets created under the root management group instead of the one specified above. Probably the name isn't what gets passed as the ID and it silently defaulted to the root management group.
Maybe that whole Management Group creation thing was optional.
If you see this error:
Then go back to the Azure Portal (https://portal.azure.com) and use the "Upgrade" link in the top bar of Azure to convert to a paid account. This is necessary to go beyond the one default subscription.
Register resource providers we know we'll need
These are, in theory, automatically registered as needed. In practice, that process often fails. So here is some wisdom from The Future, learned when various later steps failed because this auto-registration didn't happen.
Make sure your $AZUREIAC_BASENAME is still set.
export AZUREIAC_BASENAME="sbonds2023azureiac"
az account set --subscription $AZUREIAC_BASENAME
az provider register --namespace 'Microsoft.CDN'
Log out and back in to refresh subscription list
Avoids this error:
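The log out/log in pair itself is a sketch along these lines:

```bash
az logout
az login --use-device-code  # repeat the device-code login from earlier
```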
az account set --name "$AZUREIAC_BASENAME"
Checkpoint 3: Azure account created
Management Group exists
Billing alerts set up
Check Home - Cost Management - Budgets
We named ours "ZeroCost" earlier and the alerts should be set up similar to this:
Check that the E-mail address you used is correct.
If you REALLY want to check, run a small VM for a few hours to trigger that $0.50 alert. Remember to turn it off.
Resource providers registered
Create Azure DevOps Organization (no IaC)
Azure DevOps organizations can't be created by code yet. Ironic. A common workaround is to create a Visual Studio Account which creates an org as part of its setup process, but that requires a $1200 Visual Studio subscription, so... nope.
This will not be the last time that we find that Azure DevOps is not especially DevOps-compatible. The irony is not lost upon its users.
DevOps Org name: same as your base name
Go to dev.azure.com and choose "Start Free".
Set it up with the same name as your BASENAME:
DevOps Project name: base name + -project
This can be done as IaC, but it's trivially easy to do this one-time task in the GUI now. For an example of doing this via IaC, see Appendix B.
Name the project as your BASENAME-project as a private project. Arguably this could/should be a public project.
Bookmark your Azure DevOps URL in the form of https://dev.azure.com/sbonds2023azureiac (will change based on your BASENAME.) Going straight to dev.azure.com does not provide a list of your organizations, so you need to know the full URL to at least one of them.
TODO: Consider making these public by default which requires a bit more Org setup.
Create Azure DevOps infrastructure repo
We need a place to put our code, which is a Repo. Azure created a default repo for you named after the project, but let's name ours "infrastructure".
This can be done as IaC, but it's trivially easy to do this one-time task in the GUI now. For an example of doing this via IaC, see Appendix B.
Let's create a new repository:
Name it "infrastructure" and uncheck "Add a README".
Create Azure DevOps docs-mkdocs repo
Use the same process as above but name the repo "docs-mkdocs." This simulates a common team environment where different people/groups may be responsible for web content vs. the web infrastructure.
Enable the Azure DevOps organization to run one pipeline at a time (no IaC)
The default is zero, which leads to ##[error]No hosted parallelism has been purchased or granted when attempting to run a pipeline.
Free option -- Microsoft-hosted free tier (3 business day response)
Microsoft: Configure and pay for parallel jobs mentions this:
You should complete that form, requesting a parallelism increase for Private projects.
Paid option -- purchase time on Microsoft-hosted VMs (takes 30 minutes)
If you get rejected on your free grant request or you can't wait 3 business days for a reply, this is another option. One parallel job cost me $6 for about a week of hosting.
Go to Organization Settings - Pipelines - Parallel Jobs
We want to change the Microsoft-hosted Private Projects parallel jobs from 0 to 1:
We have not set up Azure DevOps billing yet, so the limit of 0 applies. Choose "Set up billing":
At the subscription dialog, choose your new subscription based on your base name.
Change the paid parallel jobs for Pipelines for private projects / MS Hosted from 0 to 1:
Click save at the bottom. Go back to Parallel jobs and confirm the setting:
Checkpoint 4: Azure DevOps organization
Azure DevOps organization created
Log in to your Azure DevOps organization URL. Note that going straight to dev.azure.com never works, you need to know your org URL.
Azure DevOps has a mostly empty repository
Check that under "Repos" you can see your repo.
Note that I have three repos here but at this phase you should only have one. You'll have three pretty soon.
Azure DevOps organization has pipeline parallelism capacity
Go to Organization Settings - Pipelines - Parallel Jobs
Set up authentication for Git
There are lots of ways to do this and they all are terrible in one way or another. They are all a balance of:
- Keep your repo secure from unauthorized access
- Avoid entering your password too many times
- Keep setup complexity to a minimum
- Avoid platform-dependent methods
Whatever method you choose, the goal is that you can run both Git CLI commands AND push code directly from Visual Studio Code.
Options:
- Git bash with ssh-agent (my preferred method. Enter your password once per reboot. Same process on all OSes.)
- Unprotected ssh key (anyone with the key can access your Git with your authority)
- OS-specific credential helper
Microsoft prefers using an OS-specific credential helper. This is nice because it's very transparent to the user, but there's no way to confirm that the credentials are being stored securely. Also, when things go wrong, such as during a password reset, it's possible to lose the entire credential cache, which causes random authentication problems for months.
An unprotected ssh key is not great because anyone who gets a copy of the key now has access to everything that key protects. This is a very common way that attackers extend access to new infrastructure once they've broken in. You should avoid keeping any sort of administrative ssh key unprotected.
Which takes me to the preferred method of using Git bash + ssh-agent. It ticks these boxes:
- Keep your repo secure from unauthorized access
- Avoid entering your password too many times
- Avoid platform-dependent methods
At the expense of this one:
- Keep setup complexity to a minimum
The good news is these steps only need to be done once per reboot. While I explain each step, a full understanding is not necessary. If needed, consider it a magic litany to get Visual Studio Code to authenticate to Git.
But first, some one-time prep to create our key. This does NOT need to be done every reboot. This only needs to be done once. Ever.
Git Bash ssh key generation (one-time)
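A sketch; plain ssh-keygen generates an RSA key pair whose default filename is the id_rsa referenced below:

```bash
ssh-keygen
```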
When asked for a filename, press enter to accept the default of id_rsa. This creates two files:
- id_rsa: your secret key. Never share this or copy it.
- id_rsa.pub: your public key. This is what gets added to Azure DevOps or other places. Can be shared freely.
It will prompt you for a pass phrase. Make it complicated. You'll only need to enter this once per reboot. If you use a password manager, this is a great candidate to add there while making it really nasty.
Git bash ssh-agent + start Visual Studio Code (once per reboot)
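A sketch of the once-per-reboot commands. The ssh-agent.bash helper referenced below is assumed to start ssh-agent (if needed) and export its environment variables:

```bash
. ssh-agent.bash   # import SSH_AUTH_SOCK/SSH_AGENT_PID into this shell
ssh-add            # load your key; prompts for the pass phrase
```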
ssh-add will prompt you for your pass phrase for the key. A right-click in the Git Bash window will paste, though you will not see it echoed to your screen.
You should see a single line of text similar to:
Start Visual Studio Code via command-line from inside this bash shell. If it's already running elsewhere, exit Visual Studio Code. This is necessary so the running Visual Studio Code inherits the environment variables that ssh-agent sets and which you imported into your current shell by running . ssh-agent.bash. Feel free to inspect that file if you're curious.
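Assuming the code launcher is on your PATH (the Windows installer adds it by default):

```bash
cd $HOME/git-${AZUREIAC_BASENAME}/infrastructure
code .   # returns to the prompt once the GUI starts
```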
This command will exit when the Visual Studio Code GUI starts.
SSH first connection warning
At some point, you will run a "git push" or other command that forms the very first SSH connection with Azure Devops. At that point, you will get a one-time message similar to:
The authenticity of host 'ssh.dev.azure.com (20.41.6.26)' can't be established.
RSA key fingerprint is SHA256:ohD8VZEXGWo6Ez8GSEJQ9WpafgLFsOfLOtGGQCQo6Og.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?
You should accept this key, which gets stored in ~/.ssh/known_hosts to validate that this key has not changed. If it does change, you will get a very loud warning that the host key has changed and that something nasty may be happening. Unless you know why this is coming up, abort the operation and learn what happened.
Alternatives to ssh-agent git auth are fine
If you are familiar with and comfortable using another method to access Git, go ahead. You may see some references to upstream repos that are SSH-specific, but hopefully if you know enough to use an alternative method for authentication, you'll know to adapt/change those as necessary for the authentication method you chose.
If that gets too complicated or fails to work, feel free to come back and use ssh + ssh-agent as your Git credential store as described above.
TODO: Set up the new ssh key above to be permitted in Azure DevOps

User Settings - SSH Public keys - New Key

Suggested name: YOUR_NAME_HERE AT laptop bash
Get the public key for your id that you just created with ssh-keygen. Run this in Git Bash:
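```bash
cat ~/.ssh/id_rsa.pub
```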
Paste that entire content starting with ssh-rsa AAAAB3... into "Public Key Data" in the New Key dialog above.
Work around TG401019 error when creating a new repo via CLI
Azure DevOps does not allow new repositories to be created via CLI, in another case of an automation framework not allowing itself to be automated.
You must first create an empty infrastructure repo via the Azure DevOps GUI.
In my screenshot, the "infrastructure" repo already exists, but it won't exist for you yet.
Uncheck the "Add a README" option which would initialize and pre-populate the repo for you so you could pull it.
Given that Azure DevOps requires the repo to exist first, would it make sense to just pull it instead of creating a bare repo and pushing it? Probably. Are we going to do that? No. This isn't the most efficient path, this is a learning path. It's not often we get to create things from nothing, so we're making the most of it.
This process would also be useful if we were migrating/moving a repo into Azure from elsewhere.
Azure does helpfully provide instructions on pushing an existing repo:
Push the bare infrastructure repo to Azure DevOps
Make sure your $AZUREIAC_BASENAME is still set.
cd $HOME
cd git-${AZUREIAC_BASENAME}
cd infrastructure
git remote add origin git@ssh.dev.azure.com:v3/$AZUREIAC_BASENAME/${AZUREIAC_BASENAME}-project/infrastructure
git remote -v
Check that the remote string looks correct. Sometimes that long line gets mangled when this page is formatted for PDF.
Copy in the create-new-subscription.bicep from where you downloaded it when the subscription was created. Also create a README.md file if one does not exist, with the content "# README placeholder" or whatever Markdown you prefer.
git add README* create-new-subscription.bicep
git commit -m "Initial README and create-subscription Bicep file."
git push --set-upstream origin --all
Create a place for our static website source files: mkdocs
Since this work is intended to document how to do something, creating a self-referencing set of documents describing how to set this up makes sense.
I'll be using mkdocs since it works nicely for technical documentation and I'm already familiar with it.
The software used by the folks at GitLab is another example of creating a large doc store and processing it using different tools, but again leading to a set of static web pages.
TODO: Incorporate multiple different static web site generators publishing content to the same site. This will simulate a more "production-like" environment where there are multiple teams contributing in parallel.
Make sure your $AZUREIAC_BASENAME is still set.
cd $HOME
cd git-${AZUREIAC_BASENAME}
mkdir docs-mkdocs
cd docs-mkdocs
git init --initial-branch=main
mkdir docs
mkdir docs/stylesheets
git remote add origin git@ssh.dev.azure.com:v3/$AZUREIAC_BASENAME/${AZUREIAC_BASENAME}-project/docs-mkdocs
Create the mkdocs config
Feel free to explore these options on your own.
Use Visual Studio Code and "Open Folder" to the above docs-mkdocs repo.
Create the mkdocs.yml file in the root of the "docs-mkdocs" repo with this content. Substitute appropriate values for the contact area:
site_name: Your Name's MkDocs Documentation
extra_css:
  - stylesheets/extra.css
theme:
  name: material
  palette:
    scheme: customscheme
  features:
    - navigation.tabs
    - content.code.copy
  icon:
    logo: material/book
extra:
  contact:
    name: 'Your Name'
    email: 'yourname@gmail.com'
plugins:
  - search
  - awesome-pages
Add this content as docs/stylesheets/extra.css. Feel free to adjust the color.
Create docs/index.md with content similar to this, only using your name:
# Your Name - Mkdocs static content
This is an example of a static web page created from Markdown using MkDocs.
Create docs/.pages with this content (this tells awesome-pages how to order pages when rendered to HTML):
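The exact listing isn't reproduced here; a minimal sketch that puts index.md first (the "..." entry is awesome-pages syntax for "all remaining pages"):

```bash
# A guess at a minimal docs/.pages -- adjust as your page list grows.
cat > docs/.pages <<'EOF'
nav:
  - index.md
  - ...
EOF
```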
Create a .gitignore and exclude the default mkdocs static HTML destination directory named "site/". This helps avoid having the output of mkdocs checked into the repo.
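One way to do this; site/ is the only entry needed so far:

```bash
echo "site/" > .gitignore
```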
Optional: Test locally with mkdocs serve
If you elected to install mkdocs, you can test your local content easily with "mkdocs serve", which starts a small web server on port 8000 locally. With the above in place, browsing to http://localhost:8000/ should show something like this:
Optional: Build locally with mkdocs build
This mimics the command we'll be adding to our DevOps pipeline soon. It's always nice to make sure something works manually before automating it, since usually one can make fixes faster on a local copy vs. a pipeline.
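From the root of the docs-mkdocs repo; --strict mirrors the pipeline and turns warnings into errors:

```bash
mkdocs build --strict
```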
This produces static HTML in the "site" directory.
Push your mkdocs content into Azure Devops
git add docs mkdocs.yml
git commit -m "Example MkDocs source Markdown files"
git push --set-upstream origin --all
Check that "site" did NOT make it into the push by browsing to that repo in Azure DevOps Repos.
Checkpoint 5: Code is in your Azure DevOps repo
Three repos are in Azure DevOps
Check that under "Repos" you can see all three of your repos. Check that you can switch between them using the pull-down in the top bar:
mkdocs.yml exists and is in the repo
Check your "docs-mkdocs" repo for mkdocs.yml in the root directory.
Create a mkdocs site build pipeline
- Input: Markdown from docs-mkdocs/docs
- Output: static HTML in sites/
This pipeline will need to do the following:
- Install software necessary to run MkDocs on the build agent
- Run MkDocs to generate the content
  - use --site-dir to specify the output location
  - use the automatic Azure DevOps var Build.ArtifactStagingDirectory to set that location
- Copy the content to a location where we can grab it for publishing to the world
Create a pipelines folder in docs-mkdocs
Use Visual Studio Code's files interface, CLI, Windows Explorer, whatever.
Create a pipeline YAML file
Call it build-mkdocs-site.yaml.
FUTURE: A future optimization would be to install the software to a Docker image and store that image in the Azure Container Registry for later retrieval. This makes the build process go much faster since the Docker container can be fetched and run instead of the much slower build/install process.
TODO: We use fixed versions of mkdocs and mkdocs-material to ensure consistency between runs. Having an additional pipeline which uses the latest versions for testing would be a good idea. This helps give the development team a look ahead at what breaking changes may happen in their page rendering process as versions move forward while not forcing a mad scramble when the only pipeline breaks due to one of those changes.
Push this directly to main using Visual Studio Code's Git integration or via the Git command line.
trigger: none

pool:
  vmImage: ubuntu-latest

jobs:
  - job: install_mkdocs
    displayName: Install software necessary to run mkdocs
    steps:
      - task: Bash@3
        displayName: Install Mkdocs
        inputs:
          targetType: inline
          script: pip install https://github.com/mkdocs/mkdocs/archive/refs/tags/1.4.3.tar.gz
      - task: Bash@3
        displayName: Install Mkdocs-Material plugin
        inputs:
          targetType: inline
          script: pip install --ignore-installed https://github.com/squidfunk/mkdocs-material/archive/refs/tags/9.1.14.tar.gz
      - task: Bash@3
        displayName: Install Mkdocs Awesome Pages plugin
        inputs:
          targetType: inline
          script: pip install --ignore-installed https://github.com/lukasgeiter/mkdocs-awesome-pages-plugin/archive/refs/tags/v2.9.1.tar.gz
      - task: Bash@3
        displayName: Build site with mkdocs
        inputs:
          targetType: inline
          script: mkdocs build --strict --verbose --site-dir $(Build.ArtifactStagingDirectory)
      - task: PublishBuildArtifacts@1
        displayName: Upload static site content to Azure DevOps
Set up a pipeline from the YAML file
From the Pipelines tab, use "Create Pipeline":
Alternatively, from the repo, the "Set up build" leads to the same spot:
Where is your code? Azure Repos Git:
Which repo? We're setting up the pipeline to build the mkdocs content, so use docs-mkdocs:
We'll use our existing YAML we just created:
Select the main branch and the path to the pipeline you created and pushed:
It will show you a preview of the YAML. It should look like what you pushed into Git.
Run the pipeline. Hopefully it works. If not, troubleshoot the error messages.
The pipeline worked once upon a time with this config, but one of the "fun" parts of cloud computing is all the changes that happen without your knowledge. Stay flexible and learn to follow what the error messages say.
Set up access into the subscription via Service Principal
In order for a pipeline to make any changes to the configuration of Azure resources, such as deploying new ones, it needs a way to establish that it's allowed to do so. That is done via a Service Principal which is basically an Azure-only user that authenticates via password. Like all other passwords, it's important to keep those secret, which makes it tricky to configure into pipelines whose configurations are readable by many people.
Azure helps get over this by allowing us to define Service Connections which hold the secret Service Principal info and will allow appropriate tasks to be completed by pipelines.
Optionally the Service Connection can require a manual approval before each time it can be used, which may be appropriate for a connection that allows access to a production environment, for example.
These service principals tend to have broad, semi-anonymous authority, so avoid storing their secret info permanently and avoid passing it from computer to computer.
OWNER: Create service principal for main subscription dev connection
This creates one intended for our "dev" environment. Check that the subscription in az account show matches your BASENAME.
Running this in the cloud shell as the Owner user works well but you'll need to set your subscription to the correct one:
Make sure your $AZUREIAC_BASENAME is still set.
az account show
SUB_ID=$(az account show --query id -o tsv)
az ad sp create-for-rbac --name "sc_${AZUREIAC_BASENAME}_dev" --role "Contributor" --scopes /subscriptions/${SUB_ID}
The above produces output like:
{
"appId": "3fb17860-cd74-4ccc-8051-ff90192f0b1b",
"displayName": "sc_sbonds2023azureiac_dev",
"password": "OMITTED",
"tenant": "04e32eb1-1f37-4207-945d-c5860625403b"
}
Alternatively, a Service Principal with a less broad scope (e.g. resource group) could be created, but the resource group would need to be created outside the Bicep file. This may be more appropriate for pre-prod or prod.
Basically, we just created a virtual user who can create, change, or delete any resource in this subscription. This is dangerous stuff, but still better than using the Owner credentials in a pipeline.
(Manual) Create Azure DevOps Service Connection
Project Settings - Service Connections - New Service Connection - Azure Resource Manager - Service Principal (manual)
- Subscription Id: 71ef9743-fef3-4b44-80d3-8250a36d9439 ($SUB_ID above)
- Subscription Name: sbonds2023azureiac ($AZUREIAC_BASENAME above)
- Service Principal ID: 3fb17860-cd74-4ccc-8051-ff90192f0b1b (appId from output)
- Service Principal Key: omitted password field above
- Tenant ID: 04e32eb1-1f37-4207-945d-c5860625403b (tenant from output)
- Service Connection Name: DevEnvironment
- Description: Connects to sbonds2023azureiac for dev
- Grant access permission to all pipelines: checked
Set an approver on the Service Connection
This is optional but will demonstrate the controls possible on a Service Connection. This is more important for prod, but can be interesting to see in other environments before it gets really old and slows down progress.
Then you can turn it back off.
WORK IN PROGRESS: Create Bicep files to deploy a Storage Account via pipeline
If this seems like overkill for deploying a single resource, you're correct. The benefit from all this isn't in the deployment of a single resource. The benefit is it's written down and will be consistent from run to run without concern for normal human error. It also provides a starting point, some minimum viable product, for making future improvements.
Once a set of files is working in dev, the SAME CONFIG can be deployed for testing and there's a clear audit trail of what the exact config was, and ideally via comments, why that config was the way it was.
TODO: Get it working then include the content here:
- parameters-dev.json
- phase001.bicep
- staticStorageAccount.bicep
- deploy-phase001-transpiled-arm.yaml
- transpile-job.yaml
- deploy_phase001.yaml
For now, the contents are included below so you'll need to create appropriate files for your deployments. These must be in your "infrastructure" git repo in the right spot for the pipeline to find them.
Create directory structure
Create this structure in your "infrastructure" repo, which is where all your files for creating the infrastructure will go. You can consider this modeling the separation between a team maintaining the Azure infrastructure and a team maintaining the web content. The latter is represented by the Markdown content rendered by mkdocs.
New directory: deployment for the Bicep files that will be executed. Subdirectory deployment/storage for our storage account creation Bicep.
New directory: pipelines for the Azure DevOps pipeline YAML files. Subdirectory pipelines/templates for re-usable portions of the pipeline definition.
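A sketch of creating that layout from the repo root:

```bash
cd $HOME/git-${AZUREIAC_BASENAME}/infrastructure
mkdir -p deployment/storage
mkdir -p pipelines/templates
```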
Create Bicep file which creates a storage account
Filename: deployment/storage/staticStorageAccount.bicep
Contents:
@allowed([
  'dev'
  'test'
  'preprod'
  'prod'
])
param environmentName string

@allowed([
  'westus'      // West US (1)
  'eastus'      // East US (1)
  'brazilsouth' // Brazil South
])
param location string = 'westus'

// Generate a globally unique 16 character name for the storage account
var storageAccountLongName = 'sa${environmentName}${uniqueString(resourceGroup().id)}'
var storageAccountName = substring(storageAccountLongName, 0, 16)

resource staticStorageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

output staticStorageAccountName string = staticStorageAccount.name
The "allowed regions" is there just to demonstrate how restrictions could be enforced inside the Bicep code.
Create Bicep file which calls the storage account module
Filename: deployment/phase001.bicep
Contents:
targetScope = 'subscription'

param systemName string = 'sbonds-staticweb-unnamed'
param environmentName string

@allowed([
  'westus'      // West US (1)
  'eastus'      // East US (1)
  'brazilsouth' // Brazil South
])
param location string

resource resourceGroup 'Microsoft.Resources/resourceGroups@2022-09-01' = {
  name: '${systemName}-${environmentName}-${location}'
  location: location
}

module staticStorageAccountModule 'storage/staticStorageAccount.bicep' = {
  name: 'staticStorageAccount'
  scope: resourceGroup
  params: {
    environmentName: environmentName
    location: location
  }
  // Creates staticStorageAccountModule.outputs.staticStorageAccountName with the account name
}

output staticStorageAccountName string = staticStorageAccountModule.outputs.staticStorageAccountName

// Next step is manual: enable $web on storage account for static content
// Then another automated step to deploy the content delivery network.
Create ARM parameters file containing the config options for dev
Filename: deployment/parameters-dev.json
This name must be parameters-${environmentName} or it won't be found. Feel free to change the systemName. To change the location or environmentName, the other Bicep files must be adjusted to allow those new values. This helps enforce consistency on the parameters.
TODO: Enable tests on all parameter check-ins that the parameters are valid. This would result in an immediate error on check-in rather than a delayed error at deployment/change time.
Contents:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "systemName": {
      "value": "sbonds-staticweb-dev"
    },
    "location": {
      "value": "westus"
    },
    "environmentName": {
      "value": "dev"
    }
  }
}
Create pipeline modules for compiling and deploying the Bicep files
Filename: pipelines/templates/transpile-job.yaml
Contents:
parameters:
  - name: artifactName
    type: string
    default: "arm-templates"

steps:
  - task: Bash@3
    displayName: "Transpile Main Bicep"
    inputs:
      targetType: 'inline'
      script: 'az bicep build --file $(Build.SourcesDirectory)/deployment/phase001.bicep'
  - task: Bash@3
    displayName: "Transpile Storage Account for static content Bicep"
    inputs:
      targetType: 'inline'
      script: 'az bicep build --file $(Build.SourcesDirectory)/deployment/storage/staticStorageAccount.bicep'
  - task: CopyFiles@2
    displayName: "Copy JSON files to: $(Build.ArtifactStagingDirectory)/${{parameters.artifactName}}"
    inputs:
      SourceFolder: "deployment"
      Contents: "**/*.json"
      TargetFolder: "$(Build.ArtifactStagingDirectory)/${{parameters.artifactName}}"
  - task: PublishPipelineArtifact@1
    displayName: "Publish Pipeline Artifact"
    inputs:
      targetPath: "$(Build.ArtifactStagingDirectory)/${{parameters.artifactName}}"
      artifact: "${{parameters.artifactName}}"
Bicep could be deployed directly but this transpile job allows for errors to be detected sooner.
Filename: pipelines/templates/deploy-phase001-transpiled-arm.yaml
Contents:
parameters:
  - name: serviceConnectionName
    type: string
  - name: subscriptionId
    type: string
  - name: environmentName
    type: string
  - name: artifactName
    type: string
    default: "arm-templates"
  - name: location
    type: string

steps:
  - task: DownloadPipelineArtifact@0
    displayName: "Download Artifact: ${{ parameters.artifactName }}"
    inputs:
      artifactName: "${{ parameters.artifactName }}"
      targetPath: $(System.ArtifactsDirectory)/${{ parameters.artifactName }}
  - task: AzureResourceManagerTemplateDeployment@3
    displayName: Deploy Main Template
    inputs:
      azureResourceManagerConnection: "${{ parameters.serviceConnectionName }}"
      deploymentScope: "Subscription"
      subscriptionId: "${{ parameters.subscriptionId }}"
      location: ${{ parameters.location }}
      templateLocation: "Linked artifact"
      csmFile: "$(System.ArtifactsDirectory)/${{parameters.artifactName}}/phase001.json"
      csmParametersFile: $(Build.Repository.LocalPath)/deployment/parameters-${{ parameters.environmentName }}.json
      deploymentMode: "Incremental"
This downloads the ARM files produced by the transpile and deploys them to Azure using the permissions embedded in the passed-in serviceConnectionName.
Create main pipeline YAML
Filename: pipelines/deploy-phase001.yaml
Contents:
trigger:
  - none

stages:
  - stage: build
    displayName: Publish Bicep Files
    jobs:
      - job: publishbicep
        displayName: Publish bicep files as pipeline artifacts
        steps:
          - template: ./templates/transpile-job.yaml
  - stage: deployinfradev
    dependsOn: build
    displayName: Deploy to dev
    jobs:
      - job: deploy_westus_dev
        displayName: Deploy infra to US West region dev
        steps:
          - template: ./templates/deploy-phase001-transpiled-arm.yaml
            parameters:
              serviceConnectionName: "DevEnvironment"
              subscriptionId: "71ef9743-fef3-4b44-80d3-8250a36d9439"
              environmentName: "dev"
              location: "westus"
TODO: Create pipeline to deploy all the infrastructure
TODO: Run pipeline to deploy all the infrastructure
Since there are no instructions for automating this via pipeline yet, feel free to create it from the command line using an appropriate bicep deploy command.
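A sketch of such a manual, subscription-scope deployment, assuming you are logged in with sufficient rights and sitting in the infrastructure repo root:

```bash
az deployment sub create \
  --name "phase001-manual" \
  --location westus \
  --template-file deployment/phase001.bicep \
  --parameters @deployment/parameters-dev.json
```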
Checkpoint X: Infrastructure has been created
Check that storage account exists
Check that static web site is enabled on the storage account
Enable static web site on storage account
The name isn't displayed directly in the pipeline output, but is visible in the Subscription-level Deployment object.
TODO: Make the pipeline display the storage account name
Once you have the storage account name put it in a bash variable for later reference. Your value will not exactly match the below but it should start with "sadev":
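```bash
# Your suffix will differ; copy the name from the Deployment outputs.
export AZUREIAC_DEV_STORAGE_ACCOUNT_NAME=sadevuhvssoq2tgk
```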
az storage blob service-properties update \
--account-name "$AZUREIAC_DEV_STORAGE_ACCOUNT_NAME" \
--static-website \
--index-document "index.html"
You may notice this warning:
WARNING:
There are no credentials provided in your command and environment, we will query for account key for your storage account.
It is recommended to provide --connection-string, --account-key or --sas-token in your command as credentials.
You're seeing the result of how certain operations on storage accounts are done using the "data plane" instead of the usual "control plane." The "control plane" changes metadata about the storage account and is where the creation commands operate. The "data plane" is normally used to change the CONTENTS of a storage account rather than information about the storage account. It's very odd that changing this setting requires data plane authentication instead of control plane authentication.
If all else fails, use your control plane credentials to generate a temporary SAS token and then use that with the above command to complete the operation. (--sas-token).
Do not ever put a SAS token into a pipeline. Use a Service Connection instead.
Create Service Connection into the storage account
OWNER: Create Service Principal specific to this storage account
export AZUREIAC_BASENAME="sbonds2023azureiac"
export AZUREIAC_DEV_STORAGE_ACCOUNT_NAME=sadevuhvssoq2tgk
az account set --subscription $AZUREIAC_BASENAME
az account show
SUB_ID=$(az account show --query id -o tsv)
SA_ID=$(az storage account show --name "$AZUREIAC_DEV_STORAGE_ACCOUNT_NAME" --query id)
az ad sp create-for-rbac --name "sc_${AZUREIAC_BASENAME}_${AZUREIAC_DEV_STORAGE_ACCOUNT_NAME}" --role "Contributor" --scopes "$SA_ID"
This failed with:
(MissingSubscription) The request did not have a subscription or a valid tenant level resource provider.
Code: MissingSubscription
Message: The request did not have a subscription or a valid tenant level resource provider.
WORKAROUND:
Create bare cred:
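A sketch: omitting --role and --scopes creates the credential with no role assignments, which sidesteps the MissingSubscription error; the roles are then assigned manually as described below.

```bash
az ad sp create-for-rbac --name "sc_${AZUREIAC_BASENAME}_${AZUREIAC_DEV_STORAGE_ACCOUNT_NAME}"
```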
Manually assigned "Storage Account Contributor" and "Storage Blob Data Contributor" for sc_sbonds2023azureiac_sadevuhvssoq2tgk on the storage account via Azure Portal. This requires searching for the SP name-- it won't be in the user list. The SP also required "Reader" on the parent resource group.
TODO: Find code way to do this
Create Service Connection using the Dev Storage Account Service Principal
Project Settings - Service Connections - New Service Connection - Azure Resource Manager - Service Principal (manual)
- Subscription Id: 71ef9743-fef3-4b44-80d3-8250a36d9439 ($SUB_ID above)
- Subscription Name: sbonds2023azureiac ($AZUREIAC_BASENAME above)
- Service Principal ID: db1d1c85-27c3-46f4-994a-0209a38648e8 (appId from output)
- Service Principal Key: omitted password field above
- Tenant ID: 04e32eb1-1f37-4207-945d-c5860625403b (tenant from output)
- Service Connection Name: DevWebUpload
- Description: Connects to sbonds2023azureiac for dev web content uploads
- Grant access permission to all pipelines: checked
Release pipeline to deploy static content
steps:
  - task: AzureFileCopy@5
    displayName: 'AzureBlob File Copy'
    inputs:
      SourcePath: '$(System.DefaultWorkingDirectory)/_docs-mkdocs/staticwebcontent/a/*'
      azureSubscription: DevWebUpload
      Destination: AzureBlob
      storage: sadevuhvssoq2tgk
      ContainerName: '$web'
      AdditionalArgumentsForBlobCopy: '--recursive'
      CleanTargetBeforeCopy: true
The cross-platform Azcopy requires a Windows agent? Funny.
NOTE: *.* does not match directories. Use UNIX style file matching.
(Optional): Set up user authentication - Rejected as too expensive
Goal: Allow only whitelisted people who have authenticated via their Microsoft Azure AD account to view our content.
Azure AD App Proxy? Intended for on-prem access but it might just work. John Savill: https://www.youtube.com/watch?v=dcAY-qrzTYA
He mentions these as better options for cloud content:
- Azure App Gateway: doesn't do per-user authentication
- WAF: is a filter, not a user authentication platform
- FrontDoor: can link with easy auth (https://learn.microsoft.com/en-us/azure/app-service/overview-authentication-authorization) but is $35/month + data. Nope!
- Azure AD App Proxy: requires Azure AD P1 at $6/user/month. Nope!
TODO: Consider valet key approach to user authentication
https://github.com/mspnp/cloud-design-patterns/tree/master/valet-key
This is an example showing a web app that includes authentication which auto-generates SAS tokens to access to content in a storage account. Authentication is done via Azure defaults. This example is for enabling temporary write access, so it's not a perfect fit, but the idea could be extended to include getting a read-only key for the site content only after authentication is completed and authorization is confirmed.
TODO: Consider migrate to static web app and use built in authentication
TODO: determine likely costs of the static web app vs. super cheap storage account
https://learn.microsoft.com/en-us/azure/static-web-apps/authentication-authorization
Build all docs into a single directory, e.g. /docs, then add a route to that directory for authenticated users. See https://learn.microsoft.com/en-us/azure/static-web-apps/configuration#routes. For example:
Set up endpoint and custom domain
I like subdomains, so I'll use something like docs.azureiac.stevebonds.com, letting me use *.azureiac.stevebonds.com for all related content.
TODO: Automated method.
Create directory deployment/web for the content delivery network and endpoint configuration Bicep files.
Custom domain CNAME records:
| Name | Target | TTL |
| --- | --- | --- |
| stevebonds.com | stevebonds.azureedge.net | 300 sec |
| _domainconnect.stevebonds.com | _domainconnect.gd.domaincontrol.com | 3600 sec |
| autodiscover.stevebonds.com | autodiscover.outlook.com | 3600 sec |
| cdnverify.stevebonds.com | cdnverify.stevebonds.azureedge.net | 300 sec |
| resume.stevebonds.com | resume-stevebonds.azureedge.net | 3600 sec |
| www.stevebonds.com | stevebonds.azureedge.net | 300 sec |
(MANUAL): Enable Microsoft.CDN resource provider
Run from the Owner Az CLI shell if not already done as part of the subscription setup. This is where I learned the auto-registration failed and updated the prior instructions.
export AZUREIAC_BASENAME="sbonds2023azureiac"
az account set --subscription $AZUREIAC_BASENAME
az provider register --namespace 'Microsoft.CDN'
In Azure Portal from the storage account select "Azure CDN" to create an endpoint.
Above registration needs to complete before that appears, which takes 20-30 minutes.
Also cannot pre-register via Bicep. See https://github.com/Azure/bicep/issues/3267.
Set up CDN to pull content from static website storage account
Manual:
Navigate into cdnsadevuhvssoq2tgk on Azure Portal and use Settings - Custom domains.

Add a CNAME to my DNS provider to resolve docs.azureiac.stevebonds.com as sbonds2023azureiac-mkdocs.azureedge.net. For a faster cutover, you could set up cdnverify.docs.azureiac.stevebonds.com as cdnverify.sbonds2023azureiac-mkdocs.azureedge.net and then make the above CNAME change whenever.
Enable SSL on the domain. This does not cost extra:
Set up DNS entry for CDN endpoint
DNS entries needed in stevebonds.com:

- cdnverify.docs.azureiac.stevebonds.com CNAME cdnverify.sbonds2023azureiac-mkdocs.azureedge.net
- docs.azureiac.stevebonds.com CNAME sbonds2023azureiac-mkdocs.azureedge.net
Set up custom domain on storage account (should do earlier)
Once the above was set up and working, I got this instead of my test content:
<Error>
<Code>InvalidQueryParameterValue</Code>
<Message>Value for one of the query parameters specified in the request URI is invalid. RequestId:6b86957a-902e-0024-0aab-fdadba000000 Time:2023-10-13T13:45:41.2184671Z</Message>
<QueryParameterName>comp</QueryParameterName>
<QueryParameterValue/>
<Reason/>
</Error>
This somewhat defeats the whole "zero downtime" part I was testing and the fix seems to be adding the custom domain to the storage account before changing any DNS entries.
- Microsoft Question: InvalidQueryParameterValue using Custom Domain
- Microsoft: Pre-register your custom domain with Azure
I have another static website on a custom domain working just fine without this configuration.
Try it and see what happens:
Fails, because the CNAME points to the CDN as it should.
Compare broken storage account with working one
Origin settings: the working account has Origin type "Storage Static Website", while the broken one has Origin type "Storage".

Try setting this to Storage Static Website. No idea why this default would be different, but these sorts of odd behaviors are part of why learning the CLI and using it can lead to more consistent behavior.
Config location:
Before config:
After config:
Works on https://sbonds2023azureiac-mkdocs.azureedge.net/:

And on the custom domain https://docs.azureiac.stevebonds.com/!
Copy this content to a "how this site was made" folder in docs-mkdocs
Now that we know how the site was made, publish our results using the infrastructure we've built.
It's not perfect, but it's working. We have a starting point from which to refine and improve. See all those TODOs above?
But that will be tracked on the new doc site, not in this README.
Create a place in docs-mkdocs to put these files
This assumes AZUREIAC_BASENAME is set, you still have docs-mkdocs checked out locally, and your notes on creating the site are in a README.md like this one.
cd $HOME
cd git-${AZUREIAC_BASENAME}/docs-mkdocs/docs
mkdir "how-the-site-was-made"
cd "how-the-site-was-made"
cp $HOME/git-${AZUREIAC_BASENAME}/infrastructure/README* .
git add README*
git commit -m "Copied info on how this site was built from the infrastructure repo"
git push
Forgot a couple files:
cp $HOME/git-${AZUREIAC_BASENAME}/infrastructure/0100* .
cp $HOME/git-${AZUREIAC_BASENAME}/infrastructure/create-new-subscription.bicep .
git add 0100* create-new-subscription.bicep
git commit -m "Forgot a couple files from the infrastructure repo"
git push
Test docs compile with mkdocs
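A sketch, from the docs-mkdocs repo root:

```bash
cd $HOME/git-${AZUREIAC_BASENAME}/docs-mkdocs
mkdocs serve
```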
You should see no warnings or errors and a line like Serving on http://127.0.0.1:8000/
Point a local browser to http://localhost:8000/how-the-site-was-made/ and you should see this content.
Set the pipelines to auto-run on every push to the main branch
This results in changes going live immediately after they're committed.
Change the trigger: section to list the name of the branch on which changes should trigger the pipeline to run. We only have one trunk branch which should generate artifacts, so add a trigger for main only. This enables Continuous Integration.

Change the release pipeline trigger to deploy when a release is built ("After release"); this enables Continuous Deployment.
Change the main branch to not allow direct commits
This enforces a "pull request from a branch only" policy which is what larger projects with more members tend to use. During early development or when a repo is single-user, this just slows things down for minimal benefit. For larger groups which need coordination, this slow-down is worth the benefit of time saved avoiding unexpected changes.
From Project Settings, go to Repos, Repositories:
Choose the repository to change, in this case we're going to require PRs for the content repository, docs-mkdocs
:
Choose Policies to show the policies options:
Choose the main branch at the bottom of the policies page:
As soon as one or more of these policies is enabled, then all changes need to go through a pull request. These policies get applied to those pull requests. Requiring that all PR comments be resolved is a simple policy to work with, so I'll enable that one for this repo. (Note if this is set to "optional" that pushes to the main branch remain allowed.)
There is no "save" button, as soon as that slider is hit, that policy is active.
Future attempts to push a commit onto "main" will result in:
[remote rejected] main -> main (TF402455: Pushes to this branch are not permitted; you must use a pull request to update this branch.)
Configure an http to https redirect
The Content Delivery Network is not fully Infrastructure as Code yet, so this is an unfortunate manual fix.
Details on the process are in Microsoft: Set up the Standard rules engine for Azure CDN
CDN Rules Engine:
Clicking Add Rule gives me a spot to, unsurprisingly, add a rule:
Add a condition based on the request protocol:
Condition: If Operator Equals HTTP
Add an action for URL Redirect:
I have no intention of ever putting this site on http, so I can use a permanent 301 redirect instead of the temporary 302. This primarily affects caching and link indexing, both of which will use the redirected location from then on.
Completed 301 redirect:
TODO: Include this in the Infrastructure as Code definition for the Content Delivery Network.
How the site was made URL should always end in slash
The image relative paths go to / instead of /how-the-site-was-made/ if the index.html is presented on the /how-the-site-was-made URL instead of /how-the-site-was-made/ with the trailing slash.
Add a CDN rule to 302 redirect /how-the-site-was-made to /how-the-site-was-made/ similar to the HTTP to HTTPS method above:
TODO: Add a site version file
This will show when the site was generated and by what pipeline. This is great for tracking whether the CDN-delivered content I see is the latest version or a cached copy.
TODO: Add unit tests to build pipeline and/or infrastructure pipeline
TODO: Add integration tests to release pipeline
TODO: Add production environment
TODO: Finish up the Infrastructure as Code deployment of the Content Delivery Network
This is partially done, but is not working yet. I need to troubleshoot an issue with resource staticWebOrigin 'Microsoft.Cdn/profiles/endpoints/origins@2021-06-01'.
TODO: Show the benefits of scripting all this
The benefit of automation is lost on the first run-through. The benefits come about on the next and future implementations since much of the work can be done simply by running scripts/commands.
Demonstrate this benefit by creating a new environment using as much automation as practical.
Purge CDN during release
Via the GUI (cringe) add an Azure CLI step to the release pipeline:
Because this command will interact with the CDN Azure resource, it operates on the "control plane" and uses the "control plane" Azure service connection.
Script type: Shell. This could also be PowerShell or any of the others.
For simplicity I'll use an Inline script. An alternative would be to publish the script as an artifact used by this release.
We're going to run an az cdn endpoint purge command. Here's the docs on that: Microsoft: az cdn endpoint purge. The command requires this info:
- --content-paths: /* for all content
- --ids: resource ID of the endpoint resource. This is less ambiguous than using other options, but harder to find and read. However, we only need to find it once and code it in, so it's a good choice for this situation.
It's a good idea to test the syntax works before putting it in a pipeline.
SPECIAL WINDOWS GIT NOTE: If you see invalid resource ID: C:/Program Files/Git/ or similar when passing an ID into a command in Git for Windows Bash, this is not something you did wrong. This is Git for Windows trying to be "helpful" and failing spectacularly. This does not happen in Bash on Linux.
Prefix your command with MSYS_NO_PATHCONV=1 to make Windows Bash not try to help for the next command. For example:
Instead of the bare command (sketched here; substitute the resource ID of your own CDN endpoint):
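```bash
# Illustrative sketch -- the --ids value is elided; use your endpoint's ID.
az cdn endpoint purge --content-paths '/*' \
  --ids "/subscriptions/.../providers/Microsoft.Cdn/profiles/.../endpoints/..."
```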
Use:
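```bash
# Same command, with the path-conversion workaround prefixed.
MSYS_NO_PATHCONV=1 az cdn endpoint purge --content-paths '/*' \
  --ids "/subscriptions/.../providers/Microsoft.Cdn/profiles/.../endpoints/..."
```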
You do not need to include MSYS_NO_PATHCONV in your pipeline command. Ha! I spoke too soon. You don't need this unless your build machine is Windows and running Git bash, and look what Microsoft does for their shared hosted VMs:

So yes, the above very-niche-case workaround is needed for our specific config, which requires Windows agents for our file upload.
Checkpoint 8: Document site is visible on the Internet
Visit your custom domain via https and ensure there are no cert errors.
Check via https://www.ssllabs.com/ssltest/index.html to ensure no major issues. Your site should score an "A" on their reporting methods. Getting to "A+" isn't necessarily a good thing as at this level many clients will be unable to connect.
Checkpoint 9: Your document is visible on your custom URL
In my case it is https://docs.azureiac.stevebonds.com/how-the-site-was-made/ but your name will vary.
Appendix A: (optional) Side project: CI/CD enabled HTML resume
This is a great way to showcase your skills while making your resume easy for people to access. An HTML resume is just static content, and it can be hosted in Azure for free. Other hosting sites (like GitHub) get de-emphasized by search engines, but Azure hosted content does not, making it easier for your resume to match recruiter searches.
Appendix B: Azure DevOps Project using IaC
Make sure your $AZUREIAC_BASENAME is still set.
Create a new Azure DevOps project in your existing non-IaC-created Azure DevOps Organization:
az devops project create \
--organization "https://dev.azure.com/${AZUREIAC_BASENAME}" \
--name "${AZUREIAC_BASENAME}-project" \
--visibility private
Create a repo for the infrastructure code:
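A sketch using the Azure DevOps CLI (assumes the azure-devops extension is installed and you are logged in, and that the project above exists):

```bash
az repos create \
  --organization "https://dev.azure.com/${AZUREIAC_BASENAME}" \
  --project "${AZUREIAC_BASENAME}-project" \
  --name "infrastructure"
```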