This walkthrough describes how to distribute locally (on-premises) created VM images to one or more Azure subscriptions and data centers.
- The production VMs will use 'managed disks'; therefore we need 'managed images' in the target subscriptions and data centers.
- For this sample, I locally create VM images (VHD files) using HashiCorp Packer against Hyper-V. The OS to be installed locally is openSUSE.
- Some scripts below are designed to be executed on a Windows host, in particular:
  - Running `packer build` against Hyper-V.
  - Running the `Convert-VHD` cmdlet to convert the dynamic-size `.vhdx` file into a static-size `.vhd` file.
- All other commands can be executed on Windows, the Windows Subsystem for Linux, Linux, or macOS, as long as the `az` command-line utility is installed.
- All the variable escaping, string interpolation, etc. below is nevertheless assumed to be executed in a `bash` shell.
- Upload the image to a main storage account in a management subscription
- Copy the image to the desired data center and subscription. For example, if customer 1 needs the image in datacenter A and B, and customer 2 needs the image in datacenter A, there would be three transfers out of the management storage.
If there are many datacenters and customers, an alternative approach would be to distribute the image within the management subscription into management storage accounts in each datacenter, and then 'locally' copy it over to all customers.
- Faster provisioning times for new customers (because the images are already in the right datacenter).
- Egress is paid only once per image/datacenter, instead of once per image/datacenter/customer.
- Higher complexity in infrastructure and copy scripts.
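The transfer-count arithmetic can be sketched for the sample demand above (customer 1 in datacenters A and B, customer 2 in A). This is a minimal illustration with a hypothetical demand list, not part of the actual distribution scripts:

```shell
# Sketch (hypothetical demand list): compare egress transfers for the two
# strategies - direct copies from management storage vs. per-datacenter hubs.
demands="customer1:A customer1:B customer2:A"

# Direct strategy: one egress transfer per customer/datacenter pair.
directTransfers=$(echo "$demands" | wc -w)

# Hub strategy: one cross-datacenter transfer per distinct datacenter,
# followed by cheap 'local' copies to each customer.
hubTransfers=$(for d in $demands; do echo "${d#*:}"; done | sort -u | wc -l)

echo "direct egress transfers: ${directTransfers}"
echo "hub egress transfers:    ${hubTransfers}"
```

With this demand, the direct strategy needs three egress transfers (as noted above), while the hub strategy needs only two (one each to datacenter A and B).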
In order to have a VHD I can distribute, I used packer on Hyper-V:
```shell
export openSuseVersion=42.3
export imageLocation="https://download.opensuse.org/distribution/leap/${openSuseVersion}/iso/openSUSE-Leap-${openSuseVersion}-DVD-x86_64.iso"
curl --get --location --output "openSUSE-Leap-${openSuseVersion}-DVD-x86_64.iso" --url "$imageLocation"

go get github.com/mitchellh/packer
```
TODO: Right now, the image is not yet fully prepared according to the Azure documentation article "Prepare a SLES or openSUSE virtual machine for Azure". Some initial steps are executed in scripts/setup_azure.sh.
```batch
REM Turn on packer logging
set PACKER_LOG=1

REM Run packer against local Hyper-V
packer build packer-hyper-v.json
```
- The installation is configured through the `http/autoinst.xml` file. The file structure is defined in the AutoYaST documentation.
- The `packer build` run creates a `.vhdx` file in `output-hyperv-iso\Virtual Hard Disks\packer-hyperv-iso.vhdx`.
```powershell
Convert-VHD `
    -Path "output-hyperv-iso\Virtual Hard Disks\packer-hyperv-iso.vhdx" `
    -DestinationPath "output-hyperv-iso\Virtual Hard Disks\packer-hyperv-iso.vhd" `
    -VHDType Fixed
```
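A side note on why `-VHDType Fixed` matters: Azure expects a fixed-size VHD whose virtual size is a whole number of MiB; the file on disk is that size plus a 512-byte VHD footer. A minimal local sanity check, using the sample blob size that appears later in this walkthrough:

```shell
# Sanity-check sketch: a fixed-size VHD is the virtual disk plus a 512-byte
# footer, and Azure expects the virtual size to be MiB-aligned.
# 4294967808 is the size of the sample blob used later in this walkthrough.
vhdFileBytes=4294967808
footerBytes=512
virtualBytes=$((vhdFileBytes - footerBytes))

if [ $((virtualBytes % (1024 * 1024))) -eq 0 ]; then
  echo "virtual size ${virtualBytes} bytes is MiB-aligned - OK for Azure"
else
  echo "virtual size is not MiB-aligned - resize the VHD before uploading"
fi
```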
```shell
export managementSubscriptionId="724467b5-bee4-484b-bf13-d6a5505d2b51"
export demoPrefix="hecdemo"
export managementResourceGroup="${demoPrefix}management"
export imageIngestDataCenter="westeurope"
export imageIngestStorageAccountName="${demoPrefix}imageingest"
export imageIngestStorageContainerName="imagedistribution"
export imageLocalFile="output-hyperv-iso/Virtual Hard Disks/packer-hyperv-iso.vhd"
export imageBlobName="2017-12-06-opensuse-image.vhd"

export productionSubscriptionId="706df49f-998b-40ec-aed3-7f0ce9c67759"
export productionDataCenter="northeurope"
export productionImageResourceGroup="${demoPrefix}production"
export productionImageIngestStorageAccountName="${demoPrefix}prodimages"
```
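One pitfall with the names above: Azure storage account names must be 3-24 characters long, contain only lowercase letters and digits, and be globally unique. A cheap local pre-check (shown here for `hecdemoimageingest`, which the variables above expand to) saves a failed `az` round trip:

```shell
# Local pre-check sketch: storage account names must match ^[a-z0-9]{3,24}$.
# 'hecdemoimageingest' is what ${demoPrefix}imageingest expands to above.
accountName="hecdemoimageingest"

if printf '%s' "$accountName" | grep -Eq '^[a-z0-9]{3,24}$'; then
  nameCheck="valid"
else
  nameCheck="invalid"
fi
echo "${accountName}: ${nameCheck}"
```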
```shell
az account set \
    --subscription "${managementSubscriptionId}"

az group create \
    --name "${managementResourceGroup}" \
    --location "${imageIngestDataCenter}"

az storage account create \
    --name "${imageIngestStorageAccountName}" \
    --resource-group "${managementResourceGroup}" \
    --location "${imageIngestDataCenter}" \
    --https-only true \
    --kind Storage \
    --sku Standard_RAGRS

export imageIngestStorageAccountKey=$(az storage account keys list \
    --resource-group "${managementResourceGroup}" \
    --account-name "${imageIngestStorageAccountName}" \
    --query "[?contains(keyName,'key1')].[value]" \
    --output tsv)
```
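The JMESPath `--query` above selects key1's value on the server side. For readers more familiar with jq (which this walkthrough uses later anyway), the same selection over a sample of the JSON shape that `az storage account keys list` returns looks like this; the key values here are made up:

```shell
# jq equivalent of the JMESPath query above, run against a sample payload
# shaped like 'az storage account keys list' output (values are made up).
keysJson='[{"keyName":"key1","value":"abc123=="},{"keyName":"key2","value":"def456=="}]'

key1Value=$(echo "$keysJson" | jq -r '.[] | select(.keyName == "key1") | .value')
echo "$key1Value"
```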
```shell
az storage container create \
    --account-name "${imageIngestStorageAccountName}" \
    --account-key "${imageIngestStorageAccountKey}" \
    --name "${imageIngestStorageContainerName}" \
    --public-access off

az storage blob upload \
    --type page \
    --account-name "${imageIngestStorageAccountName}" \
    --account-key "${imageIngestStorageAccountKey}" \
    --container-name "${imageIngestStorageContainerName}" \
    --file "${imageLocalFile}" \
    --name "${imageBlobName}"
```
```shell
az account set \
    --subscription "${productionSubscriptionId}"

az group create \
    --name "${productionImageResourceGroup}" \
    --location "${productionDataCenter}"

az storage account create \
    --name "${productionImageIngestStorageAccountName}" \
    --resource-group "${productionImageResourceGroup}" \
    --location "${productionDataCenter}" \
    --https-only true \
    --kind Storage \
    --sku Premium_LRS

export productionImageIngestStorageAccountKey=$(az storage account keys list \
    --resource-group "${productionImageResourceGroup}" \
    --account-name "${productionImageIngestStorageAccountName}" \
    --query "[?contains(keyName,'key1')].[value]" \
    --output tsv)
```
```shell
az storage container create \
    --account-name "${productionImageIngestStorageAccountName}" \
    --account-key "${productionImageIngestStorageAccountKey}" \
    --name "${imageIngestStorageContainerName}" \
    --public-access off

az storage blob copy start \
    --source-account-name "${imageIngestStorageAccountName}" \
    --source-account-key "${imageIngestStorageAccountKey}" \
    --source-container "${imageIngestStorageContainerName}" \
    --source-blob "${imageBlobName}" \
    --account-name "${productionImageIngestStorageAccountName}" \
    --account-key "${productionImageIngestStorageAccountKey}" \
    --destination-container "${imageIngestStorageContainerName}" \
    --destination-blob "${imageBlobName}"
```
Once the destination storage account receives the call to start the copy operation, it pulls the data from the source storage account. Calling `az storage blob show` retrieves the destination blob's properties, amongst which you find the `copy.status` and `copy.progress` values. A `"status": "pending"` lets you know the copy is not yet finished; a `"progress": "3370123264/4294967808"` tells you how many bytes of the total have already been transferred.
```shell
statusJson=$(az storage blob show \
    --account-name "${productionImageIngestStorageAccountName}" \
    --account-key "${productionImageIngestStorageAccountKey}" \
    --container-name "${imageIngestStorageContainerName}" \
    --name "${imageBlobName}")

echo "$statusJson" | jq ".properties.copy.status"
echo "$statusJson" | jq ".properties.copy.progress"
```
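The `progress` string can also be turned into a percentage. A small sketch, using the sample value quoted earlier (`3370123264/4294967808`, i.e. bytesCopied/bytesTotal):

```shell
# Sketch: convert a copy.progress value of the form "bytesCopied/bytesTotal"
# into a percentage. The sample value is the one quoted in the text above.
progress="3370123264/4294967808"
copiedBytes="${progress%/*}"
totalBytes="${progress#*/}"

percent=$(awk -v c="$copiedBytes" -v t="$totalBytes" 'BEGIN { printf "%.1f", c / t * 100 }')
echo "${percent}% copied"
```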
Before creating an image, wait until the copy operation has finished successfully.
```shell
export productionImageIngestUrl=$(az storage blob url \
    --protocol "https" \
    --account-name "${productionImageIngestStorageAccountName}" \
    --account-key "${productionImageIngestStorageAccountKey}" \
    --container-name "${imageIngestStorageContainerName}" \
    --name "${imageBlobName}" \
    --output tsv)

az image create \
    --name "${imageBlobName}" \
    --resource-group "${productionImageResourceGroup}" \
    --location "${productionDataCenter}" \
    --source "${productionImageIngestUrl}" \
    --os-type Linux
```