Challenges with Firewall Restrictions and the Azure Container Registry for IoT Edge

Those who know me know I am a fan of Azure and of what the public cloud brings to digital transformation. Along this journey, I believe in making architectural choices that move to the right in the Cloud Model [towards SaaS] whenever possible, for better ROI, scalability and agility.

With this approach, I let business or technical requirements push me back to the left [towards On Premises]. It is also a common pattern for a solution to span some or all of these models, using the right model for the right job. For example, you might use Power BI as SaaS, Azure SQL as PaaS and perhaps legacy code in IaaS or On Premises.

Azure Container Registry — Networking Challenge

Azure Container Registry [ACR] is an example where you have this choice. ACR is offered as a managed platform service (PaaS) that, like many services in Azure, leverages other Azure services including Traffic Manager, Blob Storage, Content Delivery Network and App Services, just to name the top few.

From a scale, cost and agility perspective, this is absolutely the right approach to deliver a Container Registry service at hyper scale.

From a network lockdown perspective, however, it makes restricting traffic to a single IP address (or a small set of them) nearly impossible.

Microsoft publishes the Azure IP ranges here: https://www.microsoft.com/en-us/download/details.aspx?id=56519, which gives you the IP address ranges for Azure services such as Container Registry and IoT Hub. This list is the supported method for those who want to lock down to IP address ranges. The approach is acceptable to some network / security administrators, though it creates a configuration item to monitor (IP ranges are added to Azure regions over time).
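
If you do go the IP-range route, you can pull just the Container Registry ranges out of that download by filtering the JSON on its service tag. A minimal sketch, assuming you have jq installed and have saved the downloaded file as ServiceTags_Public.json (the real file name carries a date stamp):

# list the address prefixes published for the AzureContainerRegistry service tag
jq -r '.values[] | select(.name == "AzureContainerRegistry") | .properties.addressPrefixes[]' \
  ServiceTags_Public.json

# region-scoped tags such as AzureContainerRegistry.EastUS are also present;
# adjust the select() filter if you only care about one region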

To simplify the wildcard network challenge in Azure Container Registry, Microsoft introduced “Dedicated Data Endpoints” (http://aka.ms/acr/dedicated-data-endpoints) to overcome the many URLs and named endpoints. With Dedicated Data Endpoints, you only need to focus on 2 classes of URLs (a concrete example follows the list):

  1. the ACR Logon Server — [registry].azurecr.io
  2. the Dedicated Endpoints — [registry].[region].data.azurecr.io
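
To make this concrete, for a hypothetical registry named myregistry in East US the two host names to allow through the firewall would be myregistry.azurecr.io (the logon server) and myregistry.eastus.data.azurecr.io (the dedicated data endpoint). A sketch of enabling the feature with the Azure CLI, assuming a reasonably current CLI version and a Premium-tier registry:

# enable dedicated data endpoints on the registry
az acr update --name myregistry --data-endpoint-enabled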

Optionally, if you want to change the name of the URLs above, consider Custom Domains and your own SSL Certificates: https://github.com/Azure/acr/tree/main/docs/custom-domain.

This approach eliminates the wildcard [*.blob.core.windows.net] required in the past, but it still relies on URLs rather than the fixed IP addresses that some companies require. For most customers, URLs should suffice and strike a good balance between the characteristics of the public cloud and the traditional practice of locking down source and destination hosts.

Alternatives to Container Registry running in Azure

For those customers who need fixed IP addresses, we can address this in a few ways, the top two being: running a Container Registry in an IaaS VM (simple, but no HA) or running a Container Registry in Azure Kubernetes Service (native HA).

Consider the following high level steps:

  1. Create an IaaS VM or Kubernetes Cluster, exposing TCP port 443 and using a static IP address
  2. Install the Registry Server: https://docs.docker.com/registry/deploying/
  3. Generate a certificate for a secure Registry Server
  4. Configure the Docker Registry Server to listen on TCP 443 port and use a certificate
  5. Configure authentication: https://docs.docker.com/registry/deploying/#native-basic-auth
  6. Upload your container images

This method addresses the “must have a single IP Address” need, trading off the added features of ACR, such as geo-replication, ACR Tasks, webhooks and content trust, just to name the top few.

Windows Containers

If using Windows Containers, be mindful of the Foreign Layer issue with the NanoServer image, discussed here: https://github.com/moby/moby/issues/34216. To address this, add the ‘allow-nondistributable-artifacts’ setting to your Docker daemon configuration, as discussed here: https://docs.docker.com/registry/deploying/#considerations-for-air-gapped-registries (a concrete daemon.json example appears in the IaaS VM setup below).

Setup: Container Registry in an IaaS VM:

1. In the Azure Portal (https://portal.azure.com), create a Linux IaaS VM with a static public IP address, exposing TCP 443.
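
If you prefer the Azure CLI over the portal, the following sketch does the same thing; the resource group name, VM name and image alias are assumptions, so adjust them for your environment:

# create a resource group and an Ubuntu VM with a static public IP
az group create --name registry-rg --location eastus

az vm create \
  --resource-group registry-rg \
  --name registryvm \
  --image Ubuntu2204 \
  --public-ip-address-allocation static \
  --admin-username azureuser \
  --generate-ssh-keys

# open TCP 443 in the VM's network security group
az vm open-port --resource-group registry-rg --name registryvm --port 443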

2. Log into the VM and install Docker by running the following command:

sudo apt update && sudo apt install -y docker.io
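
Optionally, a quick sanity check that the Docker daemon is installed and running before continuing:

# confirm the daemon is up and note the installed version
sudo systemctl status docker --no-pager
sudo docker version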

3. Follow certificate option 1 or option 2 below to get certificates onto the VM.

4. Run the following commands assuming cruser as the login user and crPassword as the password for the Container Registry:

# make the certificates available to the container
sudo mkdir /registryContainer
sudo cp Container*.pem /registryContainer

# create the container userid and password
sudo docker run --rm --entrypoint htpasswd httpd -Bbn cruser crPassword  > htpasswd
sudo cp htpasswd /registryContainer

sudo docker run -d --restart=always \
  --name containerregistry \
  -v /registryContainer:/config \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/config/ContainerRegistry-publicKey.pem \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Container" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/config/htpasswd \
  -e REGISTRY_VALIDATION_DISABLED=true \
  -e REGISTRY_HTTP_TLS_KEY=/config/ContainerRegistry-privateKey.pem \
  -p 443:443 \
  registry:2

5. Assuming the Root CA is trusted and DNS is set up to point to the correct IP address, test a container registry login from a remote machine, then tag an existing image and push it, as shown below:
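
A minimal sketch of that test, reusing the cruser/crPassword credentials from step 4 and assuming the registry's DNS name is ContainerRegistry.saye.org (the name used in the certificate examples below):

docker login -u cruser -p crPassword ContainerRegistry.saye.org
docker pull mcr.microsoft.com/azureiotedge-agent:1.0
docker tag mcr.microsoft.com/azureiotedge-agent:1.0 ContainerRegistry.saye.org/azureiotedge-agent:1.0
docker push ContainerRegistry.saye.org/azureiotedge-agent:1.0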

If using Windows Containers and getting the “Foreign Layer” message, add the “allow-nondistributable-artifacts” setting to the daemon.json on the workstation pushing the image, as shown below:

{
  "allow-nondistributable-artifacts": ["ContainerRegistry.saye.org"]
}

(certificate option 1) Request a real certificate:

Run the following commands to generate a ContainerRegistry.csr file, which you then transfer to your Certificate Authority for signing. Save the signed certificate the CA returns as ContainerRegistry-publicKey.pem so the docker run command in step 4 can reference it:

openssl genrsa 2048 > ContainerRegistry-privateKey.pem
openssl req -new -key ContainerRegistry-privateKey.pem > ContainerRegistry.csr

(certificate option 2) Generate a private (untrusted) certificate:

Run the following commands to generate both a private Root CA and the certificates used by the Container Registry. Note, the private Root CA must be trusted by every machine that pushes to or pulls from this Container Registry (a sketch of trusting it on a Docker host follows the cleanup step).

#Get a default config.txt
wget -q https://raw.githubusercontent.com/ksaye/simpleCAforIoT/master/config.txt

#Generate the CA Key to sign with
openssl genrsa 2048 > RootCAPrivateKey.pem
openssl req -new -x509 -config config.txt -nodes -key RootCAPrivateKey.pem -out RootCAPublicKey.pem -subj "/CN=RootCA"

#Generate the Docker Key request based on the CA above
openssl req -newkey rsa:2048 -nodes -keyout ContainerRegistry-privateKey.pem -out request.pem -subj "/CN=ContainerRegistry.saye.org"

#convert the private Key to RSA format
openssl rsa -in ContainerRegistry-privateKey.pem -out ContainerRegistry-privateKey.pem

#finally sign the request and generate the key
openssl x509 -req -in request.pem -CA RootCAPublicKey.pem -CAkey RootCAPrivateKey.pem -set_serial 1 -out ContainerRegistry-publicKey.pem

#clean up
rm config.txt && rm request.pem
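
One way (of several) to trust the private Root CA on a Linux Docker host that will push to or pull from this registry; the host name matches the CN used in the request above:

# Docker trusts per-registry CA certificates placed under /etc/docker/certs.d/<registry>/
sudo mkdir -p /etc/docker/certs.d/ContainerRegistry.saye.org
sudo cp RootCAPublicKey.pem /etc/docker/certs.d/ContainerRegistry.saye.org/ca.crt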

Setup: Container Registry in Kubernetes:

While the single IaaS VM is simpler to set up, being a single VM it is subject to downtime. One way (of many) to enable High Availability is Kubernetes.

1. Install the Azure CLI from https://docs.microsoft.com/en-us/cli/azure/install-azure-cli, which we will use to deploy to the cluster. Because I need some tools like base64, I installed it in a Linux shell on Windows, using WSL 2.
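
For reference, on an Ubuntu/Debian shell (such as WSL 2) the documented one-line installer looks like the following; see the link above for other platforms:

# install the Azure CLI on Debian/Ubuntu (including WSL 2)
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash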

2. Open the Azure Portal (http://portal.azure.com), search for Kubernetes Service, click Create, name your service and finally click Review + create.

3. In the Azure Portal, select your AKS deployment and click Connect. In a command prompt, run the two commands shown in the portal to connect your Azure CLI to your AKS cluster; they will look similar to the sketch below.
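
A sketch of what those two commands typically look like; the subscription, resource group and cluster names are placeholders, so copy the exact commands from the portal:

# point the CLI at the right subscription, then pull the kubeconfig for the cluster
az account set --subscription <subscription-id>
az aks get-credentials --resource-group <resource-group> --name <aks-cluster-name>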

4. Request certificates in PEM files, similar to the process for IaaS VMs. For this example, I assume the file names: ContainerRegistry-publicKey.pem and ContainerRegistry-privateKey.pem.

Run the following commands to add the PEM files as secrets in Kubernetes:

kubectl create secret generic publickey --from-file ContainerRegistry-publicKey.pem

kubectl create secret generic privatekey --from-file ContainerRegistry-privateKey.pem

5. Run the following command to generate the base64-encoded userid and password that will be used to log into the container registry (with GNU base64 you may want to add -w0 so the output is not wrapped across multiple lines):

docker run --entrypoint htpasswd --rm httpd -Bbn cruser crPassword | base64

6. Create a file called secret.yaml as shown below, adding your HTPASSWD value, and import the Secret using the command that follows it.

My secret.yaml file:

apiVersion: v1
kind: Secret
metadata:
  name: docker-registry
type: Opaque
data:
  HTPASSWD: Y3J1c2VyOiQyeSQwNSRnbzlSTlVOZnpTR2p3Yy5LR0RwY0guZ3Jwb0REVzRFeHRzVTd1bWhLTGJMWjhPTUlNL0V2LgoK

Command to run to create the secret:

kubectl apply -f secret.yaml

7. Create a file called configMap.yaml as shown below and import the map using the command that follows. You can add any value from here: https://docs.docker.com/registry/configuration/

My configMap.yaml file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: docker-registry
data:
  registry-config.yml: |
    version: 0.1
    log:
      fields:
        service: registry
    storage:
      cache:
        blobdescriptor: inmemory
      filesystem:
        rootdirectory: /var/lib/registry
    http:
      addr: 0.0.0.0:5000
      tls:
        certificate: /etc/publickey/publickey
        key: /etc/privatekey/privatekey
      headers:
        X-Content-Type-Options: [nosniff]
    auth:
      htpasswd:
        realm: "Container Registry"
        path: /auth/htpasswd
    health:
      storagedriver:
        enabled: true
        interval: 10s
        threshold: 3
    validation:
      disabled: true

Command to import the map:

kubectl apply -f configMap.yaml

8. Create a file called pod.yaml as shown below and run the following command to import it:

My pod.yaml file:

apiVersion: v1
kind: Pod
metadata:
  name: docker-registry
  labels:
    name: docker-registry
spec:
  volumes:
    - name: config
      configMap:
        name: docker-registry
        items:
          - key: registry-config.yml
            path: config.yml
    - name: htpasswd
      secret:
        secretName: docker-registry
        items:
        - key: HTPASSWD
          path: htpasswd
    - name: publickey
      secret:
        secretName: publickey
        items:
        - key: ContainerRegistry-publicKey.pem
          path: publickey
    - name: privatekey
      secret:
        secretName: privatekey
        items:
        - key: ContainerRegistry-privateKey.pem
          path: privatekey
    - name: storage
      emptyDir: {}
  containers:
    - name: docker-registry
      image: registry:2.6.2
      imagePullPolicy: IfNotPresent
      ports:
        - name: http
          containerPort: 5000
          protocol: TCP
      volumeMounts:
        - name: config
          mountPath: /etc/docker/registry
          readOnly: true
        - name: htpasswd
          mountPath: /auth
          readOnly: true
        - name: publickey
          mountPath: /etc/publickey
          readOnly: true
        - name: privatekey
          mountPath: /etc/privatekey
          readOnly: true
        - name: storage
          mountPath: /var/lib/registry

Run this command to import the file:

kubectl apply -f pod.yaml
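
Before exposing the registry it is worth confirming the pod came up cleanly; the pod name matches the pod.yaml above:

# the pod should report STATUS Running; the logs should show the registry listening on :5000
kubectl get pod docker-registry
kubectl logs docker-registry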

9. Create a file called service.yaml as shown below and run the command that follows to import it:

My service.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: docker-registry
spec:
  type: ClusterIP
  ports:
    - name: http
      protocol: TCP
      port: 5000
      targetPort: 5000
  selector:
    name: docker-registry

Run this command to import:

kubectl apply -f service.yaml

10. Finally, we will expose our service through a load balancer by running a single command:

kubectl expose service docker-registry --port=443 \
  --target-port=5000 --name=loadbalancer --type=LoadBalancer

And running the command kubectl get service will show the external IP address:

kubectl get service

11. Finally, to see it in action after mapping the DNS name to the external IP address, run the following commands on a remote machine that trusts the certificate to log in, tag an image and push it to your container registry running in Kubernetes!

docker login -u cruser -p crPassword ContainerRegistry.saye.org
docker pull mcr.microsoft.com/azureiotedge-agent:1.0
docker tag mcr.microsoft.com/azureiotedge-agent:1.0 ContainerRegistry.saye.org/azureiotedge-agent:1.0
docker push ContainerRegistry.saye.org/azureiotedge-agent:1.0

Kubernetes: What about Scale?

The example above deploys a single container in a managed Kubernetes cluster.

To scale using native Kubernetes constructs, replace the single pod and its service with a Deployment: skip steps 8 and 9 above and instead use the following deployment.yaml. Note that, as written, this deployment does not mount the TLS, htpasswd and config volumes from the pod spec in step 8, so fold those volumes and volumeMounts back in for a like-for-like configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  replicas: 50
  selector:
    matchLabels:
      app: docker-registry
  template:
    metadata:
      labels:
        app: docker-registry
    spec:
      containers:
      - name: docker-registry
        image: registry:2.6.2
        ports:
        - containerPort: 5000

Run the following command:

kubectl apply -f deployment.yaml

And instead of putting the load balancer in front of the service, expose the deployment with a load balancer using the following command:

kubectl expose deployment docker-registry --port=443 \
 --target-port=5000 --name=loadbalancer --type=LoadBalancer

I now have 50 registry containers, all behind a single load balancer for scale, which can be verified with the commands below:
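
A quick way to confirm the replica count and the external IP fronting them; the names match the deployment and load balancer created above:

# the deployment should report 50/50 replicas ready
kubectl get deployment docker-registry

# the loadbalancer service shows the external IP in front of all replicas
kubectl get service loadbalancer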
