Windows Containers: Run an Image

A container is an instance of an image. When we “run” the image, a container is created. Let’s see what happens when we do this.

In previous posts I covered installing the Windows Containers feature, and downloading a base image of Windows Nano Server or Windows Server Core.

Docker is the daemon, or service, that manages images and containers. It is driven from the command line, in PowerShell or the Command Prompt, so to use Windows Containers we need to get familiar with the Docker commands.

I am using the Nano Server base image as the example because it is small and simple. Once you see how it works, it is easy to see how images based on Windows Server Core work in the same way.

Let's just run the Nano Server image and see what happens. In an elevated PowerShell console:

docker run microsoft/nanoserver

The console blinks, briefly changes to a C:\ prompt, then returns to the PowerShell prompt. It seems that nothing happened at all!

Docker Run Nano

We can try:

docker container ls

to see if any containers exist. It shows none. But:

docker container ls -a

(or --all) shows a container that has exited.

Docker Container LS

So a container was created, but it exited. I can see that the container has an ID and a name.

I can start the container with:

docker start [ID]

but it exits again. Clearly its default command does nothing and then exits.

Docker Start Container
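A quick way to confirm this is to ask Docker what command the container was created with. This is just a sketch: [ID] stands for the container ID shown by docker container ls -a, and the Go-template filter assumes a reasonably recent Docker client:

docker container inspect --format "{{.Config.Cmd}}" [ID]

For this base image that appears to be just a command shell (hence the brief C:\ prompt seen earlier), which exits as soon as it has nothing to do.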

This container is no use to me, since it just runs and exits. I can remove it with:

docker container rm [ID]

Docker Container Remove

How can I get it to hang around so that I can see what it is? Normally a server waits for work, but this container exits as soon as it has nothing to do. It acts more like a process than a server. I could try giving it a command that runs until stopped, like:

docker run microsoft/nanoserver ping -t 8.8.8.8

Now I can see that the container continues to perform the ping. If I disconnect the terminal with Ctrl+C, the container is still running.

Docker Run Nano Ping
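Alternatively, the container can be started detached from the console in the first place with the -d flag. A small sketch; the name pinger is just an example:

docker run -d --name pinger microsoft/nanoserver ping -t 8.8.8.8
docker container ls

The second command should show the container still running in the background.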

If I run:

docker container attach [ID]

then the PowerShell console attaches again to the output of the running ping process.

Docker Container Attach

I need to run:

docker container stop [ID]

to stop the ping process, and:

docker container rm [ID]

to remove the container.

If I want to run a container and see what it is doing, then I can run an interactive container. Putting the commands together:

    • create the container:
docker run [image]
    • remove it when it exits:
--rm
    • a double dash introduces a full-word option:
--
    • a single dash introduces single-letter options, which can be run together:
-
    • so -[interactive][tty] becomes:
-it
    • give the container a name so that I don’t have to find the ID or the random name:
--name [friendly name]
    • a command parameter at the end is the executable to run in the container:
powershell

So:

docker run --rm -it --name nano microsoft/nanoserver powershell

gives me a running container with an attached PowerShell console:

Docker Run Interactive

Now we can see the inside of the container as well as the outside, and get a good idea of how it works.
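A couple of quick checks from inside that attached session make the point. These commands run against the container, not the host:

hostname
Get-Process

hostname typically returns the container ID rather than the host name, and Get-Process lists only the processes inside the container.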

Windows Containers: Base Images

The Containers feature on Windows Server 2016 runs applications in containers. A container is an instance of an OS image. Let’s explore what an image is.

For Windows containers we can start with one of two base images:

  1. Windows Server Core
  2. Windows Nano Server

The base images are provided by Microsoft, and kept in the Microsoft repository in the Docker Hub registry.

To get a copy of the current Nano Server base image, we use the command:

docker pull microsoft/nanoserver

This downloads and extracts the image, and saves it in the Docker folder at C:\ProgramData\docker\windowsfilter.

  • Docker commands are run by the docker client from either the Command Prompt or PowerShell
  • The command processor must run elevated
  • The place where the images are stored, by default, can be changed in the Docker configuration (see the sketch after this list)
  • By default Docker pulls the latest version of the image. Other versions can be specified explicitly.
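As an illustration of that configuration point: on a default install the Docker service reads its settings from C:\ProgramData\docker\config\daemon.json (the file may need to be created). A minimal sketch that moves the image store to another drive might look like this; D:\DockerImages is just an example path, and older engine builds use the key "graph" instead of "data-root":

{
  "data-root": "D:\\DockerImages"
}

The Docker service then needs a restart (Restart-Service docker) for the change to take effect.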

An image is a set of files. Here are the image files for Nano Server:

Nano Image Base Layer

The image consists of files and registry hives. Here are the contents of one of the folders for the Nano Server image:

Nano Image Folder

The files look like a standard system drive:

Nano Image Files

The Hives folder is a collection of registry hives:

Nano Image Registry Hives

You can load the registry hives into Regedit in the normal way:

Nano Image Registry Software Hive

We also have a Utility VM folder with two Hyper-V Virtual Hard Disk (vhdx) files:

Nano Image Utility VM

These are used when the container is run in a small Hyper-V virtual machine instead of directly on the host OS (Hyper-V isolation mode).
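As an aside, that mode can be requested per container with the --isolation flag, provided the Hyper-V role is installed on the host. A sketch, reusing the interactive run shown earlier:

docker run --rm -it --isolation=hyperv microsoft/nanoserver powershell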

In my example of the Nano Server base image, there were two folders:

Nano Image Pull

Each folder represents a layer. When an image is modified, the changes are saved in a new layer.

The command:

docker image ls

shows one image:

Nano Image List 1593

The command:

docker image history microsoft/nanoserver

shows two layers. The first layer is the original release of Nano Server, 10.0.14393.0, and the second layer is an update, 10.0.14393.1593. You can see the name, date, action that created it, and size of each layer:

Nano Image History

The command:

docker image inspect microsoft/nanoserver

shows the details of the image. These include:

  • The unique ID of the image
  • The OS version
  • The unique ID of each of the two layers
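The full inspect output is long. Individual fields can be picked out with a Go-template filter; a sketch, assuming a client that supports --format:

docker image inspect --format "{{.Id}} {{.OsVersion}}" microsoft/nanoserver
docker image inspect --format "{{json .RootFS.Layers}}" microsoft/nanoserver

The first line shows the image ID and OS version; the second lists the layer IDs as JSON.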

If we look back at the Microsoft repository on Docker Hub, we can see the tags for different updates:

Update 10.0.14393.1066
Update 10.0.14393.1198
Update 10.0.14393.1358
Update 10.0.14393.1480
Update 10.0.14393.1593
Update 10.0.14393.1715

Update 1715 is newer than the one I pulled recently. If I run the command:

docker pull microsoft/nanoserver

again, I get the latest image. If I run the command with a tag appended, I get that specific image. In this case they are different update levels, but they could be different configurations or any other variation.
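The tag goes after a colon. For example, to pull the .1593 update explicitly, rather than whatever latest currently points to:

docker pull microsoft/nanoserver:10.0.14393.1593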

Now a third folder is added in C:\ProgramData\docker\windowsfilter:

Nano Image Pull 1715

The command:

docker image ls

shows that I have two images:

Nano Image List 1715

The command:

docker image history microsoft/nanoserver

again shows two layers in the latest image. One layer is the new update, and the other layer is the same original layer as in the previous version:

Nano Image History 1715

The image name “microsoft/nanoserver” refers, by default, to the latest version of the image, consisting only of the original layer and the newest layer. Docker keeps track of images and layers in a local database:

Docker Local Database

Summary:

  1. Windows containers are instances of an image
  2. An image is a set of files and registry hives
  3. An image comprises one or more layers
  4. All Windows container images start from either Windows Server Core or Nano Server
  5. The layers may comprise updates, roles or features, language packs, applications, or any other change to the original OS.

Windows Containers: Add Feature

The Containers feature enables Windows Server 2016 to run applications in “containers”. Let’s take a look at what this feature is.

There are plenty of guides on the Internet for setting up containers on Windows. The purpose here is not so much to provide the instructions as to see and understand how the new Containers feature is implemented.

Step 1: Build a standard Windows server. It can be a physical or virtual server.

Step 2: Install the Containers feature.

Windows Containers Feature
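The feature can be added in Server Manager or, equivalently, from an elevated PowerShell console. A one-line sketch; a restart is needed before containers can be used:

Install-WindowsFeature -Name Containers
Restart-Computer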

This creates a new service: the Hyper-V Host Compute Service. Note that several Hyper-V components are already installed by default in the server OS, without adding the Hyper-V role explicitly. The Containers feature extends the default Hyper-V services.

Hyper-V Host Compute Service for Containers

The Hyper-V Host Compute Service is the component that partitions access to the Windows kernel between the different containers.

Next, install the PowerShell module for Docker. There are two steps to obtain the module:

  1. Add the Microsoft NuGet package provider
  2. Add the DockerMsftProvider PowerShell module.

NuGet is the Microsoft package manager for open-source .NET packages:

  • Install-PackageProvider -Name NuGet -Force

Then the PowerShell module for Docker:

  • Install-Module -Name DockerMsftProvider -Repository PSGallery -Force

Next, we need to add the Docker components. Docker is a third-party application that manages containers, on Linux and now on Windows. Microsoft provides the API (in the Hyper-V Host Compute Service) and Docker provides the application that uses that API to run containers. The documentation for Docker comes from Docker, not from Microsoft. The command to install the Docker package is:

  • Install-Package -Name docker -ProviderName DockerMsftProvider

I have broken these out as separate steps for clarity. If you install the PowerShell Docker module, you will be prompted to install NuGet first. The Docker package (the last step above) will also add the Containers feature if you have not already installed it.

Docker is installed as a service (the daemon) together with a client to operate it.

Docker Daemon Service
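The service can be checked and controlled like any other Windows service; with this package it is registered under the name docker:

Get-Service -Name docker
Start-Service docker      # if it is not already running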

The Docker installation has these two executables.

Docker Executables

The file dockerd.exe is the Docker service.

Docker Properties

The file docker.exe is the client. Like a lot of open source tools, Docker is managed at the command line. You can run the docker client executable in the Command Prompt.

Docker Client
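Two quick commands confirm that the client can talk to the service:

docker version
docker info

docker version shows the client and engine versions; docker info summarises the images, containers, storage driver and network configuration.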

The Containers feature also creates an internal network where the containers will run by default. This consists of:

  1. A Hyper-V virtual switch
  2. A subnet used for the virtual network (172.17.nnn.0/20 in this installation)
  3. A virtual NIC on the host server that is presented to the virtual switch
  4. Two new rules in the Windows firewall.

By default the Containers feature sets up a NAT switch. A Windows component, WinNAT, maps ports on the host to IP addresses and ports on the container network.
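Docker's own view of this network can be seen from the client. On a default install the network is simply named nat; a quick sketch:

docker network ls
docker network inspect nat

The inspect output shows the subnet and gateway that WinNAT is using.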

Here is the virtual switch:

Docker Virtual Network

And the NAT component:

Container VMSwitch and NAT

The host NIC on this virtual switch:

Hyper-V Virtual Ethernet Adapter

The Hyper-V Virtual Ethernet Adapter, shown in the normal Network and Sharing Centre:

Hyper-V HNS Internal NIC

You can create other types of virtual switches later.

The installation also creates two default firewall rules:

Docker Automatic Firewall Rules

The default Inter-Container Communication (ICC) rule allows anything from the virtual container network:

Docker Automatic Firewall Rules ICC to Docker Network

and RDP:

Docker Automatic Firewall Rules RDP

It is not obvious why the Containers feature creates a firewall rule for RDP: it does not enable RDP on the host, and the containers do not support RDP.

In summary:

  • The Windows Containers feature is enabled as an extension of the default Hyper-V services.
  • The Hyper-V Host Compute Service allows containers to run processes on the Windows kernel. The Hyper-V Host Network Service creates the internal logical networks for the containers.
  • There is no need to install the Hyper-V role itself, unless you want to run containers in a VM (called Hyper-V Isolation Mode).
  • Docker is a third-party application that uses the Windows Containers feature to create and run containers.
  • The Docker package installs the Docker components on top of the Windows Containers feature.
  • The Docker package installation also creates a virtual network for containers. This has a Hyper-V virtual switch with NAT networking, and a Hyper-V virtual NIC on the host attached to the switch.

So far, we have installed the Containers feature and the Docker components. We still can’t do anything until we obtain an image to create containers from.

Azure Domain Join

As well as doing large-scale IT infrastructure projects, I also support a few small businesses run by friends. One of them had a server on site for over a decade. Now they don’t. Everything is done in Azure.

They started with Microsoft Small Business Server, which provided Active Directory, Exchange, and file and print services. Over several years we moved to hosted e-mail, then Office 365. In this last stage we moved the PCs from the local domain to the Azure domain. Users now sign in with Windows Hello, using a PIN. All the shared data is in a SharePoint team site. All the personal data is in OneDrive. The local special folders on the PC are redirected to OneDrive. They use Skype, Yammer, and Delve to work together, on iPad or PC. They can work at home or in the office. Management of the PCs is done with Intune.

Most of all, the server is switched off. No one needs to come on site for hardware problems. Anyone can provide support, from anywhere, if they know Office 365 and Azure.

The Azure domain is not quite the same as a local domain. There is no Group Policy. If you wanted, you could add GPOs with Azure Active Directory Premium, but it is not cheap, and of course you need the skills to manage it. It got me thinking about how we could replace GPOs.

In one of my large scale assignments recently, where we rolled out a new global desktop, we actually needed only a few GPO’s. We had a big and complex policy to make Internet Explorer compliant with security standards. We had policies for certificates, wireless networking, passwords. But it was not very many. Without GPO’s we would need another way to do these configurations. But it would not be enough to justify keeping a global Active Directory infrastructure.