Docker Swarm on Windows

Docker Swarm enables containers to be managed across different hosts. It works on Windows Server 2016 hosts, but the built-in routing mesh is not supported until Windows Server version 1709, released in October 2017.

Docker Swarm is the tool for managing containers across separate docker machines. It defines machines as managers or workers, which communicate with each other to implement docker services. A service is a collection of containers running from the same image with the same configuration, according to rules defined when the service is created.

Just to complete the picture, Docker Compose is the tool that creates an application from a set of services. The Containers feature in Windows Server 2016 by default includes Docker Swarm but not Docker Compose.

To set up the Swarm cluster we obviously need more than one machine. Azure Container Service (ACS) does not currently include Windows hosts, although it is changing so fast that this may be out of date soon. Instead we can create a cluster of Windows hosts using an Azure virtual machine scale set with Windows Server 2016 Datacenter – with Containers.

We need to open ports on the Windows firewall on each host to allow communication between the docker machines:

  • TCP port 2377 is for Docker communication between manager and worker.
  • TCP and UDP port 7946 is for the “control plane” communication between hosts (worker to worker). This traffic synchronises the state of a service between hosts.
  • UDP port 4789 is for the “data plane” VXLAN encapsulated traffic between applications in containers.
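
As a sketch, these rules could be added with netsh on each host (the rule names here are arbitrary):

netsh advfirewall firewall add rule name="Swarm cluster management" dir=in action=allow protocol=TCP localport=2377 enable=yes
netsh advfirewall firewall add rule name="Swarm control plane TCP" dir=in action=allow protocol=TCP localport=7946 enable=yes
netsh advfirewall firewall add rule name="Swarm control plane UDP" dir=in action=allow protocol=UDP localport=7946 enable=yes
netsh advfirewall firewall add rule name="Swarm data plane" dir=in action=allow protocol=UDP localport=4789 enable=yes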

To create the swarm, run:

docker swarm init --advertise-addr [IP address of manager]

The default is to listen on all addresses on port 2377 (0.0.0.0:2377), so there is no need to specify it. The command returns a join token.

To join a host as a worker, run:

docker swarm join --token [the token number returned when creating the swarm] [the listening address of the manager]

We can add or remove nodes later, as workers or managers. The documentation for setting up and managing the swarm is here: Docker Swarm.
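
A few node management commands are useful later, run on a manager. For example, we can list the nodes, retrieve the worker join token again, or promote a worker to a manager:

docker node ls
docker swarm join-token worker
docker node promote [node name]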

If we want to use a GUI to see what is going on, we can use Portainer. I have described setting it up here: Windows Containers: Portainer GUI. This is what we see in the dashboard after creating the swarm:

Docker Swarm Portainer Dashboard

In the Swarm section, we can see an overview of the cluster:

Docker Swarm Portainer Swarm Cluster

And the default overlay network:

Docker Swarm Portainer Swarm Network

Before we create a service, we need to decide how external clients will connect to containers, and how containers will connect to each other. The default network type in Docker is nat. A port on the host is translated to a port on the container so, for example, we use --publish 80:80. But this limits us to one container only, on that port. If we do not define the host port (by using --publish 80), then one is created dynamically on the host, and so we can have more than one container listening on the same port. But then the client does not know what port on the host to connect to. We would need to discover the dynamic ports and put them into an external load balancer. In the case of a docker service, we would need to do this whenever a new replica is created or removed.
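
As an illustration of the dynamic case, we would run the container with only the container port published and then look up the host port that Docker assigned (the container name web1 is just an example):

docker run -d --name web1 --publish 80 microsoft/iis
docker port web1 80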

Alternatively we can set up a transparent network, where the container has an externally reachable IP address. This way we can have more than one container listening on the same port. But we would still need to manage the addresses in a load balancer whenever a replica is created or removed.

This is a general problem with service scaling across hosts. The Docker solution is to use an Overlay network for swarm traffic. Connections from external clients arriving at any host are routed to any replica in the service (a “routing mesh”). Connections from one container to another are on a private subnet shared across containers in the swarm, rather than on the subnet shared with the host. 

Windows Server before version 1709 supports the overlay network for communication between containers, but not the routing mesh for communication between external clients and containers. This leads to some confusing documentation.

For version 1709 and later, the command to create a service using the overlay network and routing mesh is built from these parts:

  • docker service create to create a new service
  • --name to give the service a friendly name
  • --replicas to specify the number of replicas to run at any one time
  • --publish if any ports are to be published externally
  • [image name] for the name of the image to run.

We can include other options, both for the configuration of the service, and the configuration of the containers. The full command for an IIS web server would be:

docker service create --name web --replicas 2 --publish 80:80 microsoft/iis

By default the containers are attached to the swarm overlay network (called “ingress”). The publishing mode is also “ingress”. Any client connection to any host on port 80 is routed in a round robin to one of the containers on any host participating in the service. The containers can reach each other on their internal network on any port.
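
Once the service is running, we can change the number of replicas and see where the tasks are placed:

docker service scale web=4
docker service ps web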

Here is the service in Portainer:

Docker Swarm Portainer Service 2

A wide range of parameters is shown in the Service Details:

Docker Swarm Portainer Service Details 2

Portainer shows the published port, in ingress mode:

Docker Swarm Portainer Service Publish Mode Ingress

We can see all the parameters of the service with docker service inspect [service name]. The overlay network has a subnet of 10.255.0.0/16. The service has created a Virtual IP of 10.255.0.4. With docker container inspect [container name] we can see the IP addresses of the containers are 10.255.0.6 and 10.255.0.7.
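
Rather than reading the whole of the inspect output, we can pull out just those addresses with Go template formatting; a sketch:

docker service inspect --format "{{range .Endpoint.VirtualIPs}}{{.Addr}} {{end}}" web
docker container inspect --format "{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}" [container name]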

For version 1607 the routing mesh does not work. The approach that works on the earlier build is to publish the ports in host mode. Each host publishes the port directly, and maps it to the container. If we use a defined port on the host, then we can only have one container per host. Instead of defining the number of replicas we need to specify --mode global, so that one container is created on each node. The command to create the service this way is:

docker service create --name web --mode global --publish mode=host,published=80,target=80 microsoft/iis

If we use a dynamic port on the host, then we can have more than one, but we have to discover the port to connect to. The command to create the service this way is:

docker service create --name web --replicas 2 --publish mode=host,target=80 microsoft/iis
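
Discovering the dynamically assigned host port is then a manual step. One way, as a sketch run against each container, is to read its port mappings:

docker container inspect --format "{{json .NetworkSettings.Ports}}" [container name]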

Doing it this way, the container is created on the “nat” network. Portainer shows the published port, in host mode:

Docker Swarm Portainer Service Publish Mode Host

Now we have containers running as a service. If a container fails, another is created. If a node fails or is shut down, any containers running on it are replaced by new containers on other nodes.

Windows Containers: Hyper-V

An option with Windows Containers is to run a container in Hyper-V Isolation Mode. This blog shows what happens when we do this.

When we run a container normally, the processes running in the container are running on the kernel of the host. The Process ID and the Session ID of the container process are the same as on the host.

When we run a container in Hyper-V Isolation Mode, a utility VM is created and the container runs within that. We need to have the Hyper-V role installed on the host. Then we need to add --isolation hyperv to the docker run command.
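
For example, a Server Core container can be started under Hyper-V isolation like this (the name corehv is arbitrary):

docker run -it --rm --isolation hyperv --name corehv microsoft/windowsservercore powershell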

Here are some of the main differences.

The processes in the container are isolated from the host OS kernel. The Session 0 processes do not appear on the host. Session 1 in the container is not Session 1 on the host, and the Session 1 processes of the container do not appear on the host.

Container:

Get Process Hyper-V Container

Host:

Get Process Hyper-V Host Same SI

There is no mounted Virtual Hard Disk (VHD):

Disk Management Hyper-V

Instead we have a set of processes for the Hyper-V virtual machine:

Hyper-V Processes on Host

A set of inbound rules is not automatically created on the host Windows firewall. There are no rules for ICC, RDP, DNS, DHCP as there are when we create a standard container:

Firewall Rules Hyper-V Host

But the container is listening on port 135, and we can connect from the host to the container on that port, as we can with a standard container:

Netstat Hyper-V Container Established

And if we create another, standard, container, they each respond to a ping from the other.

Hyper-V does not add to the manageability of containers. The Hyper-V containers do not appear in the Hyper-V management console.

Hyper-V Manager

So in summary: in Hyper-V Isolation Mode the container processes are fully isolated; but the container is not on an isolated network, and is still open to connections from the host and from other containers by default.

Windows Containers: Data

A container is an instance of an image. The instance consists of the read-only layers of the image, with a unique copy-on-write layer, or sandbox. The writable layer is disposed of when we remove the container. So clearly we need to do something more to make data persist across instances. Docker provides two ways to do this.

When Docker creates a container on Windows, the container is instantiated as a Virtual Hard Disk (VHD). You can see the disk mounted without a drive letter, in Disk Management on the host. Docker keeps track of the layers, but the file operations take place inside the VHD.

Host Disk Manager

If we use the interactive PowerShell console to create a new directory in the container, C:\Logs, then this is created directly inside the VHD:

Sandbox Logs

When Docker removes the container, the VHD is also removed and the directory is gone.

Docker provides two ways to mount a directory on the host file system inside the container file system, so that data can persist across instances:

  1. Bind mount
  2. Volume mount.

A bind mount is simply a link to a directory on the host. A volume mount is a link to a directory tracked and managed by Docker. Docker recommends generally using volumes. You can read more about it in the Docker Storage Overview.

The parameter you commonly see to specify a mount is -v or --volume. A newer parameter, and the one Docker recommends, is --mount. This has a more explicit syntax.

In this example, we mount a volume on the host called MyNewDockerVolume to C:\MyNewDockerVolume in the container:

docker run -it --rm --name core --mount type=volume,src=MyNewDockerVolume,dst=C:\MyNewDockerVolume microsoft/windowsservercore powershell
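
For comparison, the equivalent -v form is shorter but less explicit, and we can check what was created afterwards with docker volume inspect:

docker run -it --rm --name core -v MyNewDockerVolume:C:\MyNewDockerVolume microsoft/windowsservercore powershell
docker volume inspect MyNewDockerVolume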

If the volume does not already exist, it is created inside the docker configuration folder on the host:

Docker volumes MyNewDockerVolume

The Hyper-V Host Compute Service (vmcompute.exe) carries out three operations inside the VHD:

CreateFile: \Device\HarddiskVolume13\MyNewDockerVolume. Desired Access: Generic Read/Write, Disposition: OpenIf, Options: Directory, Open Reparse Point, Attributes: N, ShareMode: Read, Write, AllocationSize: 0, OpenResult: Created
FileSystemControl: \Device\HarddiskVolume13\MyNewDockerVolume. Control: FSCTL_SET_REPARSE_POINT
CloseFile

Now if we look in the VHD, in Explorer, we see the directory implemented as a shortcut:

Sandbox MyNewDockerVolume

In PowerShell, we can see that the directory mode is “l”, to signify a reparse point, or link:

Dir MyNewDockerVolume

Files already in the volume will be reflected in the folder in the container. Files written to the folder in the container will be redirected to the volume.

Windows reparse points come in several flavours, including junctions (for directories only) and symbolic links or “symlinks” (for files or directories). If we use the command prompt instead of PowerShell we can see that the Docker volume is implemented as a directory symlink:

Dir in Command

Working with data in Windows Containers requires keeping three things in mind:

  1. The difference between bind mount and volume mount
  2. The different syntax for --volume and --mount
  3. Differences in behaviour between Docker on Linux and Windows hosts.

The first two are well documented. The third is newer and less well documented. The main differences I can find are:

  • You cannot mount a single file
  • The target folder in the container must be empty
  • Docker allows plugins for different drivers. On Linux you can use different volume drivers to connect remote volumes. On Windows the only driver is “local” and so the volume must be on the same host as the container.

If you reference a VOLUME in the Dockerfile to create an image, then the volume will be created automatically, if it does not already exist, without needing to specify it in the docker run command.
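
A minimal sketch of such a Dockerfile, with an arbitrary directory name, would be:

FROM microsoft/windowsservercore
VOLUME C:\\Logs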

Windows Containers: Build

This post is a building block for working with containers on Windows. I have covered elsewhere installing the Containers feature with Docker, and running containers with the Docker command line. We can’t do much that is useful without building our own images. Doing this tells us a lot about what we can and cannot do with containers on Windows.

Some preamble:

  1. A container is not persistent. It is an instance of an image. You can make changes inside a running container, for example installing or configuring an application, but unless you build a new container image with your changes, they will not be saved.
  2. A Windows container has no GUI. Any installation or configuration will be done at the command line.
  3. Therefore we should make our changes in a script, containing the instructions to build a new image.
  4. This script is a Dockerfile.

The command to build an image is docker image build, with a range of options and the path to the build context containing the Dockerfile.

You can also run docker container commit to create a new image from a running container. This gives scope for configuring a container interactively before saving it as a new image. But, since the only interface to configure the container is the command line, and since the same commands can be performed in the Dockerfile, this has limited use.

Building an image in Docker is a similar idea to building an image for OS deployment. The Dockerfile is like the task sequence in MDT or SCCM, being a scripted set of tasks. The documentation is here: Dockerfile reference. An example is this one, from Microsoft, for IIS on Windows Server Core:

FROM microsoft/windowsservercore
RUN powershell -Command Add-WindowsFeature Web-Server
ADD ServiceMonitor.exe /ServiceMonitor.exe
EXPOSE 80 
ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]
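
Assuming ServiceMonitor.exe is present alongside the Dockerfile in the build context, an image could be built from it with (the tag iis-example is arbitrary):

docker image build -t iis-example .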

The basic structure of a Dockerfile is:

  • FROM to specify the image that the new image is developed from
  • ADD or COPY from source to destination to put new files into the image
  • RUN to execute commands to configure the image
  • CMD to specify a command to start the container with, if no other command is specified
  • EXPOSE to indicate what port or ports the application listens on
  • ENTRYPOINT to specify the services or executables that should run automatically when a container is created.

We can immediately see some implications:

  1. We don’t have to build every part of the end image in one Dockerfile. We can chain images together. For example, we could build a generic web server FROM microsoft/iis, then build specific web sites with other components in new images based on that (see the sketch after this list).
  2. Adding a single feature is easy, like: Add-WindowsFeature Web-Server. But configuring it with all the required options will be considerably more complicated: add website; application pool; server certificate etc.
  3. We may want to bundle sets of commands into separate scripts and run those instead of the individual commands.
  4. There is no RDP to the container, no remote management, no access to Event Logs: and arguably we don’t need to manage the container in the same way. But we can add agents to the image, for example a Splunk agent.
  5. Static data can be included in the image, of course, but if we want dynamic data then we need to decide which folders it will be in, so we can mount external folders to these when we run the container.
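
As a sketch of the first implication, a second-stage Dockerfile might look something like this (the site name, source folder and port are hypothetical, and the Web-Scripting-Tools feature is added to make the WebAdministration PowerShell module available):

FROM microsoft/iis
RUN powershell -Command Add-WindowsFeature Web-Scripting-Tools
COPY mysite/ /inetpub/mysite/
RUN powershell -Command "Import-Module WebAdministration; New-Website -Name MySite -Port 8080 -PhysicalPath C:\inetpub\mysite"
EXPOSE 8080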

It is rather like doing a scripted OS deployment without MDT. I would not be surprised if a GUI tool emerges soon to automate the build scripting.

You may find a number of Dockerfiles for Windows using the Deployment Image Servicing and Management (DISM) tool. There is a confusing choice of tools and no particular need to use DISM (or reason not to). DISM is typically used for offline servicing of Windows Imaging Format (WIM) images. For example it can be used to stream updates and packages into a WIM image by mounting it. But in the case of Docker images the changes are made by instantiating a temporary container for each RUN, and the DISM commands are executed online. This means we can use three different types of command to do the same thing:

  • Install-WindowsFeature from the ServerManager module in PowerShell
  • Enable-WindowsOptionalFeature from the DISM module in PowerShell
  • dism.exe /online /enable-feature from DISM.

Just to make life interesting and keep us busy, the commands to add a feature use different names for the same feature!
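
For example, as far as I can tell these three RUN instructions are equivalent ways to add IIS; note that the same role is Web-Server to ServerManager but IIS-WebServer to DISM:

RUN powershell -Command Install-WindowsFeature Web-Server
RUN powershell -Command Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebServer -All
RUN dism.exe /online /enable-feature /featurename:IIS-WebServer /all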

Windows Containers: Portainer GUI

When you first set up Containers on Windows Server 2016, you would imagine there would be some kind of management console. But there is none. You have to work entirely from the command line. Portainer provides a management GUI that makes it easier to visualise what is going on.

The Windows Container feature itself only provides the base Host Compute and Host Network Services, as Hyper-V extensions. There is no management console for these. Even if you install the Hyper-V role, as well as the Containers feature, you can’t manage images and containers from the Hyper-V management console.

Images and containers are created and managed by a third party application, Docker. Docker also has no management console. It is managed from the Docker CLI, either in PowerShell or the Command Prompt.

There is a good reason for this. But, for me at least, it makes it hard to visualise what is going on. Portainer is a simple management UI for Docker. It is open source, and itself runs as a container. It works by connecting to the Docker engine on the host server, then providing a web interface to manage Docker.

Portainer Dashboard

Portainer Dashboard

Setting up the Portainer container will also give us a better idea of how to work with Docker. Docker has a daunting amount of documentation for the command line, and it is not easy to get to grips with it.

Configure Docker TCP Socket

The first step in setting up Portainer is to enable the Docker service to listen on a TCP socket. By default Docker only allows a named pipe connection between client and service.

Quick version:

  • create a file with notepad in C:\ProgramData\docker\config
  • name the file daemon.json
  • add this to the file:
    {"hosts": ["tcp://0.0.0.0:2375","npipe://"]}
  • restart the Docker service.

The long version is: the Docker service can be configured in two different ways:

  1. By supplying parameters to the service executable
  2. By creating a configuration file, daemon.json, in C:\ProgramData\docker\config.

The parameters for configuring the Docker service executable are here: Daemon CLI Reference. To start Docker with a listening TCP socket on port 2375, use the parameter

-H tcp://0.0.0.0:2375

This needs to be configured either directly in the registry, at HKLM\SYSTEM\CurrentControlSet\Services\Docker; or with the Service Control command line:

sc config Docker binPath= "\"C:\Program Files\docker\dockerd.exe\" --run-service -H tcp://0.0.0.0:2375"

The syntax is made difficult by the spaces, which require quotation with escape characters.

The easier way is to configure the Docker service with a configuration file read at startup, daemon.json. The file does not exist by default. You need to create a new text file and save it in the default location C:\ProgramData\docker\config. The daemon.json file only needs to contain the parameters you are explicitly configuring. To configure a TCP socket, add this to the file:

{
 "hosts": ["tcp://0.0.0.0:2375","npipe://"]
}
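
As a sketch, the whole thing can be done from an elevated PowerShell console:

New-Item -ItemType Directory -Force -Path 'C:\ProgramData\docker\config' | Out-Null
Set-Content -Path 'C:\ProgramData\docker\config\daemon.json' -Value '{"hosts": ["tcp://0.0.0.0:2375","npipe://"]}'
Restart-Service docker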

Other options for the configuration file for Docker in Windows are documented here: Miscellaneous Options. For example you can specify a proxy server to use when pulling images from the Docker Hub.

Just to add complexity:

  • the Docker service will not start if the same parameter is set in service startup and in the configuration file
  • You can change the location of the configuration file by specifying a parameter for the service:
    sc config Docker binPath= "\"C:\Program Files\docker\dockerd.exe\" --run-service --config-file \"[path to file]\""

Ports 2375 (unencrypted) and 2376 (encrypted with TLS) are the standard ports. You will obviously want to use TLS in a production environment, but the Windows Docker package does not include the tools to do this. Standard Windows certificates can’t be used. Instead you will need to follow the documentation to create OpenSSL certificates.

Allow Docker Connection Through Firewall

Configure an inbound rule in the Windows firewall to allow TCP connections to the Docker service on port 2375 or 2376. This needs to be allowed for all profiles, because the container virtual interface is detected as on a Public network.

netsh advfirewall firewall add rule name="Docker" dir=in action=allow protocol=TCP localport=2375 enable=yes profile=domain,private,public

Note that, by default, containers do not have access to services and sockets on the host.

Pull the Portainer Image

Back in an elevated PowerShell console, pull the current Portainer image from the Portainer repository in the Docker Hub:

docker pull portainer/portainer

If we look in the images folder in C:\ProgramData\docker\windowsfilter we can see that we have downloaded 6 new layers. We already had two Nano Server layers, because we pulled those down previously.

Portainer Layers

If we look at the image history, we can see the layers making up the image:

docker image history portainer/portainer

Portainer Image History

The two base layers of the Portainer image are Windows Nano Server. We already had a copy of the Nano Server base image, but ours was update 10.0.14393.1593, so we have downloaded a layer for the newer update 10.0.14393.1715. We can also see the action that created each layer.

If we inspect the image, with:

docker image inspect portainer/portainer

we can see some of the things we need to set it up:

  1. The container is going to run portainer.exe when it starts
  2. The exposed port is 9000
  3. The volume (or folder) to mount externally is C:\Data
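
If you prefer not to scan the full JSON output, a Go template (a sketch) pulls out just those three values:

docker image inspect --format "Entrypoint: {{.Config.Entrypoint}} Ports: {{.Config.ExposedPorts}} Volumes: {{.Config.Volumes}}" portainer/portainer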

Set up Portainer container

Quick version:

  1. Create a folder in the host called: C:\ProgramData\Containers\Portainer
  2. Open an elevated PowerShell console on the host
  3. Run this command:
    docker run -d --restart always --name portainer -v C:\ProgramData\Containers\Portainer:C:\Data -p 9000:9000 portainer/portainer

The long version is: we need the command line to run the Portainer image:

  1. Standard command to create a container: docker run
  2. We want to run the container detached as a free standing container, with no attached console: -d or --detach
  3. There is no need to remove the container if it is stopped. Instead, we want to restart the container automatically if, for example, the host is rebooted: --restart always
  4. We can give the container a name, to make it easier to manage: --name portainer
  5. Portainer reads information about images and containers directly from Docker, so it does not need to store that. But it needs to store its own configuration, for example settings and user passwords. To do this, we need to save the configuration data outside the container. We can do this in Docker by mounting an external folder in the file system of the container. The folder in the container has already been designated as C:\Data in the image, but the folder in the host can be anything you choose. In this example we are using C:\ProgramData\Containers\Portainer. The folder needs to exist before using this: -v C:\ProgramData\Containers\Portainer:C:\Data
  6. The Portainer process is listening on port 9000 (see above). We can connect to this directly from the host itself, without doing anything more. But the outside world has no access to it. The container is running on a virtual switch with NAT enabled. This does port forwarding from the host to the container. We need to decide what port on the host we would like to be forwarded to port 9000 on the container. If we don’t specify a port, Docker will assign a random port and we can discover it through docker container inspect portainer. Otherwise we can specify a port on the host, which in this case can also be 9000: -p 9000:9000
  7. The image to run: portainer/portainer
  8. We don’t need to specify a command to run, since the image already has a default command: portainer.exe

Putting the parameters together, the full command is:

docker run -d --restart always --name portainer -v C:\ProgramData\Containers\Portainer:C:\Data -p 9000:9000 portainer/portainer

Connect to Portainer

Using a browser on your desktop, connect to the published Portainer port on the remote host: http://192.168.1.144:9000. Set up a password for the admin user:

Portainer Setup

Set up the Docker host as the endpoint:

Portainer Setup Endpoint

Note that the endpoint is the IP address of the host virtual interface on the container subnet (in this case 172.17.64.1). This address is also the gateway address for the container, but in this context it is not acting as a gateway. The virtual interface on the host is listening on port 2375 for Docker connections.

And we are in:

Portainer Dashboard

We can also connect directly from a browser on the host to the container. For this, we need to use the IP address of the container itself, in this case 172.17.68.78, or whatever address we find from docker container inspect portainer.

The Portainer Container

We don’t need to set up a firewall rule to allow access to the container on port 9000. Docker sets up a bunch of rules automatically when the container is created:

Container Automatic Firewall Rules

These rules include: DHCP; ICMP; and DNS. They also include port 9000 on the host, which we specified would be forwarded to port 9000 in the container:

Container Automatic Firewall Rule for Portainer

In Portainer, when we set up the endpoint (being Docker on the host) we need to specify the virtual interface of the host that is on the same subnet as the container (the 172.17.64.1 address). This is because Windows does not allow the container to connect directly through the virtual interface to a service on the physical interface (192.168.1.144).

If we look at the TCP connections on the host, with: netstat -a -p tcp, we see that there is no active connection to Portainer in the container, although my browser is in fact connected from outside:

Portainer Host TCP Connections

However, if we look at the NAT sessions, with Get-NetNATSession, we see the port forwarding for port 9000 to the container:

Host Get-NetNATSession

Docker has attached a virtual hard disk to the host, being the file system of the container:

Host Disk Manager

If we give it a drive letter we can see inside:

Portainer Container System Drive

The portainer executable is in the root of the drive. C:\Data is the folder that we mounted in the docker run command. Other folders like css and fonts are part of the application. These are contained in the first layer of the image, after the Nano Server layers. The layer was created by the COPY command in the Portainer Dockerfile used to create the image:

FROM microsoft/nanoserver
COPY dist /
VOLUME C:\\data
WORKDIR /
EXPOSE 9000
ENTRYPOINT ["/portainer.exe"]

And here is the portainer process running on the host in Session 2, using:

Get-Process | Where-Object {$_.SI -eq 2} | Sort-Object SI

Portainer Process Running on Host

Security

You can see in the Portainer GUI for creating endpoints that we can connect to Docker with TLS. This assumes we have set up Docker with certificates and specified encrypted TCP connections, covered in the Docker daemon security documentation.

We should also connect to Portainer over an encrypted connection. We can do this by adding more parameters to the docker run command: Securing Portainer using SSL.

More about using Portainer

You can read more about using Portainer in the Portainer documentation.

Windows Containers: Properties

If we create an instance of an image in interactive mode, and run a PowerShell console in it, then we can see inside the container.

In a previous post I used the Nano Server image, because it is small and quick. But Nano Server is a cut down OS so, for the purposes of seeing how a container works, let’s take a look inside a Windows Server Core container. The question of when Nano Server can be used in place of Core is a subject for another time.

The Docker command to do this is:

docker run --rm -it --name core microsoft/windowsservercore powershell

The system information, with systeminfo, shows a Windows server where some of the properties belong to the container, and some to the host. For example, the language and locale belong to the container, but the BIOS and boot time belong to the host:

Container SystemInfo

The TCP/IP information, with ipconfig /all, shows that the container has its own:

  • hostname
  • Hyper-V virtual ethernet adapter
  • MAC address
  • IP address in the private Class B subnet, which we saw previously was allocated to the Hyper-V virtual switch
  • gateway, which we saw previously was the Hyper-V virtual ethernet adapter on the host
  • DNS server addresses.

Container IPConfig All

I can connect to the outside world, with ping 8.8.8.8 and get a reply:

Container Ping World

The running processes, from Get-Process, show the PowerShell process, as well as what look like typical user processes. If I run Get-Process | Sort-Object SI I can see that there are two sessions: a system session in Session 0, and a user session in Session 2.

Container Get Process Sort SI

I can start other processes. For example, if I start Notepad, then I see it running as a new process in Session 2.

Container Start Notepad

The services, from Get-Service, show normal system services. It is easier to see if I filter for running services, with:

Get-Service | Where-Object {$_.Status -eq "Running"} | Sort-Object DisplayName

Container Get Service Filter and Sort

I have listening ports, shown with Get-NetTCPConnection, but nothing connected:

Container Get TCP Connection

There are three local user accounts, shown with Get-LocalUser:

Container Get Local User

PowerShell tells me that it is being executed by the user ContainerAdministrator:

Container Get Process PowerShell UserName

In summary, I have something that looks similar to an operating system running a user session. It can start a new process and it can communicate with the outside world.

Let’s see what it looks like from outside. From the host I can ping the container:

Host Ping Container

I can telnet from the host to port 135 (one of the ports that I saw was listening) in the container, and make a connection:

Host Telnet Container

But I can’t make a connection from outside the host. I already know there is no route to the container subnet. What happens if I supply a route? Still no reply. I am not really surprised. The connection would have to go through the host, and there is nothing in the host firewall to allow a connection to the container.

World Ping Container

If I start another container, though, it can ping and get a reply from the first container:

Container Ping Container

If I look in Task Manager on the host, there is no obvious object that looks like a container. I don’t even know what size I would expect it to be. But I notice that the PowerShell process in the container shows as the same process on the host.

Get-Process PowerShell in the container:

Container Get Process PowerShell

Get-Process PowerShell on the host:

Host Get Process PowerShell

The process ID 3632 is the same process. All three PowerShell processes, including the one in the container, are using the same path to the executable. You could say that the container is a virtual bubble (session, namespace or whatever you want to call it) executing processes on the host:

Host Get Process PowerShell Path

If I look at all the processes on the host, I can see that the container’s Session 2 is also Session 2 on the host. Here are the host processes filtered by session:

Get-Process | Where-Object {$_.SI -eq 0 -or $_.SI -eq 2} | Sort-Object SI

Host Get Process Filter and Sort SI

Session 0 (the System session) has a lot more processes than shown inside the container, but Session 2 is the same. Processes like lsass, wininit, csrss are the normal processes associated with a session.

The host does not see the user who is executing the processes. In the container the user is ContainerAdministrator, but there is no such user on the host, and the host does not have the username:

Host Get Process PowerShell UserName

A container is ephemeral. But if I create files inside the container they must be stored somewhere.

In the image folder of the host I can see a new layer has been created:

Docker Images Folder

The layerchain.json file tells me that the layer is chained to the Windows Server Core base image layers. The layer has a virtual hard disk drive called “sandbox”, which sounds like the kind of place that changes would be saved.

If I look in Disk Manager, I can see that a new hard disk drive has been attached to the host:

Host Disk Manager

The disk is the same size as the apparent system drive inside the container. It is shown as Online, but with no drive letter. However, if I give it a drive letter, then I can see the same file system that I was able to see inside the container:

Container Sandbox Disk on Host
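
To give it a letter from PowerShell rather than Disk Management, something like this works (the disk and partition numbers are hypothetical and need to be read from Get-Disk and Get-Partition first):

Set-Partition -DiskNumber 2 -PartitionNumber 2 -NewDriveLetter S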

So the file system of the container is created by mounting a Hyper-V virtual hard disk drive. This only exists for the lifetime of the container. When the container is removed, any changes are lost.

In summary:

  • From inside, the container appears to have similar properties to a normal virtual machine.
  • The container has a network identity, with a host name, virtual Ethernet adapter and IP address.
  • It can connect to the outside world, and to other containers, but the outside world cannot (until we change something) connect to it.
  • It has a file system, based on the image the container was created from.
  • On the host, the container processes are implemented as a distinct session.
  • The file system of the container is implemented as a virtual hard disk drive attached to the host.
  • Files can be saved to the virtual hard disk drive, but they are discarded when the container is removed.