Cross-platform is a term we're used to hearing several times a day. It means a component can run on different platforms without any significant modifications. In the context of software, it means the same piece of software can run on different operating systems without worrying about the underlying platform.
Of course, there are situations where this doesn't apply, but let's set those aside.
We can summarize the cross-platform concept with the motto: "Build once, run everywhere".
If this motto rang a bell in your head, you're in the right place, and you should go ahead with the reading. In fact, this is one of the fundamental pillars of Docker.
Today, I'll show you how I put this concept into practice in my latest pet project.
Windows Robocopy... On Linux
Disclaimer: I wasn't able to run Windows Robocopy on my machine running Ubuntu 24.04 LTS. Robocopy is a Windows-based tool. In Linux, there are alternatives like
cp or rsync, among others. Being able to run Robocopy on Linux is beyond cross-platform; it's more like magic.
The idea behind this project has been to develop, build, run, and test a Windows-native application on an incompatible machine, which was Linux-based.
The final application is consumed directly in your terminal, exposing a command-line interface (CLI).
The Journey
In a nutshell, the journey of this application has consisted of:
- Development of the core feature
- Set up the CI/CD process to build and release the application
- Manual run of the binary on a Windows machine for testing purposes
- Automatic testing of the application on a Windows machine
The most appealing part turned out to be the automatic testing of the application (point 4).
In today's adventure, my travel companions will be #Go, #Docker, and the #DockerSDK. In case you're not familiar with it, the Docker SDK is a smart way to programmatically interact with the Docker Daemon from your Go code. Thanks to this pet project, I managed to familiarize myself with it.
Disclaimer: if you're eager to see the code, you can check it out in my GitHub profile.
Now, let's unveil ten uh-oh moments I came across.
1. dockur/windows image
The discovery of the dockur/windows image was mind-blowing. While surfing the Internet, I randomly came across this Docker image that lets you run Windows... in a container!
Visit the Dockur GitHub page for more details. Over there, you can also find a super nice introductory video to quickly ramp up.
If you're curious about how this image works under the hood, refer to their documentation, where you should find every technical detail.
In the documentation, there's a handy docker-compose.yml ready to use. It resembles something like this:
services:
  windows:
    image: dockurr/windows
    container_name: windows
    environment:
      VERSION: "11"
      KEYBOARD: "it-IT"
      REGION: "en-US"
    devices:
      - /dev/kvm
      - /dev/net/tun
    cap_add:
      - NET_ADMIN
    ports:
      - 8006:8006
      - 5985:5985
      - 3389:3389/tcp
      - 3389:3389/udp
    volumes:
      - ./windows:/storage
      - ./testdata:/shared
    restart: always
    stop_grace_period: 2m
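With a compose file like the one above saved as docker-compose.yml, booting the Windows container takes a couple of commands. The built-in web viewer on port 8006 (mapped above) lets you watch the installation from the browser:

```shell
# First boot: Windows is downloaded and installed, so expect a long wait
docker compose up -d

# Follow the progress from the logs...
docker compose logs -f windows

# ...or watch the desktop in the browser via the built-in web viewer
xdg-open http://localhost:8006
```

These are plain ops commands against a running Docker daemon; adapt the browser command to your platform.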
This is an extremely powerful tool, but it comes with a few caveats.
First Run
If you're familiar with Windows-based operating systems, you're also familiar with their setup process... The first time you run this image, you're walked through the initialization process, where you can customize settings.
The last thing you want is to go through this lengthy process yet another time. To bypass it, you can use a Docker volume:
volumes:
  - ./windows:/storage
For convenience, I used the windows folder, located in the root of my project.
TL;DR: Embrace a bit of patience and run this container.
Purposes
This Windows container is used for two purposes:
- performing manual testing
- performing automatic testing
The former consists of manually connecting to the Windows Desktop instance (you can easily do that in your browser), performing the action, and checking the results. The latter is done in the end-to-end test, which leverages the capabilities of a tool called Windows Remote Management (WinRM).
Having this properly set up was mandatory for the project's success.
Automated Testing Process
The diagram below depicts the automated testing process our application should undergo to be considered healthy:
The process is straightforward:
- Building the Docker image starting from the source code
- Creating a container holding the final executable binary (called extractor)
- Copying the binary out of the extractor container
- Copying the binary to the running Windows container
- Testing the binary on the Windows container
Now, let's start exploring the capabilities offered by the Docker SDK.
2. The Docker Client
The Docker SDK wraps the HTTP calls done against the Docker Daemon. These are the same calls done when interacting with Docker via the Docker CLI or the Docker Desktop environment. To consume these APIs, a Docker Client must be created in our code. That's where the client.Client struct shines.
The client.Client exposed by the Docker module provides you with a handle to interact with the daemon.
We initialized the Docker client in the robocopy_test.go file:
func setupDockerClient(ctx context.Context) (err error) {
dockerClient, err = client.NewClientWithOpts(client.FromEnv)
if err != nil {
return err
}
dockerClient.NegotiateAPIVersion(ctx)
return nil
}
Please note that dockerClient.NegotiateAPIVersion(ctx) is needed so the client matches the API version supported by the daemon.
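A nice side effect of client.FromEnv is that the client is configured from the standard Docker environment variables, so the same test code can target a remote daemon without changes. For reference (the values below are illustrative):

```shell
export DOCKER_HOST=tcp://192.168.1.10:2376   # daemon address (defaults to the local socket)
export DOCKER_API_VERSION=1.45               # pin an API version instead of negotiating
export DOCKER_CERT_PATH=~/.docker/certs      # TLS material, if the daemon requires it
export DOCKER_TLS_VERIFY=1                   # enforce TLS verification
```

This is a configuration fragment, not something to run as-is.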
All the subsequent code snippets are taken from the robocopy_test.go file.
3. List Containers
Listing the containers on your host can be done via the ContainerList method, as you can see below:
func getWindowsContainerID(ctx context.Context, dockerClient *client.Client) (*string, error) {
containers, err := dockerClient.ContainerList(ctx, container.ListOptions{})
if err != nil {
return nil, err
}
for _, c := range containers {
if c.Image == windowsImageName {
return &c.ID, nil
}
}
return nil, fmt.Errorf("no running Windows container found")
}
4. Building a Docker image
One of the most interesting parts. The code looks like this:
func buildDockerImageForRobocopyBinary(ctx context.Context) error {
// create tarball from source code
buf := new(bytes.Buffer)
tw := tar.NewWriter(buf)
defer tw.Close()
if err := createTarball(".", tw); err != nil {
return err
}
twReader := bytes.NewReader(buf.Bytes())
imageBuildRes, err := dockerClient.ImageBuild(ctx, twReader, build.ImageBuildOptions{
Tags: []string{"test-my-robocopy"},
Remove: true,
})
if err != nil {
return err
}
defer imageBuildRes.Body.Close()
if _, err := io.Copy(os.Stdout, imageBuildRes.Body); err != nil {
return err
}
return nil
}
The process consists of two steps:
- creation of a tar archive with the application's source code
- running the ImageBuild method, passing:
  - a tar reader
  - the image build options

The unspecified options resolve to their default values. Please be sure to have the Dockerfile placed in the root of the project.
I encourage you to dig into the createTarball function to learn more about the creation of the tar archive.
5, 6, 7. Container Management
Here, three things are tied together. Let me share the code, and then I'll walk you through it:
func copyBinaryToWindowsContainer(ctx context.Context) error {
// creation of the extractor container
containerRes, err := dockerClient.ContainerCreate(ctx, &container.Config{
Image: "test-my-robocopy",
Cmd: []string{"echo", "hello"},
}, nil, nil, nil, "extractor")
if err != nil {
return err
}
extractorContainerID = containerRes.ID
// pull out the binary from the extractor container
reader, _, err := dockerClient.CopyFromContainer(ctx, containerRes.ID, "/go-robocopy.exe")
if err != nil {
return err
}
defer reader.Close()
// copy binary to the target Windows Container
return dockerClient.CopyToContainer(ctx, *windowsContainerID, "./shared", reader, container.CopyToContainerOptions{})
}
The process is:
- creating the extractor container based on the test-my-robocopy image we tagged before
- copying the produced binary (/go-robocopy.exe) out of the extractor container
- copying the executable binary to the windows container
8, 9. Image Management
Now, we're entering the cleanup process. The goal is to get rid of the image built for the test. First, the code:
func removeDockerImagesByRepoTags(ctx context.Context, repo, tag string) error {
filters := filters.NewArgs(filters.Arg("label", fmt.Sprintf("repo=%v", repo)), filters.Arg("label", fmt.Sprintf("tag=%v", tag)))
images, err := dockerClient.ImageList(ctx, image.ListOptions{Filters: filters})
if err != nil {
return err
}
for _, img := range images {
if _, err := dockerClient.ImageRemove(ctx, img.ID, image.RemoveOptions{}); err != nil {
return err
}
}
return nil
}
The process is:
- building the filter, since we want to filter images on two criteria:
  - repo, where we passed in the value test-my-robocopy
  - tag, where we passed in the value latest, since we did not specify any tag for the image (bad practice: always specify tags)
- retrieving the matching Docker images
- removing the matching Docker images
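Conceptually, the label filter keeps only the images whose labels contain both key/value pairs. Here's a stdlib sketch of that matching logic, decoupled from the SDK (imageInfo is a stand-in for the SDK's image summary type):

```go
package main

import "fmt"

// imageInfo is a stand-in for the SDK's image summary type.
type imageInfo struct {
	ID     string
	Labels map[string]string
}

// matchLabels reports whether all wanted label pairs are present on the image.
func matchLabels(img imageInfo, wanted map[string]string) bool {
	for k, v := range wanted {
		if img.Labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	images := []imageInfo{
		{ID: "sha256:aaa", Labels: map[string]string{"repo": "test-my-robocopy", "tag": "latest"}},
		{ID: "sha256:bbb", Labels: map[string]string{"repo": "something-else"}},
	}
	wanted := map[string]string{"repo": "test-my-robocopy", "tag": "latest"}
	for _, img := range images {
		if matchLabels(img, wanted) {
			fmt.Println("would remove", img.ID) // prints "would remove sha256:aaa"
		}
	}
}
```

The real filtering happens daemon-side via the filters argument, of course; this just makes the selection criteria explicit.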
10. Container Removal
Last, we remove the extractor container with this trivial cleanup function:
func removeContainerByID(ctx context.Context, containerID string) error {
return dockerClient.ContainerRemove(ctx, containerID, container.RemoveOptions{})
}
Untold Stories
If you navigate the GitHub repository, you'll notice a lot of stuff that hasn't been covered. These could be topics for future posts. An extract of the list:
- Git Hooks
- GitHub Actions
- Goreleaser
- tar archives in Go
- Windows Remote Management (WinRM)
- working with CLI apps and flags in Go
If you want me to cover any of these topics, feel free to reach out.
Outro
I had a lot of fun with this project. In particular, I could embrace any unneeded complexity without worrying about the good old patterns, best practices, and guidelines. After all, this is the beauty of useless pet projects.
Thanks for your attention, folks! If you have any questions, doubts, feedback, or comments, I'm available to listen and discuss. If you want me to cover some specific concepts, please reach out.