Does anyone around here know whether it is feasible, or whether it even makes sense, to set up a C++ Continuous Integration/Continuous Delivery (CI/CD) build server that can run inside Docker?
“Duh…” I hear the Linux crowd say, but here is the catch:
The container shall:
Be based on a Windows image (we are currently trying to use servercore:ltsc2022)
Provide the following toolchain: Visual Studio 2022 (or MSBuild), CMake, Perl (for building OpenSSL), Ninja, Python, .NET tooling (for CPack's WiX generator), and the WiX Toolset
Be able to compile executables against the following dependencies: Qt 6.9.0, VTK 4.92, ITK 5.4.3, GDCM 3.0.24, Boost 1.88, OpenSSL 3.5.0, and a few others
We use Chocolatey with a private package repository for the toolchain, and that seems to work like a charm. So far so good. But how do we handle building the externals and storing the built external binaries between consecutive builds for the same release?
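One pattern that has worked for this kind of "store the externals between builds" problem is to derive a deterministic cache key from the dependency versions (plus anything ABI-relevant, like the compiler version) and use it to tag a prebuilt-externals image or artifact archive. A minimal sketch, with an illustrative manifest (the version list and the `externals:` tag convention are assumptions, not from the thread):

```python
import hashlib
import json

# Hypothetical manifest of external dependencies plus toolchain versions
# that affect the C++ ABI; adjust to your actual list.
EXTERNALS = {
    "qt": "6.9.0",
    "itk": "5.4.3",
    "gdcm": "3.0.24",
    "boost": "1.88",
    "openssl": "3.5.0",
    "msvc": "14.40",  # compiler version matters for binary compatibility
}

def externals_cache_key(manifest: dict) -> str:
    """Derive a stable short tag from the sorted manifest, usable as
    e.g. 'externals:<key>' in a private registry or artifact store."""
    canonical = json.dumps(manifest, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

if __name__ == "__main__":
    print(f"externals:{externals_cache_key(EXTERNALS)}")
```

As long as the manifest is unchanged, every build for the same release resolves to the same key and can reuse the stored binaries; bumping any version automatically produces a fresh key and a fresh externals build.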
We are getting progressively more worried that our Frankenstein container will grow so large that it cannot be used with Docker image registries.
On a previous project we fell back to a hardware CI/CD server with all externals pre-built and available. However, this time we want to pull through.
If someone here has been on a similar quest, I am very curious to know whether you pulled it off, and what you think is the best starting point to look for information.
Yes…this was a major problem I ran into when I initially looked into Windows containers. Just getting the toolchain (VS), CUDA tools, and Qt into the container got us to 10GB. Then I had to restart because the max size is baked in at image creation time.
Note that VS installation requires (maybe this has been lifted by now?) special Microsoft-provided base images to work for .NET-shaped Reasons™. There were also restrictions on matching the image builder OS version with any runners of the image. I believe that was being worked on at the time but I haven’t paid attention since I last investigated (2019 or so).
An alternative solution is to download your dependencies during the CI job. That keeps such things specified in the repository itself. Basically, instead of "set up this tower of software and then you can run the build/tests" you have "run these scripts to make things available and then run the build/tests". Our Windows CI machines are limited in what they provide: Visual Studio (and toolchains), MS-MPI (though I'd love for this to be downloaded as needed), and a CUDA setup (where applicable).
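The "run these scripts to make things available" step could look something like the sketch below: fetch a prebuilt archive from an internal mirror, verify its checksum, and unpack it, skipping work if a previous job already provisioned it. The mirror URL, archive naming scheme, and function names are all hypothetical:

```python
import hashlib
import urllib.request
import zipfile
from pathlib import Path

# Hypothetical mirror layout -- substitute your internal artifact server.
MIRROR = "https://artifacts.example.com/externals"

def dep_archive_url(name: str, version: str) -> str:
    """Where a prebuilt dependency archive would live on the mirror."""
    return f"{MIRROR}/{name}/{version}/{name}-{version}-win64.zip"

def fetch_dep(name: str, version: str, sha256: str, dest: Path) -> Path:
    """Download, verify, and unpack one dependency; no-op if present."""
    target = dest / f"{name}-{version}"
    if target.exists():
        return target  # already provisioned by an earlier job
    archive, _ = urllib.request.urlretrieve(dep_archive_url(name, version))
    digest = hashlib.sha256(Path(archive).read_bytes()).hexdigest()
    if digest != sha256:
        raise RuntimeError(f"checksum mismatch for {name} {version}")
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)
    return target
```

Pinning a checksum per version keeps the job reproducible even if the mirror is repopulated, and the existence check makes re-runs on a warm agent nearly free.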
I’d suggest trying Conan for managing and building dependencies. In my project I managed to build all the dependencies from the ground up: gammaray/conan2_recipes at build_with_conan · PauloCarvalhoRJ/gammaray · GitHub. Fully tested on a Red Hat 8 machine, but Conan is portable (Conan recipes are Python scripts) and should work on a Windows box too.
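For a feel of what such a recipe looks like, here is a minimal Conan 2 recipe sketch for a CMake-based dependency. The package name, version, and options are illustrative (not taken from the gammaray repository), and a real recipe would also declare where to obtain the sources (e.g. a `source()` method or `exports_sources`):

```python
# Minimal Conan 2 recipe sketch for a CMake-based library.
from conan import ConanFile
from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout

class GdcmRecipe(ConanFile):
    name = "gdcm"          # illustrative: any CMake-based external
    version = "3.0.24"
    settings = "os", "compiler", "build_type", "arch"
    options = {"shared": [True, False]}
    default_options = {"shared": False}

    def layout(self):
        cmake_layout(self)

    def generate(self):
        # Emits a CMake toolchain file matching the Conan profile
        # (compiler, arch, build type), so MSVC and GCC both work.
        tc = CMakeToolchain(self)
        tc.generate()

    def build(self):
        cmake = CMake(self)
        cmake.configure()
        cmake.build()

    def package(self):
        cmake = CMake(self)
        cmake.install()
```

Running `conan create .` against a Windows/MSVC profile builds the package into the local Conan cache; `conan upload` can then push the built binaries to a private remote, which also addresses the earlier question of storing built externals between consecutive builds instead of baking them into the container image.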
The repository I shared is a self-contained build chain. Alternatively, you may want to use Conan Center’s remote repository to download source code and/or manage pre-built binaries.
Conan does for C/C++ what Maven does for Java. In addition to automating builds, you can build and run tests and installers for each target platform. Yes, it can be a little intimidating at first glance, but it really pays off in the long run, especially if you need to manage CI/CD with a complex dependency tree across multiple platforms.