
How to Build an iOS CI/CD Environment in the Most Efficient Way?

How it all started

Building the ultimate CI/CD solution for iOS development is not an easy task, especially when it comes to choosing the proper hardware setup. Buying a few Mac Minis and spreading the builds across them became really popular in the industry. It wasn’t that obvious for us, since at the time we were already struggling with two top-spec Mac Minis. In this article, I will show why we didn’t follow the common pattern and how it turned out. Trust me, I was quite surprised!

CI/CD requirements and the story behind

Let’s start by discussing the two-Mac-Mini setup and confronting it with the general continuous integration requirements. As we work on multiple projects with different clients, we need to manage multiple Xcode versions at the same time. This is necessary because some projects have to be maintained on the Xcode version chosen when the project started, while others can easily be upgraded to a higher version. Upgrading Xcode sometimes involves upgrading the OS version, which can be harmful to older Xcode versions, break additional tools, and so on… you know the drill.

So it’s much better to have each Xcode installed on a separate virtual machine (VM) instance (we use Parallels), automatically provisioned with the necessary tools, gems, and other Homebrew dependencies using Packer. The VM creation process is fully automatic; the only manual step is downloading the desired Xcode .xip file and the operating system .dmg file. As a result of a Packer build, we get a configured VM that we put on a build machine and match with the proper projects. This is quite easy, especially considering that we use a self-hosted instance of GitLab to host our code and spin up builds.
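For a rough idea, the kind of provisioning script Packer runs inside the VM boils down to a few steps like these; the Xcode version, paths, and package list are illustrative assumptions, not our exact setup:

    #!/bin/bash
    # Illustrative VM provisioning steps (executed by Packer's shell provisioner).
    set -euo pipefail
    cd /tmp

    # Unpack the Xcode xip copied into the VM and make it the active toolchain
    xip --expand Xcode_8.3.xip
    sudo mv Xcode.app /Applications/Xcode_8.3.app
    sudo xcode-select -s /Applications/Xcode_8.3.app
    sudo xcodebuild -license accept

    # Base tooling used by the builds: Homebrew packages and Ruby gems
    brew install carthage pbzip2
    gem install bundler fastlane cocoapods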

To put it shortly, each project defines a certain tag (“XCODE-TAG”) in a special .gitlab-ci.yml file located in the project’s root directory. This file contains the pipelines’ definition, and as soon as a pipeline starts, GitLab reads this configuration and matches each job to particular runners using the tag value. Runners are associated with VMs containing a certain Xcode version, and GitLab uses them to trigger builds on the proper VM instances (a minimal pipeline example follows at the end of this section). Thanks to this, we can achieve:

  • build environment separation – builds run inside their own VMs, so nothing gets messed up on the host machine,
  • concurrent builds – builds can run simultaneously. When projects with different Xcode versions need to be built at the same time, two different VMs start. To be precise, builds run on VM snapshots, so even builds requiring the same Xcode version can run in two separate environments at the same time,
  • repeatable build environment – different builds may have different requirements and, with React Native almost everywhere now, having separate VM snapshots solves this issue and keeps the environments clean.

We had a nicely working environment with an increasing number of builds and an increasing demand for CPU power. As Carthage and CocoaPods became a standard, we ended up with terribly slow 20-30 minute builds – fulfilling the first three requirements while missing the last one: a fast feedback loop for developers.
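Going back to the pipeline definition: a minimal .gitlab-ci.yml along these lines could route a job to the right VM (the tag name and job details are assumptions, not one of our actual pipelines):

    # Minimal pipeline sketch: the `tags` entry routes the job to a runner
    # bound to a VM with the matching Xcode version.
    stages:
      - build

    build_app:
      stage: build
      tags:
        - xcode-8.3      # hypothetical tag; must match a registered runner
      script:
        - bundle install
        - bundle exec fastlane build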

What are the options?

We had a lively discussion about the possible options for speeding up builds. Buying a few more Mac Minis could be a potential solution if we ran a single VM snapshot on each and achieved concurrency by having multiple machines running and available at the same time. On the other hand, it was questionable in terms of effective usage of that environment: what do you do when the machines are idle, and how do you tune machines with only 16 GB of RAM and a 4-core (8-thread) i7 processor? At the other extreme, the new Mac Pro, packed with up to 12 cores and 64 GB of fast 1866 MHz memory, could give us lots of room for further customization, but it was way too expensive. Moreover, if something bad had happened to that single piece of hardware, we would have been left with nothing. Finally, we decided to go with two customized Mac Pros (Mid 2010), each with two 3.46 GHz 6-core Intel Xeon processors and 128 GB of 1333 MHz DDR3 ECC memory. All in all, it turned out to be the right decision, also taking the financial aspect into account.

Experimenting

So here the journey began. We started off with everything available: 12 cores and Xcode installed directly on the host machine (Mac Pro). The result: a 111(95)s build time (total, with compilation time in parentheses) for the development version of one of our customers’ projects – Braster. That was pretty fast, but it could not serve as the final solution and was done only to test the machine. Working towards the fastest production setup, I first wanted to create a RAM disk and put DerivedData (Xcode’s build cache directory) on it – no improvement. I even tried putting the sources and the whole build environment on the RAM disk, but again without reducing the build time. I was somewhat disappointed, but what can you expect from 1333 MHz memory in 2016. I then took a step further and recreated the environment we had on a Mac Mini to compare build times. It is worth mentioning that Parallels allows a VM to use at most 16 cores, yet despite this limitation I got a fairly nice result of 283(134)s! The resource utilization was impressive too, with a maximum of 1590% CPU usage. For comparison, the same build script on a Mac Mini with an ideal single build running took 818(671)s.
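For reference, the RAM disk experiment looked roughly like this; the disk size, workspace, and paths are assumptions:

    # Create a 16 GB RAM disk (the size argument is in 512-byte sectors)
    diskutil erasevolume HFS+ 'RAMDisk' $(hdiutil attach -nomount ram://33554432)

    # Point Xcode's build cache at it for a test build (workspace/scheme are hypothetical)
    xcodebuild -derivedDataPath /Volumes/RAMDisk/DerivedData \
      -workspace App.xcworkspace -scheme App build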

Perfect situations rarely happen; most of the time there is high demand for multiple builds running simultaneously. Keeping that in mind, I began testing what changes when two builds run at the same time in two different snapshots, each using 16 cores. I was totally amazed that the build time was only slightly worse – 301(145)s – while with three builds running it was just 330(165)s, still acceptable. The first major difference compared to the Mac Minis is that VMs behave much more stably when the host machine has enough memory to manage multiple VMs running simultaneously. Giving 16 GB to each of them wasn’t a problem here, whereas it was quite challenging to run two or three VMs on the Mac Minis at the same time.

Secondly, it was possible to have three VMs, each requesting 16 cores, running together, because some of the build operations don’t use multiple cores anyway. This matters because build time covers not only compilation: it also includes creating the snapshot from the VM image, starting the snapshot, cloning the code repositories, and resolving dependencies (an additional 100-120 s for each build).
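For completeness, the snapshot-per-job behaviour is what GitLab Runner’s Parallels executor provides; a sketch of the relevant config.toml fragment might look like this (names are placeholders and the exact options depend on the runner version):

    # One runner entry per Xcode VM; each job runs on a fresh snapshot
    # of the VM referenced by base_name.
    [[runners]]
      name = "xcode-8.3-runner"
      executor = "parallels"
      [runners.parallels]
        base_name = "xcode-8.3-vm"   # the Packer-built VM for this Xcode version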

Build times, total (compilation) in seconds:

  • Mac Mini, single build in a VM: 818 (671)
  • Mac Pro, directly on the host (12 cores): 111 (95)
  • Mac Pro, single VM (16 cores): 283 (134)
  • Mac Pro, two concurrent VM builds: 301 (145)
  • Mac Pro, three concurrent VM builds: 330 (165)

More fun with tuning

Encouraged by these results, I started eliminating the biggest bottleneck we had – resolving gem dependencies. You may ask why we need them at all. The answer is quite simple: we use a custom fastlane plugin for continuous delivery which integrates easily with our services (for instance, deploying apps to Shuttle, our internally-built app store) and saves a lot of time. It has multiple gem dependencies that are resolved using bundler. However, the whole process could take up to 120 s, as some of the gems have to be compiled with native extensions or against system libraries. An additional two minutes for each build is not something we want.

I began experimenting with sharing host disk space as a cache directory for builds, using Parallels’ shared volumes together with bundler. Bundler can check whether the currently installed gem dependencies are already satisfied (bundle check), so it was obvious that, served properly, the mix of these two mechanisms would work. The easiest option was to use the shared volume as the directory where we install and check dependencies, but that only works as long as you don’t think about simultaneous builds writing to your cache at the same time… and unfortunately, you definitely should think about it. Taking that into account, we needed the cache stored in the host OS space and injected into the VM environment. Rsyncing the files took a lot of time, as there were thousands of them. Packing them into a single archive and copying it between environments worked, but took more than 30 s. That’s a lot of time, right? With 16 cores available to each machine, we needed parallel compression, and we got it with the pbzip2 tool. It turned out that there is a brew package for it, so after brew install pbzip2, with the default settings using all available cores, I ended up with wonderful results (a sketch of the resulting pack/restore step follows the output below):

    Output Size: 107274240 bytes
    Completed: 100%
    -------------------------------------------
    Wall Clock: 3.978891 seconds
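The pack/restore step boils down to piping tar through pbzip2, guarded by bundle check; this is a minimal sketch with assumed paths (/cache stands for the Parallels shared volume), not our exact runner script:

    #!/bin/bash
    # Restore the gem cache from the host-shared volume, if present
    CACHE=/cache/gems.tar.bz2
    [ -f "$CACHE" ] && pbzip2 -dc "$CACHE" | tar -xf -

    # bundle check exits non-zero when the Gemfile isn't satisfied yet;
    # only then do we pay for a full install and repack the cache
    if ! bundle check --path vendor/bundle; then
      bundle install --path vendor/bundle
      # write then rename, so concurrent builds never see a half-written archive
      tar -cf - vendor/bundle | pbzip2 -c > "$CACHE.$$" && mv "$CACHE.$$" "$CACHE"
    fi

The same pattern works for any directory full of small files that is expensive to recreate.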

It was an extremely efficient solution, and I started using it globally on the iOS runners as a caching mechanism. Then I used it for caching npm and yarn dependencies, especially in React Native projects. A yarn installation that normally took 618.23 s was done in 1.51 s, with an additional 14 s for decompressing the cache; instead of blocking the build runner while resolving dependencies, it increased the overall build velocity.
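Applied to node_modules, the same mechanism gives the numbers quoted above; the cache path is again hypothetical:

    # Restore the node_modules cache (~14 s in our case), then let yarn
    # confirm it satisfies package.json (~1.5 s with a warm cache)
    pbzip2 -dc /cache/node_modules.tar.bz2 | tar -xf -
    yarn install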

Conclusion

We have been using these two 2010 Mac Pros for 8 months now, and during this time almost 8520 jobs have run on various Xcode versions – from 7.3 up to 8.3 right now. In the meantime, the two old Mac Minis were converted into an OS X VM builder and a testing machine, and now provide additional support. Curious what would have happened if Apple hadn’t given up on the quad-core i7 inside the Mac Mini? Me too, because in my opinion that decision made Mac Minis unusably slow in a CI/CD environment.

To sum it up: having so many cores available gave us an incredible number of options for tuning and customizing build execution. Additionally, with 128 GB of host memory, we don’t have to worry about stability. The customized 2010 Mac Pros turned out to be the perfect trade-off between slow Mac Minis and an expensive new Mac Pro, giving us both redundancy and good build velocity. But let’s not forget that there is always room for further improvements.

We will definitely explore them in the future!


Kamil

Senior DevOps Engineer

Did you enjoy the read?

If you have any questions, don’t hesitate to ask!
