
How to Build an iOS CI/CD Environment in the Most Efficient Way?

How it all started

Building the ultimate CI/CD solution for iOS development is not an easy task, especially when it comes to choosing the right hardware setup. Buying a few Mac Minis and spreading builds across them has become a really popular pattern in the industry. For us the choice wasn’t that obvious, since we were already struggling with two top-specced Mac Minis at the time. In this article, I will show why we didn’t follow the common pattern and how it turned out. Trust me, I was quite surprised!

CI/CD requirements and the story behind them

Let’s start by describing the two-Mac-Mini setup and confronting it with the general continuous integration requirements. As we work on multiple projects for different clients, we need to manage multiple Xcode versions at the same time. This is necessary because some projects have to be maintained on the Xcode version chosen when the project started, while others can easily be upgraded to a newer one. Upgrading Xcode sometimes involves upgrading the OS as well, which can break older Xcode versions and the additional tools that depend on them… you know the drill.

So it’s much better to have each Xcode version installed on a separate virtual machine (VM) instance (we use Parallels), automatically provisioned with the necessary tools, gems, and other Homebrew dependencies using Packer. The VM creation process is fully automated; the only manual step is downloading the desired Xcode xip file and the OS (operating system) dmg file. The result of a Packer build is a configured VM that we put on a build machine and match with the appropriate projects. This is quite straightforward, especially considering that we use a self-hosted GitLab instance to host our code and spin up builds.
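
For illustration, here is a minimal sketch of what such a Packer template could look like, assuming Packer’s parallels-pvm builder; the source image, VM name, credentials, and provisioning commands are hypothetical placeholders, not our actual configuration:

{
  "builders": [{
    "type": "parallels-pvm",
    "source_path": "macos-base.pvm",
    "vm_name": "xcode-8.3",
    "ssh_username": "ci",
    "ssh_password": "ci",
    "parallels_tools_flavor": "mac",
    "shutdown_command": "sudo shutdown -h now"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "xip --expand ~/Xcode_8.3.xip",
      "brew install carthage pbzip2",
      "gem install bundler fastlane"
    ]
  }]
}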

To put it shortly, each project defines a certain tag (“XCODE-TAG”) in a special .gitlab-ci.yml file located in the project root directory. This file contains the pipeline definition; as soon as a pipeline starts, GitLab reads the configuration and uses the tag value to match each job to particular runners (see the sketch after the list below). Runners are associated with VMs containing a certain Xcode version, and GitLab uses them to trigger builds on the proper VM instances. Thanks to this, we can achieve:

  • build environment separation – builds run inside the proper VMs and don’t mess anything up on the host machine,
  • concurrent builds – builds can run simultaneously. When projects with different Xcode versions need to be built at the same time, two different VMs start. To be precise, builds run on VM snapshots, so even two builds with the same Xcode version can run in separate environments at the same time,
  • repeatable build environment – builds may have different requirements (and React Native is almost everywhere now), so having separate VM snapshots solves this issue and keeps every build environment clean.
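
To illustrate the matching, a minimal .gitlab-ci.yml could be sketched as follows; the job name, tag value, and script are illustrative – the point is that the job’s tag must match the tag the runner was registered with:

build:
  stage: build
  tags:
    - xcode-8.3   # GitLab routes this job only to runners registered with this tag
  script:
    - bundle install
    - bundle exec fastlane build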

This gave us a nicely working environment – along with an increasing number of builds and an increasing demand for CPU power. As Carthage and CocoaPods became a standard, we ended up with terribly slow 20-30 minute builds: the setup fulfilled the three requirements above while missing a fourth one – a fast feedback loop for developers.

What are the options?

We had a lively discussion about the possible ways to speed up builds. Buying a few more Mac Minis could have been a solution: run a single VM snapshot on each and get concurrency by having multiple machines running and available at the same time. On the other hand, it’s questionable in terms of effective usage of that hardware: what do you do when the machines are idle, and how much can you tune a machine with only 16GB of RAM and a 4-core (8-thread) i7 processor? Alternatively, the new Mac Pro, packed with up to 12 cores and 64GB of fast 1833MHz memory, would give us lots of room for further customization, but it was way too expensive – and with a single machine, a hardware failure would have left us with nothing. Finally, we decided to go with two customized Mac Pros (Mid 2010), each with two 3.46GHz 6-core Intel Xeon processors and 128GB of 1333MHz DDR3 ECC memory. All in all, it turned out to be the right decision, also from the financial point of view.

Experimenting

So here the journey began. We started off with everything available: 12 cores and Xcode installed directly on the host machine (Mac Pro). The result was a 111(95)s build time – total seconds, with pure compilation time in parentheses – for the development version of one of our customers’ projects, Braster. That was pretty fast, but it could not serve as the final solution and was measured only to test the machine. Striving for the fastest production setup, I first created a RAM disk (the one-liner is shown below) and put DerivedData (Xcode’s special build cache directory) on it – no improvement. I even tried putting the sources and the whole build environment on the RAM disk, with no success in reducing build time. I was somewhat disappointed, but what can you expect from 1333MHz memory in 2016? I then took a step further and recreated the environment we had on a Mac Mini to compare build times. It is important to mention that Parallels allows a VM to use only 16 cores, yet despite this limitation I got a fairly nice result of 283(134)s! Resource utilization was impressive as well, with CPU usage peaking at 1590%. For comparison, the same build script on a Mac Mini in the ideal case of a single running build took 818(671)s.
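
For reference, creating such a RAM disk on macOS is a generic one-liner; the size and the DerivedData relocation below are illustrative, not our exact setup:

# Allocate a RAM-backed device (8388608 sectors x 512 bytes = 4GB) and format it as HFS+
diskutil erasevolume HFS+ "RAMDisk" $(hdiutil attach -nomount ram://8388608)
# Point Xcode's DerivedData at the new volume
defaults write com.apple.dt.Xcode IDECustomDerivedDataLocation /Volumes/RAMDisk/DerivedData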

Perfect situations rarely happen, though – most of the time there is high demand for multiple builds running simultaneously. Keeping that in mind, I began testing what changes when two builds run at the same time in two different snapshots, each using 16 cores. I was amazed that the build time was only slightly worse – 301(145)s – and with three builds running it was just 330(165)s, still acceptable. The first major difference compared to the Mac Minis is that VMs behave in a much more stable manner when the host machine has enough memory to manage multiple VMs running simultaneously. Giving each of them 16GB was not a problem here, while running two or three VMs at the same time on a Mac Mini was quite a challenge.

Secondly, it was possible to have three VMs running together, each requesting 16 cores, because some of the build operations don’t use multiple cores anyway. This is important because build time covers not only compilation: it also includes creating a snapshot from the VM image, starting the snapshot, cloning the code repositories, and resolving dependencies (an additional 100-120s for each build).

[Table: build times for the configurations described above]

More fun with tuning

Encouraged by these results, I started eliminating our biggest bottleneck at the time – resolving gem dependencies. You may ask why we need them at all. The answer is quite simple: we use a custom fastlane plugin for continuous delivery, which integrates easily with our services (for instance, deploying apps to Shuttle, our internally-built app store) and saves a lot of time. It has multiple gem dependencies that are resolved using the bundler tool, and the whole process could take up to 120s, as some gems need to be compiled with native extensions or against system libraries. Two additional minutes for each build was not something we wanted.

I began experimenting with host filesystem space as a cache directory for builds, combining Parallels’ shared volumes with bundler. Bundler can check whether the currently installed gem dependencies are already satisfied, so it was obvious that, wired up properly, the two mechanisms would work together. The easiest option was to use the shared volume directly as the directory where we install and check dependencies – which works fine as long as you don’t think about simultaneous builds writing to the cache at the same time… and unfortunately, you definitely should think about it. Taking that into account, we needed the cache stored in host OS space and injected into the VM environment. Rsyncing the files took a lot of time, as there were thousands of them. Packing them into a single archive and copying it between environments worked, but took more than 30s – that’s still a lot. With 16 cores available on each machine, we needed parallel compression, and we got it with the pbzip2 tool. It turned out that there is a brew package for it, so after brew install pbzip2, with the default settings using all available cores, I ended up with wonderful results:

Output Size: 107274240 bytes
Completed: 100%
-------------------------------------------

Wall Clock: 3.978891 seconds
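
A minimal sketch of the resulting flow inside a build job could look like this – the shared-volume mount point and archive name are hypothetical, and the gems are always unpacked into a VM-local directory, so concurrent builds never share an installation directory:

# Path of the cache archive on the Parallels shared volume (mount point is illustrative)
CACHE_ARCHIVE="/Volumes/cache/gems.tar.bz2"

# Restore the cached gems into the VM-local vendor/bundle, decompressing on all cores
if [ -f "$CACHE_ARCHIVE" ]; then
  pbzip2 -dc "$CACHE_ARCHIVE" | tar -xf -
fi

# Install only if the cached gems don't already satisfy the Gemfile
bundle check || bundle install --path vendor/bundle

# Repack the cache, compressing on all available cores
tar -cf - vendor/bundle | pbzip2 -c > "$CACHE_ARCHIVE"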

It was an extremely efficient solution, and I started using it globally on the iOS runners as the caching mechanism. Then I applied it to caching npm and yarn dependencies, especially in React Native projects. A yarn installation that normally took 618.23s completed in 1.51s, plus an additional 14s for decompressing the cache – and instead of blocking the build runner while resolving dependencies, it increased the overall build velocity.

Conclusion

We have been using these two 2010 Mac Pros for 8 months now, and in that time almost 8520 jobs have been run on various Xcode versions – from 7.3 up to 8.3 at the moment of writing. In the meantime, the two old Mac Minis were converted into an OS X VM builder and a testing machine, and now provide additional support. Curious what would have happened if Apple hadn’t decided to give up the quad-core i7 processor in the Mac Mini? Me too, because in my opinion that decision made the Mac Mini too slow to be usable in a CI/CD environment.

To sum it up – having so many cores available gave us an incredible number of options for tuning and customizing build execution. Additionally, with 128GB of host memory we don’t have to worry about stability. The customized 2010 Mac Pro turned out to be a perfect trade-off between the slow Mac Mini and the expensive new Mac Pro, giving us both redundancy and good build velocity. But let’s not forget that there is always room for further improvements.

We will definitely explore them in the future!



Kamil, Senior DevOps Engineer
