Before moving on to choosing and applying a package manager in our project, let’s define the list of features we want from a dependency manager, from a usage perspective:
Now let’s proceed to the decision-making part and review the available options for setting up the dependency-management process.
The first fork: either create our own solution from scratch or use a ready-made, out-of-the-box one.
A package manager with all the necessary features can, like any other product, be developed from scratch. Such a solution would fulfill every requirement we had, except that it immediately contradicts one of the main goals at the time of launch: “minimal effort for implementation, support, and use.” It would also add serious risks of defects, could at worst hold up the entire development process, and might additionally cause unidentified failures and malfunctions in the final product.
We had neither the time nor the desire to reinvent the wheel, so it was worth considering ready-made solutions. Moreover, there are several options approved, verified, and supported by the community that we could take advantage of.
At the time of the rewrite, we considered the most common tools, Carthage and CocoaPods. There were also other solutions that were essentially alternative build systems supporting the composition of modules stored locally in one repository. Here we ran into a high entry threshold, the effort of migrating existing packages, and the cost of supporting future ones.
It is worth mentioning that 3–4 months after the rewrite had started, in the summer of 2019, Apple introduced beta support for Swift Package Manager (hereinafter SPM) on the iOS platform. The native package manager looked nice and interesting, but a more detailed examination revealed serious problems blocking its use at full capacity.
It gives the illusion of a convenient GUI solution, especially for a beginner. Adding packages through the wizard GUI seemed very convenient and simple, but it soon emerged that the list of dependencies is tightly coupled to the xcodeproj file.
This drawback turned out to be really painful when working with a VCS and performing code reviews. Critical information ends up in a file that is effectively non-reviewable, which makes it extremely easy to miss changes during code review or while resolving merge conflicts.
A much more serious problem was the lack of support for resource files, as well as for precompiled binary libraries. The latter even led to a conflict of business interests: the business wanted a solution from a vendor, but the vendor was not ready to provide the source code for its product.
It is worth noting that Swift 5.3, due in the fall of 2020, should finally resolve the resource and binary-library problems, so we may consider migration at a later time.
Carthage’s GitHub README describes it as “a simple, decentralized dependency manager for Cocoa.”
The essence of this manager is to prepare modules but leave the integration part to the developer. Initially, its main feature was to reduce the number of recompilations of the dependencies.
It can download both source files and already-assembled binary module artifacts. When loading source files, Carthage must assemble them into a dynamic framework anyway, since the expected output is a directory with a set of ready-made binary module artifacts.
Unlike SPM, Carthage does not integrate modules into the project and leaves this task to the developers, who have to manually add modules to the application and set appropriate linking preferences. To avoid continuous manual work, full-fledged project-integration tooling has to be developed first.
There is no out-of-the-box solution for easily creating an internal, in-house package. Demo projects can be made, but setting them up is not trivial and has a certain entry threshold, especially if the demo project also needs to resolve its own dependency graph.
When using Carthage, developers can no longer debug the source code of a dependency after integration, because the module is linked as a ready-made compiled artifact. There is no convenient out-of-the-box way to switch between the binary version and the source code; you have to implement it yourself.
As you can see, with Carthage we have to integrate a compiled framework, and debugging the library inside a client project is unavailable. Not critical, but it still hurts. Also, from our point of view, it is not very convenient to have a compilation process within the scope of the dependency manager’s tasks: there is a risk of inconsistency between the Carthage build and the compilation settings of the overall project. And we keep in mind our chosen Mono-repo strategy, where we strive to store all modules near each other on the file system.
Another consideration is that with compiled artifacts, a Swift version bump has to be atomic. Swift 4 offered only source-code backward compatibility; binary framework compatibility was achieved only in Swift 5.1, in the fall of 2019. And here we come to the next problem: we cannot bump the Swift version of individual modules incrementally. A migration would require a lot of simultaneous effort and one huge merge request, which is not suitable for a large team with continuous changes and new code pushes.
It is also worth mentioning the Rome plugin, a client for a remote binary cache that helps deliver compiled components and speed up build time. Unfortunately, this contradicted the business restriction against using AWS, and support for custom caches appeared only on May 18, 2019.
The bottom line: we get an open-source dependency manager plus a set of small utility tools (graph resolution, data transfer, file copying, and a cache plugin that, while convenient in general, is forbidden by the project’s policies). Setting up isolated sandboxes for developing modules is tricky; it can be done, but it requires developing an automated tool. Now add to this the presence of 30 + 6 legacy pods requiring migration. For lack of alternatives, we could have tried to adopt it and implement extra functionality to close the usability gaps.
We still have one more candidate, CocoaPods, the tool most commonly used by the community: according to various statistics, it accounts for almost 90% of iOS projects. Let’s now consider in more detail how well it suits our needs.
CocoaPods has a centralized public repository of module specifications, but it also supports private/decentralized repositories. As a rule, it operates with at least two components: a repository of versioned specifications and a repository of source code referenced from the specification. It is also possible to link directly to the source by URL/branch/tag, avoiding the intermediate specs repository.
What is important for us is the support for local modules, which is exactly what a Mono-repo needs. Using CocoaPods instead of Carthage, developers can easily debug a module while running the host application.
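To illustrate, a single Podfile can mix all of these source types; the repository names, URLs, and pod names below are hypothetical:

```ruby
# Podfile - illustrative only; names and URLs are placeholders.

# A private specs repository (the classic, centralized approach):
source 'https://git.example.com/ios/private-specs.git'
# The public specs repo for 3rd-party pods:
source 'https://cdn.cocoapods.org/'

target 'App' do
  # Resolved through a specs repository by version:
  pod 'NetworkingKit', '~> 1.2'

  # A direct reference to a source repository, bypassing the specs repo:
  pod 'AnalyticsKit', :git => 'https://git.example.com/ios/analytics.git', :tag => '2.0.1'

  # A local module, as expected with a Mono-repo layout -
  # its source stays editable and debuggable in the workspace:
  pod 'FeatureSearch', :path => '../Modules/FeatureSearch'
end
```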
Unlike Carthage, it can integrate or reintegrate modules into a project and apply the appropriate workspace configuration. At the same time, it remains possible to resolve versions without integration. Integration, in its turn, can be done using either dynamic frameworks or static libraries. Flexibility is always a good sign, since it opens the way for future tuning and optimization.
CocoaPods by default integrates all modules as source code. As a result, we are able to perform an incremental Swift version migration, because there is no need for binary ABI stability: Xcode is able to compile source code of multiple Swift versions within a single project.
In addition, there was no need to migrate old packages, since the project already used CocoaPods. Some of the packages support nothing but CocoaPods, and adding, for example, Carthage support to them would be a really serious challenge. As a result, we got one more benefit from choosing CocoaPods: it saved time at the kickoff of the rewrite activity.
For the majority of the team, it is enough to use one simple command, pod install, and the dependency manager will prepare a ready workspace. It is really easy to use for developers of any level of technical expertise.
Moving on. We had a goal of keeping all modules in a single template and format. CocoaPods gives us this opportunity by letting us specify a link to the desired template when creating a new module, which makes the process of creating one super convenient. Any developer calls a simple pod lib create command and, after answering up to five questions from a wizard, gets a generated module with a demo project.
In the demo project, a target for unit tests is also created, and the module and all necessary imports are added.
The developer can begin developing and writing tests immediately, having spent only 1–2 minutes creating the module.
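The creation flow described above boils down to a single command; the module name and template URL below are placeholders:

```shell
# Generate a new module from a shared company template.
# pod lib create supports a custom template via --template-url;
# the URL here is hypothetical.
pod lib create FeatureProfile --template-url=https://git.example.com/ios/pod-template.git

# The wizard then asks a few questions (platform, language,
# demo application, testing framework, ...) and generates the module
# skeleton with a demo project and a unit-test target already wired up.
```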
As it turned out, CocoaPods almost perfectly covered all our requirements and tasks. Moreover, there were some additional benefits, which included:
Of course, there were some challenges also. Let's now go through our decisions in chronological order.
At first we cloned all 3rd-party packages and re-uploaded them to internal mirror repositories. Mirrored modules have the same names but with a project prefix, so that CocoaPods can distinguish them from the original versions in the public domain.
After completing the 3rd-party packages, it was time to switch to the in-house pods. The development of the rewrite functionality was done entirely in its own repository, so new modules are stored there. This leads to a couple of problems: how best to link the two repositories, the old and the new one, and how to deliver in-house pods to the current host-application repository?
Let me remind you that the standard CocoaPods approach assumes an additional repository with specifications, where the specification of each version can only point to a tag in the source-code repository. Now remember that our rewrite repository contains several modules. Therefore, when making changes we must carefully track which modules have changed, bump their versions, and upload new versions of their podspecs to the specs repository. As you can see, this is inconvenient and error-prone.
There was a temptation to get rid of the intermediate specs repo and its continuous updates, so it was decided to try using an explicit link to a branch of the rewrite repository in the current project’s Podfile. In this case, we do not need an additional specification repository at all.
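Such a direct branch reference might look like this (the repository URL, pod name, and branch are made up for illustration):

```ruby
# Podfile fragment: point straight at a branch of the rewrite repository.
# No intermediate specs repository is needed; CocoaPods will look for the
# podspec inside the cloned repository and track the branch HEAD.
pod 'FeatureCheckout',
    :git => 'https://git.example.com/ios/rewrite.git',
    :branch => 'develop'
```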
Firstly, CocoaPods has to somehow find the podspecs. When CocoaPods uses a separate specs repo, it expects a specific file structure within it, which did not work for our rewrite repository, where specs and source code are stored together. Quite quickly it became clear that moving the specs to the root of the repository was enough to solve the issue. It is not the most elegant solution, a bit of a hack, but it works.
Secondly, pointing to a specific branch means that its HEAD, and the entire branch, will be cloned and pulled on every pod install. Commits with potentially breaking changes can land in the rewrite branch, so the CI/CD pipeline started to fail due to such sudden breaking changes. Of course, it was possible to introduce strict organizational control and force the team to stick to rules for making changes, but this would require training a large team and carefully verifying merge requests, at the very least by those responsible. Even then there are no guarantees: people are not computers, and they tend to make mistakes.
Thirdly, it turned out that CocoaPods in this setup does not do a shallow checkout but clones the complete branch. It checks whether the HEAD of the branch has changed, and if it has, the branch is checked out again. As the repository grew, the time for the pod install operation began to grow as well. It was enough for someone to push even a non-breaking commit, and another developer would have to wait 5–8 minutes for the next pod install.
The next idea was to switch to tags, since a Podfile also offers the ability to specify an explicit tag in the repository. Tags, unlike branches, point to a fixed snapshot rather than a moving history, which also gives us minor protection against breaking changes. The small inconvenience of having to create tags manually was solved blazingly fast: an auto-incrementing tag process was implemented in the rewrite repository.
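A tag-pinned dependency in the Podfile looks roughly like this (the pod name, URL, and tag scheme are illustrative):

```ruby
# Podfile fragment: pin the rewrite repository to an auto-generated tag
# instead of a moving branch HEAD.
pod 'FeatureCheckout',
    :git => 'https://git.example.com/ios/rewrite.git',
    :tag => 'rewrite-1.42.0'
```

Bumping the dependency then means editing this one tag string, which is what the next paragraph refers to as updating the tag in the Podfile.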
Integration then amounts to updating the tag in the Podfile. For some time this gave a significant acceleration compared to branches, but then pod install began to take a long time again. The reason was an even larger increase in the size of the repository: as it turned out, CocoaPods does a separate, completely non-shallow checkout for each tag. So after waiting 15 minutes once to download a tag, subsequent pod installs reuse the cache; but whenever we want to pull a new tag, the pull starts over from the beginning, which is time-consuming.
We decided that maybe we needed to take a step back and try the classic approach with a specs repository, but with automatic podspec updates and pushes. It turned out that this solution is clearly not friendly to a Mono-repo: the tag slice is too large, and in addition, for every dependency statement in the Podfile, CocoaPods checks out a separate tag, even if the tag is the same for all dependencies. In other words, the CocoaPods cache structure is organized by pod name, not by cloned tags or commit SHAs.
Some developers on the team started using a hack: checking out both repositories side by side and, during development, connecting them by a local path reference.
Later, this workaround even evolved: the Podfile was modified so that it became possible to override the remote/local reference to the rewrite repository, based on a git-ignored file with settings stored by the developer.
A bit later, after some consideration, we came up with an idea for making the local-integration hack more robust. Since we want a single monolithic repository in the future, we let one repository use the other as a git submodule. In this case, pulling changes when the HEAD moves happens incrementally by means of git, and is guaranteed to happen once for all modules. Moreover, we no longer need to create tags automatically, and we can test integration without creating temporary tags. This makes the setup simple, convenient, and reliable.
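Assuming the rewrite repository is attached as a submodule at a path named Rewrite (the name, URL, and branch are placeholders), the one-time setup and the day-to-day update flow could look like this:

```shell
# One-time setup: attach the rewrite repository as a submodule.
git submodule add https://git.example.com/ios/rewrite.git Rewrite
git submodule update --init

# Day-to-day: pulling new module versions is an ordinary, incremental
# git fetch inside the submodule - done once for all modules.
git -C Rewrite pull origin develop

# Record the new submodule SHA in the host repository:
git add Rewrite
git commit -m "Bump rewrite submodule"
```

The Podfile can then reference the in-house pods by local path inside the submodule directory, so no tags or specs repository are involved at all.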
Of the remaining risks: in a code review we see only the changed SHA of the submodule, which gives no specific information about which branch we are looking at (for instance, someone may have accidentally switched to an old submodule revision). The solution is a simple Danger script that warns about the relevance of the specified branch.
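A minimal sketch of such a check, assuming the submodule lives at the path Rewrite and is expected to track origin/develop (both names are assumptions, and the real script may differ):

```ruby
# Dangerfile fragment - a sketch, not the production script.
SUBMODULE       = 'Rewrite'          # assumed submodule path
EXPECTED_BRANCH = 'origin/develop'   # assumed branch it should track

if git.modified_files.include?(SUBMODULE)
  # The commit SHA the merge request points the submodule at:
  sha = `git rev-parse HEAD:#{SUBMODULE}`.strip

  # Warn if that commit is not reachable from the expected branch
  # (requires the submodule to be initialized and fetched on the CI agent):
  on_branch = system("git -C #{SUBMODULE} merge-base --is-ancestor #{sha} #{EXPECTED_BRANCH}")
  warn("Submodule #{SUBMODULE} points at #{sha}, which is not on #{EXPECTED_BRANCH}.") unless on_branch
end
```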
The next problem we faced was long build times. CocoaPods by default integrates dependencies as source code, so a clean build on the CI pipeline or on a developer’s machine requires recompiling all dependencies, which wastes time, since not all modules have changed. This was not a first priority, but later the issue reached the top of the backlog.
3rd-party libraries change extremely rarely, so ideally it would be rational to keep them already compiled and just link the final product against them. CocoaPods supports the delivery of binary artifacts; the problem is where to store them. Of course, we could store them directly in the Git repository, but storing large binary files in Git is considered bad practice for a number of reasons, so dedicated binary storage is needed. Fortunately, CocoaPods has an Artifactory plugin that allows the dependency manager to download large files from binary storage.
More information can be found here: https://www.jfrog.com/confluence/display/JFROG/CocoaPods+Repositories
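With the cocoapods-art plugin, the Podfile references an Artifactory-backed specs repo; the repo name, host, and pod below are placeholders:

```ruby
# Podfile fragment using the cocoapods-art plugin.
# The Artifactory-backed repo is registered once per machine with:
#   pod repo-art add our-binary-pods https://artifactory.example.com/api/pods/our-binary-pods
plugin 'cocoapods-art', :sources => ['our-binary-pods']

target 'App' do
  # Resolved against the Artifactory repo; the prebuilt artifact is
  # downloaded instead of being compiled from source.
  pod 'SomeVendorSDK', '3.1.0'
end
```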
Uploading to Artifactory itself is extremely simple: you just need to assemble an archive with the binary artifact and the updated podspec, then call the command-line interface (CLI) to upload it. We decided to go ahead and automate this process: we created a special pipeline that receives a link to the pod’s source and a set of required architectures, and automatically uploads the artifact to Artifactory. Thus, we got the most user-friendly interface plus access control.
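The manual version of the upload that the pipeline automates could be sketched like this (the host, repository path, file names, and credentials are all placeholders):

```shell
# Pack the prebuilt framework together with its podspec.
tar -czf SomeVendorSDK-3.1.0.tar.gz SomeVendorSDK.framework SomeVendorSDK.podspec

# Push the archive to the Artifactory repository over its REST API.
curl -u "$ART_USER:$ART_TOKEN" \
     -T SomeVendorSDK-3.1.0.tar.gz \
     "https://artifactory.example.com/artifactory/our-binary-pods/SomeVendorSDK/3.1.0/"
```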
As for rebuilding the source code of internal modules, that issue was also solved, but that’s a different story deserving its own blog post.