I've been battling Jenkins for the past couple of weeks, thanks to Angular's Ivy compiler. I added a build script to ease the build process for our new product and tried to automate it with a Jenkins pipeline. But for each Angular project we have, the build time goes way, way up.
Our product contains a suite of websites aimed at different users on different platforms, each built with Angular 9. Between npm install, ngcc, and then the actual ng build, a single website can take a few minutes to build. Multiply that several times over, run the build on a machine with 2 CPUs and 4-8 GB of RAM, and the build time for the whole process can be anywhere between 20 and 35 minutes. In my opinion, a CI/CD pipeline should not take that long for a product still in its pre-release "we've only just begun" phase.
We're using Jenkins in the AWS cloud, running on an EC2 instance and using spot instances for build agents via the EC2-Fleet plugin. I took the time to customize an image with the necessary build-environment tools and created an auto scaling group. It works great! I see the multi-branch pipeline trigger builds and allocate agents exactly as I expected. The problem: the build takes FOREVER!
I've tried to get around this issue with a couple of solutions. The challenge here is that our spot instances don't keep persistent data, which means each time we kick off a new agent, it's a fresh, clean environment. So time-consuming operations such as downloading NuGet packages, Docker images, and npm packages are a base cost for every build. I don't have a cache to rely on to speed the next build up.
Recently, I tried NFS options (Amazon EFS, Ubuntu's NFS server, etc.), but those made the git checkout process time out. Lesson learned: don't put your git repository on an NFS file share. You're only asking for trouble. Even GitLab says so.
I also looked into archiving my node_modules folder to speed up the npm and ngcc steps. This could work, but I need a way to tell whether the package.json file changed between checkouts. I figured I could use a CRC or MD5 check.
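To sketch what I mean (file names and cache paths here are hypothetical placeholders, not our actual setup), the checksum idea boils down to something like this:

```shell
# Hypothetical sketch of the MD5-cache idea, runnable in a throwaway dir.
# In the real pipeline, package-lock.json would come from the checkout,
# and the cache-miss branch would run npm install and archive node_modules.
set -e
WORK=$(mktemp -d)
cd "$WORK"
printf '{"name":"demo"}\n' > package-lock.json   # stand-in for the real lockfile

HASH_FILE=cache.md5
CURRENT=$(md5sum package-lock.json | cut -d' ' -f1)

if [ -f "$HASH_FILE" ] && [ "$(cat "$HASH_FILE")" = "$CURRENT" ]; then
  echo "cache hit: restore archived node_modules"
else
  echo "cache miss: npm install, then archive node_modules"
  echo "$CURRENT" > "$HASH_FILE"
fi

# A second run with an unchanged lockfile now hits the cache:
CURRENT2=$(md5sum package-lock.json | cut -d' ' -f1)
[ "$(cat "$HASH_FILE")" = "$CURRENT2" ] && echo "second run: cache hit"
```

Keying the cache on package-lock.json instead of package.json would also catch transitive dependency changes, though either file works for the basic idea.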
Unfortunately, Jenkins doesn't have a built-in declarative step for that. I want to avoid writing scripted pipeline code as much as possible, and doing this sort of check, even just the logic of it, requires dredging up my brief stints with Groovy syntax. And thus, my battle with Jenkins took a turn to the crazy.
I got so frustrated with the experience that I started looking at other build systems. I just set up this Jenkins instance less than a month ago and already I'm trying to find a way out? That must mean I'm doing something wrong, right?
Now, will this speed up the build? Probably not. I'm still more than likely going to end up doing some interesting things in Jenkins in terms of storing build artifacts and potential file caches in AWS, because there are plugins I've already installed and configured to do so. But things like a CRC or MD5 comparison? Or rather, anything that requires a branch in code? I'm going to try to leave that to gulp. The most that Jenkins should be deciding is when to run a build stage (using the when directive), not how to run a step using an if statement.
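For what it's worth, the kind of decision I do want Jenkins making looks roughly like this. This is a hedged sketch, not our actual Jenkinsfile: the stage names and commands are placeholders, and the changeset condition assumes a multi-branch pipeline where a changelog is available (on the first build of a branch it won't be, so the stage may be skipped or run depending on configuration).

```groovy
pipeline {
  agent any
  stages {
    stage('install') {
      // Declarative "when" decides whether the stage runs at all --
      // no if statement, no scripted pipeline.
      when { changeset "**/package.json" }
      steps {
        sh 'npm ci'
      }
    }
    stage('build') {
      steps {
        sh 'npx ng build --prod'
      }
    }
  }
}
```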