Revamping our build scripts

by Tsvi Mostovicz - Wed 18 March 2020
Tags #Scripting
Reading time: 3 minutes, 1 second

A short history

After a year or so at my job, I decided to delve a bit deeper into the build scripts we were using. They were mostly an amalgamation of tcsh and Perl. Horrified at the cruft that had crept into the system, I went and rewrote most of the code in bash. I threw out a whole bunch of options that were unused or did nothing, and simplified the code tremendously. But it was still a homebrew script, essentially a translation of the existing code.

Problems on the horizon

Over time, with the implementation of CI via Jenkins, I started seeing the limitations of the current system. Our build system expected a lot of knowledge from the engineer. Most of that knowledge, if it was documented at all, lived in a README file for each environment. The problem was that changes wouldn't always be documented: "Oh yes, this environment must be run with define FOO=bar."

At first I implemented some kind of system to set up these details for Jenkins. This gave birth to a project.properties file and multiple compilation.properties files.

Over time, the system became a bit more cumbersome. Some of the defines had meaning beyond the code itself: they selected a different file list to compile. So now the build script had to understand those definitions as well. Then we started using pre-compiled libraries to speed up the compilation process, and we needed to know which project and which version was used in order to build the path to these shared libraries.

And still, the user had to know the correct incantation to run compilation and simulation by themselves. The properties files were only used by Jenkins.
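For illustration, a compilation.properties along these lines could capture the knowledge described above. Every key and value here is invented by me; the post doesn't show the real files' contents.

```shell
# Hypothetical compilation.properties, in the key=value style Jenkins consumes.
# Each entry encodes something the engineer previously had to just "know".
DEFINES="FOO=bar"                   # must-have define for this environment
FILELIST="my_env_alt.f"             # a define can switch the compiled file list
PRECOMP_LIB="shared_libs/proj_a/v2" # project/version path to pre-compiled libraries

# A build wrapper could then assemble the incantation itself:
echo "compiling with -define ${DEFINES} -f ${FILELIST} (libs: ${PRECOMP_LIB})"
```

Since key=value lines are also valid shell, a bash wrapper can simply source such a file instead of parsing it.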

The vision

So I started envisioning what I would want our build system to look like.

In my imagination the user would type:

compile my_env

And this would pick up all the necessary default values, while still allowing some of them to be overridden where possible. The same would go for simulation runs:

run test_name

The advantages of this approach:

  • Engineer productivity - no need to search for default flags
  • DRY - less overhead for Jenkins, and the code is the same everywhere
  • Which, in turn, brings greater maintainability

Searching for the holy grail

At first I thought of refactoring our current code base. The obvious disadvantage: we would still be maintaining it ourselves, and refactoring is expensive as well, both in time and labor.

Over time you feel like you're reinventing the wheel. Deriving define A from define B in the build script? Someone else has already solved that, no?

Maintaining bash is a pain, but maintaining any other language (I'm more of a Python guy) has its issues as well.

As an added bonus, I want something that can parallelize the compilations on its own across all the environments. I want to kill as much of the Jenkins Groovy code as possible and have most of it work directly from the command line.

Then it struck me: build systems are a thing.

So I took a look at a few:

  • make - Complex and archaic language
  • SCons - Great! Python, but looks overly complex and not very popular
  • Maven, Ant, Gradle - Yuck! Java. Enough said.
  • CMake - Popular, check. Support for tests, interesting. Let's try it.

So here I go, documenting my trials in using CMake to run simulations with Cadence's simulation toolchain.
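As a first sketch of what such a trial might look like, CMake can drive an arbitrary tool through custom targets. Everything below is my own assumption, not the project's actual setup: the environment name, the file list, and the xrun flags are placeholders.

```cmake
cmake_minimum_required(VERSION 3.13)
project(my_env_sim NONE)  # NONE: no C/C++ compiler checks needed

# Hypothetical: defines and file list that would previously have lived
# in compilation.properties.
set(ENV_DEFINES -define FOO=bar)
set(ENV_FILELIST -f ${CMAKE_CURRENT_SOURCE_DIR}/my_env.f)

# Custom target wrapping the simulator invocation, so the engineer's
# "incantation" becomes a single named build target.
add_custom_target(compile_my_env
    COMMAND xrun -elaborate ${ENV_DEFINES} ${ENV_FILELIST}
    COMMENT "Compiling and elaborating my_env")
```

With something like this in place, the envisioned workflow would reduce to `cmake -S . -B build` once, then `cmake --build build --target compile_my_env`.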

If all goes well, I might even use it for Quartus synthesis.
