
This is a lesson that I've learned after going all-out on actions once.

Now my makefiles, in addition to the usual "make" and "make test", also support "make prerequisites" to prepare a build environment by installing everything necessary, and "make ci" to run everything that CI should check. The actual implementation is a set of scripts placed under "scripts/ci".
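
Roughly, it boils down to this (the script names under "scripts/ci" are placeholders, not my exact layout):

  .PHONY: prerequisites ci

  # Prepare the build environment; works both in CI and on a developer machine.
  # (Recipe lines in a real makefile have to be indented with tabs.)
  prerequisites:
          ./scripts/ci/prerequisites.sh

  # Run everything that CI should check.
  ci:
          ./scripts/ci/checks.sh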

The scripts do provide some goodies when they are run by GitHub Actions – like folding the build logs or installing dependencies differently – but these targets also work on the developer machine.
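
The log folding part, for instance, is just a check for the GITHUB_ACTIONS environment variable plus the workflow commands that Actions understands, roughly like this (a sketch, not my literal scripts):

  #!/bin/sh
  # GitHub Actions sets GITHUB_ACTIONS=true and folds everything between
  # the ::group:: and ::endgroup:: workflow commands.
  begin_group() {
      if [ "${GITHUB_ACTIONS:-}" = "true" ]; then
          echo "::group::$1"
      else
          echo "=== $1 ==="
      fi
  }

  end_group() {
      if [ "${GITHUB_ACTIONS:-}" = "true" ]; then
          echo "::endgroup::"
      fi
  }

  begin_group "Install dependencies"
  # ... the actual installation steps go here ...
  end_group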




What about caching to reduce CI time? GH setup scripts cache dependencies in a way that would seem hard to replicate in a makefile.


If it’s manageable – just don’t. Build from scratch. Make sure your build works from scratch and completes in an acceptable timeframe. If it’s painful, treat the root cause and not the symptoms.

If it’s unbearable due to circumstances out of your control, there’s nothing wrong with adding some actions/cache steps to .github/workflows – these wrap around the build: fetch the previous cache before, update the cache afterwards if needed.
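
Something along these lines, wrapped around the existing build steps (the cache path and key here are made up for illustration):

  - uses: actions/cache@v4
    with:
      path: ~/.cache/my-deps                     # wherever your dependencies land
      key: deps-${{ runner.os }}-${{ hashFiles('**/lockfile') }}
      restore-keys: |
        deps-${{ runner.os }}-

  # the cache is restored before these steps and saved again at the end
  # of the job if the key changed
  - run: make prerequisites
  - run: make ci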

The build is still reproducible outside of GitHub Actions, but a pinch of magic salt sometimes makes it go faster, without becoming an essential part of the build pipeline married to GitHub.

If you need to install a whole host of mostly static dependencies, GitHub Actions supports running steps in an arbitrary Docker container. Prepare an image beforehand (it can be cached too), and now you have a predictable environment. (The only downside is that it doesn’t work on macOS and Windows.)
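
Wiring that up is a few lines on the job (the image name is illustrative):

  jobs:
    build:
      runs-on: ubuntu-latest                   # container jobs are Linux-only
      container:
        image: ghcr.io/your-org/build-env:latest
      steps:
        - uses: actions/checkout@v4
        - run: make ci                         # or whatever your entry point is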


> If you need to install a whole host of mostly static dependencies, GitHub Actions supports running steps in an arbitrary Docker container. Prepare an image beforehand (it can be cached too), and now you have a predictable environment. (The only downside is that it doesn’t work on macOS and Windows.)

Actually, I use a similar workflow for some of the other projects that I work with, keeping everything CI-agnostic. I incrementally add various types of dependencies to the container images where the application will be built.

For example:

  1. common base image (e.g. Debian, Ubuntu, or something like Alpine, or maybe RPM based ones)
  2. previous + common tools (optional, if you want to include curl or other tools in the container for debugging or testing stuff quickly)
  3. previous + project runtime (depending on tech stack, for example OpenJDK for Java projects)
  4. previous + development tools (depending on the tech stack, typically for pulling in dependencies, like Maven, or npm, or whatever)
  5. previous + project dependencies (if the project is large and the dependencies change rarely, you can install them once here and the changing 5% or so later)
  6. previous + project build (including things like running tests, typically multi-stage with the build and tests, and built app handled separately)
Compared to the more "common" way of doing things, step #5 probably jumps out the most here: I do a pass of installing all of the dependencies, say, every morning, or hourly in the background, so that later, when the project is built, the CI can just go: "Hmm, it seems like 95% of the things I need here are already present, I'll just pull the remaining packages (if any)." Clean installs only need to be done when packages are removed, which is also reasonably easy to do.
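
To make step #5 concrete, for a Maven project that dependency layer can be as simple as this (image name and paths are illustrative):

  # previous image: base + tools + OpenJDK + Maven
  FROM my-registry/ubuntu-openjdk-maven:latest
  WORKDIR /app
  # copy only the build descriptor, so this layer gets reused until it changes
  COPY pom.xml .
  # pre-fetch the dependencies; a later build only pulls whatever is new
  RUN mvn -B dependency:go-offline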

Though the benefits of this aren't quite as staggering if you use a self-hosted package repository like Sonatype Nexus, which can cache any dependencies that you've used previously and make everything faster on the network I/O side. That stops being true when actually installing the packages takes up the majority of the time (e.g. compiling native code), in which case the above is still very useful.

So, an example of how the stages might look is as follows:

  Builder: Ubuntu + tools (optional) + OpenJDK + Maven + project dependencies + project build (and run tests)
  Runner:  Ubuntu + tools (optional) + OpenJDK + built project from last image (using COPY with --from, typically .jar file or app directory)
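
In Dockerfile terms those two stages roughly translate to something like this (image names and paths are made up):

  # Builder: JDK + Maven + (mostly pre-pulled) dependencies, also runs the tests
  FROM my-registry/ubuntu-openjdk-maven-deps:latest AS builder
  WORKDIR /app
  COPY . .
  RUN mvn -B package

  # Runner: just the JDK and the built artifact
  FROM my-registry/ubuntu-openjdk:latest
  WORKDIR /app
  COPY --from=builder /app/target/app.jar ./app.jar
  CMD ["java", "-jar", "app.jar"]
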
Of course, things are less comfortable when you don't have all of your app's dependencies packaged statically but need them "on the system" instead, like Python packages or Ruby Gems; then your builder and runner will simply look more alike.

For my own personal stuff I also use a slightly simplified version of this, which I wrote about on my blog here, drawbacks included: https://blog.kronis.dev/articles/using-ubuntu-as-the-base-fo...


You can split your makefile into two CI steps, one cached and the other one depending on it.


GH setup scripts cache dependencies in a way that is hard to replicate --- full stop.





