I don't understand the love for Buildkite around here at all. And I find the author's arguments inconsistent. It definitely feels like an ad for Buildkite.
I have to admit, I have limited experience with GitHub Actions though. My benchmark is GitLab mainly.
> With Buildkite, the agent is a single binary that runs on your machines.
Yes, and so it is for most other established CI systems, though they differ in the orchestrator tooling used to spawn agents on demand on cloud providers or Kubernetes. Isn't that the default? Am I spoiled?
> Buildkite has YAML too, but the difference is that Buildkite’s YAML is just describing a pipeline. Steps, commands, plugins. It’s a data structure, not a programming language cosplaying as a config format. When you need actual logic? You write a script. In a real language. That you can run locally. Like a human being with dignity and a will to live.
Again, isn't that the default with modern CI tools? The YAML definition is a declarative data structure that lets me represent which steps to execute under which conditions. That's what I want from my CI tooling, right? That's why declarative pipelines are what everyone's doing right now - and I haven't really heard a lot of people wanting to implement the orchestration of their entire pipeline imperatively and run it on a single machine instead.
But that's where you'll run into limitations pretty soon with Buildkite.
You have `if` conditionals, but they're quite limited. As of a few months ago, you finally have `if_changed`, which lets you run steps only if the commit / PR / tag touches files matching certain globs, but it's again quite rudimentary. Also, you can't combine it with `if` conditionals, so you can't implement a full rebuild independent of file changes - which should be a supported use case, e.g. for nightly builds or on main branches.
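For illustration, roughly what I mean in Buildkite YAML (labels and globs are made up, and the exact `if_changed` syntax may differ - check the docs):

```yaml
steps:
  - label: "docs"
    command: "make docs"
    # runs only when matching files changed; you can't also attach an
    # `if` expression here to force a run on nightlies or main
    if_changed: "docs/**"
  - label: "deploy"
    command: "make deploy"
    # `if` supports a small expression language over build metadata
    if: build.branch == "main"
```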
The recommended solution to all that:
> Dynamic Pipelines
> In Buildkite, pipeline steps are just data. You can generate them.
To me, that's the cursed thing about Buildkite. You start your pipeline declaratively, but as soon as you move beyond the most trivial pipelines, you have to upload your next steps imperatively whenever a certain condition is met. Suddenly you end up with a Frankensteinian mess that initially looks like a declarative pipeline definition, but when you look deeper you find 20+ bash scripts that conditionally upload more pipeline fragments from heredocs or other YAML files, and even run templating logic on top of them. You want a mental model of what's happening in your pipeline upfront? You want to model dependencies between steps that are uploaded under different conditions, scattered across bash scripts? Good luck with that.
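A minimal sketch of that pattern, so you can see where it leads (step names, globs and the branch logic are all made up for illustration):

```shell
#!/usr/bin/env bash
# Sketch of a "dynamic pipeline" generator script. On a real agent the
# output would be piped into `buildkite-agent pipeline upload`; here it
# just prints the generated YAML.
set -euo pipefail

generate_pipeline() {
  local branch="$1"

  # The unconditional part of the pipeline.
  cat <<'YAML'
steps:
  - label: "test"
    command: "make test"
YAML

  # Imperative branching: only main gets a deploy step. Multiply this by
  # 20 scripts and a few heredoc templates and you get the mess I mean.
  if [[ "$branch" == "main" ]]; then
    cat <<'YAML'
  - label: "deploy"
    command: "make deploy"
YAML
  fi
}

generate_pipeline "${BUILDKITE_BRANCH:-main}"
```

Each of these scripts looks harmless on its own; the problem is that the overall pipeline shape now only exists at runtime, spread across all of them.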
I really don't see how you can market it as a feature that I get to re-implement CI basics other tools simply ship - and even pay for the privilege.
And I also don't see how that is more testable locally than a pipeline that's completely declared in YAML - especially when your scripts need to interact with the buildkite-agent CLI to download or upload artifacts and meta-data, and to upload more pipelines.
> I’ll be honest: Buildkite’s plugin system is structurally pretty similar to the GitHub Actions Marketplace. You’re still pulling in third-party code from a repo. You’re still trusting someone else’s work. I won’t pretend there’s some magic architectural difference that makes this safe.
Yep, it is, and I don't like either. I prefer GitLab's approach of sharing functionality and logic via references to other YAML files checked into a VCS. It's way easier to find out what's actually happening than tracing down a certain version of third-party code from an opaque marketplace.
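For reference, a minimal sketch of that GitLab pattern (the project path, ref and file names are made up):

```yaml
# .gitlab-ci.yml -- pull shared job templates from another repo at a pinned ref
include:
  - project: "platform/ci-templates"   # hypothetical repo
    ref: "v2.1.0"
    file: "/templates/build.yml"

build:
  # `.default-build` would be a hidden job defined in the included file
  extends: .default-build
```

Everything you're pulling in is plain YAML in a repo you can open and read at the exact ref you pinned.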
But yes, the log experience and the ability to upload annotations to the pipeline are quite nice compared to other tools I've used. That doesn't outweigh the disadvantages and headaches I've had with it so far, though.
---
I think many of the author's critique points about GitHub Actions can be avoided by just applying common sense when implementing your CI pipelines. No one forces you to use every feature you can declare in your pipelines. You can still declare larger groups of work as steps in your pipeline and implement the details imperatively in a language of your choice. But to me, it's nice not having to implement most pipeline orchestration features myself and to just use them - resulting in a clear separation of concerns between orchestration logic and actual CI work logic.