Ryan Parman edited this page Apr 16, 2024 · 10 revisions

Important

STATUS: Generally working, but unreliable. Planning to move storage from Amazon S3 to Cloudflare R2. This means that URLs and endpoints will change. DO NOT RELY ON THIS. See Milestone 1 for remaining tasks.

Constraints

System design

Detecting a new version

For every package, a GitHub Actions workflow runs once per day to look up the currently released version of the software. This leverages download-asset to do the heavy lifting.
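The daily check might look something like the sketch below. The commented download-asset call is an assumed invocation shape (the real flags may differ), and the owner/repo names are placeholders; only the tag normalization is concrete:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Assumed shape of the lookup; the real download-asset flags may differ:
#   version="$(download-asset latest --owner jqlang --repo jq)"

# GitHub release tags often carry a leading "v"; strip it so that cache
# keys compare consistently.
normalize() {
  echo "${1#v}"
}

normalize "v1.7.1"   # → 1.7.1
```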

Do we need to build it?

Once we have that version, we check the packaging cache to see if there is a match for {package}-{version}. If so, we stop. If not, we continue building.
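The gate above can be sketched as a small shell function. How the cache is listed is an assumption (here it is passed in as a plain string), but the {package}-{version} key construction matches the text:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Decide whether to build: stop if {package}-{version} is already in the
# packaging cache, continue otherwise.
needs_build() {
  local package="$1" version="$2" cache_list="$3"
  local key="${package}-${version}"
  if grep -qx "$key" <<<"$cache_list"; then
    echo "skip"
  else
    echo "build"
  fi
}

needs_build "jq" "1.7.1" $'jq-1.7.1\nbat-0.24.0'   # → skip
needs_build "jq" "1.8.0" $'jq-1.7.1\nbat-0.24.0'   # → build
```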

Building the package

The short version is that we use GoReleaser Pro/nFPM, download-asset, and some shell scripts to build or download software for Linux on 64-bit Intel (amd64) and 64-bit ARM (arm64), then package it as .rpm (e.g., Red Hat, Fedora, Amazon Linux), .deb (e.g., Debian, Ubuntu), and .apk (Alpine Linux).
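As a rough illustration of why one definition can fan out to all three formats, a minimal nFPM config might look like the fragment below. All field values are placeholders, and the ${VAR} expansion assumes nFPM's environment-variable substitution:

```yaml
# Hypothetical nfpm.yaml sketch; values are placeholders.
name: example-package
arch: ${GOARCH}        # amd64 or arm64, set per build
platform: linux
version: ${VERSION}
maintainer: Example Maintainer
description: One package definition reused for rpm, deb, and apk.
contents:
  - src: ./dist/example-package
    dst: /usr/bin/example-package
```

Each format is then produced by a separate run, e.g. `nfpm package --config nfpm.yaml --packager rpm` (and likewise for deb and apk), once per architecture.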

Some software comes pre-built. Other software requires compilation. Some is sensitive to the versions of other libraries on the system; some is not. Per-package instructions are in the sidebar to the right.

Uploading to repository and generating metadata

After we've built the packages, we use the AWS CLI to upload them to Amazon S3. Most Linux package managers require metadata about the packages in order to serve them.
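The upload itself is a plain AWS CLI copy. The bucket name and per-format prefixes below are assumptions for illustration; a small helper builds the destination key so each package lands under its format's prefix:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Build the S3 destination for a package, keyed by its file extension
# (rpm, deb, or apk). Bucket name is a placeholder.
dest_for() {
  local file="$1" bucket="$2"
  local ext="${file##*.}"
  echo "s3://${bucket}/${ext}/$(basename "$file")"
}

dest_for "dist/jq-1.7.1.x86_64.rpm" "example-packages"
# → s3://example-packages/rpm/jq-1.7.1.x86_64.rpm

# The actual copy (requires AWS credentials):
#   aws s3 cp "dist/jq-1.7.1.x86_64.rpm" \
#     "$(dest_for dist/jq-1.7.1.x86_64.rpm example-packages)"
```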

Rather than copying every already-built package back to the runner, indexing it, and copying everything back to S3, we use s3fs to mount the S3 bucket on the machine, then run standard tooling to generate the metadata as though it were all on the local file system. That way, we only pay for reads and a small number of written files.
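A dry sketch of the idea follows, with the mount and indexing commands left commented because they require credentials and a real bucket. The bucket name, mount point, and the choice of createrepo_c / apt-ftparchive as indexers are all assumptions:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Generate package-manager metadata in place over an s3fs mount,
# instead of round-tripping every package through the runner.
generate_metadata() {
  local mount="$1"
  # s3fs example-packages "$mount" -o iam_role=auto       # mount the bucket
  # createrepo_c --update "$mount/rpm/"                   # yum/dnf metadata
  # apt-ftparchive packages "$mount/deb/" > "$mount/deb/Packages"  # apt index
  echo "would index $mount"
}

generate_metadata /mnt/packages   # → would index /mnt/packages
```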

Testing that the package works

TBD: a follow-up job that installs the repository and tests package installation.