From f1998c321a4eec6d75b58d84aa8610971bf21979 Mon Sep 17 00:00:00 2001
From: Brian Picciano
Date: Sat, 31 Jul 2021 11:35:39 -0600
Subject: move static files into static sub-dir, refactor nix a bit

---
 ...sing-processes-into-a-static-binary-with-nix.md | 248 ---------------------
 1 file changed, 248 deletions(-)
 delete mode 100644 src/_posts/2021-04-22-composing-processes-into-a-static-binary-with-nix.md

diff --git a/src/_posts/2021-04-22-composing-processes-into-a-static-binary-with-nix.md b/src/_posts/2021-04-22-composing-processes-into-a-static-binary-with-nix.md
deleted file mode 100644
index 885d56b..0000000
--- a/src/_posts/2021-04-22-composing-processes-into-a-static-binary-with-nix.md
+++ /dev/null
@@ -1,248 +0,0 @@
---
title: >-
  Composing Processes Into a Static Binary With Nix
description: >-
  Goodbye, docker-compose!
---

It's pretty common to want to use a project which requires multiple running
processes: for example, a small web API which stores its data in some database,
or a networking utility which comes with a monitoring process that runs
alongside it.

In these cases it's extremely helpful to be able to compose these disparate
processes together into a single process. From the user's perspective it's much
nicer to only have to manage one process (even if it has hidden child
processes). From a dev's perspective the alternatives are unappealing: either
find libraries in the same language which do the disparate tasks and compose
them into the same process via imports, or (if such libraries don't exist,
which is likely) rewrite the functionality of all the processes into a new,
monolithic project which does everything; a huge waste of effort!

## docker-compose

A tool I've used before for process composition is
[docker-compose][docker-compose]. While it works well for composition, it
suffers from the same issues docker in general suffers from: annoying
networking quirks, a questionable security model, and the need to run the
docker daemon. While these issues are generally surmountable for a developer or
sysadmin, they make docker unsuitable for a general-purpose project which will
be shipped to average users.

## nix-bundle

Enter [nix-bundle][nix-bundle]. This tool will take any [nix][nix] derivation
and construct a single static binary out of it, a la [AppImage][appimage].
Combined with a process management tool like [circus][circus], nix-bundle
becomes a very useful tool for composing processes together!

To demonstrate this, we'll be looking at putting together a project I wrote
called [markov][markov], a simple REST API for building [markov
chains][markov-chain], which is written in [go][golang] and backed by
[redis][redis].

## Step 1: Building Individual Components

Step one is to get [markov][markov] and its dependencies into a state where it
can be run with [nix][nix]. Doing this is fairly simple; we merely use the
`buildGoModule` function:

```
pkgs.buildGoModule {
  pname = "markov";
  version = "618b666484566de71f2d59114d011ff4621cf375";
  src = pkgs.fetchFromGitHub {
    owner = "mediocregopher";
    repo = "markov";
    rev = "618b666484566de71f2d59114d011ff4621cf375";
    sha256 = "1sx9dr1q3vr3q8nyx3965x6259iyl85591vx815g1xacygv4i4fg";
  };
  vendorSha256 = "048wygrmv26fsnypsp6vxf89z3j0gs9f1w4i63khx7h134yxhbc6";
}
```

This expression results in a derivation which places the markov binary at
`bin/markov`.
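
To sanity-check this step on its own, the expression can be dropped into a
standalone file and built directly. This is just a minimal sketch: the
`markov.nix` file name and the wrapper function around the expression are
illustrative additions, not something the project itself ships.

```
# markov.nix -- hypothetical standalone wrapper around the expression above.
# nix-build calls a function automatically when all of its arguments have
# defaults, so this file can be built as-is.
{ pkgs ? import <nixpkgs> {} }:

pkgs.buildGoModule {
  pname = "markov";
  version = "618b666484566de71f2d59114d011ff4621cf375";
  src = pkgs.fetchFromGitHub {
    owner = "mediocregopher";
    repo = "markov";
    rev = "618b666484566de71f2d59114d011ff4621cf375";
    sha256 = "1sx9dr1q3vr3q8nyx3965x6259iyl85591vx815g1xacygv4i4fg";
  };
  vendorSha256 = "048wygrmv26fsnypsp6vxf89z3j0gs9f1w4i63khx7h134yxhbc6";
}
```

Running `nix-build markov.nix` should leave a `result` symlink in the current
directory, with the binary at `result/bin/markov`.
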
The other component we need to run markov is [redis][redis], which conveniently
is already packaged in nixpkgs as `pkgs.redis`.

## Step 2: Composing Using Circus

[Circus][circus] can be configured to run multiple processes at the same time.
It will collect the stdout/stderr logs of these processes and combine them into
a single stream, or write them to log files. If any process fails, circus will
automatically restart it. It has a simple configuration and is, overall, a
great tool for a simple project like this.

Circus also comes pre-packaged in nixpkgs, so we don't need to do anything to
actually build it; we only need to configure it. To do this we'll write a bash
script which generates the configuration on-the-fly and then runs circus with
that configuration.

This script is going to act as the "frontend" for our eventual static binary:
the user will pass configuration parameters to this script, and the script will
translate those into the appropriate configuration for all of the sub-processes
(markov, redis, circus). For this demo we won't go nuts with the configuration;
we'll just expose the following:

* `MARKOV_LISTEN_ADDR`: Address the REST API will listen on (defaults to
  `localhost:8000`).

* `MARKOV_TIMEOUT`: Expiration time of each link of the chain (defaults to 720
  hours).

* `MARKOV_DATA_DIR`: Directory where data will be stored (defaults to the
  current working directory).

The bash script will take these params in as environment variables. The nix
expression to generate the bash script, which we'll call our entrypoint script,
will look like this (it assumes that the expression which generates
`bin/markov`, defined above, is bound to the `markov` variable):

```
pkgs.writeScriptBin "markov" ''
  #!${pkgs.stdenv.shell}

  # On every run we create new, temporary, configuration files for redis and
  # circus. To do this we create a new config directory.
  markovCfgDir=$(${pkgs.coreutils}/bin/mktemp -d)
  echo "generating configuration to $markovCfgDir"

  cat >$markovCfgDir/redis.conf <<EOF

  cat >$markovCfgDir/circus.ini <<EOF