-rw-r--r--   blog.org                              237
-rw-r--r--   content/blog/self-host-the-world.md   253
2 files changed, 485 insertions, 5 deletions
diff --git a/blog.org b/blog.org
index daf6199..b06b0d3 100644
--- a/blog.org
+++ b/blog.org
@@ -34,7 +34,7 @@ this blog was built using emacs' excellent org-mode and [[https://github.com/goh
:PROPERTIES:
:EXPORT_HUGO_SECTION: /blog
:END:
-** rust is not about memory safety :rust:correctness:
+** rust is not about memory safety :rust:correctness:
SCHEDULED: <2024-06-01 dom>
:PROPERTIES:
:EXPORT_FILE_NAME: rust-is-not-about-memory-safety
@@ -45,7 +45,6 @@ most of rust discussions nowadays revolve around memory safety, and how it is sa
your program segfaults? skill issue
#+end_quote
but i'd like to make the counter-argument that, no, this has nothing to do with skill issue.
-
*** formal language theory
the first thing one learns when they're studying formal languages (the field that studies grammars, state automata, etc) is that the rules that describe a certain grammar must match *exactly* the ones that you want to include in your language. this means that there's a bidirectional relationship between the grammar you describe (which directly define the automata that parses that language) and the words[fn:: formally they are defined as a sequence of tokens in certain alphabet that the automata closures over. normally we think of "words" as the whole program that we're parsing.] that it parses (which are related to the semantics of the program, how it executes).
@@ -58,12 +57,12 @@ and no, i'm not talking about modeling a C parser as a state machine (which prob
in the same way that you'd hope that a parenthesized arithmetic expression parser would recognize that ~(1 + 2) + 3)~ is an invalid expression, you'd expect that the C compiler would correctly verify that the following series of tokens is not a /well behaved/ program:
#+begin_src c
int foo(int * myptr) {
- *myptr = 5;
+ ,*myptr = 5;
}
foo(NULL);
#+end_src
-i say /well behaved/ because i can't say /invalid/. it is in fact defined by the spec that when you dereference a ~NULL~ pointer the result is [[http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html][/undefined behavior/]]. and this is C's achilles heel: instead of outright banning programs like the one above (which i'd argue is the correct approach), it will happily compile and give you garbage output.
+i say /well behaved/ because i can't say /invalid/. it is in fact defined by the spec that when you dereference a ~NULL~ pointer the result is [[http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html][undefined behavior]]. and this is C's achilles heel: instead of outright banning programs like the one above (which i'd argue is the correct approach), it will happily compile and give you garbage output.
framing it this way really exposes the fragility of C, because undefined behavior has to always be taken into account. and, by the nature of it, there is no way to represent it other than as a black box, such that, if your code ever encounters it, then literally all you can say is that *the whole result of the program* is undefined - that is, it can be anything. you cannot show properties, nor say what will happen once your program enters this state, as the C specification literally does not define it. it may come to a halt, write garbage to the screen or completely delete half of the files of your program, and there's no way to predict what will come out of it, by definition. in the lucky case, it will segfault while executing and you'll be extremely pissed off, but that is not at all guaranteed. this is akin to having a float expression with some deep term being ~NaN~, in that it eventually must evaluate to ~NaN~ and you can't draw any conclusions about the result of the expression (other that it isn't a number).
@@ -74,7 +73,6 @@ it is essential to realize that this is an *assumption*, and in almost most case
and there are a huge number of tools to aid in finding undefined behavior in a code base, it's just that
1. they are not by any means standards of C development (not in spec and not in standard compilers) and
2. they are fallible and will always let some undefined programs slip by.
-
*** runtime exceptions are not the solution
most languages try to handle this by introducing some sort of runtime exception system, which i think is a terrible idea. while this is much, much safer than what C does, it still makes reasoning about the code extremely hard by completely obliterating locality of reason. your indexing operation may still be out of bounds, and while this now has defined outcomes, it is one of the possible outcomes of your program (whether you like it or not), and you must handle it. and, of course, no one handles all of them, for it is humanely impossible to do it in most languages because:
@@ -117,7 +115,236 @@ it is not by chance that Yang et al. could only find measly 9 bugs after 6 CPU y
i really think software developers should strive for that kind of resilience, which i believe can only be achieved through properly valuing *correctness* . i don't think it is reasonable to expect that all software be built using coq and proving every little bit of it (due to business constraints) but i think that rust is a good enough language to start taking things more seriously.
+** self host the world :nix:nixos:
+:PROPERTIES:
+:EXPORT_FILE_NAME: self-host-the-world
+:END:
+
+I've known since forever that google is not to be trusted, and that [[https://killedbygoogle.com/][they whimsically create and destroy]] products like no one else. I've also been a not so proud owner of a google mail account for the past 15 years, one that I rely on for almost all my services.
+
+honestly, those facts didn't bother me that much, because, like everyone else, I'm always under the impression that /it isn't going to happen to me, right/? that was, until june of last year, when [[https://www.theverge.com/2023/6/16/23763340/google-domains-sunset-sell-squarespace][google sunset'd google domains]] - which I had assumed to be the king of domain registrars. that, plus the rise of AI and LLMs, seriously made me question the hegemony of google search in the current state of affairs, and how well positioned I was for a possible google meltdown as a whole.
+
+of course, I don't think that gmail is going anywhere soon, but it nudged me into exploring the world of self hosting. I mean, how hard could it be to host my own email, right? I wanted to find out using a home device and nixos, in order to get declarative and reproducible systems for free.
+
+*** the raspberry pi
+
+I managed to get my hands on a raspberry pi model 4B in december 2023, but at the time I didn't have the time to get anything running on it. it was only around april or may of 2024 that I actually started trying to get it working. at first, I wanted to go with a completely headless nixos setup: writing a proto-configuration akin to my [[https://git.santi.net.br/nixos][current ones]], exporting it as an sd image and flashing it to the pi, while baking in my ssh key. thus, no manual installation process would be necessary - inserting the sd card into the pi and turning it on would be all it took.
+
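+the idea, roughly sketched, looked something like the following (this is not my actual config: the configuration name and ssh key are placeholders):
+#+begin_src nix
+# flake.nix (sketch) - the image is built with
+#   nix build .#nixosConfigurations.pi.config.system.build.sdImage
+{
+  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
+
+  outputs = { nixpkgs, ... }: {
+    nixosConfigurations.pi = nixpkgs.lib.nixosSystem {
+      system = "aarch64-linux";
+      modules = [
+        # pulls in nixpkgs' sd image builder for aarch64
+        "${nixpkgs}/nixos/modules/installer/sd-card/sd-image-aarch64.nix"
+        {
+          services.openssh.enable = true;
+          # bake the ssh key into the image so no manual step is needed
+          users.users.leonardo.openssh.authorizedKeys.keys = [
+            "ssh-ed25519 AAAA... leonardo"
+          ];
+        }
+      ];
+    };
+  };
+}
+#+end_src
+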
+sadly it didn't work, as in it would turn on but never appear as a network device, and given that detecting it through the network was the only way to interact with it [that I knew of], it left me at a dead end. at the time, I believed that it was because I was doing something wrong in the nixos configuration - maybe some network config mishap? maybe I forgot to turn something on? - but given that I couldn't see its video output, I just gave up and decided to buy an HDMI adapter and do it the normal way.
+
+of course, I only bought the hdmi adapter about 3 months later, and only then did I try to install nixos manually. I went with the normal approach: downloading the bare arm image, fetching [[https://git.santi.net.br/nixos][my nixos repo]] locally and rebuilding to install everything. only by having visual confirmation did I understand that the problem wasn't with my original nixos image, but rather with the fact that it was shutting down after the first boot phase!
+
+it made me realize that I had never given proper thought to buying a real power supply, as I thought that connecting it to my computer's usb port through a usb-c cable I had lying around was good enough. I was able to gracefully connect the dots and realize that it was most likely rebooting because it didn't have enough power to boot, so I switched it to a 5 volt 3 amp cellphone charger I had to spare and it finally booted correctly!
+
+*** networking issues
+
+after that, I figured I'd like to not only turn the pi on, but also connect to it from outside my house's network. sadly, my router's public ip changes pretty much every day, so my only real option was to use ddns + a domain name.
+
+I bought =santi.net.br= cheaply and quickly transferred it to cloudflare, as I wanted to get some ddns action going on. as I'm using the ISP provided all-in-one [shitty] router, it's not shocking to say that trying to open the relevant ports (22, 80 and 443) in the default configuration interface wouldn't have any external effect whatsoever.
+
+I found out that there was a way to get the "admin version" of the router's setup page, and through that I was able to get port 22 open to the public internet (even though changing it the normal way wouldn't do anything), but ports 80 and 443 still weren't reachable. I even questioned whether my network was behind CGNAT, as that is very common in brazil, but my ip wasn't one of the common formats and I could access port 22 of my router's public ip just fine. I don't know how the ISP could be blocking it, other than the router's admin page port forwarding setup being a no-op for some specific ports.
+
+I fought with this problem for a week but eventually decided to give up and just set up cloudflare tunnels for ports 80 and 443, routing all the subdomains through them. cloudflare tunnels work over an outbound-only connection: a ~cloudflared~ instance running on the pi routes the requests through. after using some stateful commands to generate credentials, the relevant piece of code to set this up in nixos is very simple:
+#+begin_src nix
+{
+ # ...
+ services.cloudflared = {
+ enable = true;
+ tunnels.iori = {
+ default = "http_status:404";
+ credentialsFile = "/var/lib/cloudflared/iori.json";
+ ingress = {
+ "santi.net.br" = "http://localhost:80";
+ "git.santi.net.br" = "http://localhost:80";
+ };
+ };
+ };
+}
+#+end_src
+
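+for reference, the stateful part boils down to something like the following one-off commands on the pi (a sketch - the exact steps and the credentials path may differ):
+#+begin_src sh
+# authenticate cloudflared against the cloudflare account
+cloudflared tunnel login
+# create the tunnel; this writes a credentials json under ~/.cloudflared/
+cloudflared tunnel create iori
+# move the credentials to the path the nixos module is pointed at
+sudo mkdir -p /var/lib/cloudflared
+sudo cp ~/.cloudflared/*.json /var/lib/cloudflared/iori.json
+# point the dns records at the tunnel
+cloudflared tunnel route dns iori santi.net.br
+cloudflared tunnel route dns iori git.santi.net.br
+#+end_src
+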
+though, I couldn't really use these tunnels to connect through ssh, and honestly I don't know why. I believe cloudflare expects you to use their [[https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/download-warp/][warp]] tool to authenticate ssh connections (besides ssh key auth?), but I thought it was too much trouble to configure yet another tool (on all my computers), so I chose to use the router's public ip + ddns with port forwarding instead. I tested pretty much all the ddns services exposed in nixpkgs, and the only one that worked reliably was =inadyn=:
+#+begin_src nix
+{
+ # ...
+ services.inadyn = {
+ enable = true;
+ user = "leonardo";
+ group = "users";
+ settings.provider."cloudflare.com" = {
+ hostname = "santi.net.br";
+ username = "santi.net.br";
+ proxied = false;
+ include = config.age.secrets.cloudflare.path;
+ };
+ };
+}
+#+end_src
+
+**** remote rebuilds
+
+given that my computers (=x86_64-linux=) and the raspberry pi (=aarch64-linux=) don't share the same architecture, I needed a way to either trigger rebuilds remotely, or to build the closure locally and =nix-copy-closure= it to the pi. local builds for =aarch64-linux= can be enabled by setting =boot.binfmt.emulatedSystems= (which runs them under qemu user emulation), but I don't really like that solution, as it requires enabling that flag on every computer I'd like to deploy from.
+
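+for reference, the emulation-based alternative I decided against is just a one-liner on each build machine (plus a rebuild):
+#+begin_src nix
+{
+  # let this x86_64 machine build aarch64-linux derivations through qemu user emulation
+  boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
+}
+#+end_src
+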
+instead, I went with the most barebones approach possible, [[https://www.haskellforall.com/2023/01/announcing-nixos-rebuild-new-deployment.html][nixos-rebuild]], by using the following command:
+#+begin_src sh
+nixos-rebuild switch --fast --use-remote-sudo \
+ --flake .#<remote> \
+ --build-host <remote-host-url> \
+ --target-host <remote-host-url>
+#+end_src
+
+this works because =--fast= avoids rebuilding =nixos-rebuild= itself, and passing =--build-host= forces the build to happen directly on the pi, sidestepping the architecture mismatch altogether. I still intend to use a proper deployment tool (most inclined to using [[https://github.com/serokell/deploy-rs][deploy-rs]]) but that is for the future.
+
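+for the record, going by deploy-rs' readme, a node definition for the pi would look roughly like the following flake output (a sketch, untested; the node name, host url and ssh user are placeholders, and =deploy-rs= / =self= are the usual flake arguments):
+#+begin_src nix
+{
+  # sketch of a deploy-rs node, following its readme
+  deploy.nodes."<remote>" = {
+    hostname = "<remote-host-url>";
+    sshUser = "leonardo";
+    profiles.system = {
+      user = "root";
+      # activate the same nixosConfiguration the rebuild command targets
+      path = deploy-rs.lib.aarch64-linux.activate.nixos
+        self.nixosConfigurations."<remote>";
+    };
+  };
+}
+#+end_src
+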
+*** self hosting
+after setting up a way to connect to the pi from the public network, I could finally get some self hosting started.
+
+initially, all I did was a simple setup where I added my blog's repository as a flake input and served the result of calling =hugo build= on it through nginx. it looked something like the following:
+#+begin_src nix
+let
+ blog = pkgs.stdenv.mkDerivation {
+ name="hugo-blog";
+ src = inputs.blog;
+ buildInputs = [ pkgs.hugo ];
+ buildPhase = ''
+ mkdir $out
+ hugo --destination $out
+ '';
+ };
+in {
+ # ....
+ networking.firewall.allowedTCPPorts = [ 80 443 ];
+ services.nginx = {
+ enable = true;
+ virtualHosts."santi.net.br" = {
+ addSSL = true;
+ enableACME = true;
+ root = blog;
+ };
+ };
+ security.acme = {
+ acceptTerms = true;
+ certs."santi.net.br".email = "[email protected]";
+ };
+}
+#+end_src
+it sure worked fine for the first couple of weeks, and it auto-generated ssl certificates for me, which is convenient, but it had a glaring flaw: in order to change something, I'd need to push a new commit to the blog repo, run =nix flake update blog= and then =nixos-rebuild switch= (remotely) on the pi, every single time. the whole process was unnecessarily complicated, so I set out to find a simpler one.
+
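+for reference, a single update in the old setup amounted to roughly this (same =<remote>= placeholders as in the rebuild command above):
+#+begin_src sh
+# in the blog repo: publish the change
+git commit -am "new post" && git push
+# in the nixos config repo: bump the blog flake input...
+nix flake update blog
+# ...and rebuild the pi remotely, yet again
+nixos-rebuild switch --fast --use-remote-sudo \
+  --flake .#<remote> \
+  --build-host <remote-host-url> \
+  --target-host <remote-host-url>
+#+end_src
+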
+I vaguely knew that git repos have a notion of hooks, which can run before or after commands and actions you take, but I had never implemented or tinkered with them. still, it occurred to me that if I could set up a bare git "upstream" on my pi, with a hook that runs after every push it receives, I could run =hugo build= on the source files and generate a new blog in a known path, which I could then hardwire =nginx= to serve. this way, it would be very much like the old setup that I had with github pages, except local and not depending on microsoft's ai products.
+
+funnily enough, mere minutes after searching for this idea on the internet, I found a [[https://andreas.rammhold.de/posts/git-receive-blog-hook-deployment/][blog post]] by Andreas that did exactly that. while searching, I also figured that it would be pretty cool to have a [[https://git.zx2c4.com/cgit/][cgit instance]] exposed that could track my changes in this "git repos" directory, so that I could really stop relying on github while keeping the code fully open source.
+
+the main idea is to declaratively [of course] set up a git repository pre-baked with a =post-receive= hook that calls =hugo build= with the directory we'd like =nginx= to serve. Andreas' post shows exactly how to idempotently create the git repo (a no-op on subsequent runs) using a systemd one shot service, and the only problem remaining is, as always, managing the permissions around these directories:
+1. my user, =leonardo=, has its own files and is what I run =nixos-rebuild= from.
+2. the =git= user owns the git repositories directory.
+3. the =cgit= user is responsible for running the cgit server.
+4. the =nginx= user is responsible for running the nginx instance and responding to requests.
+
+thus, I devised the following structure:
+- =/server/blog= is where the hugo-generated files are going to be. the =nginx= user must be able to read it, and =git= must be able to write to it.
+- =/server/git-repos= is where the git repositories will be. the =cgit= user must be able to read all of its directories, and the =git= user must be able to read and write to it.
+
+it seems to suffice to set =git= as the owner of both of these directories, and give all users permission to read and execute files. to implement this, I used =systemd.tmpfiles.rules=. I know, there's =tmp= in the name, but rest assured: you can use them to create permanent files with the correct permissions if you don't give them an age parameter:
+#+begin_src nix
+users.users.git = {
+ description = "git user";
+ isNormalUser = true;
+ home = git-repo-path;
+};
+systemd.tmpfiles.rules = [
+ "d ${blog-public-path} 0755 git users -"
+ "d ${git-repo-path} 0755 git users -"
+];
+#+end_src
+after figuring this stuff out, the rest is pretty much textbook nixos. we set up cgit with ~scanPath = git-repo-path~ and an about-filter that uses =pandoc= to correctly render the org README files of each repository:
+#+begin_src nix
+services.cgit.santi = let
+ org2html = pkgs.writeShellScript "org2md" ''
+ ${pkgs.pandoc}/bin/pandoc \
+ --from org \
+ --to html5 \
+ --sandbox=true \
+ --html-q-tags \
+ --ascii \
+ --standalone \
+ --wrap=auto \
+ --embed-resources \
+ -M document-css=false
+ '';
+in {
+ enable = true;
+ scanPath = git-repo-path;
+ nginx.virtualHost = "git.santi.net.br";
+ settings = {
+ readme = ":README.org";
+ root-title = "index";
+ root-desc = "public repositories for santi.net.br";
+ about-filter = toString org2html;
+ source-filter = "${pkgs.cgit}/lib/cgit/filters/syntax-highlighting.py";
+ enable-git-config = true;
+ enable-html-cache = false;
+ enable-blame = true;
+ enable-log-linecount = true;
+ enable-index-links = true;
+ enable-index-owner = false;
+ enable-commit-graph = true;
+ remove-suffix = true;
+ };
+};
+#+end_src
+while the following snippet sets up a systemd one shot service (run as the =git= user) that initializes the bare git repository and links in its =post-receive= hook:
+#+begin_src nix
+systemd.services."blog-prepare-git-repo" = {
+ wantedBy = [ "multi-user.target" ];
+ path = [
+ pkgs.git
+ ];
+ script = ''
+ set -ex
+ cd ${git-repo-path}
+ chmod +rX ${blog-public-path}
+ test -e blog || git init --bare blog
+ ln -nsf ${post-receive} blog/hooks/post-receive
+ '';
+ serviceConfig = {
+    Type = "oneshot";
+ User = "git";
+ };
+};
+#+end_src
+where the =post-receive= hook is very similar to the one Andreas used in his post:
+#+begin_src nix
+post-receive = pkgs.writeShellScript "post-receive" ''
+ export PATH=${env}/bin
+ set -ex
+
+ GIT_DIR=$(${pkgs.git}/bin/git rev-parse --git-dir 2>/dev/null)
+ if [ -z "$GIT_DIR" ]; then
+ echo >&2 "fatal: post-receive: GIT_DIR not set"
+ exit 1
+ fi
+
+ TMPDIR=$(mktemp -d)
+ function cleanup() {
+ rm -rf "$TMPDIR"
+ }
+ trap cleanup EXIT
+
+ ${pkgs.git}/bin/git clone "$GIT_DIR" "$TMPDIR"
+ unset GIT_DIR
+ cd "$TMPDIR"
+ ${pkgs.hugo}/bin/hugo --destination ${blog-public-path}
+'';
+#+end_src
+
+after running it for the first time, I went ahead and statefully copied the git repo from github to the pi in order to not lose the history, but other than that it should be fine.
+
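+the copy itself was just plain git - something along these lines, assuming ssh access to the =git= user and the repository layout described above:
+#+begin_src sh
+# from the local clone of the github repo: add the pi as a remote and push
+# everything, which also triggers the post-receive hook for a first deploy
+git remote add pi git@santi.net.br:/server/git-repos/blog
+git push pi --all
+git push pi --tags
+#+end_src
+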
+*** next steps
+
+sadly, I haven't had the time to actually set up email hosting. currently, I read my email through [[https://djcbsoftware.nl/code/mu/mu4e.html][mu4e]], using mu as a local maildir indexer and searcher. what I'd need is to host a server to receive and send email. receiving doesn't seem to pose many difficulties, as it's just a normal listener, but sending apparently is a huge problem, as there seem to be a lot of measures that need to be taken in order for your email to actually be delivered and not flagged as spam.
+
+besides having to set up reverse DNS lookups, you also need to mess with SPF, DMARC and DKIM, which are scary looking acronyms for boring business authentication stuff. moreover, your ip might be blacklisted, or have low reputation (what does that even mean?), and to top it off it seems like my router's port 25 is blocked forever, so I'd most likely also need to configure cloudflare tunnels for that. I'm currently avoiding all of it, but I intend to look into it in the near future.
+
+I've been meaning to experiment with [[https://gitlab.com/simple-nixos-mailserver/nixos-mailserver][nixos simple mailserver]]'s setup for a while now, but it is an "all in one" solution, and I think it might be trying to do much more than what I'm currently trying to achieve. if anyone has tinkered with it, I'd love to know more about it.
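+
+for reference, its core setup seems to boil down to something like the following (a sketch going by its documentation, untested; the fqdn, account and paths are placeholders):
+#+begin_src nix
+{
+  # minimal sketch of simple-nixos-mailserver's documented options (untested)
+  mailserver = {
+    enable = true;
+    fqdn = "mail.santi.net.br";
+    domains = [ "santi.net.br" ];
+    # accounts are declared with pre-hashed passwords
+    loginAccounts."leonardo@santi.net.br" = {
+      hashedPasswordFile = "/var/lib/mailserver/leonardo.passwd";
+    };
+    # reuse the existing nginx + acme setup for certificates
+    certificateScheme = "acme-nginx";
+  };
+}
+#+end_src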
* COMMENT Local Variables :ARCHIVE:
# Local Variables:
diff --git a/content/blog/self-host-the-world.md b/content/blog/self-host-the-world.md
new file mode 100644
index 0000000..f57e039
--- /dev/null
+++ b/content/blog/self-host-the-world.md
@@ -0,0 +1,253 @@
++++
+title = "self host the world"
+author = ["santi"]
+description = "a lower case only blog, purely for aesthetics"
+lastmod = 2024-11-13T15:35:04-03:00
+tags = ["nix", "nixos"]
+draft = false
++++
+
+I've known since forever that google is not to be trusted, and that [they whimsically create and destroy](https://killedbygoogle.com/) products like no one else. I've also been a not so proud owner of a google mail account for the past 15 years, one that I rely on for almost all my services.
+
+honestly, those facts didn't bother me that much, because, like everyone else, I'm always under the impression that _it isn't going to happen to me, right_? that was, until june of last year, when [google sunset'd google domains](https://www.theverge.com/2023/6/16/23763340/google-domains-sunset-sell-squarespace) - which I had assumed to be the king of domain registrars. that, plus the rise of AI and LLMs, seriously made me question the hegemony of google search in the current state of affairs, and how well positioned I was for a possible google meltdown as a whole.
+
+of course, I don't think that gmail is going anywhere soon, but it nudged me into exploring the world of self hosting. I mean, how hard could it be to host my own email, right? I wanted to find out using a home device and nixos, in order to get declarative and reproducible systems for free.
+
+
+## the raspberry pi {#the-raspberry-pi}
+
+I managed to get my hands on a raspberry pi model 4B in december 2023, but at the time I didn't have the time to get anything running on it. it was only around april or may of 2024 that I actually started trying to get it working. at first, I wanted to go with a completely headless nixos setup: writing a proto-configuration akin to my [current ones](https://git.santi.net.br/nixos), exporting it as an sd image and flashing it to the pi, while baking in my ssh key. thus, no manual installation process would be necessary - inserting the sd card into the pi and turning it on would be all it took.
+
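+the idea, roughly sketched, looked something like the following (this is not my actual config: the configuration name and ssh key are placeholders):
+
+```nix
+# flake.nix (sketch) - the image is built with
+#   nix build .#nixosConfigurations.pi.config.system.build.sdImage
+{
+  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
+
+  outputs = { nixpkgs, ... }: {
+    nixosConfigurations.pi = nixpkgs.lib.nixosSystem {
+      system = "aarch64-linux";
+      modules = [
+        # pulls in nixpkgs' sd image builder for aarch64
+        "${nixpkgs}/nixos/modules/installer/sd-card/sd-image-aarch64.nix"
+        {
+          services.openssh.enable = true;
+          # bake the ssh key into the image so no manual step is needed
+          users.users.leonardo.openssh.authorizedKeys.keys = [
+            "ssh-ed25519 AAAA... leonardo"
+          ];
+        }
+      ];
+    };
+  };
+}
+```
+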
+sadly it didn't work, as in it would turn on but never appear as a network device, and given that detecting it through the network was the only way to interact with it [that I knew of], it left me at a dead end. at the time, I believed that it was because I was doing something wrong in the nixos configuration - maybe some network config mishap? maybe I forgot to turn something on? - but given that I couldn't see its video output, I just gave up and decided to buy an HDMI adapter and do it the normal way.
+
+of course, I only bought the hdmi adapter about 3 months later, and only then did I try to install nixos manually. I went with the normal approach: downloading the bare arm image, fetching [my nixos repo](https://git.santi.net.br/nixos) locally and rebuilding to install everything. only by having visual confirmation did I understand that the problem wasn't with my original nixos image, but rather with the fact that it was shutting down after the first boot phase!
+
+it made me realize that I had never given proper thought to buying a real power supply, as I thought that connecting it to my computer's usb port through a usb-c cable I had lying around was good enough. I was able to gracefully connect the dots and realize that it was most likely rebooting because it didn't have enough power to boot, so I switched it to a 5 volt 3 amp cellphone charger I had to spare and it finally booted correctly!
+
+
+## networking issues {#networking-issues}
+
+after that, I figured I'd like to not only turn the pi on, but also connect to it from outside my house's network. sadly, my router's public ip changes pretty much every day, so my only real option was to use ddns + a domain name.
+
+I bought `santi.net.br` cheaply and quickly transferred it to cloudflare, as I wanted to get some ddns action going on. as I'm using the ISP provided all-in-one [shitty] router, it's not shocking to say that trying to open the relevant ports (22, 80 and 443) in the default configuration interface wouldn't have any external effect whatsoever.
+
+I found out that there was a way to get the "admin version" of the router's setup page, and through that I was able to get port 22 open to the public internet (even though changing it the normal way wouldn't do anything), but ports 80 and 443 still weren't reachable. I even questioned whether my network was behind CGNAT, as that is very common in brazil, but my ip wasn't one of the common formats and I could access port 22 of my router's public ip just fine. I don't know how the ISP could be blocking it, other than the router's admin page port forwarding setup being a no-op for some specific ports.
+
+I fought with this problem for a week but eventually decided to give up and just set up cloudflare tunnels for ports 80 and 443, routing all the subdomains through them. cloudflare tunnels work over an outbound-only connection: a `cloudflared` instance running on the pi routes the requests through. after using some stateful commands to generate credentials, the relevant piece of code to set this up in nixos is very simple:
+
+```nix
+{
+ # ...
+ services.cloudflared = {
+ enable = true;
+ tunnels.iori = {
+ default = "http_status:404";
+ credentialsFile = "/var/lib/cloudflared/iori.json";
+ ingress = {
+ "santi.net.br" = "http://localhost:80";
+ "git.santi.net.br" = "http://localhost:80";
+ };
+ };
+ };
+}
+```
+
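+for reference, the stateful part boils down to something like the following one-off commands on the pi (a sketch - the exact steps and the credentials path may differ):
+
+```sh
+# authenticate cloudflared against the cloudflare account
+cloudflared tunnel login
+# create the tunnel; this writes a credentials json under ~/.cloudflared/
+cloudflared tunnel create iori
+# move the credentials to the path the nixos module is pointed at
+sudo mkdir -p /var/lib/cloudflared
+sudo cp ~/.cloudflared/*.json /var/lib/cloudflared/iori.json
+# point the dns records at the tunnel
+cloudflared tunnel route dns iori santi.net.br
+cloudflared tunnel route dns iori git.santi.net.br
+```
+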
+though, I couldn't really use these tunnels to connect through ssh, and honestly I don't know why. I believe cloudflare expects you to use their [warp](https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/download-warp/) tool to authenticate ssh connections (besides ssh key auth?), but I thought it was too much trouble to configure yet another tool (on all my computers), so I chose to use the router's public ip + ddns with port forwarding instead. I tested pretty much all the ddns services exposed in nixpkgs, and the only one that worked reliably was `inadyn`:
+
+```nix
+{
+ # ...
+ services.inadyn = {
+ enable = true;
+ user = "leonardo";
+ group = "users";
+ settings.provider."cloudflare.com" = {
+ hostname = "santi.net.br";
+ username = "santi.net.br";
+ proxied = false;
+ include = config.age.secrets.cloudflare.path;
+ };
+ };
+}
+```
+
+
+### remote rebuilds {#remote-rebuilds}
+
+given that my computers (`x86_64-linux`) and the raspberry pi (`aarch64-linux`) don't share the same architecture, I needed a way to either trigger rebuilds remotely, or to build the closure locally and `nix-copy-closure` it to the pi. local builds for `aarch64-linux` can be enabled by setting `boot.binfmt.emulatedSystems` (which runs them under qemu user emulation), but I don't really like that solution, as it requires enabling that flag on every computer I'd like to deploy from.
+
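+for reference, the emulation-based alternative I decided against is just a one-liner on each build machine (plus a rebuild):
+
+```nix
+{
+  # let this x86_64 machine build aarch64-linux derivations through qemu user emulation
+  boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
+}
+```
+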
+instead, I went with the most barebones approach possible, [nixos-rebuild](https://www.haskellforall.com/2023/01/announcing-nixos-rebuild-new-deployment.html), by using the following command:
+
+```sh
+nixos-rebuild switch --fast --use-remote-sudo \
+ --flake .#<remote> \
+ --build-host <remote-host-url> \
+ --target-host <remote-host-url>
+```
+
+this works because `--fast` avoids rebuilding `nixos-rebuild` itself, and passing `--build-host` forces the build to happen directly on the pi, sidestepping the architecture mismatch altogether. I still intend to use a proper deployment tool (most inclined to using [deploy-rs](https://github.com/serokell/deploy-rs)) but that is for the future.
+
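+for the record, going by deploy-rs' readme, a node definition for the pi would look roughly like the following flake output (a sketch, untested; the node name, host url and ssh user are placeholders, and `deploy-rs` / `self` are the usual flake arguments):
+
+```nix
+{
+  # sketch of a deploy-rs node, following its readme
+  deploy.nodes."<remote>" = {
+    hostname = "<remote-host-url>";
+    sshUser = "leonardo";
+    profiles.system = {
+      user = "root";
+      # activate the same nixosConfiguration the rebuild command targets
+      path = deploy-rs.lib.aarch64-linux.activate.nixos
+        self.nixosConfigurations."<remote>";
+    };
+  };
+}
+```
+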
+
+## self hosting {#self-hosting}
+
+after setting up a way to connect to the pi from the public network, I could finally get some self hosting started.
+
+initially, all I did was a simple setup where I added my blog's repository as a flake input and served the result of calling `hugo build` on it through nginx. it looked something like the following:
+
+```nix
+let
+ blog = pkgs.stdenv.mkDerivation {
+ name="hugo-blog";
+ src = inputs.blog;
+ buildInputs = [ pkgs.hugo ];
+ buildPhase = ''
+ mkdir $out
+ hugo --destination $out
+ '';
+ };
+in {
+ # ....
+ networking.firewall.allowedTCPPorts = [ 80 443 ];
+ services.nginx = {
+ enable = true;
+ virtualHosts."santi.net.br" = {
+ addSSL = true;
+ enableACME = true;
+ root = blog;
+ };
+ };
+ security.acme = {
+ acceptTerms = true;
+ certs."santi.net.br".email = "[email protected]";
+ };
+}
+```
+
+it sure worked fine for the first couple of weeks, and it auto-generated ssl certificates for me, which is convenient, but it had a glaring flaw: in order to change something, I'd need to push a new commit to the blog repo, run `nix flake update blog` and then `nixos-rebuild switch` (remotely) on the pi, every single time. the whole process was unnecessarily complicated, so I set out to find a simpler one.
+
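+for reference, a single update in the old setup amounted to roughly this (same `<remote>` placeholders as in the rebuild command above):
+
+```sh
+# in the blog repo: publish the change
+git commit -am "new post" && git push
+# in the nixos config repo: bump the blog flake input...
+nix flake update blog
+# ...and rebuild the pi remotely, yet again
+nixos-rebuild switch --fast --use-remote-sudo \
+  --flake .#<remote> \
+  --build-host <remote-host-url> \
+  --target-host <remote-host-url>
+```
+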
+I vaguely knew that git repos have a notion of hooks, which can run before or after commands and actions you take, but I had never implemented or tinkered with them. still, it occurred to me that if I could set up a bare git "upstream" on my pi, with a hook that runs after every push it receives, I could run `hugo build` on the source files and generate a new blog in a known path, which I could then hardwire `nginx` to serve. this way, it would be very much like the old setup that I had with github pages, except local and not depending on microsoft's ai products.
+
+funnily enough, mere minutes after searching for this idea on the internet, I found a [blog post](https://andreas.rammhold.de/posts/git-receive-blog-hook-deployment/) by Andreas that did exactly that. while searching, I also figured that it would be pretty cool to have a [cgit instance](https://git.zx2c4.com/cgit/) exposed that could track my changes in this "git repos" directory, so that I could really stop relying on github while keeping the code fully open source.
+
+the main idea is to declaratively [of course] set up a git repository pre-baked with a `post-receive` hook that calls `hugo build` with the directory we'd like `nginx` to serve. Andreas' post shows exactly how to idempotently create the git repo (a no-op on subsequent runs) using a systemd one shot service, and the only problem remaining is, as always, managing the permissions around these directories:
+
+1. my user, `leonardo`, has its own files and is what I run `nixos-rebuild` from.
+2. the `git` user owns the git repositories directory.
+3. the `cgit` user is responsible for running the cgit server.
+4. the `nginx` user is responsible for running the nginx instance and responding to requests.
+
+thus, I devised the following structure:
+
+- `/server/blog` is where the hugo-generated files are going to be. the `nginx` user must be able to read it, and `git` must be able to write to it.
+- `/server/git-repos` is where the git repositories will be. the `cgit` user must be able to read all of its directories, and the `git` user must be able to read and write to it.
+
+it seems to suffice to set `git` as the owner of both of these directories, and give all users permission to read and execute files. to implement this, I used `systemd.tmpfiles.rules`. I know, there's `tmp` in the name, but rest assured: you can use them to create permanent files with the correct permissions if you don't give them an age parameter:
+
+```nix
+users.users.git = {
+ description = "git user";
+ isNormalUser = true;
+ home = git-repo-path;
+};
+systemd.tmpfiles.rules = [
+ "d ${blog-public-path} 0755 git users -"
+ "d ${git-repo-path} 0755 git users -"
+];
+```
+
+after figuring this stuff out, the rest is pretty much textbook nixos. we set up cgit with `scanPath = git-repo-path` and an about-filter that uses `pandoc` to correctly render the org README files of each repository:
+
+```nix
+services.cgit.santi = let
+ org2html = pkgs.writeShellScript "org2md" ''
+ ${pkgs.pandoc}/bin/pandoc \
+ --from org \
+ --to html5 \
+ --sandbox=true \
+ --html-q-tags \
+ --ascii \
+ --standalone \
+ --wrap=auto \
+ --embed-resources \
+ -M document-css=false
+ '';
+in {
+ enable = true;
+ scanPath = git-repo-path;
+ nginx.virtualHost = "git.santi.net.br";
+ settings = {
+ readme = ":README.org";
+ root-title = "index";
+ root-desc = "public repositories for santi.net.br";
+ about-filter = toString org2html;
+ source-filter = "${pkgs.cgit}/lib/cgit/filters/syntax-highlighting.py";
+ enable-git-config = true;
+ enable-html-cache = false;
+ enable-blame = true;
+ enable-log-linecount = true;
+ enable-index-links = true;
+ enable-index-owner = false;
+ enable-commit-graph = true;
+ remove-suffix = true;
+ };
+};
+```
+
+while the following snippet sets up a systemd one shot service (run as the `git` user) that initializes the bare git repository and links in its `post-receive` hook:
+
+```nix
+systemd.services."blog-prepare-git-repo" = {
+ wantedBy = [ "multi-user.target" ];
+ path = [
+ pkgs.git
+ ];
+ script = ''
+ set -ex
+ cd ${git-repo-path}
+ chmod +rX ${blog-public-path}
+ test -e blog || git init --bare blog
+ ln -nsf ${post-receive} blog/hooks/post-receive
+ '';
+ serviceConfig = {
+    Type = "oneshot";
+ User = "git";
+ };
+};
+```
+
+where the `post-receive` hook is very similar to the one Andreas used in his post:
+
+```nix
+post-receive = pkgs.writeShellScript "post-receive" ''
+ export PATH=${env}/bin
+ set -ex
+
+ GIT_DIR=$(${pkgs.git}/bin/git rev-parse --git-dir 2>/dev/null)
+ if [ -z "$GIT_DIR" ]; then
+ echo >&2 "fatal: post-receive: GIT_DIR not set"
+ exit 1
+ fi
+
+ TMPDIR=$(mktemp -d)
+ function cleanup() {
+ rm -rf "$TMPDIR"
+ }
+ trap cleanup EXIT
+
+ ${pkgs.git}/bin/git clone "$GIT_DIR" "$TMPDIR"
+ unset GIT_DIR
+ cd "$TMPDIR"
+ ${pkgs.hugo}/bin/hugo --destination ${blog-public-path}
+'';
+```
+
+after running it for the first time, I went ahead and statefully copied the git repo from github to the pi in order to not lose the history, but other than that it should be fine.
+
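+the copy itself was just plain git - something along these lines, assuming ssh access to the `git` user and the repository layout described above:
+
+```sh
+# from the local clone of the github repo: add the pi as a remote and push
+# everything, which also triggers the post-receive hook for a first deploy
+git remote add pi git@santi.net.br:/server/git-repos/blog
+git push pi --all
+git push pi --tags
+```
+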
+
+## next steps {#next-steps}
+
+sadly, I haven't had the time to actually set up email hosting. currently, I read my email through [mu4e](https://djcbsoftware.nl/code/mu/mu4e.html), using mu as a local maildir indexer and searcher. what I'd need is to host a server to receive and send email. receiving doesn't seem to pose many difficulties, as it's just a normal listener, but sending apparently is a huge problem, as there seem to be a lot of measures that need to be taken in order for your email to actually be delivered and not flagged as spam.
+
+besides having to set up reverse DNS lookups, you also need to mess with SPF, DMARC and DKIM, which are scary looking acronyms for boring business authentication stuff. moreover, your ip might be blacklisted, or have low reputation (what does that even mean?), and to top it off it seems like my router's port 25 is blocked forever, so I'd most likely also need to configure cloudflare tunnels for that. I'm currently avoiding all of it, but I intend to look into it in the near future.
+
+I've been meaning to experiment with [nixos simple mailserver](https://gitlab.com/simple-nixos-mailserver/nixos-mailserver)'s setup for a while now, but it is an "all in one" solution, and I think it might be trying to do much more than what I'm currently trying to achieve. if anyone has tinkered with it, I'd love to know more about it.
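+
+for reference, its core setup seems to boil down to something like the following (a sketch going by its documentation, untested; the fqdn, account and paths are placeholders):
+
+```nix
+{
+  # minimal sketch of simple-nixos-mailserver's documented options (untested)
+  mailserver = {
+    enable = true;
+    fqdn = "mail.santi.net.br";
+    domains = [ "santi.net.br" ];
+    # accounts are declared with pre-hashed passwords
+    loginAccounts."leonardo@santi.net.br" = {
+      hashedPasswordFile = "/var/lib/mailserver/leonardo.passwd";
+    };
+    # reuse the existing nginx + acme setup for certificates
+    certificateScheme = "acme-nginx";
+  };
+}
+```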