This blog, as well as the Sandstorm install that hosts it, just moved from a DigitalOcean droplet to a private VM somewhere. The move was pretty painless. The upshot is that everything from the radio to the chat to the forums should be better, faster, stronger. If you want the gory details, read on.

This was a pretty straightforward, old-school server move: declare a maintenance window, then inside the maintenance window, shut off site A, copy site A to site B, turn on site B. The copy step on this one was literally tar -C /opt/sandstorm -cz . | ssh newhost tar -C /opt/sandstorm -xz (with newhost standing in for the new server).
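That tar-pipe pattern is worth spelling out. Here's a sketch of it that runs locally between two temp directories instead of over ssh (the real command, with the placeholder hostname, is in the comment):

```shell
# The real move was roughly:
#   tar -C /opt/sandstorm -cz . | ssh newhost tar -C /opt/sandstorm -xz
# Local stand-in for the same pattern: the first tar streams a gzipped
# archive to stdout, the second unpacks it on the fly at the destination.
set -e
mkdir -p /tmp/tarpipe_src /tmp/tarpipe_dst
echo "grain data" > /tmp/tarpipe_src/example.conf
tar -C /tmp/tarpipe_src -cz . | tar -C /tmp/tarpipe_dst -xz
cat /tmp/tarpipe_dst/example.conf
```

The nice thing about this form is that nothing ever touches disk in between: no intermediate tarball to find space for on either end.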

Well, almost. It was that plus a couple of other copies, because that server's config has become slightly complicated, mostly to work around Sandstorm not yet having letsencrypt static publishing. So I needed to pull over Sandstorm itself, my nginx config, and my letsencrypt keys and autorenewal cron job. And sniproxy, which I completely forgot about, and which prevented the entire thing from working at all, adding about 20 minutes to the hour or so of total downtime.
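As a checklist, the move inventory looked something like this. The paths are my best guesses at typical Debian locations, not quotes from my actual config, and newhost is again a placeholder:

```shell
# Dry-run plan of everything that had to move. Paths are assumed
# typical Debian locations, not the real config; "newhost" is fake.
extras="/opt/sandstorm /etc/nginx /etc/letsencrypt /etc/cron.d"
for path in $extras; do
  echo "would rsync -a $path newhost:$path"
done
echo "would also remember sniproxy (narrator: he would not)"
```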

I started working on some kind of hare-brained scheme to eliminate DNS propagation delay from my downtime using ssh RemoteForwards for ports 8080 and 8443 (note: USING SNIPROXY. The human brain would be a dangerous thing if it could correlate its own contents.) Turns out this was completely unnecessary, because my whole setup is based on CNAMEs pointing at a dynamic DNS name with a reasonably short TTL. But it would've worked fine, just saying!
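For the record, the abandoned scheme would have boiled down to something like this, run from the new host so that the old box keeps listening on those ports and tunnels each connection over to the new one while stale DNS drains. Just a dry-run sketch; oldhost is a placeholder:

```shell
# Dry-run of the abandoned plan: ssh -R makes the OLD host listen on
# 8080/8443 and forward every connection back through the tunnel to
# the new host, so clients with stale DNS still reach the new server.
cmd="ssh -N -R 8080:localhost:8080 -R 8443:localhost:8443 oldhost"
echo "would run: $cmd"
```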

So a bit in advance of the maintenance window, I had a coffee and started writing a couple of crude backup and restore scripts, and I timed it so that the coffee would kick in right at the start of the window. Then I pushed go on the backup script and waited while it tarred up the whole stupid thing.

After like fifteen minutes of agonizing waiting, the backup completed successfully. So on the new host, I pushed go on the restore script and waited while it untarred the whole stupid thing. Then I started everything up, went to my home page, and… saw nothing! Connection refused. Site's down, boss. T_T whyyyy

So, let’s talk about sniproxy. SNI happened because people like to have one box host a bunch of web sites. So in our post-SNI world, we send the hostname we want in cleartext whenever we ask for a website, so web servers can split traffic for different websites to different virtual hosts before they ever have to present an SSL certificate. sniproxy is basically the https equivalent of a tee adapter.

I take advantage of this to host both Sandstorm stuff and non-Sandstorm stuff on the same machine. So I have nginx doing SSL with letsencrypt on port 7443, and Sandstorm doing SSL with magic sandcats goodness on port 6443. Then I run sniproxy on port 443 with rules that say “if someone’s asking for a Sandstorm hostname, pass to 6443; otherwise, pass to 7443.”
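In sniproxy config terms, that routing looks roughly like this. The hostname pattern is a stand-in (sandcats names would be a wildcard under your sandcats domain), so treat this as a sketch, not my literal config:

```
# Sketch of a sniproxy.conf for the described routing.
listen 443 {
    proto tls
    table https_hosts
    fallback 127.0.0.1:7443   # everything else -> nginx + letsencrypt
}

table https_hosts {
    # Sandstorm hostnames -> Sandstorm's own TLS listener
    .*\.example\.sandcats\.io 127.0.0.1:6443
}
```

The key point is that sniproxy never terminates TLS itself; it just peeks at the SNI hostname in the ClientHello and shovels the bytes at whichever backend owns that name.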

Oh, also, I’m such a big fan of https these days that literally the only thing that anyone ever hears over non-s http from my server is “here is how to get to this address via https.”
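In nginx terms, that port-80 behavior is a single tiny server block. A sketch, assuming default_server semantics, not my literal config:

```
server {
    listen 80 default_server;
    # The only thing anyone ever hears over plain http:
    return 301 https://$host$request_uri;
}
```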

Long story short, if you execute an entire backup and restore operation flawlessly, except for sniproxy, then you get a web site that 100% does not work at all, in the slightest.

It gets better.

See, sniproxy for whatever reason isn’t a standard Debian package. You’ve gotta build the deb yourself, from source, in order to get it to work.

I quickly realized what was wrong when I brought everything up and saw nothing working, but now I had “pull down a source package from some guy’s github, install a bunch of prerequisites including an actual C compiler, build the whole thing, and hope it works okay” on the critical path of my downtime window.
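The build itself roughly follows the dlundquist/sniproxy README. Shown here as a dry-run plan rather than live commands, since the exact dependency list varies by Debian release and I'm reconstructing it from memory:

```shell
# Dry-run of the sniproxy deb build (approximate; see the project README).
steps=(
  "apt-get install autotools-dev debhelper dpkg-dev libev-dev libpcre3-dev libudns-dev pkg-config fakeroot devscripts"
  "git clone https://github.com/dlundquist/sniproxy.git"
  "cd sniproxy && dpkg-buildpackage -us -uc"
  "dpkg -i ../sniproxy_*.deb"
)
printf '%s\n' "${steps[@]}"
```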

Luckily, everything worked okay and this whole process only took about 20 minutes, after which everything just magically worked like it was no big deal. But it was a slightly tense 20 minutes.

Anyway, welcome back. Hope you enjoyed the history lesson. Say hi on the forums! Or the chat! And tune in to my shared music consciousness experiment! 👍