Archive for the ‘admin’ Category

Here’s that photo I promised. This picture is from 2008, when the original machine died and my white MacBook was pressed into service to replace it. It’s been running like that ever since, with a rotating collection of external volumes and considerably more dust.

laptop in a mess of cables and hard drive enclosures

Feorlenstein’s Monster, AKA the server

Well, “fun” isn’t exactly the word that comes to mind right now. In fact, the primary purpose of this post is to serve as a test while I track down gremlins around redirecting incoming HTTP requests to HTTPS.

Since I was rebuilding the webserver from scratch anyway, it seemed like a good time to set it up with a shiny new SSL certificate. I never could quite figure that out on the old system. Plus, until fairly recently, a cert from a reputable source meant significant cash.

But thanks to Let’s Encrypt, everybody can get one for free. It’s awesome. Random people using encryption for whatever random thing they are doing is effectively herd immunity. People who really need the protection of encryption to, say, not be murdered by their governments, no longer stand out in the crowd. And it makes it much, much harder for those trying to enact mathematically-challenged anti-encryption laws.

So this is a good thing. And I could make it easier by configuring my sites to switch incoming visitors over to HTTPS. Except my webserver configuration is thwarting me: HTTP connections are rejected rather than nicely switched over. (And I don’t know enough about HTTP/HTTPS/Apache yet to even explain it properly. I’m working on that.)
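(For the record, the behavior I’m after is usually just a couple of lines in the port-80 virtual host. A minimal sketch, assuming Apache 2.4 and a placeholder hostname — not my actual config:)

```apache
# Port-80 vhost whose only job is to bounce visitors over to HTTPS.
# The hostname is a placeholder, not my real site.
<VirtualHost *:80>
    ServerName www.example.com
    Redirect permanent / https://www.example.com/
</VirtualHost>
```

The HTTPS site then lives in its own `<VirtualHost *:443>` block with the certificate directives; the redirect vhost never serves content itself.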

If you got a weird message when you tried to access the site, that’s what it was about. In the meantime, I’ll be over here buried in configuration and log files.

Earlier this week, I and my 3.8 billion scraper-bot friends noticed that all my websites were down. I’d been meaning for months (cough) to migrate my elderly and infirm server to something that didn’t look like an 8th-grade Biology frog. (There’s a photo I should recover and post, but it’s not happening right now.)

I’ve lost track of how many restores I’ve done on the boot volume, so the move is well past due. This time it was web data. I’ve admittedly been lax about fixing my MySQL backup automation, so the backup I had was laughably old. Yet Time Machine had never failed me before, so I figured if it came to that, I’d sort it out somehow. Kinda. More-or-less.

You can see where this is going.

Now, in fairness to Time Machine, I’m not sure it was the problem. It seems to have been perfectly fine for everything else, and databases are not exactly friendly to backup services. I had directories of healthy-looking .frm, .MYI, and .MYD files, which in theory can be pasted back together.

But it looked as if they were last updated nearly a year ago. (Not sure where those months of posts were stored in the meantime.) For most of my blogs, that isn’t a big deal. This one, however, is a different story. But I’m getting a little ahead of myself.

I first tried to restore via Time Machine to a fresh drive, which involved finding a USB hub so I could have it and the backup volume attached at the same time. I also formatted the new volume on the same machine, to avoid any problems with using a more recent one. (To put this in perspective: the host is OS X Server 10.5, on hardware of its era, with the web volume mounted by FireWire 400.)

It took many hours of semi-skilled poking around to confirm that I had something that looked like it could be made into a usable database. All the other files restored fine, but MySQL was in a bad way, and it took WordPress down with it. I could drop my directory into the default location on the boot volume (/var/mysql), manually start MySQL, and see the data in the mysql interactive shell.

But if I tried to change the configuration to use the real location, I only got errors about a pid file not being updated and then the process quit. And in any case, WordPress had no idea where to find this strange new MySQL.
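(In hindsight, pointing MySQL at a non-default data directory is normally just a couple of lines of configuration. A sketch, with placeholder paths that are not my real layout:)

```ini
# my.cnf -- minimal sketch; paths are placeholders, not my real layout
[mysqld]
datadir  = /Volumes/WebData/mysql
pid-file = /Volumes/WebData/mysql/mysqld.pid
```

The pid-file complaints are often a permissions problem in disguise: the user mysqld runs as has to be able to read and write everything under the datadir, so check ownership before blaming the config itself.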

Everything had been working with ServerAdmin and launchd, with a LaunchDaemons plist configured with appropriate options and starting /usr/libexec/mysqld (not something in /usr/bin, but as an OS X Server person I’m used to this). Except the process wouldn’t start, complaining it couldn’t find a mysql-bin.index that was right there in front of it. I even checked with dtruss (the OS X equivalent of strace) and could see it changing to the directory and then failing to find the file, which by then had world-everything permissions.

Then I tried a database dump to see if I could import what I had into a test WordPress install. It wasn’t pretty (the original was running 3.8) but it did semi-function. That was promising.

I set up a hosted VM (like I had been planning to all along) to use as a new webserver. Installed a whole ton of things, did some DNS foolery, and then copied over both the db dump and my blog directory. I couldn’t import the db directly into a more recent WordPress version, so I tried to drop the existing 3.8 into place. That failed because the new server (Ubuntu Xenial) didn’t have necessary php5 (!) things the old WordPress expected.


The next option was to import the dump into a new database with similar settings, make a fresh copy of the old WordPress directory, and immediately upgrade to a recent version on top of it. This more-or-less worked. I had to manually update URLs in the database, now that it was on a different host, and fix mod_rewrite paths in .htaccess.
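(For the curious, the URL fix lives in the wp_options table. A hedged sketch, assuming the default wp_ table prefix and a placeholder hostname — adjust both to match your install:)

```sql
-- Point WordPress at its new home.
-- The wp_ prefix and the hostname are assumptions, not my real values.
UPDATE wp_options
   SET option_value = ''
 WHERE option_name IN ('siteurl', 'home');
```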

Just to note here: if you ever find yourself needing to do unlikely things inside the guts of your WordPress database, please tread carefully with the SQL commands. It is very easy to screw up, well, basically everything related to your db. Including MySQL itself.

At last I had a functioning blog, although one that was woefully out of date. It may be that there’s a way to recover that other data, but in the meantime I have to live with the db I have. I went searching the Wayback Machine and found nearly everything. Thank you, Brewster Kahle.

ProTip: If you find yourself recovering blog posts from The Internet Archive, view the page source so you can copy the HTML-ified text that came straight from scraping your site. I had a lot of creative formatting in those recent posts, so it helped a lot.

At this point, I have one blog recovered on the new machine, although not with the proper hostname. So it’s time to set up Apache virtual hosts again. I did that, and then switched my DNS to point to the new server. (This involved some more db fiddling, but whatever. One day I’ll learn to update WordPress settings before yanking things out from under it and not after.)
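(One name-based vhost per blog looks roughly like this on the new box — a sketch assuming Apache 2.4 on Ubuntu, with placeholder hostname and paths:)

```apache
# /etc/apache2/sites-available/blog.conf -- placeholders throughout
<VirtualHost *:80>
    ServerName blog.example.com
    DocumentRoot /var/www/blog
    <Directory /var/www/blog>
        AllowOverride All   # WordPress's .htaccess rewrite rules need this
        Require all granted
    </Directory>
</VirtualHost>
```

On Ubuntu you enable it with a2ensite and reload Apache; without AllowOverride All, the permalink rewrites silently 404.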

It was all working, until it wasn’t.

Right now, as I’m typing this, suddenly the hostname that was resolving to the new server is hitting the old one. And I have no idea why. It’s probably some weird DNS propagation thing, so I should not get worried about it for a few more hours. I’m saving this text locally in case it all blows up.

At the end of all of this (going on three days) I’ve got one blog recovered, two domains on the new server, and one other blog that appears to be recoverable by the same process. (Several others are long dead, so aren’t getting moved.) Oh, and in the middle my everyday laptop started acting weird, and so did the refrigerator. Just because.

I’ve lost one non-public page, which is really annoying because it’s the progress report for a project I did, and a couple of blog comments. And it was very much a giant pain in the ass. As soon as I get all this settled, you can be sure I’ll be working on that db backup automation.

The things I was reminded of are:

First, this success story was brought to you by a pile of random parts: the half-dead retired USB hub, FireWire 400 cables, old machines I could safely test-restore Time Machine backups to. If you run old hardware, you have to keep similar-vintage accessories available so you can work on it when things go badly. And they will go badly. Newer machines may or may not boot that old volume, and the old one is very likely to choke on something too recent. (Time Machine has cheerfully put up with the many times I’ve restored across multiple major OS versions and other sorts of nonsense.)

Second, The Internet Archive is amazing and you should give them money. I should too (when next budget opportunity comes around.) I wrote a bunch of long blog posts in the past year, and they saved my bacon.

Third: Do Your Damn Backups!! And not half-assed when-you-get-around-to-it, either. Pretty clicky-clicky methods are fine for desktops, but servers are a different matter. Databases are another thing altogether. I know all of this, had several good general backups, and still was lazy and didn’t do enough database backups. Don’t be me.
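(In that spirit, the automation I keep promising myself is only a few lines. A minimal sketch — the database name, paths, and schedule are all assumptions, and it checks for mysqldump so it degrades gracefully on a machine without MySQL:)

```shell
# nightly-db-backup.sh -- sketch of a nightly MySQL dump; the database
# name and paths are placeholders, not my real setup
BACKUP_DIR="${BACKUP_DIR:-$HOME/db-backups}"
STAMP=$(date +%Y-%m-%d)
OUTFILE="$BACKUP_DIR/wordpress-$STAMP.sql.gz"

mkdir -p "$BACKUP_DIR"

# --single-transaction dumps InnoDB tables consistently without locking
# the live site for the duration of the dump
if command -v mysqldump >/dev/null 2>&1; then
    mysqldump --single-transaction wordpress | gzip > "$OUTFILE"
else
    echo "mysqldump not found; would have written $OUTFILE"
fi
```

Drop it in cron (say, 3:15 am) and prune old dumps with something like `find "$BACKUP_DIR" -name '*.sql.gz' -mtime +30 -delete`.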

The past week I started looking at Zulip, an open source group communication tool. It has web and mobile clients, and a Python back end. I ran into a few speedbumps getting my development environment set up, so this is my collection of notes on that process. If you aren’t interested in Linux or Python, you might want to skip this post as it’s full of sysadmin stuff.

The Zulip development setup instructions are good, but they assume you are running it on your local machine. There are instructions for several different Unix platforms; the simplest option is Ubuntu 14.04 or 16.04. (The production instructions assume you want a real production box, and Zulip requires proper certs to support SSL. Dev is plain old HTTP.)

The standard dev setup walks you through installing a virtual environment with Vagrant. But I’m using my Ubuntu test box, an Intel Next Unit of Computing (NUC). Many folks use these for small projects like home media controllers because they are inexpensive, low-power, and self-contained. But hefty they are not: mine has 2 GB of RAM and a 256 GB SSD, so I decided to go with the direct Zulip install without Vagrant. It isn’t complicated, but there isn’t a nice uninstall process if you want to remove it later. (I’m not worried about that for a test machine.)

I installed in my home directory, as my own user, and started with the suggested script with no options. The standard configuration listens only on localhost, which was problem number one. I could wget the page locally, so I knew something was working, but I didn’t have a web browser on that box and couldn’t access it with one from another machine.

I looked through the docs, which are pretty good on developer topics but have some thin spots, and didn’t find anything that looked like a command-line reference. There was one mention of --interface='' buried in the Vagrant instructions, but with a null argument its purpose wasn’t obvious. I asked in the Zulip dev forum (which is actually a channel, or “stream”, on a public Zulip instance) and learned that that option is where I should specify my machine’s address.

So my start command looks like this:

$ ./tools/ --interface=

This is where I get to speedbump number two. (I’ll skip over some random poking around here.)

The instructions say the server starts up on port 9991. Ok, great. The log on startup ends with this:

Starting development server at
Quit the server with CONTROL-C.

This, to me, says that it’s running on port 9992. Having seen previous cases of services failing to promptly release their ports, and then working around that by incrementing the number and trying again, I didn’t think much of it. I had stopped and started the process a bunch of times. This is a development install of an open source project under active development. Ok, whatever, I’ll investigate that later. 9992 it is.

Except it wasn’t. The web UI was indeed listening on 9991 as advertised, but I didn’t realize it. The closest thing I saw in the logs suggesting this was the line

webpack result is served from

but since I’m brand new to this project and have no idea what webpack is, that didn’t mean much. It took a couple of stupid questions to work out, but eventually I got all the parts together.
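(The lesson generalizes: when the logs are ambiguous, ask the OS which ports actually answer instead of guessing. A sketch — the throwaway Python listener below stands in for the Zulip dev server, and the port numbers are just the ones from this story:)

```shell
# Start a throwaway listener on 9991 to stand in for the dev server.
python3 -m http.server 9991 --bind >/dev/null 2>&1 &
SRV=$!
sleep 1

# probe PORT: exit 0 if something accepts connections on that port
probe() {
    python3 -c "import socket, sys; sys.exit(socket.socket().connect_ex(('', $1)) != 0)"
}

probe 9991 && P9991=open || P9991=closed
probe 9992 && P9992=open || P9992=closed
echo "9991 is $P9991, 9992 is $P9992"

kill "$SRV" 2>/dev/null || true
```

Thirty seconds of that would have saved me an evening of staring at startup logs.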

So, to summarize:

Read the install directions. They work.

If you are installing on another host, set that host’s IP address with an option when you start the process: --interface=

And look at port 9991 on that host for the web interface.

blah blah blah

I needed a new blog to talk about non-textile stuff. This would be it.