The rest of the trip was uneventful, at least as far as bits were concerned. (Do not get me started on transportation. Or pest infestations either.)

My new SIMs worked as expected: thanks to a recent legal change, I can use the ones I bought in Italy to roam at no extra charge in other EU countries. This was super handy for the five-hour layover in Germany. It also means (I think) I can keep them active. I go to Europe from time to time, but not to Italy every year, which is what it would take to refresh my phone service and keep the number. (Important note: Italy requires a national tax ID number to buy a SIM. I have one, but most tourists don’t.)

The downloadable encrypted disk image worked, and once I got it I was able to access the keys for my server. I was still cut off from everything tied to my US phone, but it was tolerable. Web email only, except that because of my setup I had to send messages from the mail application, so that data stayed local. But there were only a few things I really had to reply to. Besides, I was supposed to be speaking Italian, not websurfing.

I have a separate laptop login, with restricted permissions, that I intended to use for general web browsing. I mostly used the phone, however. One helpful thing: my mifi had more bandwidth than my phone, so I could connect to it over wifi, and VoIP calls through the VPN were less terrible.

On the flight back home, I deleted the disk image, cleared data in all the browsers I had been using, and shut down. That logged me out of websites, with no way to get access without my US phone. Nothing actually happened at customs, but the point of practicing one’s security plans is to be more confident that they would work (and that you can execute them) if actually needed. And, in my case, to write up what I thought about it.

The biggest surprise was the reminder that average people have no idea what two factor auth is. They were confused about why I couldn’t log in to Facebook when I had a perfectly good phone and laptop right there. I mean, everyone is on Facebook, right? It was challenging to explain that I needed a message sent to a device I didn’t have. (I think I was then deemed one of those “computer people.” Fair enough.)

The VPN set up for always-on worked about as well as it does in the US, so I’m happy with that. (Some websites still reject you tho, boo.) I tried to use public wifi in various locations (the mall, inside train stations, etc.) but mostly it didn’t work: the networks were either over-used and unresponsive, or blocked my VPN connection, so I was stuck with whatever signal I could find on my own.

Next time I’m going to get a plan for my phone that includes voice service. I couldn’t call taxis, and that was a pain. I was not in a big city where it’s easy to find a taxi.

I’ve been in Italy a few days now, with only a travel-specific phone and laptop. Both are set up with a VPN and the laptop drive is encrypted. I’m using web versions of the services I need, with the exception of outgoing email. (My weird mail setup relies on self-hosted SMTP that essentially forges my From: address.)

I decided not to log out of everything on the phone (my point of entry, Germany, is not known as a hotbed of traveller phone searching), so there was no need to involve my spouse to relay authentication tokens from home.

I did have some trouble with iCloud two factor auth and had to resort to using a recovery key. Even though the auth appears to be tied to a device and not only a phone number, I couldn’t get the login token with my new SIM.

Once I got everything working it’s been ok; I just have to avoid flushing browser data and losing my auth. Not having email on my phone is a minor nuisance, but I can live with that for two weeks.

I have no passwords saved locally; instead I made an encrypted disk image with passwords and other important things (like a photo of my passport and server ssh keys.) It’s on my web server, so I can download it from anywhere. For now I’m keeping the encrypted image locally and only opening it as needed. I’ll delete it before I get to the US.
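
(For the curious: on a Mac, hdiutil can make one of these. A minimal sketch, with made-up names and sizes, so adjust to taste:)

    # Create a small encrypted disk image (it prompts for a passphrase)
    hdiutil create -size 50m -fs HFS+J -encryption AES-256 -volname Vault vault.dmg

    # Mount it, copy in the important things, then eject
    hdiutil attach vault.dmg
    cp passport.jpg id_rsa /Volumes/Vault/
    hdiutil detach /Volumes/Vault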

The only real nuisance has been trying to minimize data cached on my phone from web browsing. I try not to open random links outside a private window, but on a phone especially it’s sometimes hard to tell what you are clicking on.

I have Twitter set up in Brave (a security-minded mobile browser), Safari configured without JavaScript or cookies, and Dolphin for things that need both. This involves a lot of copying links between browsers, but it’s the same thing I do on the desktop.

I have a much better data plan for the phone than last time, so I’m actually doing most things there. I also have a new SIM for my mifi, although it’s unfortunately still locked to Vodafone. (Someone at Vodafone NZ suggested Italy could unlock it, but they won’t.)

Here’s that photo I promised. This picture is from 2008, when the original machine died and my white MacBook was pressed into service to replace it. It’s been running like that ever since, with a rotating collection of external volumes and considerably more dust.

[Photo: a laptop in a mess of cables and hard drive enclosures. Feorlenstein’s Monster, AKA the feorlen.org server.]

Well, “fun” isn’t exactly the word that comes to mind right now. In fact, the primary purpose of this post is a test to track down gremlins around redirecting incoming HTTP requests to HTTPS.

Since I was rebuilding the webserver from scratch anyway, it seemed like a good time to get it set up with a shiny new SSL certificate. I never could quite figure that out with the old system. Plus, until fairly recently, a cert from a reputable source meant spending significant cash.

But thanks to Let’s Encrypt, everybody can get one for free. It’s awesome. Random people using encryption for whatever random thing they are doing is effectively herd immunity. People who really need the protection of encryption to, say, not be murdered by their governments, no longer stand out in the crowd. And it makes it much, much harder for those trying to enact mathematically-challenged anti-encryption laws.

So this is a good thing. And I could make it easier by configuring my sites to switch incoming visitors over to HTTPS. Except my webserver configuration is thwarting me: HTTP connections are rejected rather than nicely switched over. (And I don’t know enough about HTTP/HTTPS/Apache yet to even explain it properly. I’m working on that.)
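
For the record, what I want is something like this in the port 80 virtual host (a sketch, with example.com standing in for my actual domains):

    <VirtualHost *:80>
        ServerName example.com
        # Politely send all plain HTTP visitors to the HTTPS site
        Redirect permanent / https://example.com/
    </VirtualHost>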

If you got a weird message when you tried to access the site, that’s what it was about. In the meantime, I’ll be over here buried in configuration and log files.

Earlier this week, I and my 3.8 billion scraper-bot friends noticed that all my websites were down. I’d been meaning for months (cough) to migrate my elderly and infirm server to something that didn’t look like an 8th grade Biology frog. (There’s a photo I should recover and post, but it’s not happening right now.)

I’ve lost track of how many restores I’ve done for the boot volume, so the machine is well past due for replacement. This time it was web data. I’ve admittedly been lax in fixing my MySQL backup automation, so the backup I had was laughably old. Yet Time Machine had never failed me before, so I figured if it came to that I’d sort it out somehow. Kinda. More-or-less.

You can see where this is going.

Now, in fairness to Time Machine, I’m not sure if it was the problem. It seems to have been perfectly fine for everything else, and databases are not exactly friendly to backup services. I had directories of healthy-looking .frm, .MYI, and .MYD files, which in theory can be pasted back together.

But it looked as if they were last updated nearly a year ago. (Not sure where those months of posts were stored in the meantime.) For most of my blogs, that isn’t a big deal. This one, however, is a different story. But I’m getting a little ahead of myself.

I first tried to restore via Time Machine to a fresh drive, which involved finding a USB hub so I could have it and the backup volume attached at the same time. I also formatted the new volume on the same machine, to avoid any problems with using a more recent one. (To put this in perspective: the host is OS X Server 10.5, on hardware of its era, with the web volume mounted by FireWire 400.)

It took many hours of semi-skilled poking around to confirm that I had something that looked like it could be made into a usable database. All the other files restored fine, but MySQL was in a bad way, and took WordPress down with it. I could drop my directory in the default location on the boot volume (/var/mysql), manually start MySQL, and see the data in the mysql interactive shell.

But if I tried to change the configuration to use the real location, I only got errors about a pid file not being updated and then the process quit. And in any case, WordPress had no idea where to find this strange new MySQL.

Everything had been working with ServerAdmin and launchd, the LaunchDaemons plist configured with appropriate options and starting /usr/libexec/mysqld (not something in /usr/bin, but as an OS X Server person I’m used to this.) Except the process wouldn’t start, complaining it couldn’t find a mysql-bin.index that was right there in front of it. I even checked with dtruss (OS X’s answer to strace) and could see it changing to the directory and then failing to find the file, which by then had world-everything permissions.
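
The manual poking looked roughly like this (paths and flags from memory, so treat it as a sketch):

    # Start mysqld by hand against the restored data directory
    sudo /usr/libexec/mysqld --user=mysql --datadir=/var/mysql &

    # Confirm the restored data is visible
    mysql -u root -p -e 'SHOW DATABASES;'

    # Watch what a running mysqld is actually doing (dtruss, the OS X strace)
    sudo dtruss -f -p <mysqld pid>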

Then I tried a database dump to see if I could import what I had into a test WordPress install. It wasn’t pretty (the original was running WordPress 3.8) but it did semi-function. That was promising.

I set up a hosted VM (like I had been planning to all along) to use as a new webserver. Installed a whole ton of things, did some DNS foolery, and then copied over both the db dump and my blog directory. I couldn’t import the db directly into a more recent WordPress version, so I tried to drop the existing 3.8 into place. That failed because the new server (Ubuntu Xenial) didn’t have the php5 (!) packages the old WordPress expected.

So.

The next option was to import the dump into a new database with similar settings, make a fresh copy of the old WordPress dir, and immediately upgrade to a recent version on top of it. This more-or-less worked. I had to manually update URLs in the database, now that it was on a different host, and fix mod_rewrite paths in .htaccess.
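
If you’re ever in the same boat, the URL fix boils down to a couple of careful statements like these (hypothetical hostnames, and assuming the default wp_ table prefix):

    UPDATE wp_options SET option_value = 'https://new.example.com'
      WHERE option_name IN ('siteurl', 'home');

    -- Posts can embed the old hostname in links and images too
    UPDATE wp_posts SET post_content =
      REPLACE(post_content, 'http://old.example.com', 'https://new.example.com');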

Just to note here: if you ever find yourself needing to do unlikely things inside the guts of your WordPress database, please tread carefully with the SQL commands. It is very easy to screw up, well, basically everything related to your db. Including MySQL itself.

At last I had a functioning blog, although one that was woefully out of date. It may be that there’s a way to recover that other data, but in the meantime I have to live with the db I have. I went searching the Wayback Machine and found nearly everything. Thank you, Brewster Kahle.

ProTip: If you find yourself recovering blog posts from The Internet Archive, view the page source so you can copy the HTML-ified text that came straight from scraping your site. I had a lot of creative formatting in those recent posts, so it helped a lot.

At this point, I had one blog recovered on the new machine, although not with the proper hostname. So it was time to set up Apache virtual hosts again. I did that, and then switched my feorlen.org DNS to point to the new server. (This involved some more db fiddling, but whatever. One day I’ll learn to update WordPress settings before yanking things out from under it and not after.)
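
The HTTPS side of each site ends up looking something like this (a sketch using Let’s Encrypt’s default certificate paths, with example.com standing in again):

    <VirtualHost *:443>
        ServerName example.com
        DocumentRoot /var/www/example.com

        SSLEngine on
        SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
    </VirtualHost>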

It was all working, until suddenly it wasn’t.

Right now, as I’m typing this, suddenly the hostname that was resolving to the new server is hitting the old one. And I have no idea why. It’s probably some weird DNS propagation thing, so I should not get worried about it for a few more hours. I’m saving this text locally in case it all blows up.
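
(If you ever need to check this sort of thing, asking the zone’s authoritative nameserver and a public resolver separately at least tells you who is confused. Hypothetical nameserver here:)

    # Ask the zone's authoritative nameserver directly
    dig +short feorlen.org @ns1.example-dns.com

    # Ask a public resolver, which may still have the old record cached
    dig +short feorlen.org @8.8.8.8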

At the end of all of this (going on three days) I’ve got one blog recovered, two domains on the new server, and one other blog that appears to be recoverable by the same process. (Several others are long dead, so aren’t getting moved.) Oh, and in the middle my everyday laptop started acting weird, and so did the refrigerator. Just because.

I’ve lost one non-public page (really annoying, because it was the progress report for a project I did) and a couple of blog comments. And it was very much a giant pain in the ass. As soon as I get all this settled, you can be sure I’ll be working on that db backup automation.

The things I was reminded of are:

First, this success story was brought to you by a pile of random parts. The half-dead retired USB hub. FireWire 400 cables. Old machines I could safely test restore Time Machine backups to. If you run old hardware, you have to have similar vintage accessories available to be able to work on it when things go badly. And they will go badly. Newer machines may or may not boot that old volume, and the old one is very likely to choke on something too recent. (Time Machine has cheerfully put up with the many times I’ve restored across multiple major OS versions and other sorts of nonsense.)

Second, The Internet Archive is amazing and you should give them money. I should too (when next budget opportunity comes around.) I wrote a bunch of long blog posts in the past year, and they saved my bacon.

Third: Do Your Damn Backups!! And not half-assed when-you-get-around-to-it, either. Pretty clicky-clicky methods are fine for desktops, but servers are a different matter. Databases are another thing altogether. I know all of this, had several good general backups, and still was lazy and didn’t do enough database backups. Don’t be me.
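
The automation I keep promising myself doesn’t have to be fancy. Even a nightly cron job is a vast improvement (a sketch, with made-up paths and credentials):

    # Nightly at 3am: dump all databases to a dated, compressed file
    0 3 * * * mysqldump --all-databases -u backup -pSECRET | gzip > /backups/mysql-$(date +\%F).sql.gz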

I attended my first Write the Docs conference last week, which was great fun. I did not fall off the stage, so my presentation success criterion was met. People liked hearing about the exciting Land of Ops, so even better. (I can see you reading the web version! Self-hosting is an awesome way to spy on your friends.)

This was my second trip to Portland and, unfortunately, I still didn’t get to see the city. At least I was there more than 18 hours this time. Oddly, the event was in the exact same building, and this time I discovered the amazing dance floor in the main ballroom — almost makes me want to go to a contra dance again. (My knees have registered their objection to this proposal.)

I scheduled my trip to be able to go on the group hike, but sadly by the time it actually came around I concluded that five miles with 1000 feet of elevation change was not going to happen. (Less sadly, that also means I missed the Saturday afternoon hailstorm.) My presentation was better as a result, so the time was not wasted. I am very grateful to friends who provided feedback, and let me practice in their not-a-hostel hotel room.

Sunday I attended a half-day workshop, Structuring and Writing Documentation, by Heidi Waterhouse. The original plan was for more of a developer audience, but Heidi shifted it a little towards practical techniques for writers, including some history of software packages specifically for technical writing. As a self-taught sometime writer, it was useful to see how folks who do it full time approach the task and what techniques help. We did some exercises (using templates: Doctor Who Mad Libs!) and talked about how to solve a reader’s problem, which these days usually involves a single page at the other end of a Google search. (Annoyed people want fast and complete answers, or they are going to move on.)

One of the more personally relevant presentations was from Sam Faktorovich on how his R&D outpost of a big company built an internal docs team that could handle code-level details of their software. They tried several different approaches, and found the best option was to hire strong writers and train them for the technical information necessary.

Of 120 candidates, only two had both development and technical writing skills (and they hired both of them.) The developers who expressed interest in writing didn’t work out in the end, but the general writers who could handle complex topics were very successful. The team also encouraged more people in their area to become technical writers by partnering with a local university and sponsoring tech writing bootcamps.

As a person who would have been in that set of two unicorns, I’m still thinking about how to apply this information to my own career decisions. There are indeed local companies looking for people with both development and writing skills. I’ve talked to a few, although so far it hasn’t worked out (for other reasons.) More often, I bring up the subject of relevant roles by saying I’m interested in applying my full range of skills and not only writing code.

Some people are open to such discussions, but the LinkedIn messages in my inbox suggest many recruiters already have their minds very fixed about what they want. They see all the engineering positions on my resume, so that is what they want to talk about. Which is fine as far as it goes. But the more I highlight non-code things I’ve also done, the more I get “So, do you code?” reactions. I’m concerned that if I become more widely known as “A Writer” there’s no going back to Engineering. I haven’t sorted that out yet, besides putting more energy towards having as many wide-ranging personal conversations as my introvert self can stand.

There were plenty of other interesting presentations, for many of which I’ll need to watch the videos because I was spaced out working on my slides. One particularly relevant for developers is Even Naming This Talk Is Hard. Ruthie BenDor had many fine things to say about why everyone who makes stuff others have to use/read/interact with should pay more attention to naming the parts involved. The names we come up with that make sense in our context may have an entirely different context (or none at all) for your user. That could be merely frustrating and annoying, or it could be Very Very Bad.

There are a few videos I for sure need to watch, because I had to skip out on the last few hours of the conference to head to the airport early. While I had my phone off preparing for my talk, my spouse was bailing water as representatives from our landlord poked at things. Eventually they found the one that stopped the water pouring into the kitchen, but it’s still a mess. And, yes, someone asked if I had a runbook procedure for that.

This week my internship with Zulip and Outreachy officially ends. I don’t intend to stop working on Zulip; one of the reasons I applied was that it was something I could see myself continuing to contribute to. I’ve learned new technologies, applied well-practiced skills, met new people, and explored things I knew would be a challenge. I’ve used this blog to talk about what I have been working on, but I haven’t said much about why I applied. I wanted to focus on the work at hand.

So now would be a good time for that. But first, a little background.

I’ve known about Outreachy for a while; I briefly considered applying back when it was still the Outreach Program for Women. At that time, I was a maintenance engineer, fixing software bugs for a medium-size tech company. I had been there a while and was thinking about different directions I could go in my career.

I started out on a pretty typical CS path, with a degree and jobs on engineering teams. But things rarely go like you plan, and eventually I landed in support and maintenance. I was writing code, but I wasn’t doing what many of my engineering peers were with automated testing, cloud services, and iterative Agile development. I had looked at some open source projects as a way to try new things, but few seemed approachable. And the timing for OPW was inconvenient.

Instead I joined a tiny, tiny startup. I began comfortably in C code but rapidly picked up anything in the product that needed to get done. Server features, network problems, mobile clients, monitoring, you name it. I wrote JavaScript and fixed Android bugs. I did all kinds of things I knew nothing about. Some of it stuck, and with some of it I still can’t explain what I thought was going on.

But the company didn’t go far, and I found it complicated to talk about work where little of it was visible and the code was proprietary. I have a hard time with portfolio projects because I can’t stay excited about an abstract problem solved in a theoretical vacuum. I’m much more interested in how interconnected parts work together, and that’s not something that shows well in a 30 second demo.

I knew Outreachy was not just for students, although mainly students and recent grads apply for it. It’s the nature of the thing: if you are an established working professional, it’s hard to take off a bunch of time to try something new. If you have other responsibilities besides work, doubly so. But I was able to, and saw it as an opportunity to explore a new area and build a visible record of my work. It’s an excellent professional opportunity, one that I’m fortunate to be able to consider. Even better that it improves open source software in the process.

There was one little word, however. “Intern.”

I’ve been an intern before. My school strongly encouraged all engineering students to do two “co-op” semesters, and I did. (I wrote documentation for a software company.) But as a middle-aged professional, sometimes when I mentioned I was applying for an open source internship program I’d get a funny look and a one-word response: “Why?” Wasn’t I a career software engineer already? I’d explain that it was an opportunity to move into a new area and that I was excited about the possibilities, and then everyone understood. But it was awkward. I was already questioning a culture where “rockstar” new grads land huge compensation packages and experienced engineers struggle through interviews about abstract CS theory. So, yes. Awkward. I had to think about that to be comfortable with it.

The application process was challenging, not only because I was learning a new codebase and new tools, but because I had to prepare a proposal for something I knew almost nothing about. I approached it as I would a professional task: spec and estimate new features appropriately scoped for a 3-month deadline. And how would I know what was reasonable? I had no idea. Yet the experienced people answered my questions and encouraged me to build a solid but flexible plan where the schedule and tasks could be revised later. That was good to know. I was excited to learn I was selected. I was paired with two mentors: Sumana Harihareswara, already an active Zulip contributor, and Tollef Fog Heen, who has experience with services and APIs.

I knew I had signed up to do a lot of engineering work, and was confident I could execute to plan (for some value of “plan” at any rate.) There were new things to learn and a new codebase to become familiar with and all sorts of stuff that you deal with again and again when changing jobs. And this was a job: a full-time commitment over the course of the program. I wasn’t too concerned about that part.

The other things I would learn, I didn’t really know. Not in a “I have no clue” way, but more in that every new environment has things that come up or happen in unexpected ways. One new part in this was the open source component. I’ve worked on plenty of engineering teams; generally there is an overall design, and individual areas are parceled out to developers or small teams to refine and implement. There are many decisions to be made, but most of the big ones are (hopefully) at least sketched out in some kind of architecture plan. Often lead engineers have strong opinions about how and why and where.

My few interactions with other open source projects suggested that outside contributions were a nice thing as long as it wasn’t too taxing for the core team. Clearly this was a different situation and I wouldn’t be left to my own devices, but it took some time to sort out where I was comfortable between working mostly on my own and seeking input beyond basic questions. After all, everyone was busy working on their own tasks, usually between other responsibilities. I was adding new functionality rather than working in an already established area, so I was unlikely to break a core feature. But I wanted it to fit with established standards and match overall goals. This was an area where my mentors were especially helpful: how often to ask busy people for feedback, what sorts of things are generally left to individual developers to handle.

Something I didn’t consider at first, and not until well into the program really, was learning as a specific goal. Of course, learning was a desired outcome: new skills that can be applied to other projects. Yet I’m accustomed to the task being the focus, and any necessary learning adjunct to that. I discounted the value of the effort I was putting into understanding new tools and environments, and sometimes got frustrated about my productivity. Was I hitting milestones fast enough? Sometimes chasing down problems made me question whether I was accomplishing anything meaningful. But then, in conversation with my mentors, I realized that was the point.

The biggest surprise over the course of the program had nothing to do with code. I’ve always been a strong writer, but I am best when I can edit and revise. Sometimes speaking to people face-to-face is challenging, but there is enough room in the back and forth of a live conversation that I can get my point across most of the time. (Stressful situations less so.) Zulip is a group chat system, so I was hardly surprised that I was going to spend a lot of time sending short messages back and forth. At a modest pace, this isn’t a problem.

What I was entirely not prepared for was having status meetings in chat. Attempting to convey complete thoughts about where I was on a task, while at the same time tracking questions asked about multiple things, was extremely difficult. It was like having an important conversation in a loud room, where so much cognitive effort is required to parse the words that there is little space left to compose a response. Chat is such a central part of the project that I kept trying until everyone was clearly frustrated. It took a phone call to sort things out, and then we agreed to have status reports by email. Any needed discussion could still happen in chat, but most of the information had already been provided. That entirely changed the regular meetings from something I struggled to get through to an orderly sharing of information.

There were many other things besides the technical tasks originally in my plan. At the suggestion of my mentors (and to no great surprise) I was encouraged to submit a talk to a conference. It was accepted just a few days ago, so now I can continue on and actually write the full presentation for the event in May. I also added career tasks to my plan, like updating my resume and attending community events.

The visible GitHub activity will certainly be an advantage when looking for my next job. I’m happy to have found a project I enjoy participating in, and now I have several complete features I can show as code samples. I expect there will be more.

I just finished a big task, significantly expanding some documentation. The original page was a summary of several ways to integrate with third-party systems, and an example of one of these methods. I used this document when I first tried to create an integration, and found it didn’t cover a lot of things I needed to know. So I wanted to improve it.

The document is now two pages, the example having been moved to its own page. I added more detail to the example itself, and a new section for additional topics it didn’t cover. The revised page is now online, and I’m excited to see it available for people to use.

Getting there sometimes made me wonder whether I was actually making progress on my goal of a better docs page. There were other tasks to be done while this was happening, so it wasn’t all in one stretch. I had to learn to do the things I wanted to describe, which took lots of questions for more knowledgeable folks and experimentation to confirm my understanding. I made tons of notes, written with the hope they would still make sense later when I needed them. Editing involved re-testing code examples to make sure I accurately described how they worked at an appropriate level of detail. By the end, in addition to the written material, I had expanded one existing integration and written two entirely new ones.

The final result couldn’t have happened without preliminary exploratory work. The necessary information existed mostly in the personal experience of people who had done it before. Only some was documented. And, part way through, we decided my original idea of an entirely new document (with a new example) would be better incorporated into the existing material instead. So some partly completed writing was discarded, and extra work was needed to make the new code itself usable on its own. Wasn’t that inefficient? Does that matter? I see two different ways of looking at it.

One is that “All this is what brought me to this place.” The idea that the exploratory work was not only necessary, but important. Details that weren’t included in the final document shaped those that were, and tangents identified what was not central enough to the topic to merit space. The result wouldn’t be what it is without that work. To reference the joke in my title, this is like Carl Sagan’s apple pie, which could not exist without the entire history of cosmology that preceded it. (“If you wish to make an apple pie from scratch, you must first invent the universe,” from the original Cosmos TV show.)

The other way to think of it is that the preliminary work, or at least some portion of it, was an unfortunate necessity but not part of the actual work. Such yak shaving is often thought best to get through as quickly as possible. (“I want to install an app, but I have to upgrade my operating system first.”)

Sometimes it’s clear which camp something falls into. (I don’t need to write more OS upgrade documentation, for example.) Other times it depends on what the actual goal is. Is the information available, but in a less usable form? Then if the goal were doc updates as fast as possible, experimentation would be less relevant. (Or if less detail were expected.) But if the new working integrations are included, getting that code working would be very relevant. If personal learning is a goal, a wider range of things are fair game. Specific things may need to be time-limited, to keep on schedule.

My work for this doc task is on the pie end of this spectrum. The working code is relevant, not just a side-effect of research. The research was needed in any case, because the information wasn’t otherwise available. Learning is a specific expected outcome. That hasn’t always been the situation in a traditional job, something I have to remind myself here. My project updates now specifically include research and learning; they didn’t at first.

It sometimes felt odd to be dealing with a lot of “distractions,” or at least things that would normally be one in an average work environment. (“If it’s not in the plan, why are you working on it?”) But this is a different sort of thing, closer to a research project in some ways. Not knowing how tangential tasks would be viewed caused some stress. Yak shaving isn’t considered a good thing. But pies are ok. Particularly after I started thinking of them as first class tasks themselves.

I’m deep into preparing submissions for conference talks. This isn’t the first time, but it’s been a while and these talks have a different focus from the ones I’ve done before. I have a fair amount of prospective content together. But getting it into a cohesive abstract that is engaging, addresses the right audience, and, most of all, is short, has been a struggle. I’m grateful for assistance from folks who are much better writers than I am.

You may vaguely remember from a long-ago high school class the four types of writing: expository, descriptive, persuasive, and narrative.

Expository writing? I’m all over that. I’ve got a thing that I know about, and then I write about what that thing is, or what it does, or how to do it. There’s pages and pages (and pages and pages) of me doing this all over the internet. It’s classic technical writing: explaining something.

Those other kinds? I know about them, and sometimes use elements of them. Most people do. Where it varies is in how effective one is about it.

I want to be persuasive when I try to convince my spouse that we really should go back to Japan. Because it was so much fun, and there are so many neat things we didn’t see. And we’re gonna lose our elite airline status if we don’t get on the stick about it. (That’s pretty persuasive in my household.) I can pull out a bunch of supporting evidence, but mostly it’s that evidence that’s doing the persuading.

Descriptive writing is sometimes close to expository writing. You can explain how something looks or feels in a realistic, physical sense. But that’s a very narrow view of it. It includes the grand, sweeping words that paint pictures in your mind as they set the scene. “Rosy-fingered dawn” is a particularly elegant descriptive statement about a metaphorical sunrise. But someone else wrote that, as I’m unlikely to be mistaken for the progenitor of epic poetry.

I’m not sure how much I can describe my writing as narrative. Yes, I can do the “Who, What, Where, When, Why, and How” and that could be thought of as telling a story. But stories presume one has characters playing their parts in a recognizable plot, and that’s usually where I miss it. Narrative is expected to have a coherent structure, even if it isn’t necessarily linear. (For me, longer writing often starts to look like “paragraph salad,” as if someone dumped my index card notes on the floor.)

So back to editing.

My previous talks have been about explaining things. You want to know about log files? I can explain log files for hours. I might occasionally have an opinion to share, but mostly what I’m going to say is “See, this thing here? Here’s where it comes from and what it means.” My talk submissions were two paragraphs of “This is what I am going to explain to you.” There’s some element of “And this is why you should care,” but only enough to not sound like a documentation-generating robot.

The current conference, however, likes stories, opinions, and feelings to go with the facts. Those are basically anti-expository, so I’m having a tough time of it. I managed to write something, but it was closer to an outline for the entire talk than an advertisement saying “Please accept my talk for your conference.” And it was way, way too long.

I tried. I fiddled with this, and reworded that, and managed to cut about 30 percent. Still too long. I was holding on to the idea that I had to describe what I was going to explain, rather than persuade readers that I had something practical, interesting, or enlightening to listen to. It’s not even that I was so attached to my words that I didn’t want to let any of them go. It’s that I was optimizing for the wrong thing. And the right thing is something I’m still working on even recognizing.

I sat down with a few writer friends, people who do this all day, every day and not just while waiting for a compile to finish. They took my paragraphs, put them in a blender, and out came something that even I could see was better. It wasn’t my voice, and I didn’t just wholesale use their results, but it gave me a way to see what my words could have been. I could then take my original text and move it closer to where it needed to be, even if I couldn’t explain why I was not able to get there on my own.

As people who have “Engineer” in their titles go, I’m a fairly decent writer. I’m proud of that. But that doesn’t mean I’m a great writer. I had gotten to a place where I couldn’t see the way out, and needed a professional intervention. I hope next time this will be easier.

Last weekend I attended a workshop about preparing to speak at technical conferences. It was somewhat more than that, but I’ll start there. That is something I’m interested in right now, as I’m working on proposals to submit to an upcoming conference.

It was organized by Write/Speak/Code, a group of people who do several events around women, technology, and open source. (There is also a larger annual conference.) This event, Own Your Expertise, was focused on preparing women to submit talks to conferences and participate in open source communities. This is one of several workshops Write/Speak/Code offers, and thanks to GitHub, tickets were free. There was even a professional photographer, so everybody looks good on conference web pages. (Mine is also for foreign job applications, which I won’t make this post any longer by getting into here.)

So on to the day’s content. Yes, it’s about conference talks. The presenters take a somewhat different path to get there, however. While it does get into mechanics like what “CFP” means, it starts with getting yourself convinced you can actually do this. For me, I’ve presented at conferences before, so it’s not unknown territory. But I’m hardly jumping at opportunities to do so because, surprise, I have a problem figuring out what I can talk about and convincing myself I have something relevant to say.

There are topics where I’m comfortable with my expertise, but in textiles rather than my professional work. The dynamics of textile communities are different for me than work, first and foremost that I don’t depend on textiles to make a living. My visibility and activity in that community have no bearing on whether or not I can pay rent or buy food, and can vary as circumstances change. (This is not the case for some of my friends.) Without that pressure, it’s easier to talk about what I do. I have trouble carrying that over to paid work however.

The first part of the day was group exercises around speaking more comfortably about one’s own expertise (hence the title), different areas each of us can influence and educate, and words we can use to describe what we have to offer. In honor of the occasion (while many of our friends were at Women’s Marches around the country) one of the exercises was “If you could be nominated for a Cabinet post you were patently unqualified for, which one would it be?” I volunteered for Health and Human Services, given my extensive experience in Yelling At Insurance Companies.

As a less gregarious person, sometimes the exercises seemed a bit silly. (And given limited time, rushed.) But everything was focused on putting together people who don’t know each other and getting them talking about areas of their own knowledge and experience. Sprinkled in with more than a bit of “You Go Girl!” chicks-can-do-this cheerleading. (Which sometimes is too much large group socializing for me, but I got through it.)

The second part got into details about how to actually go about this, with breakout sessions focusing on different parts of the process. I was in the one about writing a proposal, since that’s exactly what I need now. We read our preliminary talk proposals to the group (about a paragraph each) and discussed ways to improve them. Everybody exchanged contact info to keep working together on our talks.

As might be expected of something hosted at a Bay Area startup, there was much socializing, food, and, following the programmed events, alcohol. The bartenders also concocted no-alcohol fancy drinks by request, so that was cool. (Yes, GitHub has a full bar in their cafeteria/event space. They are hardly alone in that. And I have Opinions about the role of alcohol in startups. Another time.)

I had a good time, I got some useful ideas in framing my topic, and met a bunch of people. I actually wrote down contact info and followed up with five people. That’s a lot for me. Go me. I hope I can keep in touch with a few (that is often where things fall down, on both ends.) I’ve already heard back from one, and we will probably meet up next week.