While I was writing the previous post, I came across this:

I got hacked mid-air while writing an Apple-FBI story

A journalist, working on a story, was shocked to have a fellow passenger quote back to him emails he had written while using the onboard network. It changed his mind about the “nothing to hide” argument, the one claiming that privacy and encryption aren’t a big deal, so why make such a fuss? (You can likely guess my opinion on that.)

A couple of weeks ago I finally paid for wifi on a flight, mostly to check it out. And the very first thing I did was make sure I could turn on my VPN. Just as on any public network.

Now I’m not always the most diligent about ensuring no unencrypted communications leak out, but I try. Sometimes I forget to shut down apps, and they send and receive data before the VPN finishes coming up. That’s where I need to try harder. Turning off wifi before closing the laptop is also part of it. (I could configure my machine to block anything not using the VPN, but that is annoying when I’m home.)
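(For the curious: on OS X that kind of lockdown can be done with the built-in pf firewall. Here is a minimal sketch, assuming the VPN shows up as a utun0 interface; the interface name, the config file path, and the extra rules you would need so the VPN client can reach its own server and do DNS all depend on your setup.)

# hypothetical /etc/pf.vpnonly.conf
block drop out all      # drop all outbound traffic by default
pass out on lo0 all     # allow loopback
pass out on utun0 all   # allow outbound traffic only over the VPN tunnel

# load the rules and enable pf
sudo pfctl -f /etc/pf.vpnonly.conf -e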

Now what I don’t know is what is visible when I’m connected to the aircraft’s access point but don’t have a real Internet connection. I do that a lot to check the flight status, but without actual Internet there’s no way to enable my VPN. Other applications may be trying to send data anyway.

There’s a smaller group of possible snoopers on an airplane, but aside from that it’s no different from any other public network. That’s an important point to remember.

I did say that this blog would avoid getting into political issues and stick to practical concerns. But the events of the past week with Apple and the FBI are pretty disturbing and I want to talk about why.

First, nothing about the technical matters involved in the conversation (with one exception) is anything that you or I or any other private individual can do anything about. It’s all taking place in the rarefied air of law enforcement versus public policy, among people who believe they know what is good for us and who have the power to change how others are allowed to access our personal data. We can lobby our elected officials and hope somebody can get past the fear mongering enough to listen.

Next, there is one technical thing you can do to protect your personal device: choose a strong passcode. I’m going to assume you already use a passcode, but the default four- or six-digit number isn’t going to stand up to a brute-force attempt to break it. Make it longer. Make it numbers and letters if you can stand to (it’s a real pain to enter, I know.) Do it not because you “have something to hide” but because you will always have something you don’t want shared, and it’s not possible to know in advance what, when, or how that might come about. Make protecting your personal data a normal activity. The longer and more complicated your passcode, the more effort it will take to guess. As long as we continue to use passcodes this will be true, and the goalpost is always moving.
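To put rough numbers on that, here is a quick back-of-the-envelope comparison (assuming the attacker has to try every combination, and that the bc calculator is installed):

echo '10^6' | bc     # a 6-digit numeric passcode: 1,000,000 possibilities
echo '36^10' | bc    # 10 characters from a-z plus 0-9: roughly 3.7 quadrillion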

Now, on with the real subject of this post. Get comfortable, this will take a while.

Folks who have followed this issue know that Apple (along with other companies) has routinely responded to search warrants and other official requests for customer data. From a practical standpoint, they have to. But they have also been redesigning parts of their systems to make fulfilling those requests less feasible. (It’s important to note that recovering data from a locked device is not the same as unlocking it.) Not only is it now more difficult for people outside Apple to access private data on iOS devices, it’s also more difficult for Apple itself to do.

Discussion of the two current court cases, with detail on what is possible to recover from a locked device for various iOS versions
No, Apple Has Not Unlocked 70 iPhones For Law Enforcement

Court order requiring Apple to comply

The reason for this has many parts, and one very important part is of course to make their product more attractive to customers. Apple is in the business of selling equipment; that’s what they do. When it came out that we tinfoil hats hadn’t just been making up the stuff we suspected the NSA was snooping on (and the reality far exceeded our speculations), US companies suddenly had a huge problem: international customers. Foreign organizations, businesses and governments alike, were none too keen to have confirmed in excruciating detail the extent to which the US government was spying on everyone. If US companies want to continue to sell products, they have to be able to convince security-conscious customers that they aren’t just a lapdog for the NSA.

When somebody says “Apple is only doing this because of marketing” consider what that means. People don’t buy your product without “marketing.” Unless you have somehow managed a sweet exclusive deal that can never be taken away, your company depends on marketing for its continued existence. And your marketing and the products it promotes have to appeal to customers. All over the world, more and more people are saying “You know, I don’t much like the idea that the US Government could walk in and see my stuff.”

Strong cryptography is not just for protecting business interests. Livelihoods, and sometimes lives, also depend on the ability to keep private things private. For years people have claimed that products built in China are untrustworthy because the Chinese government can force their makers to provide a way in to protected data. It’s better to buy from trusted companies in enlightened countries where that won’t happen. Who is left on that list?

And what about terrorism? Of course, the things terrorists have done are awful. Nobody is contesting that. But opening everyone up to risk so governments have the ability to sneak up and overhear a potential terror plot doesn’t change how threats are discovered. The intelligence agencies already have more data than they are able to handle; the process is what’s broken, not that they suddenly have nothing to look at. There have been multiple cases where pronouncements of “This would have never happened without encryption” were quickly followed by the discovery that the perpetrators were using basic non-encrypted communications that were not intercepted or correctly analyzed. “Collect more data because we can” is not a rational proposal to improve the intelligence process, even if the abuse of privacy could be constitutionally justified.

There is no such thing as a magic key that only authorized users are permitted to use and that will keep all others out forever. If there’s a way in, someone will find it. Nothing is perfect: a defect will eventually be found, or maybe those authorized users will slip up and leave the door open. Also, state actors are hardly trustworthy when they say these powers will only be used to fight the most egregious terror threats and everybody else will be left alone. Even if they could prevent backdoors from being used without authorization, their own histories belie their claims.

The dangers of “secret” entry that is protected only by a policy of not giving out the key
TSA Doesn’t Care That Its Luggage Locks Have Been Hacked

Intelligence agencies claim encryption is the reason they can’t identify terror plots, when the far larger problem is that mass surveillance generates vast quantities of data they don’t have the ability to use effectively
5 Myths Regarding the Paris Terror Attacks

Officials investigating the San Bernardino attack report the terrorists used encrypted communication, but the Senate briefing said they didn’t
Clueless Press Being Played To Suggest Encryption Played A Role In San Bernardino Attacks

What the expanded “Sneak-and-Peek” secret investigatory powers of the Patriot Act, claimed to be necessary because of terrorism, are actually being used for
Surprise! Controversial Patriot Act power now overwhelmingly used in drug investigations

TSA ordered searches of cars valet parked at airports
TSA Is Making Airport Valets Search Your Trunk

What is being asked of Apple in this case?

Not to unlock the phone, because everyone agrees that’s not technically possible. Not to provide data recoverable from the locked device by Apple in their own labs, which they could do for previous iOS versions but not now. What the court order actually says is that they must create a special version of the operating system that prevents data from being wiped after 10 incorrect passcodes and provides a way to rapidly try new passcodes in an automated fashion, and that they must be able to install this software on the target device (which will only accept OS updates via the Apple-authorized standard mechanism.)

What would happen if Apple did this?

The government says Apple would be shown to be a fine, upstanding corporate citizen, this one single solitary special case would be “solved,” and we would all go on with our lives content to know that justice was served. Apple can even delete the software they created when they are done. The FBI claims the contents of this employer-owned phone are required to know whether the terrorists were communicating with other terrorists in coordinated actions. No other evidence has suggested this happened, so it must be hidden on that particular phone (and not, for example, on the non-work phones that were destroyed or in any of the data on Apple’s servers that they did provide.)

How the law enforcement community is reacting to the prospect of the FBI winning this case
FBI Says Apple Court Order Is Narrow, But Other Law Enforcers Hungry to Exploit It

Apple would, first and foremost, be compelled to spend considerable effort on creating a tool to be used by the government. Not just “we’ll hack at it and see what we find” but a testable piece of software that can stand up to being verified at a level sufficient to survive court challenges to its accuracy and reliability. Because if the FBI did find evidence they wanted to use to accuse someone else, that party’s legal team would absolutely question how it was acquired. If the tool can’t withstand that scrutiny, all this effort is wasted.

A discussion of the many, many requirements for building and maintaining a tool suitable for use as a source of evidence in criminal proceedings
Apple, FBI, and the Burden of Forensic Methodology

Next, the probability of this software escaping the confines of Apple’s labs is high. The third-party testing necessary to make the results admissible in court, at absolute minimum, gives anyone in physical possession of a test device the access needed to reverse-engineer its contents. And since the FBI has the target device, it can hand that to its own security researchers to evaluate. Many people will need access to the software during the course of the investigation.

Finally, everyone in the world would know that Apple, who said they had no way to do this thing, now does. And now that it does, more people will want it. Other governments would love to have this capability and Apple, as a global company, will be pressured to give in. What would that pressure be? In the US, it’s standard for the government to threaten crushing fines or imprisonment of corporate officers for defying the courts. Governments can forbid Apple to sell products in their countries or assess punitive import taxes. Any of these can destroy a company.

Non-technical people often dismiss security folks as eggheads crying wolf over petty concerns when there are so many more important things to discuss. That’s fine, and our role as professionals includes the responsibility to educate, and to explain how what we do affects others.

I encourage you to consider this issue for yourself and what it would mean if you were the person at the other end of that search warrant. Ubiquitous communication and data collection have fundamentally changed how much of the world lives and works, and there are plenty of governments far less principled than our own who want access to private data of mobile phone users. We should not be encouraging them by saying it’s just fine, no problem, go ahead, just hand over the (cryptographic) keys and everything will be ok.

A while back, I stopped paying attention to anything at forbes.com. It wasn’t deliberate (a friend of mine blogs there); it’s just that without JavaScript the site serves up a big, blank nothing. I tried a few times to selectively allow scripts via the Firefox extension NoScript, but no combination of what I considered reasonable permissions would work. I gave up.

Then a security researcher, casually web browsing with (for a security researcher) a normal setup that includes an ad blocker, found malicious software (malware) coming from an advertisement on the Forbes website.

When easy-to-use tools to block web ads became available, some bemoaned the end of the Free (Internet) World because sites would no longer be able to rely on ads for revenue. Of course users, subjected to ever more annoying advertisements, disagreed.

But whether or not you believe blocking ads is a communist plot to destroy the Internet, there is another problem that this Forbes experience neatly points out: security.

The trouble is that those ads now usually include dynamic content: code sent to your browser that opens windows or moves them around, makes stuff dance on your screen, and generally creates a nuisance. But since you can’t know exactly what is sent, there could be other things too. Popular at the moment is installing what’s called “ransomware”, software that encrypts files on your computer until you pay up.

Here’s a report of the Angler Exploit Kit, the one found in a previous Forbes malware discovery, being used for just that.

I don’t use a specific ad blocker because I’m already blocking dynamic content with NoScript. It’s basically the nuclear option, and isn’t for everyone. I still get ads, but without the singing and dancing (or malware.) If you want to try an actual ad blocker, here are some resources to look at:

The New York Times tests ad blockers for iOS 9
A survey of ad blocking browser plug-ins
Adblock Plus, a very popular plug-in for Firefox

One of the things I do to protect myself is vigorously restrict disclosure of my physical address. I use a mailbox service and provide only that address unless I am compelled to do otherwise. For example, to register to vote I was required to give my actual residence so I would receive the correct ballot (which arrives at my mailing address.)

Then this happens:

Report: 191M voter records exposed online

Some organization that holds copies of US voter records, through a monumental database screw-up, has allowed public access on the Internet to all of the data. Nobody knows exactly how, or by whom, or even for how long, because the most likely actors are falling over themselves to disclaim any association with the breach.

The California Secretary of State reports that there were 17.7 million registered California voters in 2015. The author of the above article quotes a security researcher who verified access to “over 17 million California voters.” I will leave as an exercise for the reader the percent chance of my information having been exposed.

The problem with secret information is that once it’s released there’s no way to pull it back. Access to voter information varies by state, but many states restrict who can access it and for what purposes. California is particularly strict in that it can only be used for campaign or government purposes. Without question, this disclosure violates the law. There will be investigations, and charges, and lawyers will wrangle over this for years to come. Maybe, eventually, some person or organization will be held to account.

But for some people, none of that will matter. It’s not just an academic discussion when I have friends and colleagues who regularly receive death threats and other abuse of the most vile nature. Even those who have similarly assiduously protected their physical addresses now have to face the possibility that the only way to protect themselves from their harassers is to move.

For those friends and colleagues, I can at least report that the State of California has a program that provides a free Post Office Box to qualifying abuse victims, which can legally be used to register to vote and to access other government services. So if it comes to that horrible decision, perhaps you can get some help to protect yourself afterward.

For me, and everybody else, we are on our own. If you live in California and want to express an opinion in this matter, here are some suggestions:

Governor Edmund G. Brown Jr.
Secretary of State Alex Padilla
Senator Barbara Boxer
Senator Dianne Feinstein
Find Your California Representative

For other states:
Find Your Senators and Representatives – OpenCongress

This, friends, is the future.

You may recall my previous post about Apple’s two-step verification and how I reluctantly disabled it for a long trip outside the US. Now I find out that the government of Australia came to the same conclusion. Only one of us seems to be troubled by it, however.

Australian government tells citizens to turn off two-factor authentication
When going abroad, turn off additional security. What could possibly go wrong?

I’m not going to get into any conspiracy theories about why the Australian government might wish to discourage the use of better authentication methods. If they wanted to get into someone’s government services account, I presume they have other ways to do it than hoping they can guess that person’s lousy password.

But putting out the suggestion that two-factor auth is something maybe not so important? There’s the real offense. “Go ahead and enjoy your holiday, don’t bother your pretty little head about that complicated security thing.”

Yes, the problems of handling two-factor auth when swapping SIMs are a concern. A concern for the people who design these systems, which are complex and cumbersome to use and built as if real people conveniently stay put all the time. But how about we talk about that instead of discouraging people from using them?

I wanted a dedicated server to experiment with Swift development on Linux, so I set it up on an Intel NUC (“Next Unit of Computing”) embedded box similar to a Mac Mini. It’s a DE3815TYKHE kit I got from a Tizen developer event a while back. It comes with an Atom E3815 CPU and 2 GB of RAM. I’m not using the onboard 4 GB of flash storage but installed a 256 GB SSD.

Taking advice I found from other users, I updated the BIOS to something known to work as a headless server (without monitor and keyboard) and installed Ubuntu Server 14.04.3 LTS. I could have used the latest 15.10 version, but since Ubuntu has designated 14.04 as a Long Term Support release it’s safe to use for several more years without concern I will be forced to upgrade.

After getting the box set up, the next question was where to install the Swift dev tools. All the comments I’ve seen seem to expect that you will put them in your own home directory, which matches the fact that the file permissions for the contents of the tar package only allow access by the owner. That’s fine if you are doing this on a VM that only you will be using, but I wanted the option of sharing this with another developer on my server. The only reasonable way to do that is to put it in a system location and make it owned by root.

The topic of where to actually install a package on a Unix-type server is a religious discussion on the order of which editor to use, so I’ll just say that I put it in /usr/local. (I changed the versioned package directory name to “swift” for convenience.)
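Concretely, that amounted to something like the following; the snapshot name is a placeholder, since it changes with every release:

cd /usr/local
sudo tar xzf ~/swift-<snapshot>-ubuntu14.04.tar.gz    # unpack as root, so root owns the files
sudo mv swift-<snapshot>-ubuntu14.04 swift            # rename the versioned directory to plain swift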

The install directions on the Swift download page are good and easy to follow if you are already comfortable with average command-line system administration tasks. I also installed clang 3.6 as suggested on the GitHub page for anyone on Ubuntu 14.04 LTS. And don’t forget to add the install path to your user’s PATH as described.
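With my layout, adding the toolchain to your PATH looks roughly like this (adjust for your own shell and login files):

echo 'export PATH=/usr/local/swift/usr/bin:"$PATH"' >> ~/.profile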

The directions don’t talk about the install path much. I discovered I had a problem when I got permission errors trying to compile a trivial “Hello, World” example. root could compile, but not anybody else. The solution was to modify all the file permissions so other users can read and execute the needed files. Since I untarred into my install location as root, root already owned all the files so the owner permissions were fine. I didn’t want to universally change everything when adding group and other permissions (plain text files don’t need to be executable, after all) so I did that by hand.

First give group and other users read permissions. Even text files need this, so it’s safe to do it with one recursive command from the top level of my install directory.

chmod -R og+r *

Now locate all the directories and add execute permissions so regular users can traverse the filesystem.

find . -type d -exec chmod og+x {} \;

Finally, identify the remaining files that should be executable by searching for the original owner permissions in a detailed directory listing of everything.

ls -lR | grep rwx

These are the ones I found that only had “rwx” in positions 2-4 indicating permissions for the file owner:

in swift/usr/bin:

-rwxr--r-- 1 root root 56959 Dec 18 23:36 lldb-3.8.0
-rwxr--r-- 1 root root 86318 Dec 18 23:36 lldb-argdumper
-rwxr--r-- 1 root root 927980 Dec 18 23:36 lldb-mi-3.8.0
-rwxr--r-- 1 root root 63672187 Dec 18 23:36 lldb-server-3.8.0
-rwxr--r-- 1 root root 9177 Dec 18 23:35 repl_swift
-rwxr--r-- 1 root root 73808411 Dec 18 23:32 swift
-rwxr--r-- 1 root root 1754089 Dec 18 23:39 swift-build
-rwxr--r-- 1 root root 7683691 Dec 18 23:36 swift-build-tool
-rwxr--r-- 1 root root 856388 Dec 18 23:31 swift-demangle

in swift/usr/lib/swift/linux:

-rwxr--r-- 1 root root 7287250 Dec 18 23:39 libFoundation.so
-rwxr--r-- 1 root root 5037507 Dec 18 23:33 libswiftCore.so
-rwxr--r-- 1 root root 15373 Dec 18 23:33 libswiftGlibc.so
-rwxr--r-- 1 root root 172853 Dec 18 23:39 libXCTest.so

in swift/usr/lib/swift/pm:

-rwxr--r-- 1 root root 284768 Dec 18 23:39 libPackageDescription.so

Add execute permissions to these files individually with chmod og+x.
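If doing that one file at a time gets tedious, a single find invocation should do the same job. This is just a sketch, worth checking against the listing above, but the idea is to match regular files that are executable by the owner and by nobody else:

find . -type f -perm -u+x ! -perm /go+x -exec chmod og+x {} \;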

After all this, I was able to compile from a regular user’s home directory.
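For reference, the quick test looked something like this (assuming the snapshot’s swiftc is on your PATH after the earlier step; any trivial program will do):

echo 'print("Hello, World")' > hello.swift
swiftc hello.swift -o hello
./hello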

Recently some folks with Tor, the open source project behind the global decentralized anonymizing network, released a beta version of a new chat client. It’s designed to be secure by default, but usable by normal people. That combination has escaped many previous efforts, so it’s a welcome development.

It encrypts messages with OTR (so only you and the person you are chatting with can see them) and sends them via the Tor network (to hide where you are on the Internet.) These are very, very good things and I’m happy to see user-friendly applications building on the excellent work Tor has been doing.

The difficulty for me is how it fits into the way I use chat, specifically that it’s impossible to save chat transcripts. While that has a benefit for the purest of high-impact security (what doesn’t exist can’t be compromised), it is exactly the opposite of how I use chat.

It seems that many people use instant messaging only for one-off communications. I treat it like email and constantly go back to reference something I’ve sent or information I received. This is a major reason I’m still using Apple’s Messages client, because it makes searching chats trivially easy.

But despite Messages allowing you to use a whole collection of different chat services, it doesn’t provide encryption for anything other than Apple’s own service. (Which I don’t use, for reasons too long to go into right now.) I’ve tried other clients, but haven’t been thrilled. Even without getting into if or how they use encryption, I’ve found them clunky. And, most importantly, they make it hard to reference old messages. The best of them, Adium, has a custom viewer usable only from inside the app, and the archived chats use a tiny fixed-size font that can’t be changed. That makes it useless for me.

Between encryption by default and using the Tor network, I really really want to like Tor Messenger. I dug around and with some help from the Tor folks figured out how to re-enable chat logs, but the results were not usable for several reasons:

First, it creates files in JSON format, something designed to be easily readable by computers. While it’s true that JSON contains text, it isn’t in a human-readable format by any rational definition because it contains a bunch of required formatting and other control structures that get in the way of human understanding.

Next, that file is overwritten every time the program starts. Unless you have your own way to save the contents automatically (and this is a far more difficult problem than it sounds) you lose your history anyway.

Finally, it’s located deep inside the app’s install directory. This is not a problem for me, but would certainly be an issue for anyone not very familiar with technical aspects of OS X. And that also means it’s excluded from Spotlight, Apple’s disk searching tool.

I still have hope, because it’s early and also because it’s open source. When they are able to release the Mac build instructions, I can just go change whatever annoys me myself. (And if I’m going to choose an open source project to work on, I’m thinking I might prefer the more security-focused Tor over Adium. Sorry, Adium friends.)

But for the moment, unless I’m willing to forge onward into the wilderness of creating my own custom version of something, I’m still stuck choosing between secure but annoying, and insecure but fitting the way I work.

I wish I could make a joke and say this is some new country music dance I’ve invented. But authorization problems are not very funny, particularly when they involve something that is supposed to be helping me.

I’m going out of the country for a while, so in addition to the usual figuring out how to fit 10 pounds of travel gear in a 5 pound suitcase, I’m preparing my digital equipment as well. It started off simply enough, making sure I have the latest operating systems on all my devices. (Well, not really, but I’ll spare you the tedious Genius Bar conversations.)

The real problem is with my Apple ID and Apple’s two-step verification.

I have been using two-step verification, what the security world calls two-factor authentication, which means when I do certain things involving logging in with my Apple ID, I have to enter a code that is sent to my phone. That’s all well and good, to make sure the person logging in is actually me.

But what happens when you don’t have that phone? Or, relevant to my situation, when you’ve replaced your usual SIM with one you’ve bought in another country. Suddenly you can’t get those messages anymore, and you aren’t allowed to do whatever it was you were trying to do.

In theory, I could just register my other SIM as a “new device.” But to do that you need access to both devices at the same time: the old one to log in to your account and make changes, and the new one to authorize it. But I don’t know what my phone number will be when I get there (my SIM from the last trip might have expired) so I can’t do it before I leave. And my home SIM may or may not work (or may be hideously expensive to use) in my destination country. And in either case, since it’s only one physical phone, I can’t have both SIMs active at the same time. I have other devices, but this process requires one that can receive SMS, and the wifi-only devices can’t.

Because of all this, I decided to disable two-step verification while I’m away.

Hugely Important Reminder: you should make any updates to your Apple ID before you leave, while you still have access to your regular phone number.

So I log in and disable two-step verification. Now that I’m not using it, I’m required to set security questions for my account. Security questions are horrible, and the way they are used makes your account less secure, not more. (Here’s an article about that: Study: password resetting ‘security questions’ easily guessed.) But this is what Apple requires, so here I am making up yet more passwords that I have to remember.

I pick the set of questions I’m going to answer, open up my trusty password manager, and generate a bunch of random text strings like I do for passwords. I copy the first string and paste it into the appropriate field. I copy the second string, but when I switch back to the browser, it resets what I just put in for the first one. This means that I have to actually type each security question answer. That is a recipe for fail if I manage to mess up the complicated string I’ve just generated. So it’s back to using real words that I can (usually) get correct the first time.

If you want to know why I don’t like to do this, you can read this on Wikipedia about a particular type of password cracking: Dictionary attack.

I then compose a phrase for each question and save those in my password manager. I go to type one into the form and I can’t, because it’s too long. The answer field is size limited, so my carefully crafted phrases are useless. I have to come up with shorter (less secure) phrases that I can type without errors, and they must be unique. I make one, and then start sticking numbers in it for each question. Of course, I can’t do anything helpful like include information about the individual question, because that reduces the randomness. In a short string particularly, if any part of it is less random, that severely reduces its strength as a password.

Now I decide to set a recovery email, which Apple will use to notify you of authentication matters regarding your Apple ID. It’s a good idea, because if somehow you lose access to your primary email you can check a different account and get the alerts. I make up a new email address (because I can do that) and save everything.

I’m not really done, because I haven’t responded to the recovery email verification message, but I’ll get to that in a minute. Now I get to repeat the process for my second Apple ID. (FYI: very not recommended, it has been nothing but problems and I wish I hadn’t been forced to.)

I get through everything for my second Apple ID, and go to set the recovery email. I use the same email address that I created earlier, which it happily accepts. Now I go look at the emails generated by this process: most are confirmations, but the ones about the recovery email need to have the address verified. Ok, fine.

First ID recovery email: I go to the link in the message, type in Apple ID number one, and it says that email address can’t be used. What?? So now they tell me that I can’t use the same recovery email for multiple accounts. And because of this, I can’t verify it. I have to go back and log in as each ID (answering the security questions which, thankfully, can indeed be pasted from the password manager) and change the recovery email addresses to something else. Yet another thing that has to be saved in my password manager, in such a way that I don’t later confuse them between the two accounts.

NOW finally I’ve disabled two-step verification. I have six new unique passphrases and two new email addresses to keep track of, and my accounts are less secure than before. Win?

I’m packing for a trip and came across this article about RFID blocking wallets and such:

The Skimming Scam: RFID-blocking wallets can work. But do you really need one?

They block RF signals from reaching passports, credit cards, and other contactless data sources that can, in theory, be accessed remotely by anybody nearby with the appropriate reader. I have a bunch of shielded stuff, and I use it. Why bother?

“What’s less clear is whether RFID skimming is a threat worth worrying about in the first place. For all the hype about the theoretical danger, there have been few if any reports of actual crimes involving RFID skimming. The technique appears to be far more popular among security researchers than it is among thieves, and for good reason: There are much easier and more effective ways to steal people’s money and data.”

I don’t think they are completely a waste for average people, but it’s certainly a marketing thing for the manufacturers. I do it because I’d rather share less data than more and, more importantly, because I hang out in places with security researchers.

Now, I do buy bags with security features, many of which come with RFID-blocking pockets. I like that they do, but it’s the other features, the locks, clips, security straps and so on, that are the reason I’m willing to pay more for them. (I look for them on sale or discontinued.) Those kinds of physical security features are my primary interest, and are absolutely worth it for me.

And I thought a single highly disturbing security story was enough for one day. I’m not even all the way through reading the article from The Intercept about how GCHQ and the NSA have the keys to decrypt a huge swath of the world’s mobile phone communications, and I already have the urge to throw away all my computers and hide under a rock.

The Great SIM Heist: How Spies Stole the Keys to the Encryption Castle

Normally I’m not prone to hyperbolic statements like “There is nowhere to hide” but for people who use any communication technology it’s more and more true. You are being monitored and archived. Maybe you are boring and uninteresting to government spooks. At the moment. Maybe forever. But how does it make you feel knowing that could, by deliberate action or entirely by accident, change at any time? It certainly doesn’t make me happy.