…whoa, heavy stuff, dude… What are you smoking?
Safety is one thing, security is another. Does that seem logical to you? No? Welcome to the club. I don’t explain the little things, not because I can’t but because I shouldn’t – what seems straightforward to me now might not seem that way to me in the future. Defining something now, at this moment in time, means I’m more likely to stick to that definition and miss the inherent change of things. Everything changes. Everything. So why are we so keen on sticking labels and definitions onto everything? The psychological reason is safety – we fear the unknown, and we do everything we can to make it less scary, less hidden. The logical reason is efficiency – describing things in detail takes a long time, and it’s easier and faster to just name something and reference that name.
Programmers have tools to perform the same task – it’s far faster to reuse code than to reinvent the wheel every time one is needed, so they turn to libraries. Libraries are pieces of code that can be referenced across various programs, pieces of code that don’t have to be rewritten every time they’re needed; by sticking them into a file somewhere and making them accessible to other software, you cut down the development time of new programs. It’s safer to write one function or procedure correctly once than to continuously rewrite a piece of code that does the same thing. Not to mention that the more often you rewrite code to perform the same task, the more likely you are to make it crash – it’s the programmer’s version of Murphy’s Law: “if something can go wrong, it will”. Safety means making failure less likely, by reducing the risk of unwanted crap happening. You take one set of actions, you stick a name on it, and whenever you want to describe it you just reference the name. That’s actually how definitions work. You know exactly how to describe a pig, for instance, but doing it every time you have to is slower than just saying “pig”.
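To make the "name it once, reference it forever" idea concrete, here's a minimal sketch in Python – the function name and the leap-year rule are just an illustrative example, not anything from a specific library:

```python
# "Write once, reference everywhere": instead of re-deriving the logic
# each time, we name it once -- exactly how a library function works.

def is_leap_year(year: int) -> bool:
    """Gregorian leap-year rule, defined once, referenced by name forever."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Every caller now references the name instead of rewriting the rule
# (and risking a fresh bug each time).
print(is_leap_year(2000))  # True  (divisible by 400)
print(is_leap_year(1900))  # False (divisible by 100 but not 400)
```

That's the whole trick: the definition lives in one place, and everyone else just says the name.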
Now this is all rather nice if we’re talking about things that change very, very slowly, but doing it to the ever-changing world of technology means we’re doing it wrong. But we are doing it, actually. We’re trying to describe the indescribable, to define something that can’t be defined – because by the time we’ve agreed on the definition or the description, it’s already changed forms. Communication is like that. Why do we communicate? We do it to transmit messages. Do we have to? Yes. No man is an island, no man should be cut off from society. We are social animals, we are herd animals, with all the benefits and vulnerabilities that particular choice gives us. Society has changed so much that I’m having trouble reading TechCrunch, for instance – I still don’t get what all the hubbub is with startups, angel investors and founders. I mean, it’s just words, but the definitions somehow make no sense. They need context, and since I’ve been cut off from that world for more than a decade, it’s not exactly transparent or intuitive. I’m not referencing the same libraries. And what do you know, it turns out it’s a global thing. People don’t understand what’s right in front of them.
This text I’m writing now is about something else, actually. It’s an attempt to disprove the “we need a master key for encryption” view of governments, all in the name of finding terrorists before they blow something up. Right. There’s something to be said about that. You see, 1 to 10% of people have no issue with that. There’s another 1 to 10% who would be trying to do something bad – and I’m not talking only about terrorists here, I’m including the thieves monitoring your social media for clues about your location (since it’s easier to rob somebody who’s not home) and similar bad guys. The rest, the vast majority, are undecided – and they are the problem. Encryption won’t do a thing to stop the true bad guys, but it will stop the vast majority of the undecided from acting upon their temptation. Encryption won’t stop real hackers or spies from reading your emails or your online secrets, but it will stop abusive or jealous husbands and stalkers. It will prevent stupid kids from doing stupid things just because they can. Nothing more, nothing less.
Now let’s say we’ve done it: added a government-only encryption backdoor to email, social media, internet traffic and so on. It’s public, out in the open, common knowledge. Do you really think the bad guys would use something that can be traced back to them? I mean, if they were that dumb, they’d already be in prison, right? Or dead – I seem to remember a drone strike on some idiots taking selfies inside their hush-hush terrorist compound. Idiots don’t get old; they die, fast. Old terrorists are smart. So our encryption is cleartext to the NSA – what now? Nothing. Threats adapt, change, adopt new ways to achieve their objectives. You really think the bad guys will use Twitter for critical messages? Or RSA, Diffie-Hellman, polymorphic encryption? Right. The best encryption ever is still not broken – and it can’t be broken, because it uses one-time pads. I mean, you wouldn’t issue a kamikaze the code book for the entire operation, right? You’d give him a few sheets of paper with one key each, each sheet different for each idiot, then use those. There’s no way to break that, not in time to stop attacks. Technology is only used to spy on allies or friends, for economic espionage and/or for blackmailing diplomats or politicians, because they’re the ones using it. Encryption is not the issue when it comes to anything else, because the weakest link has never been technology. Ever.
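A one-time pad really is that simple, and that absolute. Here's a minimal sketch (the function names are mine, just for illustration): encryption is XOR with a pad that is truly random, at least as long as the message, and used exactly once – under those three conditions the ciphertext carries no information about the plaintext, so no amount of computing power breaks it.

```python
import secrets  # cryptographically strong randomness from the OS

def otp_xor(message: bytes, pad: bytes) -> bytes:
    """One-time pad: XOR each message byte with the matching pad byte.

    Security rests entirely on the pad being truly random, at least as
    long as the message, and NEVER reused. Violate any of those and the
    scheme collapses.
    """
    assert len(pad) >= len(message), "pad must cover the whole message"
    return bytes(m ^ k for m, k in zip(message, pad))

# XOR is its own inverse, so decryption is the same operation
# with the same pad -- one sheet of the kamikaze's paper, so to speak.
pad = secrets.token_bytes(32)
ciphertext = otp_xor(b"attack at dawn", pad)
plaintext = otp_xor(ciphertext, pad)
print(plaintext)  # b'attack at dawn'
```

Note there is no algorithm to attack here: every possible plaintext of the same length corresponds to *some* pad, and they're all equally likely. That's why key distribution (the sheets of paper) is the only weak point.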
It’s the people using it. Like having a 23-character password for a file encrypted with DES, the algorithm with 64-bit blocks. That last sentence probably means nothing to you, but let me just say this: that encryption algorithm derives its key from only the first 8 characters of the password. 8 characters × 8 bits = 64 bits – and 8 of those are parity bits, leaving just 56 effective bits. So there goes your bloody effort for an uncrackable password. If you didn’t know that, you’d fool yourself into thinking you’re safe. You ain’t. People use the biggest, safest, most secure tools on the market on computers without active firewalls and antispyware tools. People reuse usernames and passwords. That last one can easily do the biggest damage of all. For instance, creating a website is easy and fast. So is creating a membership or login system for that site. If you know what to use, what your target’s interests are, you can create a website to target that particular individual and lure him or her into registering. Hell, use two or more websites; see what usernames your target picks and how he or she creates passwords. Chances are they’ll reuse the same password, even if it’s 100 characters long. Then you use that knowledge to steal their identity or hack their computers. Encryption? Never heard of her. Quantum chips and brute-force attacks? Somebody in the NSA must be rolling on the floor laughing right now. Target the people, not the technology – it’s the most cost-effective way to do it.
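The DES password trap is easy to demonstrate without a real DES library. This is a schematic sketch of the password-to-key step only (not actual DES, and the function name is mine): a 64-bit key is 8 bytes, so a naive implementation simply truncates everything past character 8 – meaning two wildly different passwords that share their first 8 characters produce the very same key.

```python
# Schematic sketch of a naive DES-style password-to-key step.
# NOT a real DES implementation -- it only shows the truncation trap:
# a DES key is 64 bits = 8 bytes, so everything past character 8
# of the password is silently thrown away.

def des_style_key(password: str) -> bytes:
    """Keep only the first 8 bytes of the password, zero-padded."""
    return password.encode()[:8].ljust(8, b"\0")

long_pw = "correct horse battery staple!!!"  # 31 characters of effort...
short_pw = "correct "                        # ...but only these 8 matter
print(des_style_key(long_pw) == des_style_key(short_pw))  # True
```

So the attacker never has to guess your 23-character masterpiece – only its first 8 characters. The security you *think* you have and the security you actually have are two different things.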
When reading such statements, be they from the government or the media, always ask yourself this: what message are they really sending? What are the assumptions here? Asking for a front door to encryption assumes bad guys use standard encryption – or, hell, that they use encryption at all. Do they? Even more terrifying is the other part: the assumption that the government will use that front door only to track down bad guys. What’s their definition of bad guys? I wonder… Why not assume bad guys exist in governments too? No, not to oppress the people, as most conspiracy theorists would like you to believe, but for personal gain. Think Snowden – then assume he didn’t do it to expose unlawful practices, but to sell the information he had access to, since the most-used services belong to Facebook, Yahoo, Google, Microsoft, and so on. Knowing their opponents would try to avoid censorship by using foreign resources for email, like Gmail, how much would China pay for the emails of activists? Would Russia pay? You betcha.
Safety is about minimizing risk, security is about preventing threats. Well, that’s what I make of it, anyway. SIGINT is overrated. HUMINT is the future. Combine those two and you have 60% of a good prevention strategy. The rest? Now, that’s not something for the weak of heart – it’s behavioral analysis. You know why Israel’s Ben Gurion Airport hasn’t had much terrorism in the past decade? Because they’re smart. Because they use their brains. Because they know people are the weak link – even if, and especially because, we’re talking about terrorists. Unless you’re a spy, and even then that’s saying something, there’s no training available for blending in, for looking innocent when you know you’re a bad guy. That’s even more accurate if we’re talking about kamikaze, about people sent to blow themselves, and a big chunk of what’s around them, up. Think of them as disposable tools: it’s not cost-effective to train them for that, and such training would have to have a “practice” component, which means parading them in and out of secure areas, which means a big chance of detection. So… by identifying guilty behavior you might actually catch more of them than by intercepting their emails, assuming they use such things at all.
I might be preaching to the choir, truth be told. Old-timers might understand this, but people raised on the technology we now have can’t possibly imagine a world without it. I mean, pen and paper? Bullshit, right? But that’s what bad guys use. And if they aren’t, they’re paying experts well to get the right stuff – otherwise, how could North Korea hack Sony? Think about it: a country with only four Class C IP address ranges hacked Sony. Holy hell, what’s next? But they did have the people to do it. Now, with so many idiots running off to join terrorist organizations, not to mention the people we trained ourselves, I don’t like the odds of those organizations getting their own experts in hacking and crypto. But I’ll bet not one of them is trained to resist a good ol’ fashioned spy infiltration, even if it just means leaving a “prepared” USB stick somewhere it can be found, or using Russian “sparrows” to get them to switch sides. You can’t remotely hack an air-gapped network (actually, you can, but… it ain’t easy), but you can do it by planting special viruses on nice-looking USB sticks and leaving them where the users of those networks can find them. Because… how can you use one of those if you don’t plug it in?
Yeah, scary stuff. Only it’s real. The actual bad guys, the ones who actually pull the trigger on something that hurts or kills, don’t use much technology. Their PR guys do, but targeting those guys doesn’t hurt the larger organization much. The reverse, though, is rather viable, since most of the civilized world is so plugged in yet lacks the knowledge, or the desire to learn, to understand what it’s using. Low tech can hurt us instead. Hell, it’s quite real: killing North Korea’s internet won’t hurt them much, but they managed to hurt a big company, and indirectly its employees and those depending on them.
Other than that, I’d say we’re royally screwed. No, not by this controversy but by our inability to adapt, by our inability to cover our largest weakness – the human factor. Sun Tzu said it best:
“Should the enemy strengthen his van, he will weaken his rear; should he strengthen his rear, he will weaken his van; should he strengthen his left, he will weaken his right; should he strengthen his right, he will weaken his left. If he sends reinforcements everywhere, he will everywhere be weak.”