Security, Snowden, and Schneier
I’m a really big fan of security analyst/guru/cryptographer Bruce Schneier. I’ve been reading his blog for years, and in November I actually got a chance to meet him at a talk he gave for a very small room of us on the NSA and just about anything else anyone wanted to discuss. Schneier is one of the people Edward Snowden allowed access to his documents, which obviously gives him a particularly interesting point of view on the subject. His basic take was best summarized in three statements: (1) this isn’t overly surprising and won’t be going away anytime soon; (2) the very best thing to come out of all this is that the private companies involved have been exposed, and some, like Cisco, have seen their business fundamentally hurt; and (3), everything else aside, the one thing to know about what the NSA was/is doing is that it doesn’t work. The last is obviously the most damning (and Schneier is definitely not the only one saying it). This method of collecting everything in the hope of finding something just doesn’t work as well as good, old-fashioned detective work.
Interestingly, I was talking about the Snowden/NSA story with a friend from DC, who mentioned that it hadn’t gotten much coverage there (compared to the government shutdown or Healthcare.gov) because it’s perceived as an issue people don’t really have a problem with. Basically, we’ve seen over and over again that we’re willing to throw away liberties for our “freedom” and to fight “terrorism.” Not much to say on this one, just an interesting take.
Finally, the real point of this post: two interesting quotes from an interview Schneier did with Motherboard. The first is about our general perception of what’s secure and what’s not:
Probably the biggest problem with the public’s perception of security is that things are secure as a default. We see this a lot in the voting industry. The voting machine companies will come up with an internet voting machine or electronic voting machine and the onus will be on the security company to prove that it’s broken. It’ll be assumed secure, and that’s just nonsense. When you see a new system, you have to assume it’s insecure, unless you can prove it’s secure. The public perception is reversed. “I have a door lock, it’s secure unless you show me you can break it.” That’s not right—it’s insecure unless you can show me that it is secure.
The second is on the sort of security threats Schneier finds most threatening:
I’m most worried about potential security vulnerabilities in the powerful institutions we’re trusting with our data, with our security. I’m worried about companies like Google and Microsoft and Facebook. I’m worried about governments, the US and other governments. I’m worried about how they are using our data, how they’re storing our data, and what happens to it. I’m less worried about the criminals. I think we’ve kinda got cyber-crime under control, it’s not zero but it never will be. I’m much more worried about the powerful abusing us than the un-powerful abusing us.