Shields Up: Developing Security Skepticism
How to become the right kind of cautious, when it comes to security news
The CIA can hack televisions and phones all over the world! WhatsApp is backdoored! Everything’s on fire!
A little fear can motivate us to take action. But as consumers of security news, even the most well-intentioned reporting can scare us into paralysis—or worse, encourage us to adopt behaviors that promote a false sense of security. Giving readers information that is both meaningful and helpful requires a delicate touch. As journalists and readers, we should therefore become careful interpreters of security news.
To give ourselves a realistic idea about strong practices, we need to cultivate our sense of skepticism when reading related reporting. Let’s walk through some lessons from recent stories.
“Breaking” Security News
In the rush to break news, media organizations sometimes run headlines too quickly. Some stories need more time to develop. Let’s unpack one example.
WikiLeaks recently released a large cache of internal CIA documents revealing hacking techniques, including exploits for Android devices, some versions of the Windows and iPhone operating systems, and OS X El Capitan. In its own analysis, WikiLeaks suggested the CIA’s techniques allow it to “bypass the encryption” of secure messaging apps like Signal, WhatsApp, and others, and it promptly promoted this line on Twitter.
CIA hacker malware a threat to journalists: infests iPhone, Android bypassing Signal, Confide encryption https://t.co/mHaRNCr3Df— WikiLeaks (@wikileaks) March 7, 2017
Guess what happened next? News organizations ran with it.
Countless news organizations, including the Boston Globe and the Associated Press, began publishing stories on the CIA’s ability to “bypass encryption” and immediately took to Twitter to call the encryption behind secure messaging apps into question. Reporters began asking readers how this enormous security threat would affect them.
Are you worried about WikiLeaks' revelations that confidential messaging apps are not actually secure? Or not? Email me at email@example.com— BarbaraOrtutay (@BarbaraOrtutay) March 7, 2017
Much of the press skipped the part where we ask how we know this is true.
When you look closely at the documents, there is no evidence that Signal’s or WhatsApp’s encryption was undermined. In fact, there’s not one mention of the apps anywhere in the documents. The cache did, however, describe CIA exploits for holes in mobile operating systems. And if you control the phone, you win. You get the phone calls, games, video streaming apps, and yes, even the encrypted messaging apps.
If you absolutely have to play up the encryption angle, here’s the story: Because of strong encryption, the CIA is forced to go through the laborious task of hacking highly targeted devices, one at a time. (Probably not yours.)
Numerous news organizations were forced to soften their claims of broken encryption. The New York Times first led with a slant similar to WikiLeaks’, and later backpedaled to correct its tweets on the topic:
Minor inaccuracies in the news don’t shock specialists, and the security community generally does not dwell on the delivery of one story or another. But when it does, something’s gone very wrong.
Nicholas Weaver, a security researcher at the University of California, Berkeley, suggested the press ought to “step back and wait an hour” to get all the facts before reporting. I’d extend this advice just a bit further. As readers, we should take a step back and wait an hour, especially when it comes to large document leaks that don’t lend themselves to off-the-cuff analysis.
To get started on our journey toward skepticism, this story teaches us a few things:
Wait for independent experts to weigh in.
If a story reveals security holes, ask who is most likely to be affected.
Beware the language used in reporting (e.g., to “bypass” encryption is misdirection).
How Serious is the Security Threat?
When leaving home in the morning, we lock our doors and get on our way. If we’re feeling cautious, we might lock the windows. We don’t usually preoccupy ourselves with the possibility that someone will break in by ramming a Toyota through the doorway.
When we worry about what’s technically possible, rather than what’s likely to affect us directly, it’s easy to be afraid.
For example, the Guardian recently claimed a backdoor had been discovered in WhatsApp, which uses the same encryption protocol as Signal. The Signal protocol was designed to prevent even the company from reading your messages. (Spoiler alert: there is no backdoor.)
The Guardian framed WhatsApp’s implementation as a “loophole” that could open the service to government snooping. Here’s how it works.
To send an encrypted message, you have to encrypt to a specific key. In essence, WhatsApp needs to know which locked safe deposit box it should use to drop off your message. To make sure messages are delivered to users who may log off the service when getting a new device or reinstalling the app, WhatsApp allows encrypted messages to be rerouted to a new encryption key so they can still be unscrambled by the recipient. This affects new and future messages, but not old ones.
WhatsApp isn’t doing anything crazy here. There are good reasons to implement its encryption this way, but not without tradeoffs. This could hypothetically allow WhatsApp to forward offline users’ messages to new keys and new devices belonging to someone else, such as WhatsApp the company, its owner (Facebook), or a government that could compel WhatsApp to hand over messages.
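To make the tradeoff concrete, here is a minimal sketch in Python of the two ways a messaging service might handle a recipient’s key changing while a message is queued. The function name, policy labels, and return values are illustrative assumptions for this article, not WhatsApp’s or Signal’s actual code.

```python
# Illustrative sketch only -- NOT WhatsApp's or Signal's real implementation.
# Models the design choice when a recipient's encryption key changes while
# an encrypted message is still waiting to be delivered.

def deliver(message, old_key, current_key, policy):
    """Decide what to do with a queued message when the recipient's key changed."""
    if current_key == old_key:
        return "delivered"  # key unchanged; deliver normally
    if policy == "block":
        # Signal-style default: stop delivery and make the sender
        # verify the new key before the message goes out again.
        return "blocked: verify the new key before resending"
    if policy == "reencrypt":
        # WhatsApp-style default: silently re-encrypt to the new key so
        # the message isn't lost -- convenient, but the sender isn't asked
        # first, which is exactly the behavior the Guardian story flagged.
        return "delivered to new key"
    raise ValueError("unknown policy: " + policy)

print(deliver("hello", "keyA", "keyB", "block"))
print(deliver("hello", "keyA", "keyB", "reencrypt"))
```

The “reencrypt” branch is the “loophole”: in narrow scenarios, whoever controls the server could report a new key it holds itself and receive queued messages meant for an offline user.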
In the end, there are very specific scenarios where WhatsApp’s implementation could be a problem, but the security community condemned the article, asserting its claims were overblown. Bruce Schneier puts it well:
How serious this is depends on your threat model. If you are worried about the US government—or any other government that can pressure Facebook—snooping on your messages, then this is a small vulnerability. If not, then it’s nothing to worry about.
Moxie Marlinspike, the author of Signal’s and WhatsApp’s encryption protocol, promptly shared a blog post defending WhatsApp’s deliberate implementation decisions. Dozens of security researchers co-signed an open letter organized by Zeynep Tufekci asking that the Guardian take down its story (full disclosure: I signed as well).
Thanks to Guardian's irresponsible & baseless WhatsApp reporting, I'm flooded w reports of vulnerable folk switching to less secure options.— Zeynep Tufekci (@zeynep) January 16, 2017
In the weeks that followed, the Guardian rescinded the language of “backdoors” and included the perspectives of other security experts that took issue with how the story was characterized. It’s clear that the news organization got savvy to the article’s shortcomings, but despite significant changes, the Guardian never did pull the story. In fact, they seemed to double down on it. The Guardian went on to give the author a second article to explain the problem in his own words.
We put a lot of trust in reporting from major news organizations. The damage of questionable security reporting is two-fold: it convinces unfamiliar readers to be less safe, and it erodes familiar readers’ trust in the news organization’s reporting.
Misinformation is harder to catch when it contains a shred of truth. These kinds of stories can easily scare us, and often require a close reading to learn what’s true and what is misdirection. There are a few straightforward lessons here that will help us become sharper readers.
What does this story tell us?
Don’t lean on one opinion. Look for the consensus of experts within and across stories.
Ask how “expensive” the threat really is (time, effort, financial, legal, technical resources).
See Through the Hype
So far we’ve been talking about reasons not to panic, but what about reasons to be skeptical of security claims when they come from developers, and when those security claims get press coverage? Let’s look at one more example.
In the early months of Donald Trump’s presidency, White House staffers have reportedly been using a messaging app called Confide to share sensitive information, including with the press. Confide’s website describes the tool as a “confidential messenger” that “allows you to have honest, unfiltered, off-the-record conversations…just like when you’re talking in person.”
Confide’s website also touts the use of “military grade cryptography” and Transport Layer Security, which sounds fancy, until you realize Transport Layer Security is the standard you use to secure your connection to regular old websites like Facebook.
Okay, so Confide’s creators want you to believe their tools are secure. But how should we know whether to believe them?
First, we want to understand the motivations of the creators of the software we rely on most to keep us secure. Why are they building these tools, and what are their political leanings? What other things do they work on? Do they have a history of building privacy- and security-supportive software? It’s only a quick search away. If it’s not a quick search away, that’s useful information too.
Even if you aren’t familiar with the creators of your favorite app, is it possible to look “under the hood”? Invest more trust in open source software, compared to closed source software that can’t be independently analyzed. Why? Open source software allows security researchers to look for potential security holes. If it’s closed source, you need to place a lot of trust in the developer to get the security right.
Most of us are not in a position to conduct a thorough vulnerability analysis ourselves, but it’s worthwhile to use some Google-fu and find out if the software is open source, and if so, whether it has been publicly audited.
Audits are intended to surface vulnerabilities so developers can patch them immediately. Even when audits surface serious vulnerabilities, it’s promising when developers respond quickly. Don’t judge developers on whether they have vulnerabilities in their software. No system is perfectly secure. Judge them on how they respond once they learn about the vulnerability.
For example, a quick search would tell you that Confide is closed source, and that independent researchers have identified serious security flaws allowing the service to arbitrarily add keys to a user’s account without any notice to the user. Whether Confide makes sense for you really depends on how much you trust its developers.
If you don’t trust a developer, and can’t verify their claims on the security of their applications, consider putting sensitive data and communications somewhere else. Likewise, be cautious of news articles that uncritically repeat such claims.
Security News Consumer’s Handbook
In the spirit of On the Media’s handy Breaking News Consumer’s Handbook, we have condensed this article into a short tip sheet. Here’s how we can sort practical, accurate information from the noise.
Click the image below for your own PDF copy of our Security News Consumer’s Handbook.
Martin Shelton is a user researcher working with at-risk groups and the press on digital security hygiene.