
A Guide to Practical Paranoia

Security technologies come and go, but habits of suspicion are evergreen



When we talk about security online, we use terms like hashes, keys, updates, and encryption. We might talk about how complicated your password is, or whether you’ve enabled two-factor authentication. Security certainly involves all of those topics and more, but it boils down to two things: a way to keep information private, and a way to protect our ownership of and control over that information.

In most cases, before we lose either privacy or control, the first thing we lose is our paranoia.

How to Catch a Fox

There’s an old story that goes something like this: The fox is a clever creature, clever enough to avoid capture by most conventional means. A fox knows what a trap is and will avoid it, even if it’s baited with extremely enticing food. You can camouflage the trap and do everything you can to remove your scent from it, but you still won’t reliably catch the fox.

However, there is a process, the details of which may vary, called “step trapping.” To catch a fox, you must first convince the fox that there is no danger; that there is no trap.

On the first step, you put a bit of meat on the ground in a location you believe the fox frequents. You wait a few days, and when you are satisfied the fox has retrieved the meal and thought herself very clever, you proceed to the next step.

On the second step, you set out a bit of meat in the same location, but this time you put up one wall of a cage near it, as devoid of your scent as possible, and you leave it be. The fox will be suspicious, but over time it will feast on the meal you laid out, ignoring the wall entirely.

On the third step, you add the next piece of the wall, and you continue on in that fashion, waiting and adding, until you are left with a fully functioning cage with the door wide open. On the final step, you rig the door to latch when the food is snagged. The fox will be trapped and, for the first time, aware of the trap’s existence.

Step trapping is an exercise in control, played out through behavioral conditioning. The fox normalized the danger until it became invisible. By the time it realized there was something to worry about, it was too late.

Welcome to the Trap

The digital realm is an array of normalized dangers. A text box constantly asks, “What are you up to?” Social norms increasingly demand that we put more and more of our private lives on display. An abstract sense of overwhelming complexity creates a false sense of safety. Add the fact that there are entire industries built to violate your privacy and security, and the mountain of vulnerabilities grows.

Here’s a hypothetical a little closer to home.

I’m a malicious attacker and I have targeted a journalist. A quick web search shows that they run a personal blog that predates their mainstream journalism career. I can see from the source code that they are running a popular CMS, and I know that I can query the site in a way that will reveal their username.

My query works, and I take that username and search a database of darknet password dumps from the major website hacks of the past few years. An earlier search for my target’s employer-affiliated email had turned up nothing, but a search for their CMS username turns up several promising results. Four accounts have been compromised using this username, and three use the same password. The odds are in my favor that this username/password combo is repeated across this person’s accounts. I also notice that the compromised accounts all share the same personal email, one not available anywhere else I searched.

Their personal email account gets me access to an online document drive, where I find notes about sources, names, and dates. I also find several photos, many of them personal and still carrying their EXIF data. That EXIF data contains location information, from which I am able to work out this person’s home address. The scenario only grows more troubling from here.

Have you felt your perspective shift a little bit? Have you started to spot the behaviors we’ve been conditioned into by social norms or dark design patterns?

Are you feeling a little paranoid?

Good. Let’s build on that.

All of the above could have been prevented with a change in behavior.

  • The target went with a popular CMS and set it up out of the box, assuming that its popularity and ease of use meant that security was handled at all the important levels.
  • In order to keep up with their various credentials, they reused usernames and passwords.
  • They did not have a regular check-in routine to see if any of their accounts might have been compromised. There are tools that help with this.
  • They mixed business and personal file uploads in an online cloud account.
  • They put files in the cloud in the first place, when they may not have needed to at all.
  • They left EXIF data intact on the files and did not check whether the cloud service stripped it out (a quick way to check and strip that data yourself is sketched just after this list).
  • They had an overall assumption that these services, run by large corporations, are safe because of their widespread usage.
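
On that EXIF point, one low-effort habit is to inspect photos yourself and re-save a stripped copy before anything gets uploaded. Below is a minimal sketch using the Python imaging library Pillow; the filenames are placeholders, and this is one option among several rather than a prescription.

    # A minimal sketch using Pillow (pip install Pillow); filenames are placeholders.
    from PIL import Image

    img = Image.open("vacation_photo.jpg")

    # List whatever EXIF tags the file carries. Tag 34853 (GPSInfo) is the one
    # that holds location data.
    for tag_id, value in img.getexif().items():
        print(tag_id, value)

    # Copy only the pixel data into a fresh image, which carries no metadata,
    # and save that copy for anything you plan to upload or share.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save("vacation_photo_clean.jpg")

Command-line tools such as exiftool cover the same ground; the point is that checking and stripping metadata becomes part of your routine rather than something you assume a cloud service does for you.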

You could look at these mistakes and put together a technical checklist to follow, but technology is constantly changing and no list will ever stay current. Often, even within a single individual’s online life, the context of security changes from scenario to scenario. A better approach is to tailor your level of caution and care to each scenario, based on your own circumstances and a baseline of healthy information paranoia.

And remember: No matter your job title or role in the newsroom, you’re a target.

How to Think Like a Security Professional

To the average user, the term “compromised” generally means that someone or something malicious has gotten into their account or hardware. To a security professional, it’s a bit broader than that. Trust is handed out very cautiously, and it must be evaluated and earned, often through complex methodologies or research.

For example: Windows 10 collects users’ keystrokes and sends them off to an unknown entity, which makes it a black box: you can see what goes in and what comes out, but the middle is a mystery. Security professionals don’t trust black boxes, so in their minds any computer running Windows 10 is compromised by default. An entire segment of security professionals uses only open source software because it keeps black boxes to a minimum.

As a developer or journalist in the newsroom, consider how this approach might work for you.

For example, if you are building an application that handles image uploads, you wouldn’t assume that an image file is safe—or that it is in fact an image. You’d instead operate as if every upload could be a potential attack and you’d structure the application accordingly. You wouldn’t treat disaster as an edge case or a rarity, but as the default.
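
As a small illustration of that posture, here is one way an upload handler might refuse to take a file’s word for it, sketched in Python with the Pillow library. The function name, size limit, and allowed formats are placeholders for whatever your application actually needs; real systems would layer on more checks (and often re-encode the image entirely) before storing anything.

    # A sketch of "assume every upload is hostile," using Pillow (pip install Pillow).
    # Names, limits, and allowed formats are illustrative, not prescriptive.
    from io import BytesIO
    from PIL import Image

    MAX_UPLOAD_BYTES = 5 * 1024 * 1024  # refuse oversized payloads before parsing them

    def looks_like_a_safe_image(raw_bytes: bytes) -> bool:
        if len(raw_bytes) > MAX_UPLOAD_BYTES:
            return False
        try:
            img = Image.open(BytesIO(raw_bytes))
            # verify() checks that the bytes actually parse as an image format,
            # regardless of the filename or Content-Type the client claimed.
            img.verify()
        except Exception:
            return False
        # Only accept formats you actually intend to serve back out.
        return img.format in {"JPEG", "PNG", "GIF"}

The details will differ by stack; the habit that matters is validating on content, not on what the client says it sent.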

There are certainly varying degrees of paranoia that one is willing to live with, but even for non-technical journalists, a few behaviors are universal and can be adopted without having to dig deep:

  • Consider all data provided by anyone else dangerous until you can verify otherwise.
  • Beware of what is hidden or opaque.
  • Don’t trust app developers, mobile providers, or your own IT department to make your security a priority. It’s always your responsibility, in the end.

Beyond these general points, there are a few specific areas that you may want to concentrate on as you bring a paranoid eye to your information habits.

Beware of Trading Convenience for Security

It doesn’t have to be this way, but the way our systems are designed, architected, and set up means that for every inch of convenience we gain, we pay with a loss of privacy, which can, in turn, lead to a loss of security.

Consider voice-activated speaker devices: You speak aloud, your device recognizes you, and then you can give it commands. It’s convenient, and also insecure.

To know when you’re talking, your device listens 24/7. To know that you are speaking a command, it needs to process your speech, often in the cloud. That means audio in your home is transferred over the wire, out into the world, onto a server you have no access to, where it may or may not be saved, for who knows how long. This is a classic case of a privacy violation that can directly affect your security. There are configurations to safeguard against this, but they generally aren’t on by default and require you to pay close attention.

As an exercise, consider the digital conveniences you use every day. Then, pull back a little bit and begin to let the privacy implications unfold: Where is your data stored? For how long? Who has it? How much is recorded? How much personal information is inside of that data? How might plausible privacy breaches affect the security of your professional accounts and sensitive information?

Most of us can’t pull out a typewriter and Polaroid and go off the grid, but we can consider living with a little less convenience for the sake of security.

Find Safety in the Analog

Many US laws, as presently interpreted, do not grant digital data the same protections as analog media. The big issue here is passwords, and the documents, data, and sources they may be protecting.

Save sensitive info in the way that is most protected by law. For example, in most cases, something locked behind a password is not protected by the Fifth Amendment: the issue has already turned up in a number of articles, and you can be ordered to provide a password you know.

So consider paper, or even memory where necessary, to avoid being forced by law to give up sensitive information that may endanger you, a source, or your publication. Use your best judgement in keeping data as secure as possible and don’t default to digital out of habit or convenience.

For the thoroughly paranoid, consider not knowing your passwords at all. Instead, keep them written down in a secure location. One coded methodology you can employ is Off The Grid: you memorize a pattern that lets you decode passwords you otherwise do not know. This puts several layers of obfuscation between you and your passwords, which may provide a deeper layer of protection. A warrant may be issued to gain access to the place you keep your paper encryption tools, but that is a much longer legal process than if you simply know your passwords. You can’t provide what you don’t know.

Don’t Assume that Popularity Equals Security

There is no guarantee that the larger or more profitable a company is, the more secure its products are. In fact, the company’s very business model may rely on violating your privacy for its own financial gain, as has been the case with Google, Microsoft, and Facebook, to name a few. The list of data breaches at large companies, just in the last five years, is tremendous.

Popularity also means being a larger target; there’s a reason that Windows has historically had more viruses than macOS, and it’s not merely because Apple or Microsoft is inherently better at security than the other. A pertinent example of this is WordPress: it’s one of the most targeted CMSes on the internet, and there are countless automated compromise kits aimed at it alone, because it is so widely used. That is not to say that you should avoid popular technologies or services, but you should know that they come with security risks.

Don’t Build Bridges

Think of your data, your devices, and your digital spaces as a collection of islands. Each time they interact with each other, or with an outside group of islands, there is a point of danger: using your work computer at home, using your home computer at a public WiFi access point, receiving a file from another person’s computer, visiting a website. Each is a chance for something to be compromised. To combat this, develop a mental habit of making conscious choices each time one of these interactions plays out. If you are going to send a file between devices, run good anti-virus software and consider proper end-to-end encryption.
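
When a file genuinely has to travel between islands, one concrete option is to encrypt it before it leaves your machine and decrypt it only on the other side. Below is a minimal sketch using the Python cryptography package’s Fernet interface; the filenames are placeholders, and how you move the key to the other island (ideally over a separate, trusted channel) is the part that deserves the most thought. Established tools like GPG cover the same ground with more battle-tested workflows.

    # A minimal sketch using the "cryptography" package (pip install cryptography).
    # Filenames are placeholders; getting the key safely to the recipient is up to you.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # share this over a separate, trusted channel
    fernet = Fernet(key)

    with open("notes_for_story.txt", "rb") as f:
        ciphertext = fernet.encrypt(f.read())

    with open("notes_for_story.txt.enc", "wb") as f:
        f.write(ciphertext)          # this is what actually travels between devices

    # On the receiving island, the same key reverses it:
    # plaintext = Fernet(key).decrypt(ciphertext)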

Consider creating clear divisions within your technological interactions: Don’t use your work devices for personal use. Don’t use your personal accounts for work use. The levels of security you have put in place in one scenario may not be as hardened as they are in another, and when you cross lines into new territory, you run the risk of allowing something secure to leak.

Remain Suspicious of Your Contacts

Malicious attackers often play off your established systems of trust to get you to violate your own security. A Skype hack last year let bots take over hundreds of accounts and use them to send malicious links to their owners’ contacts. If the person on the receiving end were to blanket-trust their contacts, they’d end up compromised.

To that end, consider “two-factor communication verification,” a.k.a. “use the phone.” If you get an email with an attachment, or a message from someone you haven’t heard from recently, call or text to verify that they sent it. Do not trust that a friend texting you a link is indeed your friend. Start with suspicion, then verify.

This also goes for people in positions of expertise or authority. Be nice to your IT staff, but be vigilantly suspicious. They may not be doing all that needs doing. Don’t depend on them to keep you safe.

Run Security Check-ins and Disaster Drills

So far we’ve talked through some perception shifts and highlighted some areas where more suspicion is not just advised, but a must. However, the day will probably come when you’re compromised.

The first thing you can do is put a system in place where you regularly check in on the security implementations across your various accounts, services, and devices. You could set aside time for this each week or month; the sooner you are aware of a compromise, the more quickly you can contain the damage. These checks also give you the opportunity to tighten things down, make sure your settings stay secure (services love to reset them to what they think is best), and clean up accounts you no longer use.
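
One check-in you can automate is asking whether a password you still rely on has already shown up in public breach dumps. Have I Been Pwned exposes this through a k-anonymity API: you send only the first five characters of the password’s SHA-1 hash, so the password itself never leaves your machine. A rough sketch in Python:

    # A rough sketch against the Have I Been Pwned "Pwned Passwords" range API.
    # Only the first five hex characters of the SHA-1 hash are ever sent.
    import hashlib
    import urllib.request

    def breach_count(password: str) -> int:
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
            body = resp.read().decode("utf-8")
        # Each line is a hash suffix and how many times it appears in known dumps.
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    if breach_count("correct horse battery staple") > 0:
        print("This password is in public breach dumps. Stop using it anywhere.")

The same service lets you look up email addresses against known breaches, which covers the username-and-password-reuse scenario from earlier in this piece.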

The next thing you can do is make yourself aware of the various recovery protocols for your services and products. Be aware of what it will take to regain control of an account or clean a device, should a compromise happen. It is much easier to avoid panic if you already know everything you need to do.

And lastly, for the extra-paranoid, I would suggest running disaster drills. This is especially helpful at the newsroom or organizational level. Fire drills and active shooter training are very common at large businesses, but security-related drills are not. Consider asking your leadership and your IT folks to set up and run regular disaster drills with mock scenarios that play through a compromise and the protocol for handling it. It’s very helpful for finding hidden leak points and for critically analyzing how up to date you and your team are in a security situation. All the tools in the world won’t stop accidental human behavior, but practice helps.

The Extra Mile for Extra Security

Education is good prevention. Take a few CS courses on security. Sign up for some seclists (security mailing lists). Visit developer forums. These often give a good heads-up on vulnerabilities before they hit the mainstream news and the exploit goes extra wild. Read through what your security updates are doing when you install them. Skim the TOSes you agree to for data retention and storage protocols. The more informed you are about the digital tools and services you interact with on a daily basis, the higher the level of safety you can deploy.

It is also very important to look ahead to where things are heading. Machine learning is a rapidly growing field with a plethora of open source and free tools, and it is not that hard to start feeding in data points about people and finding exploitable information. Expect malicious actors to only get better at taking otherwise innocent bits of data and mapping them together into something larger and more threatening.

Whatever behaviors and practices you put into place will need to grow over time to keep up with the evolution of threats. You can never have a perfectly secure system, but you can try to adopt more responsible behaviors and attempt to live outside a world of ignorance and trained behaviors.

Hold on to That Suspicion

Journalists already think critically and analyze deeply. Aim that approach at the digital realm you interact with every day. Give your apps a second glance. Question your newsroom’s security protocols. Make choices rather than acting out of habit.

We’ve made a downloadable PDF packed with things to consider as you think about your security plan. It’s not comprehensive, and you probably can’t do everything on the list, but spend some time going through it and formulating a realistic list for you and the scenarios you commonly operate within. The dangers are only going to grow, and you and your newsroom need a solid plan to shift perspectives. This is a place to start.

Credits

  • Stephen Lovell

    Stephen Lovell writes spells that bind silicon beings to his will. He’s also a cartoonist, writer, illustrator, and amateur scientist. He knows things, doesn’t know things, and is willing to learn.
