I recently managed to christen our car by rear-ending another car at exceptionally low speed (3 mph), and in the process I finally took a look at the data our dash cam contained. I quickly found the footage of the accident, complete with audio of me talking about the burger place that distracted me (and that I'll now never get to try). What I found on the dash cam was a trove of data far beyond what you'd expect: GPS, in-car audio, and speed, recorded in five-minute blocks for the last few weeks, plus a huge cache of archived files from every time the device thought we'd stopped a little too fast. There were dozens of files, all about five minutes long, and all containing data that could be a problem if I were in the field assisting with a story.
This is one example of the hundreds of secondary locations of source-identifying information that deserve your attention as you're working a story: the Uber from the airport, an office security camera, your source's mobile device, the AirPods you forgot to update, a forgotten cloud backup, or a good old-fashioned infostealer on any team member's machine. Defense in depth has become half a cliché in most security circles, but I think comprehensive, layered protection for sensitive information is a bare minimum where source protection is a priority.
Data classification can become cumbersome quickly; protocols should match the risk involved and be implemented in a way that isn't a constant, disheartening drag on the story. Airgapped systems, code names, and encryption in transit and at rest are all layers that can and should be added when needed. Making your team deal with them during collaborative script writing can be hell (I'm looking forward to newer tools that make this easy!). Just logging where and how you're communicating with each other seems superfluous, until it comes to legal discovery.
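As one illustration of that last point, a communications log is more useful in discovery if it's tamper-evident. Here is a minimal sketch, assuming a simple in-memory list of entries chained together with SHA-256 hashes (the channel names and helper functions are hypothetical, and a real log would also record timestamps and live in an append-only file, not a list):

```python
# Minimal sketch of a tamper-evident comms log using a hash chain.
# Each entry's hash covers the previous entry's hash, so editing any
# earlier entry invalidates everything after it.
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(log, channel, note):
    """Append an entry recording where/how a communication happened."""
    prev = log[-1]["hash"] if log else GENESIS
    entry = {"channel": channel, "note": note, "prev": prev}
    # sort_keys makes the serialization deterministic, so hashes are stable
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Recompute every hash; any edit to an earlier entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "Signal", "initial contact with source")
append_entry(log, "email", "sent encrypted draft to editor")
assert verify(log)
```

The point of the chain is that the log itself can't be quietly rewritten after the fact: a change to any recorded entry makes `verify` fail, which is exactly the property you want when the log may end up as evidence.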
A clear initial risk assessment, a threat model you update as the project goes on, and a well-trained team are your best chances.