Your logo and branded vulnerability aren’t helping: How to disclose better
In 2000, I leapt out of journalism and into security communications. I was relocating to the San Francisco Bay Area and, despite the downturn, tech was king. I also wanted to lend my unique albeit non-technical skill set to a technology that protected people or, at the very least, attempted to reduce harm caused by malicious behavior.
Instead, in the nearly 20 years since, I've watched marketers fail on many high-profile occasions to properly extol the merits of a capable research team, choosing instead to sensationalize risk and, as a result, failing to reduce harm. Too many compromises have been made in disclosing bugs with grandiose antics that show how smart a research team may be, but that also diminish an organization's credibility and leave users and systems more susceptible to attack.
During a keynote last month at Hack in the Box (HITB) Amsterdam, I dug into the role of the marketer, or non-technical business leader, in reducing harm. It’s a topic I’ve long been passionate about, and have discussed at length in blogs, rants, and talks, but this time was different — I had a chance to deliver this talk to a technical and research audience.
I broke it down into four basic questions and answers:
- What do manufacturers do? Try to make stuff that doesn't cause harm
- What do security vendors do? Sell stuff intended to reduce harm
- What do researchers/engineers/practitioners do? Reduce harm
- What do marketers often do? Create risk or sensationalize in order to sell
Of course, there are caveats to all of the above. Manufacturers inadvertently create the risk of harm; not all offerings sold by vendors do what they say they will; researchers, engineers, and practitioners are humans who make mistakes; and not all marketers create chaos.
But those who do have on occasion created additional harm, or at a minimum contributed to the confusion, fear, uncertainty, and doubt that distract or disrupt a security practitioner's ability to do her or his job. I used Heartbleed and its marketing circus and the CTS vs. AMD disclosure disaster as two examples of very wrong disclosure processes. But I didn't have to go back that far, because the recent eFail flaws were a prime example of everything that can go wrong in a poorly executed disclosure, starting with broken embargoes and rampant FUD disseminated through media coverage before the official, and much less alarming, technical report came out.
Some coordinated disclosure fails are, quite simply, mistakes. Others are caused by willful ignorance. Sadly, many are also created by a lack of ethics and accountability for marketers, and non-technical business leaders, who are decision makers in how vulnerabilities are disclosed.
We need to change this. We are going to change this and, by we, I mean myself and other marketers and non-technical business folks, working alongside security researchers, engineers, and practitioners. We are going to change the way business and marketing leaders interact with researchers and analysts, and raise the bar for ethics. We are going to empower researchers and analysts to push business and marketing leaders toward better practices.
We can do this by letting ethics be our guide, and by putting the pursuit of harm reduction at the forefront of every decision. If you work in the security industry, or if you work in a role in any organization that purports to protect, you are just as responsible as practitioners to do your part to protect users and systems from malicious intent. We can still market, we can still sell, but we can mature our organizations to consider not only the benefit to the business, but also the impact of our actions on the very people we are trying to shield (and, let's be honest, sell to as well).
During my HITB talk, I presented a potentially over-simplified decision tree that could serve as a guide for marketers, or non-technical business folks, in determining what action to take with a disclosure. The reasons to stop must be considered before any decision is made.
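The slide itself isn't reproduced here, but a decision guide of that shape could be sketched roughly as follows. This is purely illustrative: the specific questions and outcomes below are my assumptions, not the actual tree from the talk.

```python
# Illustrative sketch of a pre-disclosure decision guide for marketers.
# The questions and outcomes are hypothetical examples, not the actual
# decision tree presented during the HITB keynote.

def disclosure_action(is_coordinated: bool,
                      claims_verified: bool,
                      increases_user_risk: bool,
                      opsec_compromised: bool) -> str:
    """Return a recommended marketing action for a vulnerability disclosure."""
    # Reasons to stop come first: any one of them halts the campaign.
    if not is_coordinated:
        return "stop: wait for coordinated disclosure to complete"
    if not claims_verified:
        return "stop: verify every claim with technical leadership"
    if increases_user_risk or opsec_compromised:
        return "stop: publicizing would create more harm than benefit"
    # Only after every reason to stop is cleared does marketing proceed.
    return "proceed: publish factual, non-sensational materials"

print(disclosure_action(is_coordinated=True, claims_verified=True,
                        increases_user_risk=False, opsec_compromised=False))
```

The point of the structure is the ordering: every "stop" condition is evaluated before any "proceed" path is reachable.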
You might ask how any marketer can do this, or should do this, and how she or he can do it quickly. One thing marketers can do is build a good relationship with the technical leadership in their organization. Another is to create an ethics review or standards board (or process, if you're a smaller organization) to ensure adherence to coordinated disclosure. At my current company, while we don't (yet) have a formal review board, my team has worked closely with our intelligence team to reduce harm through our marketing, as well as to educate the marketing team on how to spot content where the benefits do not outweigh the risk of publicizing it. My team also:
- Double checks with multiple internal folks to ensure statements are true
- Does not use statements that include scare tactics
- Never compromises operational security (OPSEC)
- Protects all sources, as well as personas (where applicable)
- Triple checks to ensure investigations are not negatively disrupted
Again, the above works because we partner with our intelligence team and researchers. For the broader security community: we marketers see your tweets screaming about us and bad public relations, and we know you are right when you call out a dumpster fire in motion. Imagine, however, how much better we all could be if you proactively engaged with your marketing teams or PR agencies and offered education on disclosure. Who knows the implications of security better than those who work hands-on to secure, harden, and protect? Your voice is strong, and we need it to drive change. Here are just a few ideas:
- Speak with your management about creating an ethics or standards board
- Express that the end state you want is more truth and better security
- Share that you are willing to serve on a committee to provide guidelines
- Company doesn’t have a coordinated disclosure policy? Build one
- Require credit for your work
- Call out marketers, but focus on sharing how to do better vs. focusing just on what sucks
The industry’s best minds will be at Black Hat US in a few short months — yes, technical folks and marketers alike. Let’s talk about why logo disclosures aren’t the best path forward, and how things such as a standards board can guide the industry through the next internet-wide vulnerability disclosure without creating more risk. It has to be more than a panel discussion or a birds-of-a-feather session. It needs to be an interactive dialogue between technical and non-technical professionals, with next steps and outcomes. I will build it. Will you come? Tell me on Twitter (hashtag #reduceharm).