Bug bounties: The good, the bad, and the frankly ridiculous ways to do it
Thirty years ago, Netscape kicked off the first commercial bug bounty program. Since then, companies large and small have bought into the idea, with mixed results.
Bug bounties seem simple: a flaw finder spots a vulnerability, responsibly discloses it, and then gets a reward for their labor. But over the past three decades, they’ve morphed into a variety of forms for commercial and government systems, using different payment techniques and platforms, and some setups are a lot more effective than others.
Commercial bug bounties spread slowly at first, and the idea was initially fraught with danger for researchers. Some companies sued outsiders who found problems with their software.
In 2005, Internet Security Systems (ISS) researcher Michael Lynn and the organizers of the Black Hat security conference in Las Vegas were served with a restraining order over his planned talk on serious flaws in Cisco’s IOS router software. Lynn quit ISS and delivered the presentation anyway, while Cisco reps spent eight hours physically tearing pages describing the talk out of the conference handbook.
But that same year, TippingPoint started the Zero Day Initiative, paying for high‑impact vulnerabilities with working proofs of concept. The practice went into turbo mode when several tech giants embraced it, led by Google in 2010, Facebook a year later, and then the biggie – Microsoft – in 2013.
While some companies chose to run their own bounty programs, others outsourced the job to platforms like HackerOne and Bugcrowd, both founded in 2012. But the right choice depends very much on the size and focus of your organization.
Sort it out in-house, or outsource?
For the biggest organizations with a large user base, the best option is to primarily run their own scheme.
Katie Moussouris, who convinced Microsoft to go down the bug bounty route after a three-year fight (a process she described as akin to “boiling a frog”), ran the Pentagon’s first hacking competition, and was chief policy officer at HackerOne. She’s now the CEO of Luta Security, a bug bounty consultancy.
“If you are somebody like an Apple, Google, or Microsoft, where the sensitivity of your bugs is so high, you do not want [third-party] platforms triaging your bugs,” she told us. “You also don’t really want them housing the bugs at all. Any vulnerability in that third-party platform exposes your bugs.”
Larger organizations have a number of other advantages, she explained. Most bug bounty programs throw up a huge number of false positives and minor flaws, and the biggest organizations, or those with a security bent, have the IT staff to separate the wheat from the chaff.
In addition, they have legal departments to handle the non-disclosure agreements that are an essential part of bug bounty programs. With Microsoft, she explained, the company originally set up a pilot called Project Tango (because it takes two to) where individual researchers would work for Redmond under an NDA not to release findings until Microsoft had checked them and issued a fix.
In-house bug-bounty programs can also create a nice recruiting pipeline.
“I used to run the one for Barracuda Networks,” Eric Escobar, red team leader at Sophos Advisory Services, told us. “And in quality, 90 percent is trash, but the 10 percent that you got were like gold, it’s incredible.”
He continued, “We hired several people out of the bug bounty program because they were finding such good stuff regularly. We did the cost analysis and thought if this person’s finding this many bugs consistently per month, we’re losing money if we don’t hire them; they’re making more than a solid AppSec engineer’s salary would be for just finding bugs.”
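The cost analysis Escobar describes boils down to simple break-even arithmetic. A hypothetical sketch – all figures below are illustrative, not from Barracuda or Sophos:

```python
# Back-of-envelope version of the hiring calculus: if a hunter's annual
# bounty payouts exceed what a staff AppSec engineer would cost, it is
# cheaper to hire them. Every number here is a made-up example.

def should_hire(bugs_per_month: int, avg_payout: float,
                annual_salary: float) -> bool:
    """Return True if a year of bounty payouts would exceed the salary."""
    annual_payout = bugs_per_month * avg_payout * 12
    return annual_payout > annual_salary

# e.g. 6 valid bugs a month at $2,500 each = $180,000/year in bounties,
# more than a hypothetical $150,000 AppSec salary, so hiring wins.
print(should_hire(6, 2500, 150000))  # True
```

The same logic works in reverse: a hunter submitting a couple of low-value bugs a month costs far less as an outsider than as an employee.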
But hiring dedicated bug finders isn’t really an option for smaller software companies that don’t have the budget.
“It’s very rare that you have an entirely research-focused security person, you want them doing other things,” Moussouris said. “These people might not be programmers. They might not be able to tell the developers how to prevent those bugs in the future. They might just be really good at finding them.”
There’s also a cultural issue, she said, since not everyone in the field wants to work in a corporate environment with endless meetings and team sessions. A lot of people like finding security holes but relish their independence.
Increasingly, companies are hiring skilled pentesters on a per-contract basis for specific jobs. This solves the NDA issue, since non-disclosure would be part of the contract, the researcher gets money without having to make it a full-time career, and management is usually happy because they don’t have to take on the financial overhead of a new staff member.
The other alternative is to use a commercial platform like HackerOne or Bugcrowd. For smaller companies, or those less focused on security, this provides a way to get bugs in, screen them, and pay the finder’s fee with minimal fuss, Moussouris said.
HackerOne CEO Kara Sprague said one big strength of the platform approach is the breadth of talent out there.
“Going to a proven provider is not a bad idea for a company whose core competency is not offensive security,” she told us. “The global independent researcher community is a phenomenal source of talent,” she added, explaining that self-motivated bug finders are often more effective than in-house red teams or pentesters.
In practice, many companies take a hybrid approach, running their own bug bounty programs internally but also using the platforms.
Moussouris also warned that some companies were rushing into bug bounty programs for the wrong reasons – particularly public relations. Last month, when Paradox.ai was caught using “123456” as an admin password that would allow an attacker to access the personal details of about 64 million McDonald’s job applicants, it apologized and promised to set up a bug bounty program.
“I call that bug bounty botox, when they just want to be pretty on the outside,” Moussouris joked.
Motivation for hunters: Fortune, fame, and fixing things
One common misconception is that flaw finders are just in it for the money, but it’s more complicated than that.
It’s true that money is a factor. In the last decade, the amount of money up for grabs for bugs has exploded. At ZDI’s forthcoming Pwn2Own contest, a lucky hacker can earn $1 million for a zero-click remote code execution attack on WhatsApp. And despite its initial reluctance, Microsoft has become a firm supporter of the bounty system, paying out $17 million last year to independent security researchers, Tom Gallagher, head of the Microsoft Security Response Center (MSRC), told The Register.
A few vulnerability hunters have become millionaires from these platforms, and others have made a lot of money selling to third-party businesses that harvest high-value flaws, either to exploit for government-sanctioned spyware or to sell premium detection services.
The tactics to earn big bucks aren’t what you might think, Moussouris opined.
“The first hacker to make a million dollars total on HackerOne was asked, ‘Did you just find a bunch of criticals?’ He’s like, ‘No, I don’t look for criticals at all, they’re too hard and take too long. I go for automation to get more efficient low- and medium-severity bugs.’ The mediums are the sweet spot, because they pay more.”
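The strategy Moussouris describes is an expected-value calculation: criticals pay more per bug but take far longer and land less often, so automated medium-severity hunting can yield more per hour. A hypothetical sketch, with entirely made-up numbers:

```python
# Toy expected-value comparison between hunting criticals by hand and
# churning through mediums with automation. Bounty amounts, hours, and
# hit rates below are illustrative assumptions, not real market data.

def payout_per_hour(bounty: float, hours_per_find: float,
                    hit_rate: float) -> float:
    """Expected earnings per hour spent on one class of bug."""
    return bounty * hit_rate / hours_per_find

critical = payout_per_hour(bounty=20000, hours_per_find=80, hit_rate=0.1)
medium = payout_per_hour(bounty=1500, hours_per_find=2, hit_rate=0.5)
print(medium > critical)  # True: mediums win per hour in this example
```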
But it’s a long tail situation – most people don’t make a massive amount of money, and the majority of bug hunters don’t use it as a main source of income, Escobar explained.
Microsoft’s Gallagher explained that fame is another very effective way to attract bug hunters. MSRC maintains a league table of the most useful flaw finders, and competition is fierce, even if it’s just for a t-shirt, which he kindly modeled for The Register.
Companies also court top vulnerability researchers with access to in-house engineers and exclusive forums that are designed to woo the best talent into investigating their code.
“One of the things that we try to do through the bounty program is we build a relationship with the researchers,” he told us.
“We try to build a community around it. We have a program where we call the most valuable researchers, and so we’ll work with them. There are some people that are interested in working full time, there are other people that have a full time job. It may not even be security, and so it really depends on the individual.”
The other, often overlooked, motivation is the desire to get things fixed. A few security researchers still submit bug reports, not especially for the money, but to make sure that applications and processes are safe.
“There are some researchers who care more about the bug being fixed than care about the payout, and that’s still true,” Moussouris opined. “Some researchers are like, ‘No, I just want you to fix this bug,’ that’s their motivation.”
These unicorns are few and far between, however, and any company relying on the goodwill of such strangers as a primary source of fixes is foolish, she suggested.
AI slopping over?
As with many jobs in the tech industry, there’s the spectre of machine learning and AI doing tasks previously reserved for humans.
So far, AI is a mixed blessing. Yes, it’s finding more flaws, faster, but on the other hand, machine-generated reports are flooding the people who have to triage them.
As industry veteran Mikko Hyppönen pointed out at this year’s Black Hat security shindig, there’s been a huge increase in volume of reports, often written by script kiddies with prompt skills.
“They call it AI slop,” Moussouris commented. “It creates a lot of noise for maintainers, especially in the open source world, who can’t afford the triage services. That’s going to hurt all of us in the ecosystem long term.”
The bounty platforms are on the front lines of this trend, and HackerOne’s Sprague says two-thirds of the reports they are getting these days “end up being valid vulnerabilities.” To filter out the spam, the platform will be using AI moderation to examine reports and determine if they are valid exploits.
But she said that software slipup seekers are gearing up as well, investing in automated scanning kit that takes a lot of the fingerwork out of combing through code.
“We do see many of the bug hunters that I meet with have already adopted some amount of automation to facilitate their hunting,” she said. “There’s even other researchers that have developed their own hack bots, or are contributing to commercially developed bots. So we’re seeing the researcher community increasingly invest in these sources of automation to scale their capacity.”
Will AI eclipse the human hacker? This hack doubts it. LLMs work on past information, and the intuitive leaps needed to spot serious problems appear to be beyond them, for now. ®