Your Crypto Project Needs a Sheriff, Not a Bounty Hunter
On April 18, Avi Eisenberg was convicted of fraud for his October 2022 exploit of Mango Markets. The case has grabbed particular attention because Eisenberg quickly acknowledged executing the $110 million attack, and characterized his tactics not as a crime, but as a “highly profitable trading strategy,” leaning on his interpretation of the dictum that “code is law.”
Steven Walbroehl is the co-founder and chief technology officer of Halborn, a cybersecurity firm specializing in blockchain companies.
Eisenberg also tried to justify his activities in a second way: by framing the proceeds as a “bug bounty,” or a reward for identifying a vulnerability. That’s how the parties characterized a deal in which Eisenberg returned about $67 million to Mango but kept the remaining $47 million, in exchange for a promise not to press charges. That would have made it the largest bug bounty in history.
I’ve been a cybersecurity professional for 15 years, and I’ve done some bug-bounty hunting myself. So believe me when I say: that’s not how bug bounties work.
Mango leadership later disavowed the agreement with Eisenberg, understandably saying the deal was made under duress. The courts didn’t take the “bounty” framing seriously, either. That’s good, because the idea that a thief can just return some of his loot and suddenly be a hero creates dangerous incentives.
But the incident also illustrates why even proper bug bounties are controversial among cybersecurity experts. While they have their place in a comprehensive security approach, they can merely create an illusion of safety if used in isolation. Worse, they can create perverse incentives and bad blood that increase risk instead of mitigating it – especially for crypto and blockchain projects.
‘Retroactive bug bounties’ or plain old ‘blackmail’?
Many other crypto attackers have returned funds after taking them, for instance in the Poly Network and Euler Finance attacks. It’s a uniquely crypto phenomenon that some have called “retroactive bug bounties.” The principle, vaguely stated, is that the attackers have found a vulnerability in a system, and that the money they take is somehow a just reward for that discovery. In practice, though, these incidents are more like hostage negotiations, with the victims hoping to cajole or pressure the attacker into giving the money back.
I don’t approve of hackers taking financial hostages, but as a former bug-bounty hunter myself, I can’t deny a certain poetic justice to it. More than once, I’ve alerted firms with bounty programs of serious or critical vulnerabilities, only to have them dismiss or ignore the risks for months, or even years. I can fully understand the frustration that might lead a young or naive security researcher in that situation to simply enrich themselves with their knowledge – to pull a Breaking Bad and go from “white hat” sheriff to “black hat” bank robber.
See also: DeFi Needs Hackers to Become Unhackable | Opinion (2021)
A core problem is that projects offering bounties have many incentives to pay them out as infrequently and as cheaply as possible. Obviously there are the financial costs, but you’d be surprised how often a team will deny the severity of a reported bug simply to protect its own reputation, leaving users at continued risk. That denial can take numerous forms, such as declaring bugs “out of scope” for a posted bounty. Sometimes thin-skinned developers will even threaten legal action against researchers who have properly approached them with serious bugs.
It can be incredibly frustrating for a researcher to put in endless hours in pursuit of a “bug bounty,” only to have their findings dismissed, or even turned back against them. Doing something destructive, such as stealing a lot of money, might even seem like a reasonable way of getting results when you’ve been ignored. That’s the twisted logic behind Avi Eisenberg trying to position his theft as a “bug bounty” – losing $47 million is a pretty big nudge to fix a vulnerability.
The frustration of some bounty hunters is inextricable from another shortcoming of bug bounties: They generally do invite a lot of submissions that aren’t useful. For every genuine bug reported, a project could get dozens or even hundreds of reports that lead nowhere. A team could honestly overlook high-quality submissions while sifting through all that dross. More generally, searching for a needle in the bug-bounty haystack can take up so much of staffers’ time and energy that it offsets the cost savings a bounty program might seem to offer.
Bug bounties are also uniquely risky for blockchain projects in a few ways. Unlike, say, an iPhone app, a blockchain-based tool is hard to test fully before it has actually been deployed. Mainstream software projects often let bug-hunters try to break pre-production versions of software, but in crypto, vulnerabilities can emerge from a system’s interactions with other live on-chain products.
Eisenberg’s Mango hack, for instance, relied on manipulating price oracles, and would have been hard or impossible to simulate in a test environment. This can leave bounty hunters trying out attacks on the same systems where real users have money at stake – and putting that real money at risk.
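To make the oracle risk concrete, here is a minimal, hypothetical sketch – not Mango’s actual contract logic, and with invented token, prices and numbers – of how a protocol that values collateral at a manipulable spot price can be drained once someone moves a thin market:

```python
# Hypothetical illustration only: a lending pool that trusts the latest
# spot price of a thinly traded token, with no averaging or sanity checks.

class SpotOracle:
    """Reports whatever the last trade price was."""
    def __init__(self, price):
        self.price = price

    def report_trade(self, price):
        self.price = price  # one large buy instantly moves the oracle


class LendingPool:
    def __init__(self, oracle, collateral_factor=0.8):
        self.oracle = oracle
        self.collateral_factor = collateral_factor

    def max_borrow(self, collateral_tokens):
        # Borrowing power scales directly with the manipulable spot price.
        return collateral_tokens * self.oracle.price * self.collateral_factor


oracle = SpotOracle(price=0.04)        # illiquid token trading at $0.04
pool = LendingPool(oracle)

print(pool.max_borrow(1_000_000))      # $32,000 of borrowing power

oracle.report_trade(price=0.50)        # attacker buys up the thin spot market
print(pool.max_borrow(1_000_000))      # $400,000 – withdrawn before prices revert
```

The point isn’t the arithmetic; it’s that this failure mode only appears when the system is wired to live markets an attacker can push around – exactly the conditions that are hard to reproduce in a pre-production test.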
I also worry about the fact that so many blockchain bounty programs allow for anonymous submissions, which are much rarer in mainstream cybersecurity. Some even distribute rewards without identity checks; that is, they have no idea who they’re paying the bounty to.
See also: Calling a Hack an Exploit Minimizes Human Error | Opinion (2022)
This presents a really ominous temptation: a project’s coders might leave bugs in place, or even introduce critical bugs, then let an anonymized friend “find” and “report” the bugs. The insider and the bug hunter could then split the bounty reward, costing the project big bucks without making anyone safer.
You need a sheriff, not a bounty hunter
Despite all of this, bug bounties still have a role to play in blockchain security. The basic idea of offering a reward to attract a huge diversity of talent to try to break your system is still solid. But too often, I see blockchain projects relying largely or even exclusively on a mix of bounty programs and in-house oversight for security. And that’s a recipe for disaster.
There’s a reason, after all, that bounty hunters in movies are so often morally ambiguous “gray hats” – think of Boba Fett, Clint Eastwood’s “Man With No Name,” or Dr. King Schultz from “Django Unchained.” They’re mercenaries, there for a one-off payout, and notoriously indifferent to the bigger picture of the problem they’re solving. At the very far end of the spectrum you get an Avi Eisenberg, eager to adopt the cover of a “bug bounty” when he himself is the actual villain.
That’s why old-time bounty hunters ultimately reported to a sheriff, who had a long-term duty to the people he was protecting and made sure everyone played by the rules. In cybersecurity terms, the role of sheriff is played by professional code reviewers – people with public reputations to protect, who get paid regardless of what they uncover. A review by an outside firm also mitigates the misguided defensive impulse of in-house developers who might reject real bugs to protect their own reputations. And blockchain security specialists can often foresee the kinds of financial interactions that gutted Mango Markets before there’s real money at stake.
The vast majority of bug bounty hunters, to be clear, really are trying to do the right thing. But they have so little power within the rules of that system that it’s little surprise some of them wind up misusing their findings. We can’t normalize that behavior by giving exploiters like Avi Eisenberg the stamp of approval implied by a “bounty” award – and projects that really care about their users’ safety shouldn’t leave security in the hands of the crowd.
See also: Ogle Catches the Crypto Crooks