The Register

Pwn2Own Automotive 2026 uncovers 76 zero-days, pays out more than $1M

infosec in brief T’was a dark few days for automotive software systems last week, as the third annual Pwn2Own Automotive competition uncovered 76 unique zero-day vulnerabilities in targets ranging from Tesla infotainment to EV chargers.

A record 73 entries were included in this year’s competition at Automotive World in Tokyo, and, while not all were successful, Trend Micro’s Zero Day Initiative still ended up paying out more than $1 million to successful competitors. 

For those unfamiliar with the structure of a Pwn2Own competition, ethical hackers and security experts enter with plans to perform a certain exploit, which they must do in a limited time. Cash prizes are awarded for successful attempts, as are points, with both increasing based on uniqueness, impact, and complexity. 

The largest single-exploit payout (and point award) of the three-day event went to the eventual winners, a trio of security researchers from Fuzzware.io, on the first day. The team took home $60,000 and earned six points by exploiting a single out-of-bounds write vulnerability in the Alpitronic HYC50 EV charger.

Fuzzware hackers ended up earning the Master of Pwn title with a total of 28 points and total winnings of $215,500 over seven successful demonstrations. 

In addition to Fuzzware’s successful attack on the HYC50, another team also managed to exploit a Time-of-Check to Time-of-Use vulnerability in the charger, which they leveraged to install a playable version of Doom on the charger’s screen, earning the $20,000. The HYC50 was also hit by another team that exploited an exposed “dangerous” method in the charger.

The Tesla infotainment system was also fully taken over by the Synacktiv team by chaining an information leak with an out-of-bounds write vulnerability, and Automotive Grade Linux was compromised via a trio of vulnerabilities.

Here’s hoping all the affected vendors will move quickly to address the many vulnerabilities discovered during the event.

France fines mystery company €3.5M for privacy violations

French privacy regulators have fined an unnamed company €3.5M for sharing customer loyalty data with another unnamed social network without explicit and informed consent. 

The National Commission on Informatics and Liberty reported the fine last week, which was imposed on December 30, for actions taking place since February 2018. 

According to the Commission, the company had been transmitting email addresses and telephone numbers of customers to the social network for targeted advertising purposes. This happened to more than 10.5 million Europeans from 16 countries, the Commission noted.

The actions of the unnamed firm amounted to multiple violations of both the EU General Data Protection Regulation and the French Data Protection Act. The Commission noted that it didn’t name the company because, although the widespread scale made it necessary to inform the public, it didn’t feel the need to name the outfit. 

Gemini can be tricked into spilling your calendar secrets

Runtime security outfit Miggo spotted a vulnerability in how Google’s Gemini AI parses Google Calendar events that could expose a user’s daily schedule through a malicious calendar invitation.

If a Google Calendar user asks Gemini for a rundown of their day, the AI reviews the user’s calendar and reports back, but an invite containing a carefully worded prompt-injection payload hidden in the event description can cause Gemini to write a summary of private meetings into a newly created calendar event that, in many enterprise configurations, is visible to the attacker, without clearly disclosing that it has done so.

While Google has already patched the exploit, Miggo said that it points to the need to think of AI as an entire new application layer that merits new security considerations, thanks to AI’s ability to interpret language without being able to reason about intent. 

“Effective protection … must employ security controls that treat LLMs as full application layers with privileges that must be carefully governed,” the company said. 

Hackerone is totally fine with you attacking AI, as long as you follow the rules

Bug bounty platform Hackerone published a new safe harbor document last week laying out rules it hopes will help set a new standard for good faith AI security testing.

Per the company, security testing of AI models doesn’t necessarily fit neatly into traditional vulnerability research or disclosure frameworks, leading to ambiguity that not only hampers effective research, but also leaves testers unwilling to take risks. 

“Organizations want their AI systems tested, but researchers need confidence that doing the right thing won’t put them at risk,” said Ilona Cohen, chief legal and policy officer at HackerOne. “The Good Faith AI Research Safe Harbor provides clear, standardized authorization for AI research, removing uncertainty on both sides.”

Organizations that adopt the agreement commit to treating good-faith AI research as authorized and to refraining from legal action against security researchers who test their AI systems, provided researchers follow conditions similar to traditional security programs, including not withholding findings for payment, exfiltrating data, causing unnecessary damage, or reverse-engineering systems to build competing services.

Even cybercriminals fail security basics

If you’ve ever felt bad because a cybercriminal nabbed your data, don’t worry – breaches happen to everyone, even them.

Cybersecurity researcher Jeremiah Fowler shared the discovery of more than 149 million unique login/password combinations in 96 GB of raw credential data that he found completely exposed online.

With data in the file including accounts from multiple social media platforms, dating apps, streaming services, financial services, banking and credit-card logins, and even government credentials from multiple countries, Fowler said the dataset appeared to have been harvested using infostealer and keylogging malware and left exposed online.

Fowler noted that the database appeared to have been compiled from keylogging and infostealer malware that was “different from previous infostealer malware datasets that I have seen.” 

It took Fowler nearly a month to get the host to secure the data, and because the database was publicly accessible during that time, the credentials could potentially have been accessed by others, which, if nothing else, is a timely reminder to reset your passwords regularly. ®

READ MORE HERE