Could Someone Hack My Microchip Implant?

Illustration: Benjamin Currie/Gizmodo

Giz Asks: In this Gizmodo series, we ask questions about everything and get answers from a variety of experts.

These are, absolutely, very bleak times, but in a sense we should cherish them: we’re living through maybe the last stretch of history before employers start mandating microchips en masse. You’re going to miss living in fear of contracting a deadly virus, once HR brings out the scalpel—and then you’re going to miss the day you had a productivity tracker roughly sewn into your forearm, because of course that’s just the beginning. If I’ve learned anything from the news, it’s that rival foreign superpowers are obsessed with hacking things. And you can bet they’ll set to messing around with your body-chip, when the time comes. At the very least, they’ll try—there’s always the chance they might fail. For this week’s Giz Asks, we reached out to a number of experts to find out whether someone—the nerd down the street, the Russian government, whoever—could actually hack your microchip.

Matthew Green

Associate Professor, Computer Science, Johns Hopkins University, whose research focuses on techniques for privacy-enhanced information storage, among other things

Anything can be hacked. In theory, medical device software is supposed to be held to higher quality standards, but to date this mostly applies to safety and reliability concerns, rather than security. I’ve witnessed recent examples where researchers were able to take control of life-saving devices, like implantable cardiac defibrillators, and send commands that could potentially stop a patient’s heart. The security protecting these devices was much less sophisticated than what’s protecting your phone.

Assuming your microchip implant has a wireless connection (a pretty good assumption), there’s every chance that it will have some vulnerability that can be exploited—either to issue commands that the device is designed to process, or else to take over the device and cause it to operate in ways it was not designed to.

The real question in my mind is whether anyone will actually want to hack your implant. The big difference between a theoretical hack and one that gets exploited in the real world is usually money. Concretely: can someone find a way to turn a profit by hacking you? Whether that’s possible here depends very much on what that microchip is going to do for you, and how valuable it is to you for everything to keep working.

Kevin Warwick

Emeritus Professor of Engineering at Coventry and Reading Universities, described elsewhere as ‘The World’s First Human-Robot Hybrid’

It depends on what the microchip implant is. If it is an RFID/NFC type, then this is certainly possible, but a hacker would need to know that you have such an implant and where it is on your body. They would also need to know what you use it for; otherwise there might be no point. So technically it is possible but highly unlikely. For just about all other types of implant, I cannot imagine any hacker would, at this time, have the skills, abilities, reason, or knowledge to carry out such a hack.

Chris Harrison

Associate Professor at the Human-Computer Interaction Institute at Carnegie Mellon University and Director of the Future Interfaces Group

Yes—implantable microchips are going to be hackable (spoiler alert: implantables like pacemakers already are). Why? Because if something is compelling enough that you are willing to install it permanently in your body, you’re going to want it to be updatable. After installation, there will be settings and calibrations that need to be tweaked to your physiology; new features will come along, and probably some bug patches too. Unless you love incisions, you’re going to want it to be wirelessly updatable—which means it’s also hackable. The first rule of cybersecurity is that there is no such thing as a 100% secure system.
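To make the update-path risk concrete, here is a minimal Python sketch of the standard mitigation: the implant accepts only firmware images signed by the manufacturer's private key. This is an illustration, not any vendor's actual protocol; the apply_update routine and firmware payload are hypothetical, and it assumes the third-party cryptography package.

```python
# Minimal sketch of signed firmware updates (illustrative only).
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Manufacturer side: the private key never leaves the vendor.
vendor_key = Ed25519PrivateKey.generate()
trusted_pubkey = vendor_key.public_key()  # baked into the implant at manufacture

firmware_v2 = b"...new calibration tables and bug patches..."  # hypothetical image
signature = vendor_key.sign(firmware_v2)

def apply_update(blob: bytes, sig: bytes) -> bool:
    """Device side: refuse any image whose signature does not verify."""
    try:
        trusted_pubkey.verify(sig, blob)
    except InvalidSignature:
        return False  # tampered or unofficial image is rejected
    # flash_to_storage(blob)  # hypothetical device routine
    return True

assert apply_update(firmware_v2, signature)            # legitimate update accepted
assert not apply_update(b"attacker image", signature)  # forged image rejected
```

Even with signing, the point above stands: the wireless update channel is itself attack surface, and a leaked or mismanaged key defeats the whole scheme.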

Given the discomfort (and legal liability) of swapping out implantables, I think we’ll see a model more like Android Auto and Apple CarPlay. Cars are sort of like bodies—they are surprisingly painful to upgrade—and for that reason, 99.9% of people never upgrade the dash electronics in their cars. Rather than consumers being stuck with the (likely outdated at the time of purchase) touchscreen interface that shipped with their car, we’re moving to a more future-proof model where the screen in your car acts as a portal to your newer smartphone. Even still, you’ll probably upgrade your car every 5 or 10 years, because at some point hardware upgrades are needed to support software upgrades. I can easily foresee something analogous for implantables to reduce the need for invasive upgrades.

Also like bodies, cars are fragile/deadly things you’d think would be made unhackable, and yet automobiles are routinely hacked today. Given that cars are hackable and that their being hacked could kill you, does that mean you are going to give up your car? Nope. And that brings me to my last point: value vs. risk tradeoff. People drive cars despite the risks because they bring so much value and convenience. The same will be true of future implantables—there will be risks, both medically and from hacking. In order for the segment to succeed, it will have to bring a commensurate amount of value to consumers.

Avi Rubin

Professor of Computer Science and Director of the IoT Security Lab at Johns Hopkins University

Whether or not an implant can be hacked is a question of what technologies it utilizes. On the simple end of the spectrum are passive transponders that emit a code when queried and have no other functionality. These can often be found, for example, in pets, and are used for identification if the animal is lost. There are stories of humans implanting such things in themselves to replace their key fob to their apartment or office. These devices are unlikely to be hacked, although their owners might want to get their heads examined.
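As an illustration of how little there is to attack, a passive tag of the kind described here can be modeled in a few lines of Python. This is a pure simulation with a made-up, ISO-style pet ID; no radio stack is involved.

```python
# Toy model of a passive RFID/NFC transponder (simulation only).
class PassiveTag:
    def __init__(self, uid: str):
        self._uid = uid  # fixed at manufacture; the tag stores nothing else

    def respond_to_query(self) -> str:
        # Powered by the reader's field; always answers identically.
        return self._uid

pet_chip = PassiveTag(uid="985112003456789")  # hypothetical 15-digit pet ID
print(pet_chip.respond_to_query())  # any reader in range gets the same code
```

There is no software to exploit in the usual sense, though note that nothing stops a nearby reader from copying the code, a weakness that comes up again below.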

On the other end of the spectrum are sophisticated medical devices that utilize wireless communication protocols such as NFC, BLE, and IEEE 802.11 and have full-fledged processors. The susceptibility of an implant to hacking is entirely based on which components make up the device. Generic software utilized by the microchip increases the attack surface. So, for example, if an implant is running full-fledged Linux with 25 open-source software packages and three different wireless protocols, I would say it’s a prime candidate for being compromised. On the other hand, a custom ASIC with special-purpose functionality and a narrow communication capability that requires close proximity is safer.

Ultimately it comes down to well-known security principles. More software means more bugs, and thus more vulnerabilities. So unless you really need that microchip implant, you might consider passing. And if your doctor tells you it will save your life, ask if there are choices and, all other factors being equal, pick the one with the simplest design and the smallest code base.
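As a rough back-of-the-envelope illustration of that principle, consider how expected bug counts scale with code size. The defect density and line counts below are assumptions chosen purely for illustration; published estimates vary enormously.

```python
# Illustrative only: expected residual bugs scale with shipped code size.
DEFECTS_PER_KLOC = 1.0  # assumed residual defects per 1,000 lines (rule of thumb)

devices = {
    "custom ASIC firmware":              5,  # ~5 KLOC, one narrow protocol
    "embedded Linux + 25 packages": 15_000,  # kernel plus userland, 3 radios
}

for name, kloc in devices.items():
    print(f"{name}: ~{kloc * DEFECTS_PER_KLOC:,.0f} expected residual bugs")
```

Even if only a tiny fraction of those bugs are exploitable, the Linux-based design offers orders of magnitude more raw material to an attacker.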

Michael Zimmer

Associate Professor, Computer Science, Marquette University

Some of this concern depends on both what is meant by “hacking” and the purpose/functionality of the microchip itself. If I had a medical implant assisting with my heart rhythm or regulating my insulin, for example, I’d certainly be worried about the potential for someone to remotely interfere with the proper functioning of the implant.

But my larger concern is the potential misuse of embedded microchips pitched as a convenience for identity verification. Take the example of Wisconsin-based Three Square Market (32M), which recently announced plans to offer voluntary microchip implants to its employees, enabling them to open doors, log onto their computers, and purchase break-room snacks with a simple swipe of the hand.

While there might be some convenience gained by no longer having to remember to bring your access card or wallet to work (and employers don’t have to worry about employees improperly sharing access credentials), the potential for “function creep”—where the stated purpose of a technology ends up spilling over into other uses—is much too great. Often what appear to be simple technologies to provide benign conveniences shift into becoming infrastructures of surveillance or control used for purposes far beyond what was originally intended.

In the case of implanted RFID chips, there is no telling whether employers might start tracking how much time someone spends in the break room or the bathroom, or whether one is purchasing too much junk food from the vending machines, or whether one is loitering too long at another employee’s workstation. Employees might be automatically disciplined simply based on what their microchip data reveals, without even knowing the extent of the data monitoring and collection taking place. With an increased interest in tracking employees’ activities—at work as well as at home—during the COVID-19 pandemic, embracing workplace-based microchipping opens up a Pandora’s box of increased surveillance and control.

Some have argued that we’re already being tracked (and increasingly judged) based on data collected via our smartphones and wearable devices. True, but I can turn my smartphone off. I can leave my Fitbit at home. I can manage which apps have permission to track my location or activities. But with an embedded chip, I no longer have any ability to control when it gets scanned or by whom. Future advances might even allow reading “passive” chips at greater distances. It can easily become a pervasive surveillance technology completely outside my control (short of slicing it out of my flesh).

Moran Cerf

Professor, Neuroscience and Business, Northwestern University, who is working on smart chips for the brain

Anything that is connected to the internet is hackable. In fact, even some things that are not connected to the internet are hackable, since it is easy to persuade a person to connect things and open the gateway for incoming content to penetrate any vault.

It used to be the case that at least the ‘vault’ ceased to exist and became unhackable when you ceased to exist (i.e., when you died), but even that is no longer a certainty, as recent work from our lab is exploring the possibility of accessing information in the brains of organisms that have already died (by interacting with the brain as long as it is still able to process information). This means that if the information exists, someone can access it. Heck, if you can access it, it means others can too.

Whether it is in your brain, or in the safest storage you can imagine—as long as there is a way to get to it, this way will become a reality.

But… this is not new. Nor is it surprising. Accessing information and penetrating vaults has been with us for millennia. We just change the tool names, the methods, the scale, or the speed with which we do it. Changing people’s thoughts has been done through propaganda and manipulation since the dawn of rhetoric. It wore different guises when information manipulation (e.g., deepfakes, fake news) became a reality, and it is part of our routine existence in the digital world. We learned to live with it and work around it as part of learning how to evaluate content throughout life.

Our senses and our cognition are our brain’s way of dealing with incoming information and filtering the relevant and coherent signals from the noise. The brain lets through information that aligns with existing ideas, with concepts that lead to positive outcomes, with entities that reinforce predictions we made about the future, and with content that helps us exert more control over the world around us.

While hackers like myself are showing time and again that any system that can be hacked will be hacked, we also know that hackers face another challenge besides ‘getting in,’ one that is often the real barrier to a successful penetration: removing the traces of the hack. It is not enough to get into the bank and steal the money. Good hackers also need to be able to escape with the money, deposit it somewhere they can actually use it, and make sure there is no way to trace the route of the money back to them. While numerous hackers are able to break into the vault, only a few manage not to get caught later on.

Our brain is no different in this regard. Hacking into a brain and changing thoughts requires both getting in and removing the traces of the hack. The latter turns out to be more challenging, as any change to a thought or neural process will also generate a gap in the thought trajectory that can be spotted by surrounding modules.

As a former hacker and a current professor of neuroscience, I know that the merger of the two fields—penetrating systems and changing minds—is a critical moment in our understanding of reality. Our brains have evolved to trust internal processes. Our visual cortex trusts the input that comes from the eyes. Our hippocampus trusts the cascade of processes that directed a memory to be registered. We assume that anything that happens within the brain is ‘safe,’ because some system must have filtered out the false data beforehand. In digital terms: the communication within the brain is not encrypted, because the neural circuits assume there is no way to get past the senses without a critical, skeptical process evaluating the reliability of the data.

When neural implants are placed in our brains, this assumption is no longer a certainty. We will then have to learn how not to trust our own brains. And since the ‘we’ that has to learn that kind of skepticism is the brain itself, the learning might be challenging. We have never had an experience where we could not rely on our mental processes or trust our exhibited reality. In fact, the only individuals who experience this challenge today are those with delusions, who see things that are not there or suffer from conditions where their reality is not aligned with the one experienced by the majority. Up to now, we have treated people living under these conditions as suffering from a disease. When a large subset of the population is in that position, we may have to alter our expectations.

How to solve that?

While hackers like myself—the ‘immune system’ of communications—are working to improve security and the way we interface with the brain, to ensure that accessing it via microchip or other methods is challenging, there is a solution we all have to get used to, one that every hacker will tell you is the only real protection: assume you have already been hacked, and act as if the information inside the vault should be re-evaluated occasionally, because it cannot be blindly trusted.

Hackers do not trust any data on the network blindly, but occasionally perform ‘sanity checks’ that ask, “Is there a chance that data in my system was compromised?” They change passwords frequently or choose different modes of accessing information even when they have no evidence of a successful hack into their system. They change things simply because they know a successful hack may have occurred, and their job is to change in order to make it harder for malicious acts to propagate within the network.

We should do the same. Occasionally we should ask ourselves whether we can blindly trust the thoughts running through our heads, or whether we should consider revising our views on things—just to see if our ideas are aligned with our existence. We should use others as ‘backups’ for our ideas—by sharing our views with close friends and trustworthy allies who will occasionally reflect back whether our current views align with the ones they think we held before. If all your friends tell you that “something has changed in you,” this might be a good time to ask yourself: can I find it in me to consider that they might be right?

Most of us, currently, find this hard to do. But here is my advice on how to practice it. Occasionally assume it’s “April Fools’ Day” and see how you feel about incoming information. Ironically, April Fools’ is one of the few days in the year when people are genuinely skeptical of incoming information. Otherwise, we tend to blindly trust content that comes from sources we feel are reliable (our friends, colleagues, or our past selves). On April Fools’, however, if a friend says something that momentarily does not make sense, you might ask yourself: should I trust this, or should I try to view it objectively, with no bias, and see if it holds up against my other experiences?

While living in such a world is challenging (it is hard to constantly doubt every piece of incoming data), it might be a necessity in a world where neural implants are placed in our minds and have the potential to alter the core of our personality—our thoughts.

Andrea Matwyshyn

Associate Dean for Innovation and Technology and Professor of Law and Engineering Policy at Penn State Law

In the situation where “hack” means “accessed by an unauthorized third party,” the answer is potentially yes. The fact that a chip is inside the human body doesn’t necessarily alter the ability of an attacker to interact with it. These kinds of chips are frequently vulnerable to many of the same kinds of attacks as other contactless technologies that don’t reside under the skin. Depending on the sophistication of the implant and the technology used, an attacker might cause the leakage of identity information from the chip or, in more sophisticated implantable chips and devices, an attacker might be able to corrupt the information contained either on the implant itself or in external places reliant on the implant.

Chip implants are generally used for the purpose of uniquely identifying your body and sharing information about you in particular situations. For example, some cryptocurrency enthusiasts have implanted chips (that rely on integrated circuit devices or RFID transponders) to act as currency wallets. These implants might allow them to access the assets in their accounts with the swipe of a hand. Other implanted chips might store medical files, unique identification information, contact information, and other data. Some employers have asked employees to embed chips in their hands instead of using badges to gain access to corporate networks, open doors, and use corporate vending machines. In other words, implanted chips are designed to interact with sensors in the physical environment outside the body, “announcing” that your body is nearby. Particularly because the information on the chips has often been stored unencrypted for ease of communication, the properties that make the chip implants convenient to use may also make them vulnerable.

As the Department of Homeland Security recently explained in a white paper, RFID technologies in particular present security and privacy risks. Technologies that use RFID are potentially vulnerable to several modes of physical attack, including counterfeiting attacks that clone identifiers. That might happen through the use of a “rogue reader” to communicate with the embedded chip, tricking it into revealing its information and then duplicating it. The cloned information might then be used by an attacker as part of another bigger attack to gain access to systems that rely on that information for user authentication and then impair their confidentiality, integrity, and availability.
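To see why a static identifier is so weak, here is a Python simulation of the clone-and-replay attack just described. The Door class, the ID format, and the enrollment scheme are invented for illustration; no actual RFID signaling is involved.

```python
# Simulation of cloning a static-ID credential (illustrative only).
class Door:
    def __init__(self, enrolled_ids):
        self.enrolled_ids = set(enrolled_ids)

    def badge_in(self, presented_id: str) -> bool:
        # The static ID is the entire secret.
        return presented_id in self.enrolled_ids

employee_chip_id = "04:A2:19:7F:33:81:C0"  # hypothetical tag UID
door = Door(enrolled_ids=[employee_chip_id])

# A rogue reader brushes past the employee and records the broadcast ID...
skimmed_id = employee_chip_id
# ...then writes it onto a blank tag or an emulator.
print(door.badge_in(skimmed_id))  # True: the door cannot tell clone from chip
```

A challenge-response design, where the reader sends a fresh random nonce and the chip answers with a keyed MAC over it, would defeat this simple replay, which is one reason more capable contactless credentials do exactly that.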

For example, if your employer asks you to implant a chip in your body in order to open security doors, log on to your computer, and to activate the corporate smoothie machine, someone with the right equipment could potentially generate a copy of the information on the chip in your body and use it to “impersonate” you and attribute activity to you that you didn’t actually cause. In other words, after reading the chip copy, the security mechanisms on the smoothie machine may mistakenly believe an attacker ordering 40 smoothies to be you, potentially automatically billing your account. Or, your computer might give the attacker access to sensitive corporate documents, believing him to be you.

But, apart from these third-party attacks, implants and other “Internet of Bodies” devices—body-attached and embedded devices that use the human body as a tech platform—have recently become a sticking point in labor negotiations: employees are raising concerns that involve the use and repurposing of the information generated by their bodies to their detriment. For example, your employer might install stealthy chip readers in your workplace (in addition to the ones you know about) near the bathroom to try to determine how often and how long you are in the bathroom each day. Because the chip in your body may be triggered by external sensors that you might not always know are there, the chip in your body may “leak” information about your location and movement in ways you no longer fully control. Alternatively, your employer might aggregate your body data from the device and share it with consultants or insurers to decide if you are perhaps “inefficient” to keep on as an employee. Especially if the employer owns the chip under the terms of your employment, your employer may claim that the information generated by your body (and communicated by the chip) is available to them to use for any purpose they choose. While the employer certainly wouldn’t view this as an “attack” because of the ostensible consent from the employee, employees might view this type of situation as unauthorized access to hardware in their bodies that exceeds anything they anticipated at the time of getting the implant.

Do you have a burning question for Giz Asks? Email us at tipbox@gizmodo.com.
