Former White House CIO Shares Enduring Security Strategies

Theresa Payton explains the strategies organizations should consider as they integrate layers of new technology.

Infosecurity ISACA North America Conference and Expo – New York, NY – In her two-and-a-half years as White House CIO, Theresa Payton learned valuable long-term lessons about securely adopting new technology, which she shared today with an audience of cybersecurity industry pros.

“Every year we’re layering in new technologies to be considered, and it feels like we have to change our strategy every year, but [we] don’t,” she said in her opening keynote talk at the Infosecurity ISACA Conference and Expo, held here this week. Payton, the first woman to hold the position of White House CIO, is now the founder and CEO at Fortalice.

There are 3,000 staff supporting the Executive Office of the President, Payton continued, and they don’t all fit in the White House. “They’re flying all around the world. My job was to make sure that I could extend the desk in Washington, DC, to them wherever they were so that they could do their jobs.”

In the age of the smartphone, employees wanted everything to be simple – to work like an app. And not only did they want the latest technology, they wanted it secured.

As Payton pointed out, the challenges she faced then are similar to those the security professionals in her audience face every day. Today’s employees are also “mobile and global,” and it’s the responsibility of security practitioners to ensure the technologies they’re using are secure.

“We have never actually designed security with the human in mind,” Payton said, pointing to strong passwords as an example. Nobody ever thanks the security team for enforcing strong passwords, which have to be complex and regularly changed. Thinking about human-centered design, Payton would say to her team, “What’s the warm hug around the user?”

“How do we basically assume that technology will fail us, that humans will fail us, and therefore what are we going to do differently?” she asked. “What are the safety nets we’re going to put around everybody?” It’s “up to us,” she said, to break the tradition and think differently.

More Tech, More Problems
Payton pointed to the Internet of Things (IoT) as an example of new tech challenging security pros and encouraged the audience to think about incident response playbooks. As an example of why, she described a client whose transportation centers were undergoing a renovation that involved installing IoT lightbulbs and collecting travelers’ data and preferences through a mobile app. An engineer called it an “incredible customized experience.” Payton called it “really creepy.”

And what if an incident like the 2016 Mirai botnet occurred and knocked the connected devices offline? It’s an important factor to consider in an incident response playbook: What will you do? “It will happen again,” she said. “There will be a virus, there will be malware that puts your operations at risk.”

When it happens, what limited functionality will your company have left? Payton encouraged attendees to reconsider their playbook and discuss it with their teams.
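Payton’s question lends itself to something teams can script before an outage. The sketch below is an illustration of that idea, not anything she presented: a hypothetical inventory of connected systems (the names, addresses, and fallback notes are all placeholders) gets polled, and anything unreachable prints the manual fallback agreed on in the playbook.

```python
import socket

# Hypothetical inventory (placeholder values): system -> (host, port, manual fallback)
CONNECTED_SYSTEMS = {
    "lobby_lighting":    ("10.0.4.10", 443, "switch lights to wall-panel override"),
    "passenger_app_api": ("10.0.8.21", 443, "push updates via signage and PA announcements"),
}

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the system succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_degraded_mode_check() -> None:
    """Flag offline systems and print the pre-agreed manual fallback for each."""
    for name, (host, port, fallback) in CONNECTED_SYSTEMS.items():
        if not is_reachable(host, port):
            print(f"[PLAYBOOK] {name} is offline -> {fallback}")

if __name__ == "__main__":
    run_degraded_mode_check()
```

The value isn’t the script itself but deciding the fallback column in advance, so the playbook answers “what will you do?” before malware takes the devices down.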

Another factor to consider in an incident response playbook, she said, is what to do if your company’s data is discovered on the Dark Web. As an example, Payton told the story of a client whose payment card vendor had its source code for sale on the Dark Web. A cybercriminal had stolen the source code and was seen on Reddit, bragging that it was for sale. Using a Dark Web alias, Payton communicated with the criminal online. Her research revealed the code was legitimate, albeit a few versions old, and the attacker had discovered vulnerabilities in it.

To pay or not to pay? It’s a tricky subject to broach. As Payton pointed out, she doesn’t advise paying cybercriminals in ransomware attacks. But as she told her client, if they could get the source code off the marketplace, it would be for “the greater good.” The client paid the criminal extra for exclusivity; no other attackers bought the code, and it didn’t become a bigger problem.

Companies should ask themselves where they stand if they land in a situation like this: Will you consult an outside firm or handle it yourself? Do you have alternate identities for communicating on the Dark Web, and have you discussed them with your legal team? Using alternate identities on the Dark Web is a difficult topic to address with legal, she added.

Blocking BEC Attacks
As technology evolves and deep fake artificial intelligence (AI) grows more popular, business email compromise (BEC) attacks are growing more common and sophisticated. Payton gave the audience a piece of advice: Do not use your public-facing domain name for moving money.

Cybercriminals do their open source intelligence. They know your CEO. They know your CFO. They can figure out who your vendors are and what marketing campaigns you’re running. With knowledge gleaned from an Internet search, they have enough to send a convincing social engineering email and get money transferred.

“Get a domain name that is not your public-facing domain name,” Payton said. Get a set of email credentials only for people who are allowed to move money. Tell your bank you’re no longer using the public-facing domain name for anything to do with wire transfers and money movement. From there, create a template to be used among employees sending and fulfilling financial requests.
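As a rough illustration of that rule (not code from Payton’s talk), the gate can be as small as a check that any money-movement request arrives from the dedicated domain and from someone on the short list of people allowed to move money. The domain and addresses below are placeholders for whatever an organization actually provisions.

```python
# Hypothetical dedicated finance domain and sender allowlist (placeholders,
# not real infrastructure): only these addresses may initiate wire requests.
FINANCE_DOMAIN = "payments.example-internal.com"   # deliberately not the public-facing domain
AUTHORIZED_SENDERS = {
    "cfo@payments.example-internal.com",
    "controller@payments.example-internal.com",
}

def may_process_wire_request(sender_address: str) -> bool:
    """Reject money-movement requests from outside the dedicated domain or from unlisted senders."""
    sender = sender_address.strip().lower()
    domain = sender.rsplit("@", 1)[-1]
    return domain == FINANCE_DOMAIN and sender in AUTHORIZED_SENDERS

# A spoofed request from the public-facing domain fails; a legitimate one passes.
assert not may_process_wire_request("ceo@example.com")
assert may_process_wire_request("CFO@payments.example-internal.com")
```

Combined with telling the bank the public domain is no longer used for wire transfers, a lookalike address on the public-facing domain never reaches a human approver in the first place.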

Decide on a code word you text to each other that isn’t a term shared on social media, Payton advised. This way, a request that doesn’t come with a code word will appear suspicious. A large healthcare provider adopted the method, she said – and it has already worked. The same strategy can be used for transferring intellectual property.
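A hedged sketch of the code-word step follows, again with placeholder values: in practice the shared word would be agreed out of band (texted, never posted on social media or hard-coded), and the comparison is constant-time so a logged or timed failed guess gives nothing away.

```python
import hmac

# Placeholder only; the real code word is distributed out of band and
# never stored alongside the email system.
SHARED_CODE_WORD = "example-code-word"

def code_word_matches(supplied: str) -> bool:
    """Constant-time comparison of the texted code word."""
    return hmac.compare_digest(supplied.strip().lower(), SHARED_CODE_WORD.lower())

def fulfill_wire_request(request_id: str, supplied_code_word: str) -> str:
    """Hold any request whose out-of-band code word is missing or wrong."""
    if not code_word_matches(supplied_code_word):
        return f"{request_id}: HOLD - no valid code word; call the requester to verify"
    return f"{request_id}: code word verified; proceed with the normal approval chain"

print(fulfill_wire_request("WIRE-1042", "wrong-word"))   # prints the HOLD message
```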

“If you have this creativity in your design, you make it very hard for the cybercriminals to figure it out,” she said.

BEC attacks are also growing more sophisticated with deep fake AI, Payton continued, noting four cases in which an attacker created a deep fake of a CEO’s voice. It’s a trend that is likely to grow: CEOs are public figures. Think about all the time they spend talking to the media and speaking in public. “Creating a deep fake AI of a voice is not that hard to do,” she added.

Most of these cases weren’t successful, but in one the attacker was wired £250,000.
