The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Daniel Cuthbert, Global Head of Security Research at Banco Santander. Daniel discusses how to use application security testing and testing standards to improve security.
Natalia: What is an application security test and what does it entail?
Daniel: Let’s say I have a traditional legacy banking application. Users can sign in using their web browser to gain access to financial details or funds, move money around, and receive money. Normally, when you want an application assessment done for that type of application, you’re looking at the authentication and authorization processes, how the application architecture works, how it handles data, and how the user interacts with it. As applications have grown from a single application that interacts with a back-end database to microservices, all the ways that data is moved around and stored, and the processes behind them, become more important.
Generally, an application test makes sure that at no point can somebody gain unauthorized access to data or somebody else’s money. And we want to make sure that an authorized user can’t impersonate another user, gain access to somebody else’s funds, or cause a system in the architecture to do something that the developers or engineers never expected to happen.
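The core property Daniel describes, that an authorized user must not be able to reach another user’s data, is what testers call an object-level authorization check. The sketch below is purely illustrative, with made-up names (`ACCOUNTS`, `get_account`, `AuthorizationError`), showing the kind of check an assessor tries to bypass:

```python
# Illustrative sketch of an object-level authorization check, the control
# an application test probes by impersonating one user and requesting
# another user's resources. All names here are hypothetical.

class AuthorizationError(Exception):
    pass

# Toy in-memory data store standing in for a back-end database.
ACCOUNTS = {
    "acct-1": {"owner": "alice", "balance": 1200},
    "acct-2": {"owner": "bob", "balance": 350},
}

def get_account(requesting_user: str, account_id: str) -> dict:
    """Return account details only if the requester owns the account."""
    account = ACCOUNTS[account_id]
    # The check a tester tries to bypass: does the server verify ownership
    # on every request, or only trust what the client sends?
    if account["owner"] != requesting_user:
        raise AuthorizationError("user may not access this account")
    return account
```

An assessment would exercise exactly this path: authenticate as one user, then request another user’s account ID and confirm the server refuses.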
Natalia: What is the Open Web Application Security Project (OWASP) Application Security Verification Standard (ASVS), and how should organizations be using the standard?
Daniel: ASVS stands for Application Security Verification Standard. The idea was to normalize how people conduct and receive application security tests. Prior to it, there was no methodology. There was a lot of ambiguity in the industry. You’d say, “I need an app test done,” and you’d hope that the company you chose had a methodology in place and the people doing the assessment were capable of following a methodology.
In reality, that wasn’t the case. It varied across various penetration test houses. Those receiving consultancy for penetration tests and application tests didn’t have a structured idea of what should be tested or what constituted a secure robust application. That’s where the ASVS comes in. Now you can say, “I need an application test done. I want a Level 2 assessment of this application.” The person receiving the test knows exactly what they’re expecting, and the person doing the test knows exactly what the client is expecting. It gets everybody on the same page, and that’s what we were missing before.
Natalia: How should companies prioritize and navigate the ASVS levels and controls?
Daniel: When they first look at the ASVS, many people get intimidated and overwhelmed. First, stay calm. The three levels are there as a guideline. Level 1 should be the absolute bare minimum. That’s the price of entry if you’re putting an application on the Internet, and we designed Level 1 to be capable of being automated. As far as tools to automate Level 1, OWASP Zed Attack Proxy (ZAP) is getting there. In 2021, an application should be at Level 2, especially if we take privacy into consideration. Level 3 is unique. Most people never need Level 3, which was designed for applications that are critical and have a strong need for security—where if it goes down, there’s a loss of life or massive impact. Level 3 is expensive and time-consuming, but you expect that if it’s, say, a power plant. You don’t want it to be quickly thrown together in a couple of hours.
With all the levels, you don’t have to go through every single control; this is where threat modeling comes in. If your application makes use of a back-end database, and you have microservices, you take the parts that you need from Level 2 and build your testing program. Many people think that you have to test every single control, but you don’t. You should customize it as much as you need.
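Daniel’s point about tailoring rather than testing every control can be sketched as a simple selection over a control catalog: the threat model tells you which component areas are in scope, the target level bounds how deep you go. The control IDs and areas below are invented for the example and are not real ASVS requirement numbers:

```python
# Hypothetical sketch of building a tailored test plan from a threat
# model: keep only controls at or below the target level whose area
# matches a component the application actually uses. The catalog entries
# are illustrative, not real ASVS requirements.

CONTROL_CATALOG = [
    {"id": "V2.1", "area": "authentication", "level": 1},
    {"id": "V4.2", "area": "access-control", "level": 2},
    {"id": "V8.3", "area": "database", "level": 2},
    {"id": "V12.5", "area": "file-upload", "level": 2},
]

def build_test_plan(components: set, target_level: int) -> list:
    """Select control IDs relevant to the in-scope components."""
    return [
        c["id"]
        for c in CONTROL_CATALOG
        if c["level"] <= target_level and c["area"] in components
    ]
```

For an app with authentication and a back-end database but no file uploads, only the matching controls land in the plan; the rest are consciously out of scope rather than silently skipped.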
Natalia: What’s the right cadence for conducting application security tests?
Daniel: The way we build applications has changed drastically. Ten years ago, a lot of people were doing the waterfall approach using functional specifications like, “I want to build a widget to sell shoes.” Great. Somebody gives them money and time. Developers go develop, and toward the end, they start going through functional user acceptance testing (UAT) and get somebody to do a penetration test. Worst mistake ever. In my experience, we’d go live on Monday, and the penetration test would happen the week before.
What we’ve seen with the adoption of agile is the shifting left of the software development lifecycle (SDLC). We’re starting to see people think about security not only as an add-on at the end but as part of the function. We expect the app to be secure, usable, and robust. We’re adopting security standards. We’re adopting the guardrails for our continuous integration and continuous delivery pipeline. That means developers write a function, check the code into Git, or whatever repository, and the code is checked that it’s robust, formatted correctly, and secure. In the industry, we’re moving away from relying on that final application test to constantly looking during the entire lifecycle for bugs, misconfigurations, or incorrectly used encryption or encoding.
Natalia: What common mistakes do companies make that impact the results of an application security assessment?
Daniel: The first one is companies not embracing the lovely world of threat modeling. A threat model can save you time and give you direction. When people bypass the fundamental stage of threat modeling, they’re burning cycles. If you adopt the threat model and say, “This is every single way some bad person is going to break our favorite widget tool,” then you can build upon that.
The second mistake is not understanding what all the components do. We no longer build applications that are a single web server, such as Internet Information Services (IIS) or NGINX, in front of a database. It’s rare to see that today. Today’s applications are complex. Because multiple teams are responsible for individual parts of that process, they don’t all work together to understand simple things like the data flow. Where’s the data held? How does this application process that data? Often, everyone assumes the other team is doing it. This is a problem. Either the scrum master or product owner should own full visibility of the application, especially if it’s a large project. But it varies depending on the organization. We’re not in a mature enough stage yet for it to be a defined role.
Also, the gap between security and development is still too wide. Security didn’t make many friends. We were constantly belittling developers. I was part of that, and we were wrong. At the moment, we’re trying to bridge the two teams. We want developers to see that security is trying to help them.
We should be building a way for developers to be as creative and cool as we expect them to be while setting guardrails to stop common mistakes from appearing in the code pipeline. It’s very hard to write secure code, but we can embrace the fourth generation of continuous integration and continuous delivery (CI/CD). Check your code in; then do a series of tests. Make sure that at that point and at that commit, the code is as robust, secure, and proper as it should be.
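One concrete form of the guardrail Daniel describes is a check that runs on every commit and fails the pipeline on common mistakes before they merge. The sketch below flags obvious hardcoded credentials; it is only an illustration of the pattern, since real pipelines would run a dedicated secret scanner or SAST tool rather than these two hypothetical regexes:

```python
# Minimal sketch of a CI guardrail: scan source for obviously hardcoded
# credentials so the pipeline can reject the commit. The patterns are
# deliberately simple and illustrative; production pipelines use
# purpose-built scanning tools.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]+['\"]"),
]

def find_secrets(source: str) -> list:
    """Return offending lines; a non-empty result fails the check."""
    hits = []
    for line in source.splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(line.strip())
    return hits
```

The design point is the one in the interview: the developer stays creative, and the pipeline, not a human reviewer at the end, enforces the floor on every commit.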
Natalia: How should the security team work with developers to protect against vulnerabilities?
Daniel: I don’t expect developers to understand all the latest vulnerabilities. That’s the role of the security or security engineering team. As teams mature, the security engineering or security team acts as the go-to bridge; they understand the vulnerabilities and how they’re exploited, and they translate that into how people are building code for their organization. They’re also looking at the various tools or processes that could be leveraged to stop those vulnerabilities from becoming an issue.
One of the really cool things that I’m starting to see with GitHub is GitHub insights. Let’s say there’s a large organization that has thousands of repositories. You’ll probably see a common pattern of vulnerabilities if you look across all those repositories. We’re getting to the stage where we’re going to have a “Minority Report” style function for security.
On a monthly basis, I can say, “Show me the teams that are checking in bugs—let’s say deserialization.” I want to understand a problem before it becomes a major one and work with those teams to say, “Of the last 10 arguments, 4 of them have been flagged as being vulnerable for deserialization bugs. Let’s sit down and understand how you’re building, what you’re building toward, and what frameworks you’re trying to adopt. Can we make better tools for you to protect against the vulnerability? Do you need to understand the vulnerability itself?” The tools, pipelines, and education are out there. We can start being that bridge.
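For readers unfamiliar with the deserialization bugs Daniel uses as his example: the vulnerability class arises when a program reconstructs objects from untrusted input using a format that can encode behavior, not just data. A generic Python illustration (not the specific tooling discussed in the interview) contrasts `pickle`, which can execute attacker-controlled code during loading, with a data-only format like JSON:

```python
# Sketch of the deserialization bug class: Python's pickle format can
# embed code that runs during loading (e.g. via __reduce__), so calling
# pickle.loads on untrusted input is dangerous. JSON only yields plain
# data types, so json.loads carries no such code-execution risk.
import json
import pickle

def unsafe_load(blob: bytes):
    # DANGEROUS on untrusted input: deserialization can execute
    # attacker-chosen code before this function even returns.
    return pickle.loads(blob)

def safe_load(blob: bytes):
    # Produces only dicts, lists, strings, numbers, booleans, and None.
    return json.loads(blob)
```

This is exactly the kind of pattern a security engineering team can flag at commit time and then, as Daniel says, sit down with the team to swap in a safer format or framework.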
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.