Building secure applications has to start with the IT department. We have fought the battle for far too long of trying to make a different group responsible for securing our applications. Whether you call it application security, product security, DevSecOps, or something else, it just doesn’t work. These groups can help identify and implement some processes, but secure code starts with the development team.
If we really want to start building more secure applications, we have to say what no one wants to say: the development team is responsible for the security of its applications. This spans the entire product team, not just the developers. It starts at the management level when discussing the budget for an application and runs all the way to how an incident is handled. We currently have this stigma that security is someone else’s problem, so it is left out of the conversation. Sure, we have seen some teams start to include security in design sessions or work to get tools into their pipelines, but they are still relying on the security team to do the actual work. This is ineffective. Want proof? Just look back over the last few decades.
You might be thinking that we need a security team because they are the security experts. I am not saying that a security team doesn’t provide value or that we need to get rid of it. Instead, we need to look at how we use our security teams, and we need to take a hard look at what we should expect from those who are directly involved in building our applications. Let’s look at a few examples.
Budget
There is a cost to building applications. Of course we know it will take time and resources, but how often do we see the IT group consider the cost of security tooling or requirements? The security tools should be agreed upon and laid out so the development teams know what is required during development. These tools should be built into the development budget for that application. There should also be budget for any required third-party testing or compliance activities. For example, does the application need an annual pen test, will you offer bug bounties, or do you have specific compliance requirements? It should not be unusual for IT leadership to consider these things and build them into the plan. This also leads into planning for keeping systems current with ongoing maintenance.
Development
Building a quality application doesn’t just mean that it functions as expected. It also means that proper security practices are embedded in it, so the application not only provides functionality but also protects the users, the company, and the data from misuse. Is it overreaching to expect that a developer understands the risks around building applications and different functionality? Shouldn’t we expect a developer to know the risks around parsing XML? What about understanding that access to data must be restricted to authorized users only? These are basic concepts that every developer should know. Unfortunately, they are not concepts we actually require them to know. We don’t ask these types of questions during interviews or require them for hiring. We do ask that they know OOP, agile, CI/CD, our specific language, and a host of other things around development. But we never ask about application risks and how to avoid them.
When product owners, developers, and testers understand these concepts, there is a better chance appropriate controls will be built in.
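To make the XML example concrete, here is a minimal sketch of what that baseline knowledge looks like in practice, assuming Python and the defusedxml library; the parse_invoice function and the payload shape are invented for illustration. The point is simply that a developer should know why feeding untrusted XML to a default parser can be dangerous.

```python
# Hypothetical sketch: parsing untrusted XML without exposing the app
# to external-entity (XXE) or entity-expansion attacks.
import defusedxml.ElementTree as ET  # hardened drop-in for the ElementTree API


def parse_invoice(xml_payload: str) -> dict:
    # fromstring raises defusedxml exceptions (DTDForbidden,
    # EntitiesForbidden, ExternalReferenceForbidden) instead of
    # silently expanding attacker-controlled entities.
    root = ET.fromstring(xml_payload, forbid_dtd=True)
    return {
        item.get("sku"): item.findtext("quantity")
        for item in root.findall("item")
    }
```

A developer who understands the risk will reach for something like this (or the equivalent hardening flags in their stack) without being told to by a security team.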
Testing
Almost all organizations have a testing team to help verify the application before it is released. Often the goal is to verify that the application meets the requirements of the company or the end user and doesn’t break. Today, those requirements rarely include security, but why not? Why do we not require QA testers to have a basic understanding of security risks within an application? It is not difficult to look at a function that accesses data and ask, “who should and should not have access to this?”, and then follow that with “let’s create a test case to verify the authorized user has access and the unauthorized user doesn’t.” Again, we don’t have any requirement for testers to have this basic knowledge when we are hiring them.
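As an illustration of how small that test case can be, here is a hedged sketch in Python with pytest; the get_order function, the order data, and the role model are invented for this example, not taken from any particular application.

```python
# Hypothetical sketch: treating authorization as an ordinary QA test case.
import pytest

ORDERS = {"A-100": {"owner": "alice", "total": 42.00}}


def get_order(order_id: str, requesting_user: str, role: str) -> dict:
    """Return an order only to its owner or to an admin."""
    order = ORDERS[order_id]
    if role != "admin" and requesting_user != order["owner"]:
        raise PermissionError("not authorized for this order")
    return order


def test_owner_can_read_own_order():
    # The authorized user gets the data they own.
    assert get_order("A-100", "alice", "customer")["total"] == 42.00


def test_other_customer_is_denied():
    # A different customer is refused, not silently served someone else's data.
    with pytest.raises(PermissionError):
        get_order("A-100", "bob", "customer")
```

Nothing here requires a security specialist; it is the same positive-and-negative test pattern QA already uses for functional requirements, applied to access control.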
Instead, we build external security teams and task them with securing our applications. We tell ourselves that if we give the same yearly security training to the developers, we are making progress. Maybe it is a little progress, but it is tough to see the needle moving. It becomes a constant game of cat and mouse, throwing bugs into the backlog. It builds animosity between the IT and security teams because neither has any control over the other. One team arbitrarily picks tools and training and creates processes the other has to follow. The other builds applications in a vacuum and expects the security team to carry the budget for the required tools and processes.
The security teams still exist in this scenario; they are just not solely responsible for the security of the applications. Instead, they focus on new risks, guide the product teams on security practices, and audit that the guidance is being implemented properly. Responsibility and accountability need to rest with those who actually have the control. When they start becoming accountable, they will look to engage with security more collaboratively.
I have always said, if the directive comes down through IT, it will be followed. If it comes from an outside group, like security, it is an uphill battle.
Let me know what you think.