In a previous post I discussed how QA plays a critical role in the security of an application. Like QA and the developers, business analysts and product managers are also crucial to a successful security development lifecycle. Not to add any pressure, but it is these two roles that feed the security requirements to the other groups.
When designing an application, the focus is usually placed on ensuring that the end-user functionality, the functionality that solves a specific problem, works as expected. To use a simple banking application as an example, a customer may need to view their account online or on a mobile device, transfer money between accounts, or perform other banking tasks. The business analyst's job is to identify this needed functionality and define how it should work.
A lot of what we do in security involves looking deeper into how the application "should" work. It is more than just ensuring that when I pull up my account I see my own account info. If we dig a little deeper, a question may be: "What happens if I attempt to view another user's account?" By pushing our questioning a little further, we can start to flesh out the details of how we expect the system to react in these scenarios.
One of the biggest issues in security is the ability to view other users' information. We have seen this in many breaches where modifying a simple query string value allowed viewing the private details of another user. We can address this with a design requirement that states: when trying to view an account you are not authorized to see, you must receive an HTTP status code of 403 Forbidden. This requirement helps ensure that the developers are thinking about authorization during the development phase. It also gives the QA testers another test case to run. Of course, the example requirement may raise other concerns, such as account harvesting, but that is beyond the point of this post.
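As a rough illustration, here is a minimal sketch of what that requirement might look like in a route handler. This uses Express purely as an example framework; the route, the in-memory ownership table, and the idea of reading the user from a header are all invented for demonstration. In a real application the authenticated user would come from a verified session or token.

```typescript
import express, { Request, Response } from "express";

const app = express();

// In-memory stand-in for a real data store: account id -> owner id.
const accountOwners: Record<string, string> = {
  "1001": "alice",
  "1002": "bob",
};

// Hypothetical helper. In a real app the user id would come from a
// verified session or token, never from a plain request header.
function getAuthenticatedUserId(req: Request): string {
  return req.header("x-user-id") ?? "";
}

app.get("/accounts/:accountId", (req: Request, res: Response) => {
  const { accountId } = req.params;
  const userId = getAuthenticatedUserId(req);

  // The requirement from above: viewing an account you are not
  // authorized to see must return 403 Forbidden.
  if (accountOwners[accountId] !== userId) {
    res.status(403).send("Forbidden");
    return;
  }

  res.json({ accountId, owner: userId });
});

app.listen(3000);
```

Notice that the requirement gives both the developer and the tester something concrete: the developer knows an ownership check must exist, and QA knows exactly what response to expect when it fails.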
The point here is that by doing what we have been doing all along, just digging a little deeper, it is possible to start adding simple requirements that build security directly into the application. With the example above, we can even stop treating this as a separate classification of flaw: it is no longer a "security" bug, just a functional bug. It gets tracked with all the other issues and should be remediated in a timely fashion.
When we don't define these types of requirements, the implementation is left to the developers, which makes it guesswork. How does a developer know whether an account should be limited to one person? It depends on the application. How would QA know whether to test for it if there are no requirements? Of course we can make assumptions about how it should work, but defining the requirements ahead of time makes it concrete.
Another aspect that could be better defined by the business teams is the application's input fields. One of the critical components of a good application security program is strong input validation. The better the fields are defined, the easier it is to implement strong validation and to create test scripts for those definitions. Here are a few examples (a validation sketch follows the list):
- What are the types of data that should be accepted?
- Is there a max length for that field?
- Can that date be before a certain time period?
- Can that number be negative?
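To make that concrete, here is a minimal sketch of how field definitions like these might translate into validation code. The form fields, the 50-character limit, and the 1900-01-01 cutoff date are all invented for illustration; the point is that each answered question becomes a checkable rule.

```typescript
// Hypothetical field rules derived from business-team answers to the
// questions above.
const MAX_NAME_LENGTH = 50;                          // max length for the field
const EARLIEST_BIRTH_DATE = new Date("1900-01-01");  // earliest allowed date

interface NewCustomerForm {
  name: string;
  birthDate: string; // ISO date string, e.g. "1985-03-20"
  initialDeposit: number;
}

// Returns a list of rule violations; an empty list means the input passed.
function validateNewCustomer(input: NewCustomerForm): string[] {
  const errors: string[] = [];

  // Type of data accepted, plus a maximum length.
  if (typeof input.name !== "string" ||
      input.name.length === 0 ||
      input.name.length > MAX_NAME_LENGTH) {
    errors.push(`name must be 1-${MAX_NAME_LENGTH} characters`);
  }

  // The date must parse and may not be before the defined time period.
  const birthDate = new Date(input.birthDate);
  if (Number.isNaN(birthDate.getTime()) ||
      birthDate.getTime() < EARLIEST_BIRTH_DATE.getTime()) {
    errors.push("birthDate must be a valid date on or after 1900-01-01");
  }

  // The number cannot be negative.
  if (typeof input.initialDeposit !== "number" ||
      Number.isNaN(input.initialDeposit) ||
      input.initialDeposit < 0) {
    errors.push("initialDeposit must be a non-negative number");
  }

  return errors;
}

// Example: this input violates two of the defined rules.
console.log(validateNewCustomer({
  name: "",
  birthDate: "1850-06-01",
  initialDeposit: 25,
}));
```

Each rule here doubles as a QA test case: feed the field a value just past its boundary and verify the application rejects it.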
Developing secure applications is a team effort. No single group can do it alone. There needs to be a strong bond among the groups to ensure that the product is the best it can be. Stay tuned for more posts on how all of the teams in the SDLC can work together and play a critical role in the overall security of the application.