Successful web application attacks, and the data breaches that result from them, have become everyday news, with large corporations being hit constantly.
Our article covering major security breaches in well-known companies clearly demonstrates that there are many gaps in web security, which cause multi-million dollar damages to companies worldwide. In this article we analyze the best security practices and principles to help increase your web application security.
While security experts are adamant that there is still much to improve in most web applications' security, the gaping security holes that attackers exploit remain present, as confirmed by the latest string of attacks on Yahoo and several departments of the United States government.
These attacks, as one can imagine, cause financial loss as well as loss of client trust. If you held an account with a company that suffered a data breach, you would think twice before trusting that company with your data again. Recently, developers have been brought into the fold with regards to web application security, a field that a couple of years ago was only relevant to security professionals. Nowadays, security has become a requirement that a web application developer has to implement in order to meet all the necessary deliverables. Security needs to become a part of the development process, implemented in the code as it is being written, and not just an afterthought that becomes relevant after an attack.
Security has to be a part of every step of the software development life cycle due to its importance. A chain is only as strong as its weakest link, and so is a web application: a low-level vulnerability can provide an attacker with enough of a foothold to escalate the exploit to a higher level. Below are some principles that every web application developer should follow throughout the SDLC, to ensure that they are writing code that can withstand potential attacks.
The Defense in Depth Approach
Defense in depth is a concept whereby a system that needs to be secured sits behind multiple layers of security. Here, redundancy is key: if one security mechanism fails, others will catch the vulnerability or block its exploitation. It is important that these layers of security are independent from each other, so that if one is compromised, the others are not affected. It may appear that integrating the mechanisms with each other would make for a better security system, for instance by having one mechanism alert the others when it detects a vulnerability, so they can watch for anything the first might have missed. This is not the case; it only makes for a weaker defense. If the first layer is compromised, the integration could lead to the other layers being compromised as well, which is why separate and independent mechanisms are the best implementation to go with.
One example of implementing a defense in depth approach would be to restrict an administrator panel so that it can only be accessed from a particular IP address. Even though credentials in the form of a username and password provide enough protection for most cases, the added layer of protection comes in handy: if the password is disclosed to an attacker, the credential check alone no longer protects the panel, but the IP restriction still blocks access. By implementing another small but robust security feature, you will be moving towards a far more resilient defense.
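The layering described above can be sketched as two independent checks, where either one failing is enough to deny access. The IP address and the shape of the credential check are illustrative assumptions, not a specific framework's API:

```python
# A minimal sketch of defense in depth for an admin panel:
# a network-level restriction layered on top of the usual
# username/password authentication.
ADMIN_ALLOWED_IPS = {"203.0.113.10"}  # e.g. the office's static IP (example value)

def can_access_admin(remote_ip: str, credentials_valid: bool) -> bool:
    # Layer 1: IP restriction -- fails closed for unknown addresses,
    # so a stolen password alone is not enough to get in.
    if remote_ip not in ADMIN_ALLOWED_IPS:
        return False
    # Layer 2: normal credential authentication.
    return credentials_valid
```

Note that the two layers share no state: compromising the credential check tells an attacker nothing that helps bypass the IP restriction, which is exactly the independence the principle calls for.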
On the other hand, a security feature should not be a complete inconvenience to the user. For example, allowing access to an admin panel from one IP address makes sense, but requiring the user to pass through too many security checks will lead the user to take shortcuts that render all the security features that have been set up futile.
For example, if you require users to change their password every day, you can be sure that these passwords will end up written down on pieces of paper, making the environment less secure than it was to begin with. This is why there needs to be a balance between keeping a system secure and still allowing users to utilise it.
Filtering User Input
The key principle is not to trust the end user, since one can never know for sure whether a user's intent is malicious or whether they are simply using your website for its intended purpose. Filtering user input is a good method that allows your web application to accept untrusted input while remaining safe to use and store it.
There are many ways to filter input, depending on the vulnerabilities being filtered against. The problem with not filtering user input does not end at the web application itself, since that input will be used subsequently. If malicious input is not filtered, vulnerabilities such as SQL Injection, Cross-site Request Forgery and Cross-site Scripting can be exploited.
Cross-site Scripting (XSS) works because the browser or the web application, depending on the type of XSS, will execute any code that it is fed through user input. For example, if a user enters a snippet such as `<script>alert(1)</script>` and this input is not sanitised, the snippet will be executed. To ensure that such input is not executed, the data needs to be sanitised by the server.
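As a minimal sketch of server-side sanitisation, HTML-escaping user input before embedding it in a page is enough to stop a script tag from being interpreted as markup. The `render_comment` function and its `<p>` wrapper are illustrative, not part of any particular framework:

```python
import html

def render_comment(user_input: str) -> str:
    # html.escape converts <, >, &, and quotes into HTML entities,
    # so the browser renders the input as text instead of executing it.
    return "<p>" + html.escape(user_input) + "</p>"

render_comment("<script>alert(1)</script>")
# -> '<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>'
```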
Principle of Least Privilege
This principle applies to both applications and users: the privileges granted should be no more than those required for them to fulfill their purpose. For example, you would not provide a user who uses their machine for word processing with the authority to install software on that machine.
The same goes for applications: you would not give an application that provides weather updates the authority to use your webcam. Apart from the obvious issue that the user (and application) cannot be inherently trusted, as they can have malicious intent, the user can also be fooled into performing actions using the allowed authority. For example, the best way to prevent a user from unintentionally installing malware would be to not allow the user to install anything in the first place.
If a web application will be handling SQL queries and returning the results, the database process should not be running as administrator or superuser, since that brings unnecessary risks. If user input is not being validated and an attacker is able to execute a query of their own, then with enough time and the appropriate privileges the attacker can perform any action they wish, since they would be running as admin or superuser on the machine hosting the database.
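The same idea can be applied at the connection level. As a sketch, SQLite's URI syntax can open a database read-only, so that even if an attacker manages to inject a query through the application, the connection itself cannot modify data (the file name `app.db` is illustrative):

```python
import sqlite3

def read_only_connection(path: str) -> sqlite3.Connection:
    # mode=ro makes SQLite reject any write statement on this handle,
    # so this connection holds only the privileges a read-only
    # endpoint actually needs.
    return sqlite3.connect(f"file:{path}?mode=ro", uri=True)
```

With a client-server database the equivalent is a dedicated database account granted only SELECT (and perhaps INSERT) on the specific tables the application uses, never a superuser account.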
Whitelist, not Blacklist
This choice will generally depend on what is being protected and what access is allowed. If you want the majority of users to access a resource, you will use a blacklist approach, while if you only want to allow certain users, a whitelist approach is the way to go. That said, there is the easier way and the safer way: whitelisting is considered safer due to the ambiguity of blacklists.
In a blacklist, everything is allowed except what is explicitly listed, while in a whitelist, anything that is not listed is denied by default. This makes whitelisting more robust when it comes to controlling user input, for example. It is safer to explicitly allow a set of characters that can be inputted by a user, so that any special characters that could be used for an attack are excluded automatically. A blacklist, by default, allows everything, so unless the list of exclusions covers every possible attack parameter and all of its variations, there is still a chance of malicious input being accepted and passing through the filter.
The variety of obfuscation techniques that have become widespread makes the whitelisting approach more desirable. Blocking <script> in user input will not be enough, since more advanced techniques exist specifically to bypass filters that search for <script> tags.
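To illustrate why such filters fail, here is a deliberately naive blacklist that strips the literal string `<script>`. Two trivial variations defeat it, one by changing case and one by nesting the tag so that the removal itself reassembles it (the function name is hypothetical):

```python
def naive_blacklist(value: str) -> str:
    # Strips only the exact lowercase "<script>" -- a typical
    # blacklist mistake.
    return value.replace("<script>", "")

naive_blacklist("<SCRIPT>alert(1)</SCRIPT>")
# -> '<SCRIPT>alert(1)</SCRIPT>'  (case change passes straight through)

naive_blacklist("<scr<script>ipt>alert(1)")
# -> '<script>alert(1)'  (removing the inner tag rebuilds the outer one)
```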
For example, if you have a registration form where a user is prompted to enter their designation, it is much safer to allow only the possible designations (Mr, Mrs, Ms, Dr, Prof., etc.) than to have to block all the possible attack parameters that an attacker could use instead of actually inputting their designation.
Finally, the most important principle of all: no matter how many precautions and security measures are taken, they are still not enough. This is due to two factors. The first is that thinking highly of your web application's security leaves you complacent, with a false sense that it is safe from any potential threat; this can never be the case, since every day new, more advanced threats emerge that could bypass all the security that has been implemented. This leads to the second point: successful security techniques are ever evolving, even on a daily basis. It is the developer's responsibility to remain updated on emerging security techniques and threats, since there is always room for improvement when it comes to security.
We left this principle for last: you never know enough. That's right, we never know enough. Web application security, like any other IT security subject, evolves on a daily basis. Keep yourself informed by reading and following industry-leading web application security blogs.