8 more Scala security issues you should know about
We recently published a list of 9 Scala security issues every Scala developer should be aware of. Here’s a list of 8 more:
1. Avoid implementing dangerous regexes
Regular expressions (regexes) are frequently subject to Denial of Service (DoS) attacks, known as ReDoS. Depending on how a regex is written, the engine may take an enormous amount of time to analyze certain strings because of catastrophic backtracking.
For example, for the regex ^(a+)+$, the input “aaaaaaaaaaaaaaaaX” will cause the regex engine to analyze 65,536 different paths (example taken from OWASP references).
Therefore, a single request can cause a large amount of computation on the server side. The problem with this regex, and others like it, is that the same input character can be accepted in two different ways, due to the + (or a *) inside the parentheses and the + (or a *) outside them. As written, either + could consume the character ‘a’. To fix this, rewrite the regex to eliminate the ambiguity. Here it can simply be rewritten as ^a+$, which is presumably what the author meant anyway (one or more a’s). Assuming that’s the intent, the new regex can be evaluated quickly and is not subject to ReDoS.
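As a quick sketch, the unambiguous rewrite accepts the same strings as the original intent (one or more a’s) while rejecting bad input without backtracking:

```java
import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        // Unambiguous rewrite of ^(a+)+$ : same language (one or more 'a'
        // characters), but no nested quantifiers, so no catastrophic backtracking.
        Pattern safe = Pattern.compile("^a+$");

        System.out.println(safe.matcher("aaaa").matches());              // true
        System.out.println(safe.matcher("aaaaaaaaaaaaaaaaX").matches()); // false, fails fast
    }
}
```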
2. Prevent XML parsing attacks
XML External Entity (XXE) attacks can occur when an XML parser supports XML entities while processing XML received from an untrusted source.
Vulnerable code:

```java
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

// The default parser configuration may resolve external entities.
SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
parser.parse(inputStream, customHandler);
```

Solution using “secure processing” mode:

```java
import javax.xml.XMLConstants;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

SAXParserFactory spf = SAXParserFactory.newInstance();
spf.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
SAXParser parser = spf.newSAXParser();
parser.parse(inputStream, customHandler);
```

Solution disabling DTDs entirely:

```java
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

SAXParserFactory spf = SAXParserFactory.newInstance();
spf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
SAXParser parser = spf.newSAXParser();
parser.parse(inputStream, customHandler);
```
3. Prohibit Blowfish usage with short key
The Blowfish cipher supports key sizes from 32 bits to 448 bits. A small key size makes the ciphertext vulnerable to brute-force attacks. If Blowfish must be retained, use at least 128 bits of entropy when generating the key.
If the algorithm can be changed, the AES block cipher should be used instead.
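A minimal sketch of generating an adequately sized key for AES via the standard JCA API:

```java
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyGenDemo {
    public static void main(String[] args) throws Exception {
        // Prefer AES over Blowfish; 128 bits is the minimum recommended key size.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128); // 128, 192, or 256 bits
        SecretKey key = keyGen.generateKey();
        System.out.println(key.getEncoded().length * 8); // prints 128
    }
}
```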
4. Prohibit custom message Digest
There are already well tested and safe algorithms to choose from. No need to reinvent the wheel.
NIST approves SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256 as algorithms to generate a MessageDigest.
However, the first signs of weakness in SHA-1 appeared almost ten years ago, and in 2012 calculations showed that breaking SHA-1 is becoming feasible for those who can afford it. SHA-1 is being deprecated and should be avoided in new designs.
Upgrade your implementation to use one of the approved algorithms. Use an algorithm that is sufficiently strong for your specific security needs.
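Instead of rolling your own digest, a one-liner with the built-in MessageDigest API does the job; SHA-256 is shown here as a reasonable default:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestDemo {
    public static void main(String[] args) throws Exception {
        // Use an approved, well-tested algorithm rather than a custom digest.
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] hash = md.digest("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(hash.length); // 32 bytes = 256 bits
    }
}
```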
5. Prohibit RSA usage with short key
RSA Laboratories’ older guidance suggested 1024-bit keys for corporate use, but 1024-bit RSA is no longer considered secure. Use at least 2048 bits for general use, and larger keys (3072 or 4096 bits) for extremely valuable keys like the root key pair used by a certificate authority.
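A minimal sketch of generating an RSA key pair of adequate size with the standard JCA API:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPublicKey;

public class RsaKeyDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048); // at least 2048 bits
        KeyPair pair = kpg.generateKeyPair();
        RSAPublicKey pub = (RSAPublicKey) pair.getPublic();
        System.out.println(pub.getModulus().bitLength()); // 2048
    }
}
```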
6. Prohibit SELECT *
There are really three major reasons to avoid SELECT *:
- Inefficiency in moving data to the consumer. When you SELECT *, you’re often retrieving more columns from the database than your application really needs to function. This causes more data to move from the database server to the client, slowing access and increasing load on your machines, as well as taking more time to travel across the network. This is especially true when someone adds new columns to underlying tables that didn’t exist and weren’t needed when the original consumers coded their data access.
- Indexing issues. Consider a scenario where you want to tune a query to a high level of performance. If you were to use *, and it returned more columns than you actually needed, the server would often have to perform more expensive methods to retrieve your data than it otherwise might. For example, you wouldn’t be able to create an index which simply covered the columns in your SELECT list, and even if you did (including all columns [shudder]), the next guy who came around and added a column to the underlying table would cause the optimizer to ignore your optimized covering index, and you’d likely find that the performance of your query would drop substantially for no readily apparent reason.
- Binding Problems. When you SELECT *, it’s possible to retrieve two columns of the same name from two different tables. This can often crash your data consumer. Imagine a query that joins two tables, both of which contain a column called “ID”. How would a consumer know which was which? SELECT * can also confuse views (at least in some versions of SQL Server) when underlying table structures change — the view is not rebuilt, and the data which comes back can be nonsense. And the worst part of it is that you can take care to name your columns whatever you want, but the next guy who comes along might have no way of knowing that he has to worry about adding a column which will collide with your already-developed names.
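One simple way to stay honest about the column list is to build queries from an explicit list of names. The helper and the `users` table below are hypothetical, just to illustrate the pattern:

```java
import java.util.List;

public class QueryBuilderDemo {
    // Hypothetical helper: always spell out the columns instead of SELECT *.
    static String select(List<String> columns, String table) {
        return "SELECT " + String.join(", ", columns) + " FROM " + table;
    }

    public static void main(String[] args) {
        String sql = select(List.of("id", "name", "email"), "users");
        System.out.println(sql); // SELECT id, name, email FROM users
    }
}
```

With this approach, adding a column to the underlying table never silently changes what your query returns.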
7. Prohibit weak message Digest
As noted in item 4, NIST-approved algorithms such as SHA-256, SHA-384, and SHA-512 are the safe choices for generating a MessageDigest. Weak digests like MD5 are subject to practical collision attacks, and SHA-1 is being deprecated as breaking it becomes feasible for well-funded attackers.
Upgrade your implementation to use one of the approved algorithms, and pick one that is sufficiently strong for your specific security needs.
8. Prohibit weak Random
The use of a predictable random value can lead to vulnerabilities when used in certain security-critical contexts. For example, when the value is used as:
- a CSRF token
- a password reset token (sent by email)
- any other secret value
A quick fix could be to replace the use of java.util.Random with something stronger, such as java.security.SecureRandom.
Vulnerable code:

```java
import java.util.Random;

String generateSecretToken() {
    Random r = new Random(); // predictable: seeded from the clock
    return Long.toHexString(r.nextLong());
}
```
Solution:

```java
import java.security.SecureRandom;
import org.apache.commons.codec.binary.Hex;

String generateSecretToken() {
    SecureRandom secRandom = new SecureRandom(); // cryptographically strong
    byte[] result = new byte[32];
    secRandom.nextBytes(result);
    return Hex.encodeHexString(result);
}
```
Edit: We just published an ebook: “The Ultimate Guide to Code Review” based on a survey of 680+ developers. Enjoy!
Enforcing them with Codacy
Codacy is committed to helping you save time in code reviews, and we believe these security checks should be enforced automatically.
That’s why you can now enforce these and other patterns automatically with Codacy. Just go to your project and enable them by selecting them in the code patterns section.
About Codacy
Codacy is used by thousands of developers to analyze billions of lines of code every day!
Getting started is easy – and free! Just use your GitHub, Bitbucket or Google account to sign up.