Pinpointing application security issues
So much productivity in software development comes from tools that allow pinpointing issues in the code. There's no need to hold the whole codebase to high standards when tools can quickly locate the few places in need of a developer's attention. Why couldn't it be that simple with security?
Just think how accurately we can target specific problem areas in the code these days:
- Reliability issues can be pinpointed with exceptions, stack traces, and debuggers.
- Performance issues can be pinpointed with CPU and memory profilers, and more recently also HTTP profilers.
- UX issues can be pinpointed with heatmaps and conversion rate analysis, perhaps even simple usability test sessions.
- Attempted heap corruption can be pinpointed with object-level memory isolation in memory-safe languages like C# or Java (see the sketch after this list).
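To illustrate the last point, a memory-safe runtime turns an attempted out-of-bounds write into an exception whose stack trace names the exact file, method, and line. A minimal Java sketch (the class and array names are made up for illustration):

```java
// Illustration only: in a memory-safe runtime an out-of-bounds write cannot
// corrupt the heap; it becomes an exception whose stack trace pinpoints the
// exact offending line.
public class BoundsDemo {
    public static void main(String[] args) {
        int[] buffer = new int[8];
        try {
            buffer[16] = 42;  // attempted "heap corruption"
        } catch (ArrayIndexOutOfBoundsException e) {
            // The stack trace names the file, method, and line number.
            e.printStackTrace();
        }
    }
}
```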
It is highly desirable to have the same level of specificity when addressing security issues. It's not feasible to apply heavy security audits to the oceans of application code produced every day. Not to mention that tons of open-source and commercial components get integrated into applications with little concern for their security impact.
Transparency
It goes without saying that open-source tools and libraries are less of a risk, because they are easier to scan and any discovered security issues are easier to patch. Open-source code is best paired with reproducible builds and with cryptographic hashes and signatures for both sources and binaries.
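Checking a published hash before a component is used takes only a few lines. A minimal sketch assuming Java 17+, with a placeholder artifact path and expected digest:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

// Sketch: compare a downloaded artifact against its published SHA-256 checksum.
public class ChecksumCheck {
    public static void main(String[] args) throws Exception {
        Path artifact = Path.of("libs/some-library-1.2.3.jar");         // placeholder path
        String published = "expected-sha256-hex-from-the-project-site"; // placeholder digest

        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(artifact));
        String actual = HexFormat.of().formatHex(digest);

        if (!actual.equalsIgnoreCase(published)) {
            throw new IllegalStateException("Checksum mismatch for " + artifact);
        }
        System.out.println("Checksum OK: " + artifact);
    }
}
```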
Transparency would actually be nice to have across the whole economy, to prevent hardware-level attacks in production, distribution, and data centers, but that's sci-fi at the moment.
Sandboxing
Sandboxing helps a lot. Servers are naturally sandboxed, and local apps have recently been getting better sandboxing too. There are "safe" languages that can sandbox libraries, provided some additional scanning is performed. Sandboxing essentially concentrates security measures on the relatively small sandbox definition.
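One rough way to get such a sandbox today, without waiting for language-level support, is to push an untrusted tool into a locked-down container. The sketch below assumes a hypothetical converter image and uses Java's ProcessBuilder to invoke Docker; the point is that the security review now concentrates on this small launch definition rather than on the tool's code.

```java
// Sketch: run an untrusted tool inside a locked-down container, so the
// sandbox definition (this launch command) is the main thing to review.
// The image name, volume path, and arguments are hypothetical.
public class SandboxedTool {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(
                "docker", "run", "--rm",
                "--network=none",            // no network access
                "--read-only",               // read-only root filesystem
                "--cap-drop=ALL",            // drop Linux capabilities
                "-v", "/tmp/sandbox:/work",  // only this directory is writable
                "untrusted/converter:1.0",   // hypothetical image
                "/work/input.dat", "/work/output.dat");
        pb.inheritIO();

        int exitCode = pb.start().waitFor();
        System.out.println("Converter exited with code " + exitCode);
    }
}
```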
Scanning and logging
Automated code scans, especially in combination with programming languages amenable to static analysis, can put an upper bound on what the code can do. Scanning is therefore a form of sandboxing. It can be either hard, breaking the build when the scan finds something unexpected, or soft, merely reporting potential security issues. Scanning attracts attention to code that is likely to have high security impact, much as profiling attracts attention to inefficient code.
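To make the hard/soft distinction concrete, here is a toy scanner sketch; the banned API list and the --hard flag are invented for illustration, and a real project would use an established static analysis tool instead.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

// Toy scanner: flags source lines that use APIs the project has banned.
// Hard mode breaks the build (non-zero exit); soft mode merely reports.
public class ToyScanner {
    private static final List<String> BANNED = List.of(
            "Runtime.getRuntime().exec(",  // ad-hoc process execution
            "Class.forName(",              // reflective access to types
            ".setAccessible(true)");       // bypassing visibility checks

    public static void main(String[] args) throws IOException {
        Path sourceRoot = Path.of(args[0]);
        boolean hard = args.length > 1 && "--hard".equals(args[1]); // made-up flag

        List<Path> javaFiles;
        try (Stream<Path> walk = Files.walk(sourceRoot)) {
            javaFiles = walk.filter(p -> p.toString().endsWith(".java")).toList();
        }

        int findings = 0;
        for (Path file : javaFiles) {
            List<String> lines = Files.readAllLines(file);
            for (int i = 0; i < lines.size(); i++) {
                for (String banned : BANNED) {
                    if (lines.get(i).contains(banned)) {
                        System.out.printf("%s:%d uses banned API %s%n", file, i + 1, banned);
                        findings++;
                    }
                }
            }
        }

        if (hard && findings > 0) {
            System.exit(1); // hard scan: break the build
        }
    }
}
```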
Logging is a runtime equivalent of scanning. Logging can be either selective, drawing attention to suspected security incidents, or bulk, aiding future incident analysis.
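A minimal sketch of the two modes, using the JDK's built-in logging (the loggers, event fields, and addresses are invented for illustration):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: selective vs. bulk security logging with java.util.logging.
public class SecurityLogging {
    private static final Logger AUDIT = Logger.getLogger("audit");      // bulk trail
    private static final Logger ALERTS = Logger.getLogger("security");  // selective

    static void onLogin(String user, boolean success, String sourceIp) {
        // Bulk: every authentication attempt goes into the audit trail,
        // so future incident analysis has complete data to work with.
        AUDIT.log(Level.INFO, "login user={0} success={1} ip={2}",
                new Object[] {user, success, sourceIp});

        // Selective: only suspected incidents are surfaced for attention.
        if (!success) {
            ALERTS.log(Level.WARNING, "failed login for user={0} from ip={1}",
                    new Object[] {user, sourceIp});
        }
    }

    public static void main(String[] args) {
        onLogin("alice", true, "203.0.113.7");
        onLogin("alice", false, "198.51.100.23");
    }
}
```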
Banning insecure practices
Another high-impact and low-effort approach is to avoid insecure practices and insecure tools. Insecure code-level practices include dynamic typing (or reflective access to static types), stateful code, and home-grown cryptography. Insecure tool-level practices include use of vulnerable distribution channels, questionable suppliers, and tools with known vulnerabilities.
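For instance, password hashing is a classic place where home-grown cryptography sneaks in, even though the JDK already ships a vetted PBKDF2 implementation. A minimal sketch (the iteration count and key length are illustrative, not a recommendation):

```java
import java.security.SecureRandom;
import java.util.HexFormat;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Sketch: use the platform's vetted key-derivation function instead of
// rolling your own password hashing scheme.
public class PasswordHashing {
    public static void main(String[] args) throws Exception {
        char[] password = "correct horse battery staple".toCharArray();

        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        // Illustrative parameters; follow current guidance when choosing them.
        PBEKeySpec spec = new PBEKeySpec(password, salt, 310_000, 256);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec)
                .getEncoded();

        System.out.println("salt=" + HexFormat.of().formatHex(salt));
        System.out.println("hash=" + HexFormat.of().formatHex(hash));
    }
}
```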
Avoiding insecure practices involves a change of habits and a bit of market research to find suitable libraries and tools. That's a small investment compared to auditing all application code line by line. Labeling and shaming insecure libraries and tools is a big part of this strategy.
Penetration testing
In the end, the only definitive antidote to a hacker is another hacker. A pen test can be as big as a simulated attack lasting several days or as small as a thought experiment. If the application survives automated and manual hacking attempts, it is reasonable to believe it is secure.