So much productivity in software development comes from tools that pinpoint issues in the code. There's no need to hold the whole codebase to high standards when tools can quickly locate the few places that need a developer's attention. Why can't it be that simple with security?
Just think how accurately we can target specific problem areas in code these days:
- Reliability issues can be pinpointed with exceptions, stack traces, and debuggers.
- Performance issues can be pinpointed with CPU and memory profilers, and more recently with HTTP profilers.
- UX issues can be pinpointed with heatmaps and conversion rate analysis, perhaps even simple usability test sessions.
- Null pointer and out-of-bounds errors can be pinpointed thanks to object-level memory isolation in memory-safe languages like C# or Java.
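To make the first bullet concrete, here is a minimal sketch (the `parse_port`/`load_config` names are hypothetical) of how an exception's stack trace pinpoints the exact line and call chain that failed, with no auditing of the rest of the code:

```python
import traceback

def parse_port(value):
    # Bug: int() raises ValueError on non-numeric input such as "eight080".
    return int(value)

def load_config(raw):
    return {"port": parse_port(raw["port"])}

try:
    load_config({"port": "eight080"})
except ValueError:
    tb = traceback.format_exc()
    # The trace names the file, the line, and the chain
    # load_config -> parse_port, so the fix is obvious.
    print(tb)
```

This is the specificity the rest of this post wishes security tooling had: one failing input, one trace, one place to look.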
It is highly desirable to have the same level of specificity when addressing security issues. It's not feasible to apply heavy security audits to the oceans of application code produced every day. Not to mention that tons of open-source and commercial components get integrated into applications with little concern for their security impact.
For security holes that merely cause downtime and temporary screwups, the security problem can be reduced to a reliability problem, perhaps with slightly different tooling.
Unfortunately, learning over time and solving security attacks one at a time is not a viable strategy against fatal security breaches that kill the product or the whole company, leaving no opportunity for post-attack recovery.
I will name two broad examples of fatal breaches. First, leaks of confidential information cannot be fixed after the attack, because no additional protection can revert the leak. Second, damaging changes to third-party systems cannot be easily reverted (think credit card abuse or email spam).
Sure, one could use tools and libraries that maintain high security standards the hard way: making sure that every single line of code is secure, running lots of tests, and undergoing lots of audits. This is, however, a rare luxury limited to the most basic and most widely used tools and libraries, and even then only when they are used properly.
Sandboxing helps a lot. Local apps and libraries shouldn't be used where a cloud service suffices. Applications should be stateless except where state reuse is needed for performance. Separate servers, user accounts, and OS processes should be used instead of in-process libraries whenever the available performance envelope allows it. Treating alien code as alien code is always a sensible thing to do.
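The cheapest form of this isolation is a separate OS process. A minimal sketch, assuming the "alien" code is standing in for some third-party tool (the inline script and the `run_sandboxed` helper are illustrative, not a real library API):

```python
import subprocess
import sys

# Hypothetical "alien" code: an inline script standing in for a third-party
# tool. Run in a child process, a crash or hang in it cannot corrupt the
# parent's memory or state.
ALIEN_CODE = "import sys; print(sys.stdin.read().upper())"

def run_sandboxed(payload: str, timeout: float = 5.0) -> str:
    # A separate OS process with a hard timeout; the only shared surface
    # between us and the alien code is a stdin/stdout byte stream.
    result = subprocess.run(
        [sys.executable, "-c", ALIEN_CODE],
        input=payload,
        capture_output=True,
        text=True,
        timeout=timeout,
        check=True,
    )
    return result.stdout

print(run_sandboxed("hello"))  # the child sees only what we explicitly pass in
```

Real sandboxes add OS-level restrictions on top (separate user accounts, containers, seccomp-style filters), but even this bare process boundary turns many would-be breaches into mere reliability failures.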
In the end, the only definitive antidote to the hacker is another hacker. If the application survives regular automated and manual hacking attempts, it is reasonable to believe it is secure.