This question should have been answered a long time ago, but most of the high-ranking articles on the web keep reiterating the same set of flawed arguments. I've read only one article about dynamic languages that actually got it right. I no longer have the link, so here's a short summary, thoroughly mixed with bits of other sane points I've gathered from numerous other sources.
An important distinction exists between the ultimate potential of a language and its present state. In terms of ultimate potential, a dynamic language is a superset of a static language: it can do everything its static counterpart can do and more.
It's not just about expressive power, but let's address expressiveness first. All static languages incorporate dynamic features in some way, at least in the form of dynamically linked libraries that could have been generated after the application was launched. Recently, dynamic features have become more prominent and easier to use in static languages. The difference is that static languages strongly discourage the use of dynamic features by making them hard to use and by encouraging a style of coding that avoids them. So even with these features present, static languages end up being static in actual programming practice.
I've said it's not just about expressiveness, and I meant it. Dynamic languages can actually deliver higher speed, lower memory usage, superior compile-time (or edit-time) error checking, accurate code navigation, and nearly flawless refactoring. Admittedly, this is quite a distant experience from today's state-of-the-art dynamic languages, but I will try to show that it's at least theoretically possible.
A lot can be gained from clever static analysis, effectively turning large portions of dynamic code into type-inferred code. But that's a weak approach that at best gets close to the static-language experience. We want something better than what static languages can offer.
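To make the static side of this concrete, here's a toy sketch of such analysis: walking an expression's AST and propagating literal types without ever running the code. The inference rules here are deliberately minimal and invented for illustration; a real analyzer would handle variables, calls, and control flow.

```python
import ast

def infer(node):
    """Infer the type name of a simple expression AST node, or 'unknown'."""
    if isinstance(node, ast.Constant):
        return type(node.value).__name__
    if isinstance(node, ast.BinOp):
        left, right = infer(node.left), infer(node.right)
        if left == right:
            return left            # int + int -> int, str + str -> str
        if {left, right} == {"int", "float"}:
            return "float"         # numeric promotion
    return "unknown"               # give up; fall back to dynamic analysis

expr = ast.parse("1 + 2 * 3.0", mode="eval").body
print(infer(expr))  # float
```

Even this trivial walker types a surprising amount of straight-line code; the interesting cases are exactly the ones where it returns "unknown".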
The trick is to shift code analysis from coding time to runtime. Watch what happens during code execution, especially during tests. Partially execute code as it is being written. Encourage editing of the live program. Tricks like these allow the runtime to gather much more detailed data than a static compiler can ever extract from type information. That moves the information advantage to the side of dynamic languages.
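A minimal sketch of this idea in Python, assuming a hypothetical `observe` decorator: record the concrete argument and return types seen during test runs, so a tool could later use that data for completion and checking.

```python
import functools
from collections import defaultdict

# function name -> set of observed (argument types, return type) signatures
observed = defaultdict(set)

def observe(fn):
    """Record concrete types flowing through fn at runtime (hypothetical name)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        sig = (tuple(type(a).__name__ for a in args), type(result).__name__)
        observed[fn.__name__].add(sig)
        return result
    return wrapper

@observe
def area(w, h):
    return w * h

area(3, 4)       # records (int, int) -> int
area(2.5, 4.0)   # records (float, float) -> float
print(observed["area"])
```

After one test run the tool knows, from observation rather than annotation, exactly which types `area` has handled.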
Static languages could be augmented with similar dynamic analysis, but the point is that it would make the static type system rather useless. Why bother with all that extra syntax when there are better sources of information? Not all static type information can be replaced with dynamic analysis, but what really matters is the overall influence of the various sources of information on the programming experience. Once the influence of the type system becomes negligible, it is bound to be removed from the mix.
The runtime analysis process introduces overhead of its own, but there is a whole new world of optimizations specifically targeted at this problem. Just caching everything is likely to make the overhead hard to notice. The efficiency of the cache itself can be improved by sharing analysis results via a cloud service, especially in mass deployment scenarios. Static analysis (inference) can complement dynamic analysis wherever it is more efficient. Caches don't need to be 100% accurate, for example in the case of auto-complete. Lowering the accuracy criteria increases the amount of data that can be cached and reduces the cost of the algorithms gathering the data.
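The caching idea can be sketched in a few lines, assuming a hypothetical `analyze` function: key the cache on a hash of the function's compiled bytecode, so unchanged code never pays the analysis cost twice, and the same keys could index a shared cloud cache.

```python
import hashlib
import inspect

_cache = {}  # bytecode hash -> analysis result

def analyze(fn):
    """Return (possibly cached) analysis results for fn (hypothetical name)."""
    key = hashlib.sha256(fn.__code__.co_code).hexdigest()
    if key in _cache:
        return _cache[key]  # cache hit: the expensive analysis is skipped
    # stand-in for real (expensive) analysis work
    result = {"name": fn.__name__,
              "args": list(inspect.signature(fn).parameters)}
    _cache[key] = result
    return result

def add(a, b):
    return a + b

first = analyze(add)
second = analyze(add)
assert first is second  # second call served entirely from the cache
```

Content-addressed keys like this are also what makes sharing safe: two machines analyzing identical code compute identical keys.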
I am kidding you a little, of course, for the sake of brevity. I propose complex solutions without any proof that they will work. Nevertheless, for the purposes of this argument, it is sufficient to convince you that such runtime analysis has a fair chance of being viable. Many things can go wrong while trying to implement it, oftentimes having nothing to do with the underlying theory. I just want to make you think in terms of what's possible, so that you don't get blinded by the current state of the art of today's so-called dynamic languages.
As you were reading my description of runtime analysis above, you might have realized where the problem with dynamic languages lies. It might be possible to develop high-quality tools for dynamic languages, but such tools are going to be way more expensive to build than static-language IDEs.
That leads me to the practical difference between static and dynamic languages at this point in time. Dynamic languages have the best language design and neat, numerous libraries, but awfully crappy tools. Static languages have awesome tools at the cost of making the language, and anything written in it, quite complicated, really verbose, and often seriously ugly.
Tools are essential for the long-term maintenance of large projects. No wonder dynamic languages shine mostly in one-time prototyping and in small scripts. Big projects require an IDE for efficient development, and that presently means static languages. Diligent developers might be able to maintain a large project in a dynamic language, but the productivity gains of dynamic languages are completely lost in such scenarios.
Maybe that's why Microsoft loves static languages. Static languages allow MS to shift their platform and IDE development costs onto application developers who have to live with the limitations of the static language. That's probably where the push for TypeScript comes from.
I've heard innumerable arguments dismissing dynamic languages. There's not enough space in this blog post to address all of them. Most of them boil down to the assumption that there's some simple interpreter loop sitting somewhere, executing everything via a giant switch statement. That's how simple in-house dynamic languages work, but it would be a shame for a popular dynamic language to work like that. V8 has proven that dynamic languages can be fast. In this post, I have tried to suggest that they can be competitive with static languages in all the areas previously thought to be exclusive to static languages.
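For reference, the "giant switch statement" caricature looks like this: a minimal stack-machine interpreter with an instruction set invented purely for illustration. Production VMs like V8 look nothing like it; they use inline caches and just-in-time compilation instead.

```python
def run(program):
    """Execute a list of (opcode, argument) pairs on a tiny stack machine."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack.pop()

# computes (2 + 3) * 4
print(run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None)]))  # 20
```

Every instruction here pays the full cost of the dispatch chain; eliminating exactly that overhead is what modern dynamic-language VMs are built around.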