Robert's blog
Robert Važan

My experience with ZeroC Ice

Don't use ZeroC Ice. Use HTTP 2.0 and REST. It's easier, it works everywhere, it will perform better, and API consumers will like it more.

At WBP online, we use ZeroC Ice middleware for some of our communication needs. This post is a summary of my experience with Ice. My overall opinion of the software is rather negative, but the reasons why are the more interesting part.

Ice is essentially an evolved version of CORBA from the 90s. It's a strongly typed binary protocol. Interfaces and on-the-wire data structures are defined in Slice, an Ice-specific interface definition language. Slice is then compiled into proxies and stubs in multiple target languages.
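To give a feel for the consuming side, here is a rough sketch of calling a service through a generated C# proxy, written from memory of the Ice manual. The Quotes interface, its latest() method, and the endpoint string are made up; QuotesPrx and QuotesPrxHelper stand in for whatever slice2cs would generate from the corresponding Slice definition.

```csharp
public static class IceClientSketch
{
    public static void Main(string[] args)
    {
        // One communicator per process; it owns connections and thread pools.
        Ice.Communicator communicator = Ice.Util.initialize(ref args);
        try
        {
            // Untyped proxy constructed from a stringified endpoint.
            Ice.ObjectPrx proxy = communicator.stringToProxy(
                "quotes:default -h localhost -p 10000");

            // Downcast to the strongly typed proxy generated from Slice.
            QuotesPrx quotes = QuotesPrxHelper.checkedCast(proxy);

            // Remote call; parameter and return types are fixed by the Slice file.
            string latest = quotes.latest("EURUSD");
        }
        finally
        {
            communicator.destroy();
        }
    }
}
```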

Here we get to my first issue. C# and most other modern languages are equipped with powerful reflection capabilities that allow application developers to define interfaces in their native programming language as POCOs (or their equivalents in other languages). Json.NET, as well as many (most?) modern serialization frameworks, works that way.
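For contrast, here's a minimal sketch of the reflection-based approach with Json.NET; the Quote class below is a made-up example, not anything from our codebase.

```csharp
using Newtonsoft.Json;

// Plain POCO defined directly in C#; no IDL and no code generation involved.
public class Quote
{
    public string Symbol { get; set; }
    public decimal Bid { get; set; }
    public decimal Ask { get; set; }
}

public static class QuoteSerialization
{
    // Json.NET discovers the properties via reflection at runtime.
    public static string ToJson(Quote quote)
    {
        return JsonConvert.SerializeObject(quote);
    }

    public static Quote FromJson(string json)
    {
        return JsonConvert.DeserializeObject<Quote>(json);
    }
}
```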

Yes, Slice allows you to define the interface once and compile it into multiple languages, but this comes at a cost. Code navigation and refactoring no longer work, since IDEs don't understand Slice. An IDE plugin is needed to integrate Slice files into the build system, which brings its own issues with IDE compatibility and build customization. Slice files aren't accessible at runtime, preventing applications from reflecting on them.

Another problem that arises straight from Ice's architecture is static typing. The Ice protocol is extremely brittle. Just adding or removing a field in some data structure will result in cryptic error messages after deployment. Ice won't attempt to match similar data structures. Renaming whole interfaces is out of the question, so no refactoring, sorry.
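Compare that with what a loosely typed serializer tolerates. The sketch below reuses the made-up Quote example from above with one field added; Json.NET simply ignores unknown fields and leaves missing ones at their defaults instead of failing the whole call.

```csharp
using Newtonsoft.Json;

// A newer version of the made-up Quote structure with one extra field.
public class QuoteV2
{
    public string Symbol { get; set; }
    public decimal Bid { get; set; }
    public decimal Ask { get; set; }
    public long Volume { get; set; } // added in a later release
}

public static class VersioningDemo
{
    public static void Main()
    {
        // JSON produced by an older client that knows nothing about Volume.
        string oldJson = "{\"Symbol\":\"EURUSD\",\"Bid\":1.36,\"Ask\":1.37}";

        // Deserialization succeeds; Volume just keeps its default value of 0.
        QuoteV2 quote = JsonConvert.DeserializeObject<QuoteV2>(oldJson);

        // Extra fields coming from a newer client would be silently ignored too,
        // so old and new versions can coexist during a rolling upgrade.
    }
}
```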

ZeroC recommends implementing the interface twice to support two versions of it, but that's a productivity disaster nobody can afford. So shut down the whole data center, upgrade everything, then start the whole data center again. That's how Ice servers get upgraded at WBP. Everything must run the same version of the protocol, no exceptions.

The success of HTTP can be largely attributed to its openness, which allowed people to implement all sorts of proxies, routers, caches, and interfaces to non-HTTP systems. These systems depend on metadata embedded in the protocol. Ice suffers from the WCF disease: plugging custom logic into the stack is hard and often impossible. Proxies don't understand anything about the data that flows through them. Message interception is difficult on the server side and impossible on the client.
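To illustrate what that embedded metadata looks like on the HTTP side, here's a hedged HttpClient sketch with a placeholder URL. Standard headers like Accept and Cache-Control are exactly the kind of information that any intermediary can act on without knowing anything about our application.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

public static class HttpMetadataSketch
{
    public static string FetchQuote(string symbol)
    {
        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Get,
                "https://api.example.com/quotes/" + symbol);

            // Self-describing metadata: proxies, caches, and debugging tools
            // can route, cache, or log this request without custom plugins.
            request.Headers.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));
            request.Headers.CacheControl =
                new CacheControlHeaderValue { MaxAge = TimeSpan.FromSeconds(5) };

            HttpResponseMessage response = client.SendAsync(request).Result;
            return response.Content.ReadAsStringAsync().Result;
        }
    }
}
```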

Apparently someone at ZeroC thought that a static type system has the advantage that metadata can be stripped from the wire protocol, making it faster and more compact. This is, however, only partly true. As HTTP 2.0 shows, it is possible to create a highly compact and performant encoding for metadata-rich protocols. Not to mention that Ice's encoding isn't that efficient after all: it has full-length integers, method names embedded in the protocol as strings, and a heavy binary header attached to every message.

Fast forward from the 90s to 2014. Nobody is using RPC and middleware stacks anymore. Why? Everyone has already switched to HTTP, and all innovation goes into HTTP and HTML5. ZeroC cannot keep pace. As an example, at WBP we are developing a reactive application that needs to keep data fresh on all the clients. How does Ice compare to HTTP in this respect?

HTML5 developers can use WebSocket and an emerging array of tools built on it that make implementing reactive applications child's play. ZeroC's recommendation is to make server-to-client calls via an interface that the client exposes over a connection previously established by a client-to-server call that transmits the ID of the client's interface. Then manage connection issues and congestion at the application layer. Easy, eh? I somehow doubt anyone has implemented such logic.
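For comparison, this is roughly what the client side of a push channel looks like with the standard .NET ClientWebSocket. The endpoint URL and message format are placeholders, and a real implementation would add reconnection and multi-fragment handling.

```csharp
using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public static class PushClientSketch
{
    public static async Task ListenAsync()
    {
        using (var socket = new ClientWebSocket())
        {
            // Placeholder endpoint; the server pushes updates whenever data changes.
            await socket.ConnectAsync(new Uri("wss://example.com/updates"),
                CancellationToken.None);

            var buffer = new byte[4096];
            while (socket.State == WebSocketState.Open)
            {
                WebSocketReceiveResult result = await socket.ReceiveAsync(
                    new ArraySegment<byte>(buffer), CancellationToken.None);
                if (result.MessageType == WebSocketMessageType.Close)
                    break;

                // No polling and no hand-registered callback interfaces;
                // the message arrives on the connection the client already holds.
                string message = Encoding.UTF8.GetString(buffer, 0, result.Count);
                Console.WriteLine("Update received: " + message);
            }
        }
    }
}
```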

Ice is more than a protocol. It has its own cloud computing framework called IceGrid that includes server lookup, deployment, and configuration. All-in-one systems look attractive at first glance, but in reality this is what is killing ZeroC. They have created their own closed ecosystem that they have to support alone against giants like HTTP. Extensibility by application developers is limited. Third-party components do not exist. The open-source edition feels like limited shareware, which discourages open-source contributors. Support is provided exclusively by ZeroC.

No wonder that ZeroC is struggling to keep the whole thing maintained. A bug in Ice is causing our servers to leak hundreds of megabytes of RAM every day. This bug was reported two months ago and it still isn't fixed. Sure, paid support would speed things up a lot. How much would such support cost?

So, one day, I reported a bug on the ZeroC forums. After a lot of help from my side to track down the bug in their code, I received an email from ZeroC stating that this was the last time they helped me (huh? who helped whom?) and that further support would cost 11,500€ per year for a 5-member team. Uff. And that didn't even cover our full team size, nor did it include a commercial redistribution license that would release us from GPL restrictions.

At such a price point, management is going to ask me to reimplement the subset of Ice functionality we actually need, perhaps utilizing existing free libraries in the process. I am a rather smart developer, and reimplementing Ice would cost me a couple of weeks at most. That estimate already includes multiple iterations of prototypes that eventually resolve all the issues I have described in this post. Since my salary isn't stellar yet, no ROI calculation could ever justify the purchase of a commercial ZeroC license.

While discussing licensing with ZeroC, it turned out that, in ZeroC's interpretation at least, the GPL doesn't even allow you to hire contractors to work on your internal projects, since handing them the source code counts as redistribution under GPL terms. ZeroC's business model apparently consists of first trapping prospective clients in its closed ecosystem, then demanding a high ransom under time pressure to release them from licensing trouble. That doesn't sound like fair business to me. But then I have already written about why freemium is expensive and why software libraries cannot be cheap.

These are the main issues. I have encountered tons of other problems: no way to reliably send a continuous stream of ordered messages, no easy mapping for built-in .NET types (Nullable, DateTime), useful connection semantics hidden from the application (is it still the same server on the other side, or did it restart between calls?), extensive scripting required to manage the cloud, painful encryption and authentication setup. The list goes on.

So what's the alternative? I have already mentioned it: HTTP 2.0. It's extremely efficient, yet highly flexible, rich in metadata, and fairly easy to work with. Json.NET can serialize at 60MB/s per core. Highly efficient binary serializers can be transparently substituted where necessary. WebSocket is another alternative if access from HTML is desired. It's a raw protocol though, begging for some high-level wrapper. So unless AJAX APIs get extended to allow chunked processing of incoming data, I expect HTTP 2.0 to be tunneled through WebSocket pretty soon.
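As a concrete sketch of what replaces an Ice proxy, here's a plain REST call with HttpClient and Json.NET, reusing the made-up Quote POCO from earlier; the endpoint is a placeholder.

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class QuoteClient
{
    private readonly HttpClient http = new HttpClient();

    public async Task<Quote> GetQuoteAsync(string symbol)
    {
        // Plain REST endpoint (placeholder); works through any proxy or cache.
        string json = await http.GetStringAsync(
            "https://api.example.com/quotes/" + symbol);

        // Json.NET maps the response onto the same Quote POCO sketched earlier.
        return JsonConvert.DeserializeObject<Quote>(json);
    }
}
```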

Comments

Thanks for the long post. I am working in a team under a senior architect who wants all our services to expose websocket endpoints using ICE. I am really against this option, but I am still very junior compared to other team members. In terms of API publication, an IDL is nice, and for interfacing multi-language services I can understand the benefit of ICE. But I see this design as a CORBA revival nightmare. I am about to propose creating services written in node.js wrapping our old C++ code, or using Thrift instead of ICE; it's free after all. The node option is more hardcore, but I think it will be lightning fast. I am curious about your opinion on this approach. I really do not want to end up doomed to paying for ICE forever.
Anonymous
I haven't switched our Ice interfaces to alternatives yet. I am using protobuf, JSON, and HTTP/REST here and there. I am considering Thrift as well as custom in-house libraries that handle some corner cases that are important to us.
Robert