Continuing the series on life in a managed world, I promised you Developer Deathmatch. It was honestly more of a friendly challenge between Rico Mariani and Raymond Chen. This is only one application, and your mileage will vary, but the main point here is to show that managed doesn’t necessarily mean slow. Now onto the main event…


Raymond wrote a series of articles about perf tuning an unmanaged application, a Chinese/English dictionary reader. Rico ported the same application to managed code and also perf tuned it. (Links to all of Rico’s and Raymond’s blog posts on the topic can be found here on Rico’s blog.) The results are staggering. Remember, both developers are at the top of their game. So this isn’t a naive unmanaged application versus a highly optimized managed application. Let’s look at the execution times:


Version                            Execution Time (seconds)
Unmanaged v1                       1.328
Unmanaged v2                       0.828
Unmanaged v3                       0.343
Unmanaged v4                       0.187
Unmanaged v5 (with bug)            0.296
Unmanaged v5 (corrected)           0.124
Managed port of v1 (unoptimized)   0.124
Managed port of v1 (optimized)     0.093
Unmanaged v6                       0.062


It was only after Raymond pulled out all the stops with unmanaged v6 and replaced the new operator with a custom allocator that the unmanaged version surpassed Rico’s first try. This is not to suggest that Raymond is a poor coder; far from it, he’s probably one of the best.


In the end, Raymond won, but at what cost? It took six versions of the unmanaged code, and who knows how much effort analyzing and tuning, to beat the managed version. As for the managed version, at 0.093 seconds a large portion of the time is the startup overhead of the CLR. That’s a performance floor imposed by managed code, but for most reasonable, long-running applications, ~50 milliseconds spent firing up the runtime at the start of execution isn’t going to make a dent in your overall perf.


Why do people think that managed code is slow? One reason is that .NET makes it really easy to do colossally stupid things, like sucking a 1 GB XML file off your hard disk and parsing it into an in-memory tree:


XmlDocument dom = new XmlDocument();
dom.Load("someHonkingHugeFile.xml");


Just because you can do this in two lines of code doesn’t absolve you of the need to understand what you’re doing. You need to understand your toolset. It is not the toolset designer’s fault if you use it improperly. Let me tell you, XmlDocument.Load isn’t going to return control to you anytime soon if someHonkingHugeFile.xml is 1 GB. I haven’t tried it, but on 32-bit Windows you’ll probably get an OutOfMemoryException for your trouble.
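

If you genuinely need to chew through a file that big, a pull-model parser keeps your memory usage flat no matter how large the document grows. Here’s a minimal sketch using XmlReader; the file name is the same made-up one as above, and the "entry" element is a hypothetical stand-in for whatever your document actually contains:


using System;
using System.Xml;

class StreamingXmlDemo
{
    static void Main()
    {
        int entryCount = 0;

        // XmlReader pulls one node at a time off the stream, so the
        // working set stays small regardless of the file size.
        using (XmlReader reader = XmlReader.Create("someHonkingHugeFile.xml"))
        {
            while (reader.Read())
            {
                // "entry" is a made-up element name for illustration;
                // substitute whatever your document actually uses.
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "entry")
                {
                    entryCount++;
                }
            }
        }

        Console.WriteLine("Found {0} entries.", entryCount);
    }
}


The trade-off is that you only see one node at a time, so anything you need later you have to hang onto yourself, but your working set no longer scales with the size of the file.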


Next time I’ll talk about some of my own personal experiences in developing performant managed applications.