The importance of using - well, using

by Fredrik Ljung  

In our company, like many others I'm sure, we have a lot of small utility apps filled with legacy code. For us, legacy code means code that lacks unit tests and is full of anti-patterns. Global variables, functions with 100+ lines of code, static classes doing the brunt of the work, and a naming strategy that includes such marvels as MyFunction() litter the code. It's very easy to judge the code on these failures, but the code has been in production for many years. It might be held together with staples and duct tape, but years of small, necessary tweaks mean it works well for what it's used for. Occasionally, however, I get reminded of just how fragile this code base is.

The update loop

About a year ago I was out on site at a customer, helping out with the implementation of our application: an average-sized handheld application running on Windows CE. While testing, I came across some issues which were easily solved by using the newer hardware drivers that the OEM ships for the handheld. I decided to update the drivers, rebuild the solution, and publish the new files through our update solution.

The update solution is a rather simple service that basically keeps an application folder on the handheld in sync with a dedicated folder on the server. The client application achieves this by firing up an update utility and then shutting itself down, letting the updater handle it from there. The update utility has been used in plenty of our solutions for years without any problems, and it had worked fine for the past six months at this customer. However, after applying the new update to the client, it stopped working. Something in the process failed, and the updater initiated a rollback and restarted the client. The problem was that the client would then restart the update, and the entire process would repeat itself.

The error messages were vague at best, and no logging existed. My only clues were that the first two files updated fine; then the process would freeze, and after 30 seconds or so it would fail and roll back. As my day at the customer was all but over, I didn't have time to find out why it wasn't working. I decided to update all the handhelds manually and promised myself I would look at the problem at the first available moment.
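The hand-off between the client and the updater can be sketched roughly like this. The updater path, class, and method names here are assumptions for illustration, not the actual utility's API:

```csharp
using System.Diagnostics;

static class UpdateLauncher
{
    // Hypothetical sketch of the hand-off described above: the client
    // fires up the external update utility and then exits, so the
    // updater can freely replace the client's files.
    static void StartUpdate()
    {
        // Launch the updater as a separate process (path is an assumption).
        Process.Start(@"\Program Files\MyApp\Updater.exe", null);

        // Shut the client down and let the updater take over from here.
        System.Windows.Forms.Application.Exit();
    }
}
```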

Reminded of my procrastination

Fast forward one year, and obviously I never got that available moment to fix the update problem. It never seemed important enough to take care of. Instead, necessity hit me at a most inconvenient time. The customer had been stuck in a delayed change of ERP systems, and so our small sub-project was also put on hold. Recently things started moving again, and I returned to the project to get the application up to speed with the rest of the project. As I'm sure you have guessed by now, I came to a point where I needed to publish an update to the application that consisted of more than two files. Since I hadn't touched the update utility in the past year, the update yet again failed as soon as it tried to update more than two files. I realized I had to put my current work on hold and go digging around in some legacy code.

Disposing of the culprits

The code was a shining example of the state of the legacy code we have amassed. The update application's Main() function was about 60 lines, and the rest of the code was all static functions, ranging from 20 to 50 lines each. It's not a large project by any means: one class, Program, at about 400 lines, with an easy enough structure to understand. I spotted the first hint of the problem on the third line of Main(), where the creation of the reference to the web service was not wrapped in a using() statement, and no call to Dispose() was in sight. I spent about an hour refactoring the code and found another few undisposed objects. Those were just a reflection of the actual problem; below is a snippet from the function that broke the utility.

HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(new Uri(url));
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Stream s = response.GetResponseStream();
using (FileStream fs = File.Create(file))
{
    ...
}

The code above resided in a function that was called once for every file to be updated. Once I wrapped the HttpWebResponse response and the Stream s in using() blocks, the problem disappeared. My assumption is that the missing calls to Dispose() on response and s kept the connections to the server open. .NET defaults to a limit of two concurrent connections per host (ServicePointManager.DefaultConnectionLimit), so the application timed out trying to open the third connection, and then initiated a rollback.
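The corrected version, with both the response and the stream wrapped in using() blocks, might look like this. The copy loop in the body is a sketch of my own, since the original body is elided above:

```csharp
// With response and s inside using() blocks, both are disposed
// deterministically when the block exits, even on an exception,
// so the connection is returned to the pool after each file.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(new Uri(url));
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream s = response.GetResponseStream())
using (FileStream fs = File.Create(file))
{
    // Hypothetical copy loop; the original body is not shown in the article.
    byte[] buffer = new byte[4096];
    int read;
    while ((read = s.Read(buffer, 0, buffer.Length)) > 0)
    {
        fs.Write(buffer, 0, read);
    }
}
```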

And that is why using using() is best practice whenever possible.
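For context, the compiler expands a using block into a try/finally that calls Dispose(), which is exactly what guarantees the cleanup even when an exception is thrown:

```csharp
// using (Stream s = response.GetResponseStream()) { ... }
// is equivalent to:
Stream s = response.GetResponseStream();
try
{
    // ... work with the stream ...
}
finally
{
    // The null check is part of the expansion for reference types.
    if (s != null)
    {
        ((IDisposable)s).Dispose();
    }
}
```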
