It's D-day: the project is functionally complete, everyone's beaming, the code is checked in, the bug count is in single digits, and the release is ready to go. Then, out of the blue, the project gets swamped by a swarm of last-minute glitches relating to performance. From memory leaks to database bottlenecks to plain response time, a series of firefights and heated meetings ensues, causing embarrassment and slipped schedules all around. Sound familiar?

As in most such cases, true improvements or fixes are rarely possible because of the effort they represent. Rarely is the architecture flexible enough to accommodate order-of-magnitude improvements without major rewrites. Instead, what gets attempted is cover-ups and shove-it-under-the-seat approaches, spec reductions, or upscaled system requirements. Depending on the soft skills and magnanimity of the team involved, the product is sometimes pushed out to face endless cycles of customer support, or, better still, killed off before it is ever released.

Isn't premature optimization evil (a maxim first voiced by Tony Hoare and restated by Donald Knuth)? So what gives?

It turns out that for most people, ‘functionally complete’ does not implicitly translate into works-fast. I wonder whether the same folks would consider a car functionally complete if 10 miles per hour were the top speed it could reach.

In this particular case, the primary culprit was that none of the engineers on the project had much experience with the particular database being used, yet the architecture was designed around that database as the central store that held and served all the data flowing through the system. Consequently, bad database coding practices threw the worst punch when it came to performance. There was no way the product could be fast without major rewrites. The net output was pretty but largely unusable, and could not be justified for release.
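To make "bad database coding practices" concrete, here is a hypothetical sketch (the table, column, and class names are invented for illustration) of the classic row-at-a-time querying shape versus a single set-based query. The first version looks perfectly fine in development and collapses under production data volumes, which is exactly the kind of problem that only surfaces at the end.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class OrderTotals {

    // Anti-pattern: one query per order (the classic "N+1" shape).
    // Every round trip adds network and parse overhead, so the cost
    // grows linearly with the number of orders.
    static List<Double> totalsRowByRow(Connection conn, List<Long> orderIds) throws Exception {
        List<Double> totals = new ArrayList<>();
        for (long id : orderIds) {
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT SUM(quantity * price) FROM order_items WHERE order_id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    totals.add(rs.getDouble(1));
                }
            }
        }
        return totals;
    }

    // Set-based alternative: let the database aggregate everything in one pass.
    static List<Double> totalsInOneQuery(Connection conn) throws Exception {
        List<Double> totals = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT order_id, SUM(quantity * price) FROM order_items GROUP BY order_id");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                totals.add(rs.getDouble(2));
            }
        }
        return totals;
    }
}
```

Nothing here is exotic; it is the kind of difference an engineer familiar with the database spots at design time, and an unfamiliar team discovers during the last-minute firefight.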

The key word here is the term “premature”. Neither Knuth nor Hoare meant the advice as licence to abandon basic performance hygiene. Given the nature of the advice, one would expect people not to fuss over teeny-tiny optimizations, like moving that extra if-check out of a loop or manually unrolling instructions, unless measurement shows they are warranted. But not considering performance at all during the initial and intermediate stages is nothing but negligent design.
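For contrast, this is roughly the scale of tweak the maxim actually covers. A small sketch (names made up) of hoisting a loop-invariant if-check out of a loop: harmless, but rarely worth doing by hand before a profiler says the loop matters, since modern compilers and JITs usually do it for you.

```java
public class MicroOpt {

    // The kind of micro-optimization Knuth's advice warns against doing
    // prematurely: the skipNegatives check never changes inside the loop,
    // so it is tested once outside instead of on every iteration.
    static long sum(int[] values, boolean skipNegatives) {
        long total = 0;
        if (skipNegatives) {            // check hoisted out of the loop
            for (int v : values) {
                if (v > 0) total += v;
            }
        } else {
            for (int v : values) {
                total += v;
            }
        }
        return total;
    }
}
```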

However, it is hard to think in terms of basic engineering once you start implementing code on top of alien frameworks, tools, and libraries. Their implementation is hidden, and the net effect shows up only towards the end (a case for open source?). Unless someone with expertise in each of these closed areas is present, it is difficult for a new team to get things right the first time it attempts to use these technologies.

It is impractical, though, to expect an expert in each of the scores of tools and frameworks (count your acronyms) that make up any modern project. Normal IT shops should therefore seriously consider premature planning and architecting for optimization. Getting serious about performance from the very beginning is a good idea; you can wait until the end to get paranoid. Adopting best practices, and making it a point to adhere to them across the entire code base before you start churning out code, is sensible engineering and prudent planning. Anything short of this is plain and simple negligence.
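One cheap way an ordinary shop can keep performance on the radar from day one, without an expert for every acronym, is a crude performance budget wired into the build. The sketch below is purely illustrative: the repository interface, the 1000-row fetch, and the 200 ms budget are all assumptions to be swapped for your own operations and numbers.

```java
import java.util.List;

public class PerformanceSmokeTest {

    // Stand-in for whatever data-access layer the project actually uses.
    interface OrderRepository {
        List<Long> findRecentOrderIds(int limit);
    }

    // Fails loudly if a core operation blows its (arbitrary) budget,
    // so regressions surface during development rather than on D-day.
    static void assertFastEnough(OrderRepository repo) {
        long start = System.nanoTime();
        repo.findRecentOrderIds(1000);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMs > 200) {
            throw new AssertionError("findRecentOrderIds took " + elapsedMs
                    + " ms, budget is 200 ms");
        }
    }
}
```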

Googling “premature optimization” surprisingly turned up similar thoughts echoed around the web; wise advice abounds in its collective wisdom.

Another take on the same idea: CPU cycles might be cheap, but the customer’s time spent waiting on that slow product of yours ain’t.

Advice from Joel Spolsky: “don’t start a new project without at least one architect with several years of solid experience in the language, classes, APIs, and platforms you’re building on.”
