This past year I was tasked with writing an SNMP data collector framework for our network management tool. There was a chance that this framework *might* be extended to collect new types of data too. However, the minimum required code had to be in place to get a working version off the blocks in two months. How big a library do you think I could create in that time?

Programmers love to create, especially when it is their own world view they are creating. In this particular case, given the chance, I might well have added more than just the hooks to collect data from different data sources and save it in multiple formats. The end result would no doubt have been a wasteful, monstrous entity that a few years later even I might not recognize as my own offspring. Luckily for us, we had deadlines and scope restrictions: what the software needed to do, who its target customers were, and by when the task had to be done.

Create only what you require

The first version of my data library therefore featured a bare minimum of data inspection facilities and meta data specifications. When we started on the next iteration, I found that my library had no facility to change the data in a virtual row collected according to those meta specifications.

main() {
    collector.LoadMetaData();                                      // what to collect
    collector.AssociateVisitorFunctionForDataCollected(Visitor);   // callback per row
    collector.Run();                                               // collect the data
    collector.Save();                                              // persist it
}

Visitor(CVirtualRow sRow) {
    value = sRow.GetAttribute("InterfaceDescr");
    if (ToBeChanged(value))
        sRow.Modify("InterfaceDescr", newValue);   // New Req
}
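For concreteness, here is roughly the contract that pseudocode assumes, sketched in C++. Every name here is hypothetical, taken from the pseudocode rather than the real framework; the point is that a virtual row is a view over the collector's data, not a copy of it:

#include <string>

// Hypothetical sketch of the virtual row interface implied above.
// A virtual row is addressed by the attribute names declared in
// the meta data and reads from the collector's underlying data.
class CVirtualRow {
public:
    // Read one attribute of the collected row by its meta data name.
    std::string GetAttribute(const std::string& name) const;

    // The new requirement: write an attribute back so the change
    // reaches the underlying data before Save() runs.
    void Modify(const std::string& name, const std::string& newValue);
};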



For the set of features we had targeted during the initial phase, this capability was never needed and hence never added. Going forward, however, it was required. Adding it properly demands an understanding of how the virtual row class is constructed from the actual underlying data structures; without that understanding the base functionality cannot be built upon, and a hack becomes the only resort:

Hash Modified;   // global map of value -> replacement: the hack

main() {
    collector.LoadMetaData();
    collector.AssociateVisitorFunctionForDataCollected(Visitor);
    collector.Run();
    collector.Save();
}

Visitor(CVirtualRow sRow) {
    value = sRow.GetAttribute("InterfaceDescr");
    if (ToBeChanged(value))
        Modified.Add(value, newValue);   // stash the edit instead of applying it
}

Collector.Save() {
    if (Modified.Find(value))
        SaveModifiedValInstead(newValue);   // Save() now has to know about the hack
}
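With the framework understood, though, the right fix was straightforward: let Modify() write through to the collector's own data structures, so Save() needs no special cases. Below is a minimal sketch of that write-through idea, one possible implementation of the interface sketched earlier. It is illustrative only: the column-per-attribute store and every name in it are my assumptions, not the actual framework:

#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Hypothetical underlying store: one column of values per attribute name.
using ColumnStore = std::map<std::string, std::vector<std::string>>;

class CVirtualRow {
public:
    CVirtualRow(ColumnStore& store, std::size_t row)
        : store_(store), row_(row) {}

    std::string GetAttribute(const std::string& name) const {
        return store_.at(name).at(row_);
    }

    // Write-through Modify: the edit lands in the collector's own
    // data, so Save() persists it with no special cases.
    void Modify(const std::string& name, const std::string& value) {
        store_.at(name).at(row_) = value;
    }

private:
    ColumnStore& store_;   // view, not a copy: edits reach the collector
    std::size_t row_;
};

With something like this underneath, the visitor from the first listing works as written: it calls sRow.Modify(...) and Save() stays oblivious to what changed.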

Which brings us to the next observation:

No Hacks

Had the team not spent the time to understand the underlying framework, the hack would have been the natural course of action. Over time the number of hacks would grow to the point where the base source became a veritable minefield: a single new hack would break ten previous ones. This makes the source unusable and extremely buggy, with no recourse but to stop maintaining the body of code once the cost-benefit ratio exceeds a threshold. The quality of the patches applied determines how fast the source gets there. Getting out of this quagmire requires more than triple the effort that went into creating those hacks in the first place.

Maintenance

If a body of software has to live, as in change and evolve, basic and meaningful changes are the order of the day. Any development process that does not factor in the time and effort required to do things the right way is only creating a cul-de-sac, and it had better plan to abandon the source before resorting to a very low-overhead, fixes-only maintenance mode.

Fixes only = will be killed.

Not allowing time for refactoring is the same as saying we do not care about this code anymore.

PS: Perhaps outsourcing contracts would do well to keep this in mind and get the remote team educated enough to do refactoring. Otherwise you will only have hired many hands in Estonia or India or wherever you have outsourced to, just to create hacks faster and, of course, more cheaply.
