Coding Styles & Code Smells

Spot the bug

Code A

STDMETHODIMP CMIB2Iface::CollectData(LONG DeviceId, BSTR IpAddress) {
	list<CString>::iterator listIter;
	for(listIter = ccmList.begin(); listIter != ccmList.end(); listIter++) {
		CString ccmIP = *listIter;
		long tempCCMID = getDeviceID(ccmIP);

		// Handle it only if device ID is valid
		if (tempCCMID != -1) {
				CString component;
				long eventType = 0;
				if (endPointType == 0) {
					eventType = 47;
					component = "Gateway Link - ";
				} else if (endPointType == 1) {
					eventType = 48;
					component = "Unity Link - ";

				component += endPointName;
				// Construct an event.
				LunaEvent ev;
				CString attributes = "linkDestination = " + endPointIP;
				ev.attributes = attributes.AllocSysString();
				ev.eventTypeId = eventType;
				ev.deviceId = theCCMID;
				ev.component = component.AllocSysString();
				ev.timestamp = (COleDateTime::GetCurrentTime()).operator DATE();

				try {
					if (endPointStatus == 0) {
						if (tempCCMID == theCCMID) {
							// Raise/Update an event whenever status is 0
							if (prevEndPointStatus == 0) {
								// Case 00
								unsigned long result;
								MY_LOG_DBG(functionName << ": Updating Event on Component : " << component);
								if (faultManager->updateEvent(ev, &result) == S_OK) {
									MY_LOG_DBG(functionName << ": Updating Event Secceeded");
							} else {
								// Case 10
								unsigned long result;
								MY_LOG_DBG(functionName << ": Raising Event on Component : " << component);
								if (faultManager->raiseEvent(ev, &result) == S_OK) {
									MY_LOG_DBG(functionName << ": Raising Event Secceeded");
					} else {
						// Case 11 and 01
						// Always clear the event, and for all CCMs in the cluster.
						unsigned long result;
						MY_LOG_DBG(functionName << ": Clearing Event on Component : " << component);
						if (faultManager->clearEvent(ev, &result) == S_OK) {
							MY_LOG_DBG(functionName << ": Clearing Event Succeeded");
				} catch(...) {


		} // End of if (deviceID != 0)
	} // End of for loop
	return 0;

Code B

STDMETHODIMP CMIB2Iface::CollectData(LONG DeviceId, BSTR IpAddress)
	this -> m_sLog.SetIpAddress(IpAddress);

	/*   ---------------------------------------------------------------------   */

						(void *) this);

	/*   ---------------------------------------------------------------------   */


	if (BOMBED(DoTableGenericInit(m_oInventory, sIfaceTblColln, m_sLog)))
		_log(L_ERROR, "Mib2 collect table init failed \n");
		return S_OK;

	/*   ---------------------------------------------------------------------   */

	if (PASSED(m_oInventory.CollectInventory (DeviceId, IpAddress)))
		GetIfFaultsExistForDevice(DeviceId, LINK_DOWN_NUM , m_iFaultExist, m_sLog);
		m_oInventory. InspectData(DeviceId, eForInventory);
		m_oInventory. SaveData   (DeviceId, eForInventory);

	return S_OK;

Code Review Department


If tasked with reviewing the two pieces of code listed above, chances are that, as a reviewer, you would rather get rid of the task of reviewing A than spend any more time on an obviously bad effort at coding. Having badly styled code around allows bugs to remain, and fixes become nothing more than half-hearted patches which neither the patcher nor the original coder can recognize after the act is done.

One bad turn soon begets another, and the rate at which bad code degenerates into a spaghetti mess can only be matched by the speed at which engineers begin to long for a total rewrite of it. (Failing which, they secretly start yearning for a change of job or, worse, stick to the same one in a lost, zombie sort of way, or decide that what they really want to be is a manager.)

Humans are not compilers

Human Compiler


Unlike compilers, human beings are quite easily put off by bad code, and no act of redemption can save a bad piece of code from degenerating further and further until it can only be discarded. This is frequently the reason why engineers advocate re-architecture, which is hugely more expensive than adapting existing code. The lost business opportunity that this represents can be huge, and the very effort of doing so could run into millions of dollars per year for a major software company like Cisco.

Any act of review on such a bad piece of code can only be seen as a sham – who, after all, in their right mind would want to spend the long hours it would so obviously take to review someone else’s unreadable code?

So the first step in getting working reviews is to not let the process be viewed as a sham – and the only way this can be achieved is by respecting the reviewer’s time and patience.

How do you respect a reviewer’s time?

By giving them readable code (see above).

How do you make code readable?

  1. By avoiding coding smells
  2. bi folwing sm cdng convntns (by following the same coding conventions, aka a coding style)

Twenty different people and their cousins will have an equal number of vastly differing coding styles. If all of these are heaped onto the same page, or onto the same reviewer, it is again equivalent to implicitly admitting that you have no control and that the whole process is a sham. Therefore, getting the same style of code used EVERYWHERE is what has to be ENFORCED for successful code reviews to happen.

My experience

In the only place I have personally seen code reviews actually work, a body of code could be rejected simply because a character exceeded the maximum allowed column width of 80, or because in a single location in the code a variable was not aligned. The rationale behind this was the assumption that the coder did not know about the rule, and that therefore there would be many other places where the same mistake had been made.

The act of rejecting the whole body of code due to a single misfitting character sends the right signals, and soon it becomes a matter of pride to get code accepted without major comments.

The other advantages of having readable code are that

  1. Simple Bugs are easier to spot
  2. Design issues are easier to spot
  3. Review happens faster (that builds more support for review)
  4. Bug fixes are easier
  5. Module upgrades or re-architecture would be easier
  6. Spotting reuse opportunities is easier – and hence aids faster development
  7. TOIs (transfer-of-information handovers), which happen so often, are much easier

How can organizations aid in creating readable code?

  1. Use tools that can perform static analysis based on the accepted coding style and flag issues before the reviewer even reads one line of code
  2. Make code readability indexes part of the project’s statistics so that they can be easily tracked
  3. Use tools that can reformat the existing code base to the accepted style, no matter what that is, so that it is easy to get off the blocks when the process starts

1 out of 2 is still only 50%

Fixing the code’s review-ability is only half the fight. Where would a reviewer be without a programmer who wants the code to be reviewed? In a team of colleagues, it is all too easy for everyone to scratch everyone else’s back and implicitly give easy reviews. It is also natural for colleagues to contest one another’s reviews. Therefore, there has to be an external entity and stimulus to engage in code reviews.

Of course, enforcing the coding standards using tools as part of the build process, backed by management numbers, would be a good start, though that alone might not make the practice stick.

My experience is that the stimulus usually takes the form of bragging rights or nerd pride amongst peers. When an obviously respected person / geek is to review your code, you want it to be shinier and that extra bit perfect.

Companies like Cisco can easily make this happen by having more tech leads review the code, and by having a system of recommendations where recommended pieces of code can even get promoted, once a year, for review by technical evangelists like Stroustrup or Gosling, which can be quite an honor. Tracking each programmer’s “coding bullshit index” would also be a good way of making people want a review.

The human factor

Coding and reviewing code is ultimately a very human activity, and any purely mechanical solutions to the problems in this domain are bound to fail. Ultimately, a more humane and social solution is required to transform this bit of ‘work’ into something more than a statistic, and into something of an advantage.

“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

– Martin Fowler


Here is the list of MAJOR breaking changes to be aware of when moving from VC 7.1 (Visual Studio 2003) to VC 8.0 (Visual C++ 2005).

Pretty old information, but there was no page that listed everything (important) in a single place with contextual info. For a FULL list of C++ changes, consult MSDN.

New features in Visual Studio 2005 will be a separate (bigger) post.

Breaking Changes in Environment

1. VS 2005 removes support for Windows 95 and Windows NT.

2. The single-threaded C run-time library is not available anymore.

3. New versions of the common shared DLLs have been introduced. Mixing code that uses different DLL versions could cause issues.

4. The application and library deployment model has now changed. Core run-time DLLs now have to be stored in the system cache (WinSxS) as side-by-side assemblies, or shipped as private assemblies, and applications can link to the DLLs ONLY by describing them using application manifests (generated and embedded automatically into the binary by VS 2005). More info here. Some programmatic info about manifests here.

5. A new run-time DLL (MSVCP80.DLL) is required for applications linking against the DLL version of the CRT.

NOTE: These are the new common DLLs – MSVCR80.dll, MFC80*.dll, ATL80.dll, MSVCP80.DLL. The version number used is 8.0.50608.0. (This is shipped with Vista too.) However, your app might link to other versions during compilation due to policy redirects.

NOTE: The version number of the DLLs installed by Visual Studio 2005 SP1 is 8.0.50727.762. If these are not present your app will not run.

Breaking Changes in C++ / Features that show new behaviour


1. Security is now officially supported. 🙂 Expect loads of deprecation warnings for the ANSI C string routines. Define _CRT_SECURE_NO_DEPRECATE and _SCL_SECURE_NO_DEPRECATE to shut up the compiler. The best way to do this is to put them in the pre-processor directives section of the project settings, rather than figuring out which include file gets included from all sources.

2. Visual Studio supports lots of custom non-ANSI C methods, but the naming convention for those functions is supposed to be _fn(). If you were using them without the leading underscore you will get warnings.


1. new will now, by default, throw an exception if it fails instead of returning NULL, which was the wrong, non-standard behavior up until now. (Vanilla 2005 has a bug which causes the release version to throw and the debug version to return null. This has been fixed in SP1 – another reason why you should install SP1 and have a catch block for new.)

2. volatile has now acquired memory-fencing semantics, which changes the behaviour of read/write operations in multi-threaded code.

3. Named Return Value Optimization is now available. Classes that have side effects while being copied / constructed will face problems. Discussed here.

4. CRT functions now have SAL annotations – i.e. standard C functions can now validate their input parameters and will throw asserts in debug mode if used wrongly. This is helpful for catching more errors.

5. std::time_t is now 64-bit (up from 32-bit).

6. STL now supports iterator debugging and checks. Debug builds will have STL iterator debugging enabled by default. This will throw asserts and cause failures in incorrect code, helping to ensure correctness. It can be turned back off using _HAS_ITERATOR_DEBUGGING=0 and _SECURE_SCL=0 (not recommended).

7. wchar_t used to be #defined to unsigned short. Now it is a built-in type and can be controlled via project settings. So code that treats unsigned short as wchar_t will now break due to a type mismatch. You will get linker errors if related modules treat this setting differently, because the C++ name-mangling scheme generates different linkage names depending on the wchar_t type.

8. Many CRT functions now set errno where they never used to set it before. This is good for debuggability, but bad for code that delayed its errno checks.

9. Processor-architecture-specific optimizations have been removed and a blended architecture is targeted instead. If your binary has subtle bugs that were hidden by the specific binary previously produced, they might now surface.


1. The global functions and variables that report OS versions from within the CRT are now gone.

2. Some functions now return const pointers in their C++ overloads (e.g. strstr).

3. You now need to use the address-of operator to get a class member function’s address. Link

4. Managed C++ support has been dropped; it is replaced by C++/CLI.

5. The exception class has been moved to the std namespace.

PS: Here is a brief list of Visual Studio and .NET versions.

In a previous post, I tried to outline the different levels of bad-ness in source code. My experience has been that IT folks generally pass through these levels of bad-ness before they fall through and start to care about the craft.

The coding horror post about the art of crafting routines, poses a question to ponder

“What’s a good name for this routine? Naming is hard. Really hard.”

I mean really, how many folks out there, as a percentage of the total coding population, might have reached this level, where they produce working, clean code and have started caring about the names they use?

A mod-5 quote on Slashdot on “Managing for creativity” – if you are developing software in any capacity, nothing helps creativity more than a clean code base.

In an earlier post, I tried to spell out the signs I look for that tell me a code base is bad. However, the relative importance of each of those items was not spelt out. Obviously, failing to catch errors is a more serious flaw than, say, a badly formatted code base or one junk variable name. So let me try and categorize the relative importance of those flaws into broad categories.

  • Level 5 – Program does not work as intended (Bad-ness rating 10 out of 10)
  • Level 4 – Does not catch any errors and breaks very often. This is obviously a broken program or a proof-of-concept. (Bad-ness rating 9 out of 10)
    1. Program crashes / shutdowns
    2. Memory leaks
  • Level 3 – Does not guard against “can happen” errors (Bad-ness Rating = 8 out of 10. Will surely break soon)
    1. Return values ignored – especially bad if done so for user functions (vs system apis)
    2. Uncaught or worse swallowed exceptions (unless intentionally so)
    3. System apis and resources not used like they are intended to
    4. Unvalidated inputs
    5. Unvalidated user inputs
    6. Unvalidated pointers
    7. Unvalidated String lengths
    8. Too many compiler warnings
  • Level 2 – Tough to maintain (Bad-ness Rating = 6 out of 10. Will break during next few maintenance cycles as it is hard to see how the code flows and what are the likely errors to catch and safe-guard against)
    1. Class / function too big
    2. Class / function does more than one thing
    3. Global variables
    4. Too many interdependencies
    5. Duplication
    6. Deep levels of inheritance
    8. Pointy code
  • Level 1 – Tough to understand (Bad-ness Rating = 5 out of 10. No immediate breaks, but will cause code to slip to the previous level in a few release cycles)
    1. goto statements
    2. Too many variables / members / parameters
    3. Junk / confusing variable names
    4. Bad class / function names – not representative of what it does or is too big
    5. Lines so long that you have to scroll horizontally to read them in full
    6. No comments


  1. One category of BAD-ness leads to another – This cannot be proved, but has to be observed to be understood. One has to be part of quite a few projects and failed modules to know what transpires and how this comes about.
  2. The sweet spot of bad code is at Level 2, where the code handles the errors it has been tested against but is tough to maintain and definitely going downhill without some major rework. Most code bases, I must say, stay at this level for some time, creating patches on top of patches to keep running, until a few years down the lane the effort is no longer worth it and the product is end-of-lifed. The architecture of the code base defines how extensible it can be, even in the face of vast swathes being buggy and hard to maintain. In due time, of course, in big companies a brand new version is created to accommodate the feature requests, if the product manages to go into the market. This happens typically because the older version is not maintained well enough to be extensible to accommodate the new requests. The timelines are typically strict again, and the new code base manages to reach the same level as the older version while yet again being difficult to extend and maintain.
  3. Regular maintenance is necessary to maintain even the same Bad-ness levels – A code base can just as easily slip from level 1 to 2 (very easy) as it can go from level 2 to level 3. In short, there is no safe period and regular maintenance / refactor work is always required to keep the code base hale and hearty.
  4. The incentive to move from level 2 of bad-ness to level 1 is typically missing – Since a code base at level 2 and one at level 1 are outwardly similar, there is very little incentive in almost all organizations to move to the healthier level. From a management point of view too, a dirty but working code base is more important than a beautiful but late one. My solution? Add small maintenance and refactoring patches before every release cycle, so that the cost of beauty is amortized fairly across releases without major business impact. However, if possible, I would say it is better to create code as good as it can be the first time over. Re-works are almost always error-prone and tough to orchestrate, especially if your code base is at level 2 to start with.
  5. Big programs will exist at many bad-ness levels at once – Big projects have multiple modules and multiple teams that work on them. Depending on the sheer luck of how the teams are assigned, who gets to work on what, and the culture that prevails within each of these teams, the quality levels can vary from one module to another. The weighted average of the quality levels across the different modules (weighted by functionality importance as rated by customers) would then determine the quality of the big ones. The size of the project typically ensures that “code smells” do not get noticed too much, much like a slum that grows on one side of a city, until it is too late or the issues that arise are too many.

Creating quality code is hard. One has to understand the fundamentals behind the notion of quality as a function of the code base, rather than as a fashionable management by-word, if any drive for quality is to succeed in software projects.