There is usually a set of things that a software shop does to ensure high-quality software. Whether it builds web, desktop, or mobile applications, the principles are the same.
Years ago, someone else wrote a similar set of rules.
This article was very influential to me when I first read it. However….
It is more focused on the high-level issues that affect the company as a whole, not rules for good code. For instance, one rule is: do you use source control? These days, every software shop should use it and probably does. Maybe they didn't 13 years ago? I know of plenty of software projects and products that use source control and still have bad code, so that is not a real test of how good the code is. And for the actual coders writing the software, some of the items in Joel's excellent blog do not really affect them on a daily basis. Programmers live in their own little bubble, under a set of realities that affects them alone. That is what I'm going to write about here.
- Do you compile at the highest warning level possible?
- Do you check for memory leaks?
- Do you have overly large functions?
- Do you have overly large code files?
- Do you test with debug builds?
- Do you have regression tests?
- Do you measure code coverage?
- Do you stop everything when tests fail?
- Do you do code reviews?
- Do you use a real build system?
Do you compile at the highest warning level possible?
Compiling code and witnessing a long list of warnings spew out is a special kind of demoralization. It's even worse when those build warnings go unfixed for months or years. The latter state means one thing: management and the development leads don't care that the product has subtle problems. In fact, it means that nobody cares. Or it could be that someone cares but feels actively discouraged from doing so by peers or management, which is the worst state of all.
What I really don't get is how a developer could let a warning into the build in the first place. How could a developer NOT look at build output? It leads one to ask: if they aren't looking at the build output, what else are they not looking at? They have to know whether the build succeeded, right? They obviously are not blind, but it does indicate that something is lacking: either competence or desire. Either way, it is programmer malpractice.
The end result is that there will be very subtle bugs in the software that manifest themselves in very costly ways over the long term. There could be a problem that a customer support person spends 4 hours resolving with a customer. It then gets logged to a developer who spends a day trying to figure out what went wrong. The fix then gets distributed to customers 9 months later. Meanwhile, everyone loses: the customer loses productivity, and the software shop loses labor on something a compiler warning could have surfaced in a few seconds. Imagine the accumulated cost of such failures to act properly.
Now, most modern compilers and linters have a range of options, running from the basic to the important, all the way to the absurd. So not all compiler or linter warnings are created equal. When you start getting warnings that border on the absurd, you can create one (not many) list of exceptions to those warnings, or a list of warnings that are turned off. This list should live in one place, be checked into source control, and contain clearly worded, strong reasons those warnings are turned off. The list should never be large, and no one should be allowed to change it unilaterally without a code review.
The compiler (for languages where it applies) is the first line of defense against bugs. Software shops that refuse to use it to its fullest potential are putting out substandard products.
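As a sketch of what the highest warning level buys you, consider this small hypothetical C++ routine. With GCC or Clang, `-Wall -Wextra` would flag the classic mistakes noted in the comments, and `-Werror` turns them into hard errors so they never enter the build at all:

```cpp
#include <cstddef>
#include <vector>

// Compile with the highest practical warning level, e.g.:
//   g++ -Wall -Wextra -Werror sum.cpp
// Writing the loop index below as plain `int` would draw
// -Wsign-compare (signed/unsigned mismatch against v.size());
// the classic typo `if (x = 0)` instead of `if (x == 0)` would
// draw -Wparentheses. Both are part of -Wall.
int sum(const std::vector<int>& v) {
    int total = 0;
    for (std::size_t i = 0; i < v.size(); ++i) {  // size_t, not int
        total += v[i];
    }
    return total;
}
```

The point is not these two warnings specifically; it is that each one the compiler can emit is a bug report delivered in seconds rather than nine months.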
Do you check for memory leaks?
Failing to correct (or even care about) memory leaks is a certain kind of wrong. Memory leaks are like lung cancer: they slowly and insidiously suffocate an application by starving it of memory. Applications live on memory, and when the memory goes, so does the application. Customers aren't happy when their application dies. It's even worse when it dies and takes all their data with it.
This fundamentally begins with the programmer who writes the feature. They are the one who originally wrote the code that allocated the memory, and they should have immediately considered where the memory would be deallocated and who would own it in the meantime. Second, the original programmer should have tested their code.
There are tools out there to help identify memory leaks, and they should be used wherever they are available. Failing to check whether memory leaks exist is just plain sloppy.
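Tools such as Valgrind or AddressSanitizer (`-fsanitize=address`) will report leaks after the fact, but ownership is best decided at allocation time. A minimal sketch, using hypothetical names, of what that looks like in modern C++:

```cpp
#include <memory>
#include <string>
#include <utility>

// Hypothetical record type for illustration.
struct Record {
    std::string name;
};

// Ownership is decided at the allocation site: make_record hands back
// a std::unique_ptr, so exactly one owner exists at any moment, and
// the Record is freed automatically when that owner goes out of
// scope -- even on early returns or exceptions. No leak is possible.
std::unique_ptr<Record> make_record(std::string name) {
    auto r = std::make_unique<Record>();
    r->name = std::move(name);
    return r;  // ownership transfers to the caller
}
```

Answering "who deallocates this?" in the type system, rather than in a comment, is exactly the consideration the original programmer should have made when the allocation was written.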
Do you have overly large functions?
These are functions that are more than two screens in length, which seems to be the unwritten standard in this industry today; it wasn't always the accepted standard, however. Ruby developers seem to go by the rule that functions should never be larger than 5-10 lines of code, which is even better. Large functions quickly turn a program into an unreadable mess that makes it extremely difficult, if not impossible, for another human to decipher. The problem is that such functions cannot be easily refactored without the real possibility of introducing bugs. I have personally seen functions that were thousands and thousands of lines long. The cyclomatic complexity of such a function shoots through the stratosphere, and any modification carries a real possibility of adverse, unintended side effects. Such a function is essentially abandoned code, a dumping ground for changes that no one cares about.
If you ever see an application that has not changed much over the years and has never been refactored, rest assured that overly large functions bear much of the responsibility.
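As a small illustration (the names and the domain are hypothetical), here is the shape a long function takes after being broken into single-purpose helpers. The top-level routine reads like an outline, and each piece can be tested and refactored in isolation:

```cpp
#include <numeric>
#include <vector>

// Reject orders containing invalid line items.
static bool all_positive(const std::vector<double>& prices) {
    for (double p : prices) {
        if (p <= 0.0) return false;
    }
    return true;
}

// Sum the line items.
static double subtotal(const std::vector<double>& prices) {
    return std::accumulate(prices.begin(), prices.end(), 0.0);
}

// Apply a tax rate to an amount.
static double with_tax(double amount, double rate) {
    return amount * (1.0 + rate);
}

// What was once one sprawling do-everything function is now three
// readable lines, each calling a helper that fits on a few lines.
double order_total(const std::vector<double>& prices, double tax_rate) {
    if (!all_positive(prices)) return 0.0;
    return with_tax(subtotal(prices), tax_rate);
}
```

Each helper stays well under the 5-10 line rule, so none of them ever becomes the unreadable mess described above.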
Do you have overly large code files?
Large functions lay the foundation for large code files: files that contain 10,000 to 25,000 lines of code, hundreds of functions, and dozens of class definitions. Code files that try to be a God to everything. These files cannot be easily read or browsed; they remain a black box to all but the author.
Now, there are tools to help browse through large code files. Tools to help list the structure of the application. But when code gets so big that you need a browsing tool for the browsing tool, then you definitely have a problem. When this happens, discoverability nosedives and mindshare in the product evaporates. At that point programmers stop reading the code and start searching it. And when programmers start searching the code, they are not really understanding it. They are in essence looking at a painting through a microscope, and not standing back and looking at the big picture.
Another really bad side effect is the risk of duplicate code. A huge haystack may be hiding the needle you are searching for; not finding it, you create another needle. And this in a profession where needlessly reinventing the needle is a cardinal sin.
Case in point: I once came across someone modifying a rather small class, about 40 lines of code at most. I looked around, saw that we had a few other classes with the same name, and informed them about it during a code review. It turned out there were 10 to 20 different copies of that class floating around the code base. Apparently different programmers over the years had used that class a lot. OK, that's a good thing. But instead of simply including a header file and linking against a library, they had copied the class into their own code. So I pushed back in the code review and asked them to remove the duplicate code. All this because the code base was so large that no one knew where anything was, or whether a particular problem had already been solved.
Do you test with debug builds?
This really applies only to C- and C++-based code. Sorry, all you script people out there; your language has evolved past this little sharp edge of history. C- and C++-based languages have, by convention, two different build configurations: a release build and a debug build. A release build is what you ship to your customer; it is compiled for speed with little to no error checking built in. A debug build is slower, is not shipped to the customer, and has lots of error checking.
Software should be tested to be correct first, and then tested for speed and performance second. A good software quality assurance division of a company will understand that and not require testing only on release builds.
The reason is that in the world of C- and C++-based languages, a debug build has certain extra safety checks built into the product that help identify problems before they escalate into major disasters. Such code has assertions, poison memory patterns, bounds checking, extra type checking, and so on. These things make the code much safer and more enjoyable for the customer. A release build, on the other hand, simply compiles all those nice safety nets away and plops the code right in the middle of a speedway with no safety guards whatsoever. Testing only on release builds is like training a child to ride a bike in the middle of a busy road: with no safety precautions, something is going to break, and it will be difficult to decipher post-mortem what happened.
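A minimal sketch of the difference, using the standard `assert` macro: in a debug build the check fires at the moment of the bad call, naming the failed condition, while a release build compiled with `-DNDEBUG` compiles the check away entirely.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Debug build (NDEBUG not defined): a bad index aborts right here,
// pointing at the exact condition that failed.
// Release build (compiled with -DNDEBUG): the assert vanishes and an
// out-of-range read silently corrupts or crashes somewhere else --
// which is exactly why bugs found only in release builds are so hard
// to decipher post-mortem.
int element_at(const std::vector<int>& v, std::size_t i) {
    assert(i < v.size() && "index out of range");  // debug-only safety net
    return v[i];
}
```

Test on the debug build first, where the safety nets are still in place; save the release build for performance verification.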
Do you have regression tests?
A set of regression tests is the second line of defense against bugs. These tests must be run by developers as they write code, not just afterwards. At the very least, tests must be run before code is merged or submitted to the repository; otherwise the tests serve no purpose. Imagine the havoc if code that breaks things left and right were submitted daily and no one knew. By the time someone discovers that the tests, and therefore a feature, are broken, it is usually much later in the development process. The programmer may have moved on to something else, and the fix can be time consuming, not to mention costly. The entire software development process becomes less efficient.
But the ultimate havoc wreaked by not having regression tests is that bugs get through to customers. And customers get annoyed and buy someone else’s software.
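The principle fits in a few lines. Real suites use a test framework for reporting, but a regression test is ultimately just a set of checks re-run on every build (the `trim` function here is a hypothetical example of code under test):

```cpp
#include <cassert>
#include <string>

// Function under test: strip leading and trailing spaces and tabs.
std::string trim(const std::string& s) {
    const auto begin = s.find_first_not_of(" \t");
    if (begin == std::string::npos) return "";
    const auto end = s.find_last_not_of(" \t");
    return s.substr(begin, end - begin + 1);
}

// The regression suite: if a later change breaks trim, the build
// fails immediately instead of nine months later at a customer site.
void run_regression_tests() {
    assert(trim("  hi  ") == "hi");  // strips both sides
    assert(trim("hi") == "hi");      // no-op on clean input
    assert(trim(" \t ") == "");      // whitespace-only collapses
}
```

Wiring `run_regression_tests` into the pre-merge build is what turns these checks from a one-time test into an actual regression barrier.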
Do you measure code coverage?
Testing is great, but you also have to know when you have tested enough. Most programming languages, except the newest ones, have tools for measuring code coverage, and they should be used. In fact, one company I know of runs its tests on a continuous integration server; the coverage of each run is recorded, and any commit whose test coverage falls below 95% is rejected. Way to go!
Interestingly enough, figuring out code coverage is good insurance for when a programmer leaves. The programmer may leave, but tests, guaranteed by high code coverage, ensure the code can be maintained by someone else. So when the new guy or girl comes in, and starts maintaining the code, they can be reasonably sure that their code changes won’t break the world.
Code coverage also helps ensure the quality of the code. When code is examined in the cold truth of coverage, you start to discover a few things. You can discover unused parameters, or branches of code that never get executed. You may also discover expensive, slow code that you didn't need to call because its results weren't needed. The good reasons go on and on and will vary. You will also discover bugs, lots of them. I have never run code through code coverage and not discovered a bug. It is as simple as this: untested code is buggy code.
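For C and C++, for example, compiling with `g++ --coverage`, running the tests, and then invoking `gcov` produces per-line execution counts. A tiny hypothetical example of what those counts reveal:

```cpp
#include <string>

// If the test suite only ever calls classify(5), a gcov-style report
// shows the first two branches with an execution count of zero --
// hard proof of paths the tests never touch. Untested branches are
// exactly where the undiscovered bugs live.
std::string classify(int n) {
    if (n < 0) return "negative";  // uncovered until a test passes n < 0
    if (n == 0) return "zero";     // uncovered until a test passes 0
    return "positive";
}
```

Three test inputs (one per branch) take this function to 100% coverage; the coverage report is what tells you that the third input alone was not enough.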
Do you stop everything when tests fail?
This should be a no-brainer: of course all development should stop if a regression test fails. All hands should be on deck when the tests fail. Everyone should know not to sync code submitted during the affected time period, and not to build upon similarly infected bad code. Everyone should also feel a responsibility to hunt down and fix the problem. In return, everyone is essentially guaranteed a clean working environment, where they can trust that the code they are basing new features on is stable, solid, and bug free.
But if the culture says that failing regression tests are ignored and swept under the rug, then no one can be sure of anything about the product. What if the test failure count keeps creeping upwards and no one is held accountable? Then the product as a whole accumulates technical debt: someone traded a short-term gain that will not last for a debt that grows over time, and there is no debt so small that it cannot grow into something huge. The situation can eventually descend into farce, where certain groups refuse to run the regression tests because they are broken, yet refuse to help fix them. Such a downward spiral always results in the same thing: a spectacular collapse that hits some customers really badly.
Do you do code reviews?
No developer is immune from writing bugs into their code, especially those who are unfamiliar with the software, the language, or the platform. It can take many years to master each of those three areas. In the meantime, code submitted by someone who has not mastered them should be reviewed by other people.
But this rule should not be applied with no regard for the circumstances. Code reviews slow down development speed a lot. As in drastically. Ever had a code review languish for weeks at a time? I have, and it's not fun: productivity nosedives, morale plummets, and old bugs languish. If a startup absolutely required code reviews for everything, it would probably never actually ship a product before running out of investor money and going bankrupt.
So, here are a few common sense solutions to the problem:
- Code reviews are optional for those who originally wrote the code. Those who wrote the code are always better informed about what it does and how it works than anyone else, so changes they make to code they themselves wrote should be exempt from mandatory review.
- Conversely, modifying any code you yourself didn’t write should always require a code review. Especially if the person who wrote the code is still available.
- Have an expiration date. Any code review not reviewed in a certain amount of time expires and the developer is free to submit the code.
- Require unit tests and regression tests instead of code reviews. This works especially well for those who originally wrote the code in the first place.
Requiring a code review of all code is an ideal that should be sought after. But remember, customers buy your software, not your code reviews. Until there is first shipping software, no amount of code reviews will pay the bills. So, until then, don’t sacrifice the new startup for a feature that customers don’t buy.
Do you use a real build system?
A slow or inefficient build is a tax that costs real time and real money. A build is the process of converting the source code into something else that you send to your customer. Applications written by programmers are also built by programmers, usually with custom code that is slow and very inefficient. It doesn't have to be this way. I once worked at a company that used an archaic build system that took 45 minutes to compile and link; I fixed it up so that it took a minute and a half. Another product I once saw had no build system at all, just a collection of bash scripts that did a few things in a certain order, whether they were required or not. That build was actually fast, considering it compiled 8 million lines of C/C++ code, but it did only one thing: build the product, with little to no configurability.
Each programming language and environment has its own build system. C has the antique make system (good riddance). C++ has none really, unless you are using Microsoft's Visual Studio and MSBuild. Java has Maven. Ruby has Rake. These tools should be used instead of reinventing the wheel with custom build scripts. Once a build system is written properly, it can be executed differently according to the needs of the user.
A full, clean build can be executed and guaranteed to work, because no leftover state pollutes the source tree.
An incremental build can be executed based off of localized changes. These builds don’t start from scratch but rather build only what is needed based off of the actual changes.
A clean simply deletes everything the build created, returning the source tree to the state it was in when it was first retrieved from the source code repository.
If a build system cannot do these things, it is stuck in the stone age and is not helping anyone.