Monday, 27 June 2011

On Coe’s First Law of Software Development

Last week, I discussed my Second Law of Software Development (self-documenting code isn’t), in reference to why proper, discrete documentation is a Good Thing. I didn’t get into what I think is the best time to write documentation (before you write code), because that’s a whole other rant in and of itself, but I did briefly mention my First Law of Software Development:

When it happens, you’ll know.

I’m not ashamed to admit that I’ve borrowed the phrasing from The Simpsons, but it’s a really good line. The First Law originally applied to baking in security when you’re developing SaaS, but other situations keep coming up where, if I’d kept in mind that when it happens, you’ll know, I could have avoided a whole lot of hassle.

Simply put, the First Law is all about trying to see things coming, and being prepared for them. I could have borrowed from Scouts and gone with be prepared, but it doesn’t quite appeal to my sense of humour. The fact is, something will eventually go wrong, and when it finally happens, you’ll know. And when you look at the block of code that’s to blame, you’ll ask yourself why you didn’t code defensively for it in the first place.

It originally came to me when I was writing a CRM tool for the company I worked for in 2007. Inspired by a software engineering professor at my university, I wanted to code against bad input. At first, it was about malicious input, but as time has gone on, it really is about just generally bad input. The original motivation was about accepting the fact that at some point in time, someone, somewhere, is going to discover and exploit a weakness in your software. You don’t want to assume that all of your users will be nefarious little pissants, but in the interest of your good users, you ought to assume that your average long-term number of less-than-trustworthy users is nonzero.

So, eventually, someone will try to misuse your software. But does it stop there? The correct answer is no, no it doesn’t. While you’re validating your input against inappropriate behaviour, you can just as easily, if not more easily, validate for correct behaviour: that your users haven’t accidentally done something wrong. Type checking falls under this umbrella, and it’s useful both in the functions that retrieve user input and the functions that process it. This is particularly important in weakly typed languages, because you can’t reliably just cast your input into a variable of type x (particularly in JavaScript, where concatenation and mathematical addition use the same operator). When users with honourable intentions provide input that isn’t what it should be, you have a problem (maybe it’s in your documentation… but that’s another post). Maybe the input is well-formed, but has unexpected side-effects. When it happens, you’ll know.
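To make the JavaScript point concrete, here’s a minimal sketch of the kind of type checking I mean. The function and field name are hypothetical, not from any particular tool:

```javascript
// Validate a numeric form field before doing arithmetic with it.
// The name "parseQuantity" is illustrative only.
function parseQuantity(raw) {
  // Number("") and Number(null) are both 0, so reject empty input first.
  if (typeof raw !== "string" || raw.trim() === "") {
    throw new TypeError("quantity is required");
  }
  const n = Number(raw);
  // Number() yields NaN for anything that isn't numeric.
  if (!Number.isFinite(n) || !Number.isInteger(n) || n < 1) {
    throw new RangeError("quantity must be a positive whole number");
  }
  return n;
}

// Without the check, + silently concatenates: "5" + 5 === "55".
// With it: parseQuantity("5") + 5 === 10.
```

The same idea applies whether the bad input is malicious or an honest mistake: validate at the boundary, and the code that processes the value can trust its type.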

Now that you’re checking your user input for validity and intent, are you done? Probably not. In this day and age, software doesn’t exist in a vacuum (apart from the little noddy programs you write to prove that you can handle the concept you were just taught). There are external subsystems that you rely on. In an ideal world, you’d get perfect data from them, but this isn’t an ideal world. Databases get overloaded and refuse connections. Servers get restarted and services don’t always come up correctly, if at all. Connections time out, or you forget whether or not a request has to go through a proxy. When it happens, you’ll know, because all of a sudden, your software breaks. Hard. You need to figure out what subsystem failed, and more importantly, why, so that you can prevent it from happening that way in the future.

However, that isn’t enough. You should have been ready for that failure. You can’t assume that all the other subsystems will be there 100% of the time. Assume that your caching layer will disappear at an inconvenient moment. Know that your database won’t always give you a result set. If you have to call out to a separately managed web service, do not rely on it being there, or having the same API forever. Code defensively for the fact that eventually, something will go wrong, and you won’t be watching when it happens.
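The cache-can-disappear case can be sketched in JavaScript. Every name here (getUserProfile, cache.get, db.findUser) is a hypothetical stand-in, not a real API; the point is the shape of the fallback, not the specific calls:

```javascript
// Treat the caching layer as an optimisation, not a dependency:
// if it's down, log the failure and fall through to the database.
async function getUserProfile(userId, cache, db) {
  try {
    const cached = await cache.get(`user:${userId}`);
    if (cached !== undefined && cached !== null) return cached;
  } catch (err) {
    console.warn("cache unavailable, falling back to database:", err.message);
  }

  const profile = await db.findUser(userId);
  if (!profile) {
    // The database won't always give you a result set, so make the
    // empty case explicit instead of letting undefined leak upward.
    throw new Error(`no such user: ${userId}`);
  }
  return profile;
}
```

The design choice is that a cache failure degrades performance, while only a database failure degrades correctness, and the code makes that distinction explicit rather than discovering it in production.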

So there’s a very good reason why the First Law of Software Development is when it happens, you’ll know. Eventually, “it” will happen, and when you figure out what “it” was, and where it caused you problems, it’ll seem so obvious that there was a point of failure, or a weakness, that you’ll ask yourself why this problem wasn’t coded against in the first place.

Wednesday, 15 June 2011

Coe's Second Law Of Software Development

Before I begin, let me just put it out there that I really like the Zend Framework. I know, I know, laying down your cards about your favourite editor/framework/OS/whatever is liable to set off holy wars, but I really like ZF. It’s clean, and I like the mix-and-match properties of it that allow me to use only as much of the framework as I need. It’s this very property that I’ve used to great advantage in Project Alchemy and Portico. But I’ve always had one complaint about it.

The documentation in the reference guide, and to a roughly equal extent, in the PHPDoc navigator, is really flaky. The full functionality isn’t properly described in the reference guide, and the PHPDoc doesn’t provide enough information about the API. I’ve found, on several occasions, that I have to dig into the code simply in order to figure out how to use some methods.

Anthony Wlodarski sees this as a positive of Zend Framework; that when ZF community wonks tell you to RTFS, it really is for your own good; that ZF really is that self-documenting. He says,

One thing I learned early on with ZF was that the curators and associates in the ZF ecosystem always fall back to the root of “read the code/api/documentation”. With good reason too! It is not the volunteers simply shrugging you off but it is for your own good

Unfortunately, it’s been my experience that self-documenting code isn’t. Let’s call that “Coe’s Second Law Of Software Development” (the First being when it happens, you’ll know). This is how strongly I feel about the issue. Far and away, the code itself is always the last place you want to look to figure out how it works, and only ever if you have a fair amount of time on your hands, because deciphering another developer’s idiosyncrasies is harder than writing new code. And if you have to look through that code to figure out what the correct usage of the tool is, then someone isn’t doing their job properly, particularly when Zend Framework has the backing of Zend.

I’ve been working for $EMPLOYER$ for more than a year now, and I’ve worked with a number of our internal tools, and across the board, I keep getting bitten by the fact that our self-documenting code isn’t. Self-documenting code means that there’s no easily-accessible, central repository of information about how the tools are supposed to work, or about what inputs to provide and outputs to expect. Self-documenting code means that when something goes wrong and the person who originally wrote the code isn’t around, the poor sap who has to correct the problem has to figure out what the code is supposed to do. Self-documenting code means that when your prototypes (or even production tools!) fail without a clear error code, you have to either start shotgun debugging, trying to figure out what you’re doing wrong (or what changed); or you have to ask the project manager, who will ask a developer to dig through the code. This increases the turnaround time on all problems.

Self-documenting code is fine when you’re writing little noddy programs for first-, second-, and even some third-year classes, where the functionality is defined in the assignment description, and the problems are straightforward enough that what you’re doing is actually clear at a quick glance. When you’re writing tools for distribution to parties outside of the small group of people who are developing it, you owe it to yourself, to your QA team, to your present and future colleagues, and more importantly to your users, to write good, clear documentation, and to do so ahead of time, because then you only have to update the documentation to reflect what changed about your plan.

But when someone tells you excitedly that their code is self-documenting, remember: self-documenting code isn’t.