Sunday 24 April 2016

Looking back on looking forward

I was just poking around in the BlackBerry PRIV user guide, in an attempt to figure out what USB-based video out standard it uses, when I noticed that certain PRIV models support wireless charging, using the Qi and PMA standards. I got excited (I’m planning on buying one, eventually, once I feel that I’ve got my money’s worth out of the spare battery I bought in the fall [read: its lifetime is about as crap as the original’s]) and checked out whether, for example, IKEA’s wireless chargers support either of those standards. Good news for me: they do. I don’t have one yet, but we have a bunch of other IKEA furniture, and at least that way I can get a lamp, or an end-table, with a built-in charger for my phone. Hurray!

And I got to thinking about wireless charging, and this thought occurred to me: “great, I see that Android and BlackBerry have caught up to where Palm was seven motherfucking years ago.” Palm, and by extension HP, was way ahead of the curve in mobile computing, and for a very good reason: wireless charging shipped on the Palm Pre back in 2009. The OS in that phone had real multitasking from day one, beating Apple to the punch by a year. And Palm beat everybody to the punch with the original Palm Pilot. While Apple was dicking around on the Newton with full handwriting recognition, Palm had a comparatively minuscule processor, and they nailed it. Given their technology constraints, instead of trying to fit the technology to the ideal, they identified the paradigm they felt would be the killer app (pen-like input instead of keys, because the machine was so physically small), found a way to apply that paradigm to a pocket-sized device, and won the market.

They kind of reinvented handwriting with Graffiti, but that also allowed them to create an input mode that didn’t actually require user verification the way that even normal handwriting on the page does (e.g. am I staying within the lines? Is this even legible?). Graffiti allowed the Palm user to keep writing away without moving their hand across the screen, with a simpler gesture set that was still sufficiently similar to real handwriting to be easy to learn, but also fit inside the tiny DragonBall processor they had. They revolutionised computer-supported time organisation, and by God if I didn’t want one for myself the first time I laid eyes on it.

Then BlackBerry (well, Research In Motion at the time) comes along with the original BlackBerry. They did the same thing: figured out how to fill a technology gap (simple email, wherever the person was), made it pocket-sized, and ran all the way to the bank with it. And like Palm, they did it with something that the user doesn’t actually have to look at in order to input data.

And thinking about my deep and abiding love for both of these technologies reminded me of why I called my “small business” (which I have never really got around to getting off the ground in any meaningful way) Prophecy Newmedia. The notion was that Prophecy would provide its clients with the right technology for their online presence, not just the flashiest, newest stuff. Flash included. Flash was a pain in the ass from beginning to end, and its accessibility was notably awful. After a weird artist love affair with it, it was relegated to streaming video once CSS and AJAX calls were able to do everything else Flash could, and now that HTML5 can handle the video too, the vendor itself is winding Flash down.

While I haven’t necessarily picked the winning vendors every time (God rest Palm’s soul, and BlackBerry hasn’t been looking very hot since they shit the bed trying to beat Apple at consumer media devices), I feel like when I see an approach to doing things that works… it works. Handheld computing is something I’ve been a devotee of for probably twenty years. Wireless charging, oh my God yes (because wires suck, just plonk the damn thing down and be done with it). Machine readability of websites was part of why I didn’t like Flash in the first place, and Google’s off and running with it, along with everybody else who does big data crunching.

What’s next? I don’t know yet. But it’s going to be exciting.

Thursday 28 January 2016

Who owns the backlog?

A conversation about bug lifetimes came up in a Slack discussion at work today, and one core question really summarised the entire tenor of the conversation: who owns the backlog?

The answer, if we’re being honest with ourselves, is everyone. The backlog of work to be done, and the prioritisation of that work, is not the exclusive province of any one member of the team, or any one group within your organisation. Everyone who has a stake in the backlog owns it, jointly.

No, seriously. This is especially true if you work in an organisational structure similar to Spotify’s popular squads model. It’s definitely a much more refreshing take on the division of labour than the fairly typical siloing of The Business, The Developers, and The Testers. Admittedly, even then, there really isn’t much excuse not to own your backlog, but when your work priorities are determined by someone on the other side of the office (if they even work in the same building as you), it’s at least understandable that developers might not have as much say about it as they’d like.

But at the end of the day, if you’re invited to periodic planning and prioritisation sessions, you own the backlog as much as everyone else in the room. Developers aren’t there to just be technical consultants on the projected time to complete a given feature. You’re there to contribute to what gets priority over something else.

This is all abstract and fluffy, I know, so here’s an example from my own life.

In the third quarter of 2015, my squad was charged with implementing a new feature on our mobile apps. As an organisation, we’d tried it once before, in 2010 or 2011, and it was an embarrassing failure, so I suspect that, throughout the business, people were a little shy about trying it again. But we had some new technology in place to support this feature, and we thought we could achieve it in a couple of months, by standing on the shoulders of giants, so to speak.

At the same time, we’d been trying to commit to dual-track agile, so we’d set an aggressive schedule of in-house user testing sessions with the Android app, to validate our workflows and UI. This user testing is fairly expensive, especially if you miss a date, so we agreed upon certain milestones for each round of testing.

In order to make these milestones, we had to make a number of compromises internally. Lots of stuff where we said, “well, it doesn’t need to be complete for the test, but we need something in this UI element, so we’ll hardcode this thing, and fix it after the test.” The only problem—and you can probably see this coming already—is that we didn’t fix those things promptly after the user test. We continued plowing through new features to make the next test. Like I said, it was an aggressive schedule.

Cut to a month or so later, when those hardcoded elements start causing bugs in the interface. I probably spent a week and a half correcting and compensating for things we’d considered “acceptable” for the user tests; things that, if I’d either implemented them properly in the first place or corrected them immediately after the test, would have taken a day. This was obviously really frustrating, and I mentioned that frustration in my end-of-year review. I complained that, because we were so focussed on the milestones, the launch wasn’t properly monitored and other work got delayed, and I said I was disappointed with having to release something retroactively dubbed a “public beta”.

When my manager and I reviewed the year, both his comments and mine, he had one simple question for me that stopped me in my tracks: why didn’t you say something right then? And I didn’t have an answer for him, because I hadn’t said anything at the time it came up. I hadn’t opened tickets for the defects. I may have mentioned the concessions made, in passing, during standups, and counted on my project manager and product owner to open the tickets.

So why didn’t I say something? Because I was acting like someone else owned the backlog, and like my job was just to take work off it. That isn’t true. I could have saved myself, and everyone else, a lot of trouble by recognising that I own the backlog as much as anybody else. If bugs are languishing in the backlog, then it’s your responsibility, as a member of the team, to bring that up during planning sessions. Your manager will appreciate your taking the lead on reducing technical debt and advocating for improving your code base.

Wednesday 20 January 2016

On emotional labour

My awesome wife, who’s awesome, wrote an amazing tweetstorm earlier tonight about the unpaid emotional labour women are socialised to do, and it’s made me pensive about a few things.

First up: I am absolutely guilty of a lot of these things. I hope less so recently than I used to be, but I have enough self-awareness to admit that yes, I have said and done these things. (She promises me it wasn’t directed at me, and I think I know her well enough to know she’d never subtweet the Hell out of me like that, but that doesn’t mean I’m innocent. I think even that reassurance is part of the truckload of unpaid emotional labour that she, as a woman, has been brought up to do.) And to be quite honest, if you’re a guy… you’ve done these things too. If you look at this and think to yourself, “well, I don’t think I’m like that,” you almost certainly are like that. Unless you can specifically identify things you do that are not like that… you’re like that.

Because, honestly, guys are socialised to do exactly that. And you even see a lot of it in which jobs in tech are coded as being “inherently” oriented toward men or women. QA and project management, which are both roles that require diligence, attention to minute detail, and an aptitude for predicting someone’s needs before they’re aware of them, are coded as feminine, and performed by women much more than most other roles in the tech industry. Of the tech roles within a tech company, they’re the only positions that I’ve consistently observed having anything close to gender parity.

So why do I think that guys are socialised not to do this work? Because, let’s be honest, it is work. I had an opportunity to fill in for my team’s project manager (who, it’s important to point out, is a woman) late last year, and holy mother of God, that job is hard. Like, Nintendo-hard. And at least she’s getting paid for it, but I guarantee she’s been doing this work her whole life, because that’s what society’s taught her to do. And I haven’t. Instead, I’ve grown up around a lot of media that’s suggested that one set of gifts (chocolate, teddy bears, and diamonds) will satisfy any woman I may be involved with (here’s a hint: that’s a load of crap), and an escalating, never-ending series of ostensible jokes along the lines of “bitches be cray” when her birthday or anniversary is forgotten.

And why the hell shouldn’t they be? Seriously, think about it for a minute. The person who you’ve decided to open up to and be most vulnerable with comes home on your birthday, or on the anniversary of the day you got married, and they have no idea what happened on this day, however many years ago. They can’t bring it to mind at all.

Wouldn’t you be upset? I would be. I have been.

Long story short, just like math and art, the everyday things that women “just seem to do better than men” are things that they’ve practised, constantly, since they were very young. (Men have, conversely, been raised to ignore those things.) Anything you do every day of your life, for about as long as you can remember… yeah, you’re going to be pretty damn good at it. That doesn’t make it easy, just habitual. (Professional athletes are a great example of this.) If you really want to show some kind of appreciation for the women in your life, and all the things they do for you… do them yourself.

Thursday 13 August 2015

Not-so-hidden treasures on TV

I’ve been watching some really good TV lately. And I don’t just mean the sixth season of Archer, which is a hilarious return-to-form for the show, and for the main character, albeit with some really interesting, if spectacularly raunchy, character development. This is not a show for kids. All of the characters are overtly terrible people who I would have no desire whatsoever to associate with in real life… but because it’s a cartoon about freelance James Bond-style spies, they kind of get away with it.

What I’m specifically thinking about right now is BoJack Horseman (another adult-oriented cartoon, not remotely intended for kids, in which the main character is an incredible asshole), Halt and Catch Fire, and My Little Pony: Friendship is Magic.

Now, if you know me in real life, you shouldn’t be surprised that My Little Pony is showing up here. My family discovered it on Netflix, and my then-three-year-old son got hooked. And in turn, I got hooked. It’s got a lot of really good messages for little kids, and there are tonnes of things for adults, whether they’re watching to keep tabs on the media their kids are consuming, or watching the show on its own merits. For example, the recurring Big Bad God of Chaos, appropriately named Discord, is voiced by John de Lancie, specifically because the character is the show’s equivalent of Q! My recent favourite example of this is an episode where one character has a series of nightmares, all about her current significant anxieties. Eventually, the dreams culminate in her literally confronting her own shadow—if you’re at all familiar with Jungian theory, it’s not even metaphor any more, and I love it.

But when you keep watching it, and you start to get into it, you realise that there’s a rich storytelling universe that is driven significantly by the characters. Their actions have permanent consequences (the work that goes into maintaining continuity is intense, and it shows), and whatever scrapes they get into over the course of a story, they learn from. Characters grow over the course of the show.

My son’s favourite pony is Rainbow Dash. At the start of the series, she’s every obnoxious jock stereotype, all at once. She’s athletically talented, and she isn’t at all hesitant to let you know she’s the coolest motherfucker around. At one point, she actually tells the main character of the show, the bookish Twilight Sparkle, that “reading is for eggheads, and I’m not an egghead.” And all of this has bitten her in the ass along the way. She’s learned from her mistakes, and she’s grown.

Take an early episode of Season 5, when her pet tortoise starts to go into hibernation at the beginning of winter. She can’t bear the thought of being without him for three months, and she ends up snapping at her friends when they start to say “hibernate”, so she vows to do whatever she can to prevent the arrival of winter. But time marches on, and even the great Rainbow Dash can’t prevent the seasons from changing. She comes to accept that Winter Is Coming, and her pet is going away for a while… and she gets really depressed about it. One scene toward the end starts with her curled up with her pet in bed, with no enthusiasm for, or even interest in, anything. When given some tough love by the most softspoken of her friends, she ends up bawling her eyes out, both with the classic cartoon eye-gushers, and with a fairly realistic ugly-crying face. She’s really fucked up by this prospect, but she needs the cry to accept that this is going to happen. After her pet buries himself in the mud, she declines the company of her friends, and sits down with a book to read to him as he begins hibernating… and the episode ends there. It’s not a reset. Things will be all right… but not yet.

This is some major character development for what is ostensibly a kids’ show. And I really appreciate that one of the characters with whom little boys in the audience identify (case in point, my kid) was given an episode to stretch out emotionally, and have her friends not only accept that she’s upset, but empathise with her, to the point where the lot of them cry (including blink-and-you’ll-miss-it mascara streaks on the fashion designer). Then she’s given the space she asks for. Kids’ shows don’t normally have this degree of continuity, let alone character development. But the writers and showrunners behind My Little Pony are doing something that I think can contribute significantly to the emotional development of their audience… and they have an audience that spans both age and gender.

At the same time, BoJack and Halt and Catch Fire both spent their second seasons trying to depress their audience… or something. When it’s one Wham Episode (as TV Tropes refers to it), that’s one thing, but these shows both went with full-on Wham Seasons. In every episode, either a character lit a metaphorical bridge on fire, or had something heartwrenchingly awful happen to them. In BoJack’s case, the writers gave us an in-universe equivalent of Bill Cosby and everything that’s coming to light about him… and after teasing us with a standard sitcom-y build-up where you think the character’s going to sway the court of public opinion… it ends exactly where it always ends. And the show pulls shit like this for twelve episodes in a row.

Nonetheless, if you haven’t checked BoJack out, watch it. It’s very crude, and pretty much all the characters are terrible people you’d hate in real life, but the writing is just stellar. There’s lots of character development in this show too, and exploration of emotional cause-and-effect, and a very conscious decision on the part of the writing staff to make a diverse cast, which I really appreciate.

Finally, Halt and Catch Fire. I haven’t had a show where I am indisputably the target demographic since I was sixteen. And that show lasted less than a season, so the fact that this one got renewed for a second season (I’m praying for a third) is just delightful. It’s an AMC show, so they (predictably) show their work in creating a realistic historical environment. There’s a great article in Vox about what this season did amazingly right about gender, by completely flipping the stereotypes on their heads. The two core women in the cast are given the positions of authority, and they provide excellent complements to each other in running a business. The ex-boyfriend of one and the husband of the other, instead, get to act completely on the basis of emotion. The husband stays home with the kids this season, and the ex-boyfriend makes a series of increasingly rash decisions because of his unresolved feelings for Cameron. And all of them basically have everything fall apart for them over the course of the season. I don’t want to spoil too much if you haven’t seen it, but I can’t stress enough how much you need to watch this show, because it’s absolutely phenomenal.

Lots of good TV shows out there to be watched. I’m really glad I’ve made the time to watch these.

Wednesday 29 October 2014

On taking the time to see what happened

I kind of want to watch the video of the failed Antares launch from last night, for the sake of context for the photo of the launch site that NASA recently published on Google+… but I saw the photos of the explosion on Twitter shortly after it happened, and just kind of shivered. Knowing perfectly well that no one was killed in the accident didn’t help—it was clearly a huge blow for the company. As someone on Twitter said, spaceflight is hard. NASA and Roscosmos have it pretty well sorted at this point, having been in the game longer than anyone else (with full credit due, of course, to CNSA and ISRO), so it really does bear pointing out that Orbital Sciences and SpaceX have only been putting rockets into orbit since about 1990 and 2008, respectively. It’s not that SpaceX is necessarily doing launches and rockets better than Orbital; Orbital has way more experience. SpaceX just hasn’t had a rocket blow up on the pad yet. Hopefully, that never happens, but the reality is that there’s fundamentally little difference between a rocket and a bomb.

I’ve been trying to stay on top of the commercial American launches, almost just because the advent of commercial spaceflight is really exciting to me. I’ve seen two Falcon 9 launches so far—the most recent one, and (if I recall correctly) its first mission to the ISS—but I haven’t had the opportunity to see anything from Orbital Sciences at Wallops. Maybe it’s a stronger cultural affinity for Cape Canaveral; as far as I knew until the last year or so, the only launch facility NASA had was in Florida. But every time a Virginia launch is mentioned, I secretly hope that this time it’ll be visible from Toronto. I look at the maps of where the launch will be visible from, and at what angle, and I’m always a little disappointed that Toronto is well outside the arc. I took the opportunity to see the launch of STS-135 when my family travelled to Orlando for a Disney World/Universal Studios holiday. Our timing coincided with the launch date, and having never seen a Space Shuttle launch before, my wife, my sister-in-law, and I took my then-six-month-old son from Greater Orlando to Titusville to visit Kennedy Space Center. We didn’t make it to Titusville for the launch, but we saw it, pretty clearly, from the roughly twenty miles away that we were when the countdown hit the last few minutes. That was an incredible experience, even from that distance (because, by God, you can hear it), and I’d love to have that experience again.

I’m also interested in the Antares launch, and specifically its failure, from a process engineering perspective. A few people on Twitter and Google+ noted that, as soon as the rocket exploded, Orbital Sciences’ Twitter feed went silent. Reports came in from NASA about the same time that the Orbital mission controllers were giving witness statements and storing the telemetry they’d had from the rocket up until that point. In their business, this is absolutely critical for figuring out what caused the incident, so that it can be avoided in the future. Rockets are expensive, so having all that cash go up in flames is a disaster.

But in technology, we can certainly learn from this. So often, when something goes wrong on a server, particularly a production server, our first response is simply to fix it, and get the website running again. Don’t get me wrong; this is important, too—in an industry where companies can live or die on uptime, getting the broken services fixed as soon as possible matters. But preventing the problem from happening again is equally important, because if you’re constantly fighting fires, you can’t improve your offering. When something goes wrong, and you have the option, your first response should be to remove the broken machine from the load balancer. Disconnect it from any message queues that it might be listening to, but otherwise keep the environment untouched, so that you can perform some forensic analysis and discover what went wrong. In addition to redundancy, you also need logging. Oh my good good God, you need logging. Yes, logs take up disk space. That’s what tools like logrotate are for—logs take up space, sure, but gzipped logs take up a tenth of that space. And if you haven’t looked at those logs for, let’s say, six months… you probably have a solid enough service that you don’t need them any more. And if, for business reasons, you think you might… you can afford to buy more disks and archive your logs to tape. In the grand scheme of things, disks are cheap, but tape is cheaper.
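
For the curious, here’s a minimal sketch of what a logrotate policy along those lines might look like; the log path and the retention numbers are placeholders, not a recommendation:

    /var/log/mysite/*.log {
        # rotate weekly, keeping roughly six months of history
        weekly
        rotate 26
        # gzip rotated logs, but leave the newest rotation uncompressed
        compress
        delaycompress
        # don't complain about missing or empty logs
        missingok
        notifempty
    }

Six months of gzipped weekly logs costs you next to nothing in disk, and it’s all right there the next time you need to do forensics.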

So, ultimately, what’s the takeaway for the software industry? Log everything you can. Track everything you can. And when the shit hits the fan, stop, and gather information before you do anything. I know cloud computing gives us the option (when we plan it out well) of just dropping a damaged cloud instance on the floor, spinning up a new one, and walking away, but if you do that without even trying to diagnose what went wrong, you’ll never fix it.

Saturday 4 October 2014

A lesson learned the very hard way

Two nights ago, I took a quick look at a website I run with a few friends. It’s a sort of book recommendation site, where you describe some problem you’re facing in your life, and we recommend a book to help you through it. It’s fun to try to find just the right book for someone else, and it really makes you consider what you keep on your shelves.

But alas, it wasn’t responding well—the images were all fouled up, and when I tried to open up a particular article, the content was replaced by the text “GEISHA format” over and over again. So now I’m worried. Back to the homepage, and the entire thing—markup and everything—has been replaced by this text.

First things first: has anyone else ever heard of this attack? I can’t find a thing about it on Google, other than five or six other sites that were hit by it when Googlebot indexed them, one of them at least a year ago.

So anyway, I tried to SSH in, with no response. Pop onto my service provider to access the console (much as I wish I had the machine colocated, or even physically present in my home, I just can’t afford the hardware and the bandwidth fees), and that isn’t looking good, either.

All right, restart the server.

Now HTTP has gone completely nonresponsive. And when I access the console, it’s booted into initramfs instead of a normal Linux login. This thing is hosed. So I click the “Rescue Mode” button on my control panel, but it just falls into an error state. I can’t even rescue the thing. At this point, I’m assuming I’ve been shellshocked.

Very well. Open a ticket with support, describing my symptoms, asking if there’s any hope of getting my data back. I’m assuming, at this point, the filesystem’s been shredded. But late the next morning, I hear back. They’re able to access Rescue Mode, but the filesystem can’t fsck properly. Not feeling especially hopeful, I switch on Rescue Mode and log in.

And everything’s there. My Maildirs, my Subversion repositories, and all the sites I was hosting. Holy shit!

I promptly copied all that important stuff down to my personal computer, over the course of a few hours, and allowed Rescue Mode to end, and the machine to restart into its broken state. All right, I think, this is my cosmic punishment for not upgrading the machine from Ubuntu Hardy LTS, and not keeping the security packages up to date. Reinstall a new OS, with the latest version of Ubuntu they offer, and keep the bastard thing up to date.

Except that it doesn’t quite work that well. On trying to rebuild the new OS image… it goes into an error state again.

Well and truly hosed.

I spun up a new machine, in a new DC, and I’m in the process of reinstalling all the software packages and restoring the databases. Subversion’s staying out; this is definitely the straw that broke the camel’s back in terms of moving my personal projects to Git. Mail comes last, because setting up mail is such a pain in the ass.

And monitoring this time! And backups! Oh my God.

Let this be a lesson: if you aren’t monitoring it, you aren’t running a service. Keep backups (and, if you have the infrastructure, periodically test restoring from them). And keep your servers up to date. Especially security updates!
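
Even a dead-simple pair of cron jobs would have spared me most of this grief. A sketch, assuming rsync over SSH with key-based auth already set up, and a MySQL dump for the databases; every path and hostname here is a placeholder:

    # Nightly at 03:00: dump the databases, then push everything off-machine.
    0 3 * * * mysqldump --all-databases | gzip > /var/backups/mysql.sql.gz
    30 3 * * * rsync -az --delete /var/backups /var/www /etc backup@offsite.example.com:/srv/backups/

And then actually try restoring from the copies once in a while; an untested backup is just a rumour.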

And, might I add, many many thanks to the Rackspace customer support team. They really saved my bacon here.

Tuesday 16 September 2014

We need a role model

So Notch (along with all the other founders, it seems) has resigned his position at Mojang, and publicly distanced himself from anything to do with Minecraft, now that he’s personally worth in excess of $1B USD.

Why? The screed he left is a bit rambly, but it seems to boil down to not wanting the responsibility for the overall Minecraft community. Some of his other writings definitely convey that he’s grown pretty sick and tired of managing Mojang and all the extra issues that come with it. But he doesn’t want to have anything to do with Minecraft any more. He says, at the end, “Thank you for turning Minecraft into what it has become, but there are too many of you, and I can’t be responsible for something this big.”

I’m sorry, Notch, but you don’t have that choice. You are responsible for something as big as Minecraft, because you made it, and you put your name on it. You’re responsible for a one-hundred-million-strong user base, which means that over the course of the last five years, you have inevitably inspired more than a couple of people to become programmers. I don’t think Star Trek had an audience that big when it was in production, and it’s only too easy to find stories of the actors—not even Gene Roddenberry, but James Doohan and Nichelle Nichols—talking about how fans approached them, repeatedly, at conventions, telling them that the show had changed their lives and given them a career (and in one case I can recall, Doohan saved a young woman considering suicide).

You’ve established a subculture within gaming, Notch, and it’s one that seems to have a level playing field amongst genders and ages. Your game does what no other game does, and considering the fucking shit show that is gaming culture right now, you owe it to the world to show that, in fact, some gamers can treat people with mutual respect. You are a role model, and this is the time, of all times, to act like one.

Minecraft’s great. I have the Pocket Edition demo on my iPad, and I pop it up every now and then. I’ve been considering shelling out the $7 to be able to play it properly. But if you’re going to act like this… you know, maybe I don’t want to.

Come back to Minecraft, Notch. Your audience needs you.

Monday 2 June 2014

I am not an engineer, and neither are you

I’m called a Software Engineer. It says so when you look me up in my employer’s global address list: “Matthew Coe, Software Engineer.”

I am not a software engineer.

The most obvious place to begin is the fact that I don’t hold an undergraduate engineering degree from a board-accredited program. While I’ve been working at my current job for just over four years now (and had been in the field for three years before that), I’ve never worked for a year under a licensed professional engineer, nor have I attempted, let alone passed, the Professional Practice Exam.

I don’t even satisfy the basic requirements of education and experience in order to be an engineer.

But let’s say I had worked under a licensed professional engineer. It’s not quite completely outside of the realm of possibility, since the guy who hired me into my current job is an engineering graduate from McGill… though also not a licensed professional engineer. But let’s pretend he is. That would give me four years of what we call “engineering”, one of which would have been spent under an engineer. If we could pretend that my Bachelor of Computer Science is an engineering degree (and PEO does provide means to finagle exactly that, but I’d have to prove that I can engineer software to an interview board, among other things), then I’d be all set to take the exam.

Right?

There’s a hitch: what I do in this field that so many people call “software engineering”… isn’t engineering.

So what is engineering?

The Professional Engineers Act of Ontario states that the practice of professional engineering is

any act of planning, designing, composing, evaluating, advising, reporting, directing or supervising that requires the application of engineering principles and concerns the safeguarding of life, health, property, economic interests, the public welfare or the environment, or the managing of any such act.

There are very few places where writing software, on its own, falls within this definition. In fact, in 2008, PEO began recognising software engineering as a discipline of professional engineering, and in 2013 they published a practice guideline in which they set three criteria that a given software engineering project has to meet, in order to be considered professional engineering:

  • Where the software is used in a product that already falls within the practice of engineering (e.g. elevator controls, nuclear reactor controls, medical equipment such as gamma-ray cameras, etc.), and
  • Where the use of the software poses a risk to life, health, property or the public welfare, and
  • Where the design or analysis requires the application of engineering principles within the software (e.g. does engineering calculations), meets a requirement of engineering practice (e.g. a fail-safe system), or requires the application of the principles of engineering in its development.

Making a website to help people sell their stuff doesn’t qualify. I’m sorry, it doesn’t. To the best of my knowledge, nothing I’ve ever done has ever been part of an engineered system. Because I fail the first criterion, it’s clear that I’ve never really practised software engineering. The second doesn’t even seem particularly close: I’ve written and maintained an expert system that once singlehandedly DOS’d a primary database, but that doesn’t really pose a risk to life, health, property, or the public welfare.

The only thing that might be left to even quasi-justify calling software development “engineering” would be if, by and large, we applied engineering principles and disciplines to our work. Some shops certainly do. However, in the wider community, we’re still having arguments about if and when we should write unit tests! Many experienced developers who haven’t tried TDD decry it as a waste of time (hint: it’s not). There’s been recent discussion in the software development blogosphere about the early death of test-driven development, and whether it’s something that should be considered a stepping stone, or disposed of altogether.

This certainly isn’t the first time I’ve seen a practice receive such vitriolic hatred within only a few years of its wide public adoption. TDD got its start as a practice within Extreme Programming, in 1999, and fifteen years later, here we are, saying it’s borderline useless. For contrast, the HAZOP (hazard and operability study) engineering process was first implemented in 1963, formally defined in 1974, and named HAZOP in 1983. It’s still taught today in university as a fairly fundamental engineering practice. While I appreciate that the advancement of the software development field moves a lot faster than, say, chemical engineering, I only heard about TDD five or six years into my professional practice. It just seems a little hasty to be roundly rejecting something that not everybody even knows about.

I’m not trying to suggest that we don’t ever debate the processes that we use to write software, or that TDD is the be-all, end-all greatest testing method ever for all projects. If we consider that software developers come from a wide variety of backgrounds, and the projects that we work on are equally varied, then trying to say that thus-and-such a practice is no good, ever, is as foolish as promoting it as the One True Way. The truth is somewhere in the middle: Test-driven development is one practice among many that can be used during software development to ensure quality from the ground up. I happen to think it’s a very good practice, and I’m working to get back on the horse, but if for whatever reason it doesn’t make sense for your project, then by all means, don’t use it. Find and use the practices that best suit your project’s requirements. Professional engineers are just as choosy about what processes are relevant for what projects and what environments. Just because it isn’t relevant to you, now, doesn’t make it completely worthless to everyone, everywhere.

It’s not a competition

The debate around test-driven development reflects a deeper issue within software development: we engage in holy wars all the time, and about frivolous shit. Editors. Operating systems. Languages. Task management methods. Testing. Delivery mechanisms. You name it, I guarantee two developers have fought about it until they were blue in the face. There’s a reason, and it’s not a particularly good one, that people say “when two programmers agree, they hold a majority”. There’s so much of the software development culture that encourages us to be fiercely independent. While office planners have been moving to open-plan spaces and long lines of desks, most of us would probably much rather work in a quiet cubicle or office, or at least get some good headphones on, filled with, in my case, Skrillex and Jean-Michel Jarre. Tune out all distractions, and just get to the business of writing software.

After all, most of the biggest names within software development, who created some of the most important tools, got where they are because of work they did mostly, if not entirely, by themselves. Theo de Raadt created OpenBSD. Guido van Rossum: Python. Bjarne Stroustrup: C++. John Carmack: id Software. Mark Zuckerberg. Enough said. Even C and Unix are well understood to have been written by two or three men each. Large, collaborative teams are seen as the spawning sites of baroque monstrosities like your bank’s back-office software, or Windows ME, or obviously committee-designed languages like Ada and COBOL. It’s as though there’s an unwritten rule that if you want to make something cool, then you have to work alone. And there’s also the Open Source credo that if you want a software package to do something it doesn’t, you add that feature in. And if the maintainer doesn’t want to merge it, then you can fork it, and go it alone. Lone-wolf development is almost expected.

However, this kind of behaviour is really what sets software development apart from professional engineering. Engineers join professional associations and sit on standards committees in order to improve the state of the field. In fact, some engineering standards are even part of legal regulations—to a limited extent, engineers are occasionally able to set the minimums that all engineers in that field and jurisdiction must abide by. Software development standards, on the other hand, occasionally get viewed as hindrances, and other than the Sarbanes-Oxley Act, I can’t think of anything off the top of my head that is legally binding on a software developer.

By contrast, we collaborate only when we have to. In fact, the only things I’ve seen developers resist more than test-driven development are code review and paired programming. Professional engineers have peer review and teamwork whipped into them in university, to the extent that trying to go it alone is basically anathema. The field of engineering, like software development, is so vast that no individual can possibly know everything, so you work with other people to cover the gaps in what you know, and to get other people looking at your output before it goes out the door. I’m not just referring to testers here. This applies to design, too. I’d imagine most engineering firms don’t let a plan leave the building with only one engineer having seen it, even if only one put their seal on it.

Who else famously worked alone? Linus Torvalds, whose development model inspired Eric Raymond’s quite famous observation, dubbed Linus’s Law, that “given enough eyeballs, all bugs are shallow.” If that isn’t a ringing endorsement of peer review and cooperation among software developers, then I don’t know what is. I know how adversarial code review can feel at first; it’s a hell of a mental hurdle to clear. But if everyone recognises that this is for the sake of improving the product first, and then for improving the team, and if you can keep that in mind when you’re reading your reviews, then your own work will improve significantly, because you’ll be learning from each other.

It’s about continuous improvement

Continuous improvement is another one of those things that professional engineering emphasises. I don’t mean trying out the latest toys, and trying to keep on top of the latest literature, though the latter is certainly part of it. As a team, you have to constantly reflect back on your work and your processes, to see what’s working well for you and what isn’t. You can apply this to yourself as well; this is why most larger companies have a self-assessment aspect to your yearly review. This is also precisely why Scrum includes an end-of-sprint retrospective meeting, where the team discusses what’s going well, and what needs to change. I’ve seen a lot of developers resist retrospectives as a waste of time. If no one acts on the agreed-upon changes to the process, then yeah, retrospectives are a waste of time, but if you want to act like engineers, then you’ll do it. Debriefing meetings shouldn’t only be held when things go wrong (which is why I don’t like calling them post-mortems); they should happen after wildly successful projects, too, so you can discuss what was learned while working on the project, and how that can be used to make things even better in the future. That, too, is the purpose of Scrum’s retrospective.

But software developers resist meetings. Meetings take away from our time in front of our keyboards, and disrupt our flow. Product owners are widely regarded as having meetings for the sake of having meetings. But those planning meetings, properly prepared for, can be incredibly valuable, because you can ask the product owners your questions about the upcoming work well before it shows up in your to-be-worked-on hopper. Then, instead of panicking because the copy isn’t localised for all the languages you’re required to support, or hastily trying to mash it in before the release cutoff, the story can include everything that’s needed for the developers to do their work up front, and go in front of the product owners in a fairly complete state. And, as an added bonus, you won’t get surprised, halfway through a sprint, when a story turns out to be way more work than you originally thought, based on the summary.

These aren’t easy habits to change, and I’ll certainly be the first person to admit it. We’ve all been socialised within this field to perform in a certain way, and when you’re around colleagues who also act this way, there’s a great deal of peer pressure to continue doing it. But, as they say, change comes from within, so if you want to apply engineering principles and practices to your own work, then you can and should do it, to whatever extent is possible within your particular working environment. A great place to start is with the Association for Computing Machinery’s Code of Ethics. It’s fairly consistent with most engineering codes of ethics, within the context of software development, so you can at least use it as a stepping stone to introduce other engineering principles to your work. If you work in a Scrum, or Lean, or Kanban shop, go study the literature of the entire process, and make sure that when you sit down to work, you completely understand what it is that’s required of you.

The problem of nomenclature

Even if you were to do that, and absorb and adopt every relevant practice guideline that PEO requires of professional engineers, this still doesn’t magically make you a software engineer. Not only are there semi-legally binding guidelines about what’s considered software engineering, there are also regulations about who can use the title “engineer”. The same Act that gives PEO the authority to establish standards of practice for professional engineers also clearly establishes penalties for inappropriate uses of the title “engineer”. Specifically,

every person who is not a holder of a licence or a temporary licence and who,
(a) uses the title “professional engineer” or “ingénieur” or an abbreviation or variation thereof as an occupational or business designation;
(a.1) uses the title “engineer” or an abbreviation of that title in a manner that will lead to the belief that the person may engage in the practice of professional engineering;
(b) uses a term, title or description that will lead to the belief that the person may engage in the practice of professional engineering; or
(c) uses a seal that will lead to the belief that the person is a professional engineer,
is guilty of an offence and on conviction is liable for the first offence to a fine of not more than $10 000 and for each subsequent offence to a fine of not more than $25 000.

Since we know that PEO recognises software engineering within engineering projects, it’s not unreasonable to suggest that having the phrase “software engineer” on your business card could lead to the belief that you may engage in the practice of professional engineering. But if you don’t have your license (or at least work under the direct supervision of someone who does), that simply isn’t true.

Like I said up top, I’m called a Software Engineer by my employer. But when I give you my business card, you’ll see it says “software developer”.

I am not a software engineer.

Tuesday 29 April 2014

Upgrade your models from PHP to Java

I’ve recently had an opportunity to work with my team’s new developer, as part of the ongoing campaign to bring him over to core Java development from our soon-to-be end-of-lifed PHP tools. Since I was once in his position, only two years ago—in fact, he inherited the PHP tools from me when I was forcefully reassigned to the Java team—I feel a certain affinity toward him, and a desire to see him succeed.

Like me, he also has some limited Java experience, but nothing at the enterprise scale I now work with every day, and nothing especially recent. So, I gave him a copy of the same Java certification handbook I used to prepare for my OCA Java 7 exam, as well as any other resources I could track down that seemed to be potentially helpful. This sprint he’s physically, and semi-officially, joined our team, working on the replacement for the product he’s been maintaining since he was hired.

And just to make the learning curve a little bit steeper, this tool uses JavaServer Faces. If you’ve developed for JSF before, you’re familiar with how much of a thorough pain in the ass it can be. Apparently we’re trying to weed out the non-hackers, Gunnery Sergeant Hartman style.

So, as part of his team onboarding process, we picked up a task to begin migrating data from the old tool to the new one. On investigating the requirements, and the destination data model, we discovered that one of the elements that this task expects had not yet been implemented. What a great opportunity! Not only is he essentially new to Java, he’s also new to test-driven development, so I gave him a quick walkthrough of the test process while we tried to write a test for the new features we needed to implement.

As a quick sidebar, in trying to write the tests, we quickly discovered that we were planning on modifying (or at least starting with) the wrong layer. If we’d just started writing code, it probably would have taken half an hour or more to discover this. By trying to write the test first, we figured this out within ten minutes, because the integration points quickly made no sense for what we were trying to do. Hurray!

Anyway, lunch time promptly rolled around while I was writing up the test. I’d suggested we play “TDD ping-pong” (I write a test, then he implements it), and the test was mostly set up, so I said I’d finish up the test on the new service we needed, and stub out the data-access object and the backing entity so he’d at least have methods to work with. When I checked in with him about an hour after sending it his way, he mentioned something that hadn’t occurred to me, because I had become so used to Java: he was completely unfamiliar with enterprise Java’s usual architecture of service, DAO, and DTO.

And of course, why would he be? I’m not aware of any PHP frameworks that use this architecture, because it’s based on an availability of dependency injection, compiled code and persistent classes that is pretty much anathema to the entire request lifecycle of PHP. For every request, PHP loads, compiles, and executes each class anew. So pre-loading your business model with the CRUD utility methods, and operating on them as semi-proper Objects that can persist themselves to your stable storage, is practically a necessity. Fat model, skinny controller, indeed.

Java has a huge advantage here, because the service, the DAO, and the whole persistence layer never leave memory between requests; only the request-specific context does. Classes don’t get reloaded until the servlet container gets restarted (unless you’re using JSF, in which case they’re serialised onto disk and rehydrated when you restart, for… some reason). So you can write your code so that your controller asks a service for records, and the service calls out to the DAO, which returns an entity (or a collection thereof), or a single field’s value for a given identifier.
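
If you’ve never seen the pattern, here’s a minimal sketch of the layering. The Widget names are hypothetical, and I’m leaving out the annotations that a real Spring or Java EE application would use to wire up the injection:

    import java.util.HashMap;
    import java.util.Map;

    // Entity: business data only. It has no idea how it gets stored.
    class Widget {
        private final long id;
        private final String name;
        Widget(long id, String name) { this.id = id; this.name = name; }
        long getId() { return id; }
        String getName() { return name; }
    }

    // DAO: the only layer that knows about stable storage.
    interface WidgetDao {
        Widget findById(long id); // null if absent
        void save(Widget widget);
    }

    // An in-memory implementation, standing in for JPA/Hibernate here.
    class InMemoryWidgetDao implements WidgetDao {
        private final Map<Long, Widget> store = new HashMap<Long, Widget>();
        public Widget findById(long id) { return store.get(id); }
        public void save(Widget widget) { store.put(widget.getId(), widget); }
    }

    // Service: the business logic, with the DAO injected through the
    // constructor. The controller only ever talks to this class.
    class WidgetService {
        private final WidgetDao dao;
        WidgetService(WidgetDao dao) { this.dao = dao; }

        Widget rename(long id, String newName) {
            Widget existing = dao.findById(id);
            if (existing == null) {
                throw new IllegalArgumentException("No widget with id " + id);
            }
            Widget renamed = new Widget(id, newName);
            dao.save(renamed);
            return renamed;
        }
    }

The entity stays a dumb data carrier, the DAO stays almost trivially thin, and everything interesting lives in the service, which you can unit-test by handing it a fake DAO like the in-memory one above.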

This is actually a really good thing to do, from an engineering perspective.

For a long time, with Project Alchemy, I was trying to write a persistence architecture that would be as storage-agnostic as possible—objects could be persisted to database, to disk, or into shared memory, and the object itself didn’t need to care where it went. I only ever got as far as implementing it for database (and even then, specifically for MySQL), but it was a pretty valiant effort, and one that I’d still like to return to, when I get the opportunity, if for no other reason than to say I finished it. But the entity’s superclass still had the save() and find() methods that meant that persistence logic was, at some level in the class inheritance hierarchy, part of the entity. While effective for PHP, in terms of performance, this unfortunately doesn’t result in the cleanest of all possible code.

Using a service layer provides a separation of concerns that the typical PHP model doesn’t allow for. In that model, the entity that moves data out of stable storage and into a bean also contains both business logic and an awareness of how it’s stored. It really doesn’t have to, and shouldn’t. Talking to the database shouldn’t be your entity’s problem.

But it’s still, overall, part of the business model. The service and the entity, combined, provide the business model, and the DAO just packs it away and gets it back for you when you need it. These two classes should be viewed as parts of a whole business model, within the context of a “Model-View-Controller” framework. The controller needs to be aware of both classes, certainly, but they should form two parts of a whole.

Besides, if you can pull all your persistence logic out of your entity, it should be almost trivial to change storage engines if and when the time comes. Say you needed to move from JPA and Hibernate to an XML- or JSON-based document storage system. You could probably just get away with re-annotating your entity, and writing a new DAO (which should always be pretty minimal), then adjusting how your DAO is wired into your service.
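
Continuing the earlier sketch, here’s roughly what that swap might look like: a hypothetical flat-file DAO that the service can use unchanged. (The serialisation is hand-rolled to keep the sketch self-contained; a real implementation would use an actual JSON library.)

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Swapping storage engines means swapping in this class; the service
    // and the controller never hear about it.
    class FlatFileWidgetDao implements WidgetDao {
        private final Path dir;
        FlatFileWidgetDao(Path dir) { this.dir = dir; }

        public Widget findById(long id) {
            Path file = dir.resolve(id + ".json");
            if (!Files.exists(file)) return null;
            try {
                String json = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
                // Naive parse of the {"id":...,"name":"..."} shape written by save().
                String name = json.substring(json.indexOf("\"name\":\"") + 8, json.lastIndexOf("\""));
                return new Widget(id, name);
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }

        public void save(Widget widget) {
            String json = "{\"id\":" + widget.getId() + ",\"name\":\"" + widget.getName() + "\"}";
            try {
                Files.write(dir.resolve(widget.getId() + ".json"), json.getBytes(StandardCharsets.UTF_8));
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }

Hand this to the same WidgetService from before, and nothing upstream notices the difference.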

Try doing that in PHP if your entity knows how it’s stored!

One of these days, I’ll have to lay out a guide for how this architecture works best. I’d love to get involved in teaching undergrad students how large organisations try to conduct software engineering, so if I can explain it visually, then so much the better!

Saturday 26 April 2014

Women don’t join this industry to fulfil your fantasies

It’s been another bad day (week, really) for men in the technology industry being complete and total shitheads to women.

Admit it, guys: we didn’t have very far to fall to hit that particular point. But last Friday uncovered a few particularly horrible examples of just how poorly some men can regard women, and how quickly others will come valiantly leaping to their defence.

My day opened up with being pointed to codebabes.com (no link, because fuck that). Describing itself as a portal to “learn coding and web development the fun way”, CodeBabes pairs video tutorials with scantily-clad women, and the further you progress in a particular technology, the less and less the instructors are wearing. As if software development wasn’t already enough of a boys’ club, one that has used busty women to get people to buy things (whether it’s the latest technology at an expo, or the latest video game), now somebody’s had the brilliant idea of having PHP and CSS taught by women in lingerie.

The project doesn’t even pretend to have any particularly noble purpose in mind. Above the fold on their site, the copy reads, “The internet: great for learning to code [and] checking out babes separately, until now.”

The mind boggles.

First of all—even if we were going to ignore everything about how socially backwards this idea is—it just isn’t an effective way to accomplish anything semi-productive. Did they not learn anything from trying to get off playing video-game strip blackjack? Or any other game where the further you progress, the more you see of a nude or semi-nude woman? Because this idea is clearly derived from those. The problem with those games is that in order to reveal the picture you want, you have to concentrate on the challenge you’re presented with, but the more you concentrate on the challenge, the less aroused you are. The aims are in complete opposition to each other. If you’re trying to make learning a new technology a game, make sure that the technology is the aim, and the game is an extrinsic motivator. If you’re trying to look at naked girls… well, there are lots of opportunities to do that on the web.

But this problem is so much bigger than just being an unproductive way of trying to do two things at once.

What’s the catchphrase, again? “The internet: great for learning to code [and] checking out babes separately, until now.” These two things are implied to be mutually exclusive—that outside of the site, beautiful women and learning to write software have nothing to do with each other. That, were it not for these videos, beautiful women who look good in lingerie would have nothing to do with writing software.

So, if beautiful women who look good in lingerie would otherwise have nothing to do with writing software, we can further assume that the creators are implying—and equally importantly, that their users will infer—that women know shit about computers (if I may coin a phrase).

Now, that being said, I’m quite sure that the women who are presenting these videos actually know what they’re talking about. There’s nothing more difficult than trying to convincingly explain something you don’t understand, unless you’re a particularly talented actor with a good script (see: every use of technobabble in the last thirty years of science fiction television). And I’m not trying to suggest that they should be accused of damaging the cause of feminism in technology. Why? Because there is nothing about this site that suggests to me that the idea was conceived of by women. The whole premise is simply so sophomoric that I can’t come to any conclusion other than that it was invented by men, particularly when the site’s official Twitter account follows The Playboy Blog, Playboy, three Playboy Playmates… and three men.

Software development, in most English-speaking countries, suffers from a huge gender gap, both in terms of staffing and salary. Sites like this will not do anything to close that gap, because all this says to a new generation of developers is that a woman’s role is to help a man on his way to success, and to be an object of sexual desire in the process—their “philosophy” flatly encourages viewers to masturbate to the videos.

What makes it that little bit worse (yep, it actually does get worse) is that, on reading their “philosophy”, the creators also make it quite clear that they recognise that what they’re doing will offend people, and they don’t care. They say, “try not to take us too seriously”, and “if we’ve offended anyone, that’s really not our goal, we hope there are bigger problems in the world for people to worry about.” Where have I seen these arguments before? Ahh, yes: from sexist men who are trying to shut down criticism of their sexist bullshit. The next thing I expect to see is them crying “censorship”. I’m not trying to prevent them from saying their repugnant shit. I would, however, like to try to educate them about why it’s hurtful, and why it’s something that they should know better than to think in the first place.

As for the rest of us, we need to make it clear that there’s no room for these attitudes in today’s software industry. That when somebody suggests visiting sites like this, they get called out for promoting sexist points of view. That when someone posits that a woman only got an interview, or even her job, because of her anatomy, they get called out for suggesting that she isn’t fully qualified to be there. If this industry has any hope of escaping the embarrassing reputation that it’s earned, we have to do better. We have to have the guts to say, “that shit’s not cool,” to anyone who deserves it. Stand up for what you believe in.