Wednesday, 20 January 2016

On emotional labour

My awesome wife who's awesome wrote an amazing tweetstorm earlier tonight about the unpaid emotional labour women are socialised to do, and it’s made me pensive about a few things.

First up: I am absolutely guilty of a lot of these things. I hope less so recently than I used to be, but I have enough self-awareness to admit that yes, I have said and done these things. (She promises me it wasn’t directed at me, and I think I know her well enough to know she’d never subtweet the Hell out of me like that. That doesn’t mean I’m innocent, though, and I think even that reassurance is part of the truckload of unpaid emotional labour that she, as a woman, has been brought up to do.) And to be quite honest, if you’re a guy… you’ve done these things too. If you look at me and think to yourself, “well, I don’t think I’m like this,” you almost certainly are like this. Unless you can specifically identify things you do that are not like that… you’re like that, too.

Because, honestly, guys are socialised to do exactly that. And you even see a lot of it in which jobs in tech are coded as being “inherently” oriented toward men or women. QA and project management, which are both roles that require diligence, attention to minute detail, and an aptitude for predicting someone’s needs before they are aware of them, are coded as and performed by women much more than most other roles in the tech industry. Of the technical roles within a tech company, they’re the only positions that I’ve consistently observed having anything close to gender parity.

So why do I think that guys are socialised not to do this work? Because, let’s be honest, it is work. I had an opportunity to fill in for my team’s project manager (who, it’s important to point out, is a woman) late last year, and holy mother of God, that job is hard. Like, Nintendo-hard. At least she’s getting paid for it now, but I guarantee she’s been doing this work her whole life, because that’s what society’s taught her to do. And I haven’t. Instead, I’ve grown up around a lot of media that’s suggested that one set of gifts (chocolate, teddy bears, and diamonds) will satisfy any woman I may be involved with (here’s a hint: that’s a load of crap), and an escalating and never-ending series of ostensible jokes along the lines of “bitches be cray” when her birthday or anniversary is forgotten.

And why the hell shouldn’t they be upset? Seriously, think about it for a minute. The person who you’ve decided to open up to and be most vulnerable with comes home on your birthday, or on the anniversary of the day you got married, and they have no idea what happened on this day, however many years ago. They can’t bring it to mind at all.

Wouldn’t you be upset? I would be. I have been.

Long story short, just like math and art, the everyday things that women “just seem to do better than men” are things that they’ve practised, constantly, since they were very young. (Men have, conversely, been raised to ignore those things.) Anything you do every day of your life, for about as long as you can remember… yeah, you’re going to be pretty damn good at. That doesn’t make it easy, just habitual. (Professional athletes are a great example of this.) If you really want to show some kind of appreciation for the women in your life, and all the things they do for you… do them yourself.

Thursday, 13 August 2015

Not-so-hidden treasures on TV

I’ve been watching some really good TV lately. And I don’t just mean the sixth season of Archer, which is a hilarious return to form for the show, and for the main character, albeit with some really interesting, if spectacularly raunchy, character development. This is not a show for kids. All of the characters are overtly terrible people whom I would have no desire whatsoever to associate with in real life… but because it’s a cartoon about freelance James Bond-style spies, they kind of get away with it.

What I’m specifically thinking about right now is BoJack Horseman (another adult-oriented cartoon, not remotely intended for kids, in which the main character is an incredible asshole), Halt and Catch Fire, and My Little Pony: Friendship is Magic.

Now, if you know me in real life, you shouldn’t be surprised that My Little Pony is showing up here. My family discovered it on Netflix, and my then-three-year-old son got hooked. And in turn, I got hooked. It’s got a lot of really good messages for little kids, and there are tonnes of things for adults (whether they’re watching to be aware of what media their kids are consuming, or on its own merits). For example, the recurring Big Bad God of Chaos, appropriately named Discord, is voiced by John de Lancie, specifically because the character is the show’s equivalent of Q! My recent favourite example of this is in an episode where one character has a series of nightmares, all about her current significant anxieties. Eventually, the dreams culminate in her literally confronting her own shadow—if you’re at all familiar with Jungian theory, it’s not even metaphor any more, and I love it.

But when you keep watching it, and you start to get into it, you realise that there’s a rich storytelling universe that is driven significantly by the characters. Their actions have permanent consequences (the work that goes into maintaining continuity is intense, and it shows), and whatever scrapes they get into over the course of a story, they learn from. Characters grow over the course of the show.

My son’s favourite pony is Rainbow Dash. At the start of the series, she’s every obnoxious jock stereotype, all at once. She’s athletically talented, and she isn’t at all hesitant to let you know she’s the coolest motherfucker here. At one point, she actually tells the main character of the show, the bookish Twilight Sparkle, that “reading is for eggheads, and I’m not an egghead.” And all of this has bitten her in the ass over the course of the series. She’s learned from her mistakes, and she’s grown.

Take an early episode of Season 5, when her pet tortoise starts to go into hibernation at the beginning of winter. She can’t bear the thought of being without him for three months, and she ends up snapping at her friends when they start to say “hibernate”, so she vows to do whatever she can to prevent the arrival of winter. But time marches on, and even the great Rainbow Dash can’t prevent the seasons from changing. She comes to accept that Winter Is Coming, and her pet is going away for a while… and she gets really depressed about it. One scene toward the end starts with her curled up with her pet in bed, with no enthusiasm for, or even interest in, anything. When given some tough love by the most softspoken of her friends, she ends up bawling her eyes out, both with the classic cartoon eye-gushers, and with a fairly realistic ugly-crying-face. She’s really fucked up by this prospect, but she needs the cry to accept that this is going to happen. After her pet buries himself in the mud, she declines the company of her friends, and sits down with a book to read to him as he begins hibernating… and the episode ends there. It’s not a reset. Things will be all right… but not yet.

This is some major character development for what is ostensibly a kids’ show. And I really appreciate that one of the characters with whom little boys in the audience identify (case in point, my kid) was given an episode to stretch out emotionally, and have her friends not only accept that she’s upset, but empathise with her, to the point where the lot of them cry (including blink-and-you’ll-miss-it mascara streaks on the fashion designer). Then she’s given the space she asks for. Kids’ shows don’t normally even have this degree of continuity, let alone character development. But the writers and showrunners behind My Little Pony are doing something that I think can really contribute significantly to the emotional development of their audience… and they have an audience that spans both age and gender.

At the same time, BoJack and Halt and Catch Fire both spent their second seasons trying to depress their audience… or something. When it’s one Wham Episode (as TV Tropes refers to it), that’s one thing, but these shows both went with full-on Wham Seasons. In every episode, a character either set a metaphorical bridge on fire, or had something heartwrenchingly awful happen to them. In BoJack’s case, the writers gave us an in-universe equivalent of Bill Cosby and everything that’s coming to light about him… and after teasing us with a standard sitcom-y build-up, just when you think the character’s going to sway the court of public opinion… it ends exactly where it always ends. And the show pulls shit like this for twelve episodes in a row.

Nonetheless, if you haven’t checked it out, watch it. It’s very crude, and pretty much all the characters are just terrible people you’d hate in real life, but the writing is just stellar. Lots of character development in this show too, and exploration of emotional cause-and-effect, and a very conscious decision on the part of the writing staff to make a diverse cast, which I really appreciate.

Finally, Halt and Catch Fire. I haven’t had a show where I am indisputably the target demographic since I was sixteen. And that show lasted less than a season, so the fact that this one got renewed for a second season (I’m praying for a third) is just delightful. It’s by AMC, so they (predictably) show their work in creating a realistic historical environment. There’s a great article in Vox about what this season did amazingly right about gender, by completely flipping the stereotypes on their heads. The two core women in the cast are given the positions of authority and provide excellent complements to each other in running a business. Their ex-boyfriend and husband, respectively, instead get to act completely on the basis of emotion. The husband stays home with the kids this season, and the ex-boyfriend makes a series of increasingly rash decisions because of his unresolved feelings for Cameron. And all of them basically have everything fall apart for them over the course of the season. I don’t want to spoil too much if you haven’t seen the show, but I can’t stress enough how much you need to watch it, because it’s absolutely phenomenal.

Lots of good TV shows out there to be watched. I’m really glad I’ve made the time to watch these.

Wednesday, 29 October 2014

On taking the time to see what happened

I kind of want to watch the video of the failed Antares launch from last night, for the sake of context for the photo of the launch site that NASA recently published on Google+… but I saw the photos of the explosion on Twitter shortly after it happened, and just kind of shivered. Knowing perfectly well that no one was killed in the accident didn’t help—that was clearly a huge blow for the company. As someone on Twitter said, spaceflight is hard. NASA and Roscosmos have it pretty well sorted at this point, having been in the game longer than anyone else (with full credit due, of course, to CNSA and ISRO), so it really does bear pointing out that both Orbital Sciences and SpaceX have only been putting rockets into orbit since about 1990 and 2008 respectively. It’s not that SpaceX is necessarily doing launches and rockets better than Orbital. Orbital has way more experience. SpaceX just hasn’t had a rocket blow up on the pad yet. Hopefully, that doesn’t happen, but the reality is that there’s fundamentally little difference between a rocket and a bomb.

I’ve been trying to stay on top of the commercial American launches, if only because the advent of commercial spaceflight is really exciting to me. I’ve seen two Falcon 9 launches so far—the most recent one, and (if I recall correctly) its first mission to the ISS—but I haven’t had the opportunity to see anything from Orbital Sciences and Wallops. Maybe it’s a stronger cultural affinity for Cape Canaveral; as far as I knew until the last year or so, the only launch facility NASA had was in Florida. But every time a Virginia launch is mentioned, I secretly hope that this time it’ll be visible from Toronto. I look at the maps of where the launch will be visible from, at what angle, and I’m always a little disappointed that Toronto is well outside the arc. I took the opportunity to see the launch of STS-135 when my family travelled to Orlando for a Disney World/Universal Studios holiday. Our timing coincided with the launch date, and having never seen a Space Shuttle launch before, my wife, my sister-in-law, and I took my then-six-month-old son from Greater Orlando to Titusville to visit Kennedy Space Center. We didn’t make it to Titusville for the launch, but we saw it, pretty clearly, from the roughly twenty miles away that we were when the countdown hit the last few minutes. That was an incredible experience, even from that distance (because, by God, you can hear it), and I’d love to have that experience again.

I’m also interested in the Antares launch, and specifically its failure, from a process engineering perspective. A few people on Twitter and Google+ noted that, as soon as the rocket exploded, Orbital Sciences’ Twitter feed went silent. Reports came in from NASA about the same time that the Orbital mission controllers were giving witness statements and storing the telemetry they’d had from the rocket up until that point. In their business, this is absolutely critical for figuring out what caused the incident, so that it can be avoided in the future. Rockets are expensive, so having all that cash go up in flames is a disaster.

But in technology, we can certainly learn from this. So often, when something goes wrong on a server, particularly a production server, our first response is simply to fix it, and get the website running again. Don’t get me wrong; this is important, too—in an industry where companies can live or die on uptime, getting the broken services fixed as soon as possible is important. But preventing the problem from happening again is equally important, because if you’re constantly fighting fires, you can’t improve your offering. When something goes wrong, and you have the option, your first response should be to remove the broken machine from the load balancer. Disconnect it from any message queues that it might be listening to, but otherwise keep the environment untouched, so that you can perform some forensic analysis and discover what went wrong.

In addition to redundancy, you also need logging. Oh my good good God, you need logging. Yes, logs take up disk space. That’s what services like logrotate are for—logs take up space, sure, but gzipped logs take up about a tenth of that space. And if you haven’t looked at those logs for, let’s say, six months… you probably have a solid enough service that you don’t need them any more. And if, for business reasons, you think you might… you can afford to buy more disks and archive your logs to tape. In the grand scheme of things, disks are cheap, but tape is cheaper.
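On a typical Linux box, stock logrotate handles all of this for you. Here’s a minimal sketch of what I mean (the log path and retention numbers are mine, purely illustrative; tune them to your own service):

```
/var/log/myapp/*.log {
    weekly
    rotate 26        # keep roughly six months of history
    compress         # gzip rotated logs; about a tenth of the space
    delaycompress    # leave the newest rotation uncompressed for easy grepping
    missingok        # don't complain if the app hasn't logged yet
    notifempty       # skip rotation when the log is empty
}
```

Drop something like that into /etc/logrotate.d/ and the decision about disk space largely makes itself.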

So, ultimately, what’s the takeaway for the software industry? Log everything you can. Track everything you can. And when the shit hits the fan, stop, and gather information before you do anything. I know cloud computing gives us the option (when we plan it out well) of just dropping a damaged cloud instance on the floor, spinning up a new one, and walking away, but if you do that without even trying to diagnose what went wrong, you’ll never fix it.

Saturday, 4 October 2014

A lesson learned the very hard way

Two nights ago, I took a quick look at a website I run with a few friends. It’s a sort of book recommendation site, where you describe some problem you’re facing in your life, and we recommend a book to help you through it. It’s fun to try to find just the right book for someone else, and it really makes you consider what you keep on your shelves.

But alas, it wasn’t responding well—the images were all fouled up, and when I tried to open up a particular article, the content was replaced by the text "GEISHA format" over and over again. So now I’m worried. Back to the homepage, and the entire thing—markup and everything—has been replaced by this text.

First things first: has anyone else ever heard of this attack? I can’t find a thing about it on Google, other than five or six other sites that were hit by it when Googlebot indexed them, and one of them at least a year ago.

So anyway, I tried to SSH in, with no response. Pop onto my service provider to access the console (much as I wish I had the machine colocated, or even physically present in my home, I just can’t afford the hardware and the bandwidth fees), and that isn’t looking good, either.

All right, restart the server.

Now HTTP has gone completely nonresponsive. And when I access the console, it’s booted into initramfs instead of a normal Linux login. This thing is hosed. So I click the “Rescue Mode” button on my control panel, but it just falls into an error state. I can’t even rescue the thing. At this point, I’m assuming I’ve been shellshocked.

Very well. Open a ticket with support, describing my symptoms, asking if there’s any hope of getting my data back. I’m assuming, at this point, the filesystem’s been shredded. But late the next morning, I hear back. They’re able to access Rescue Mode, but the filesystem can’t fsck properly. Not feeling especially hopeful, I switch on Rescue Mode and log in.

And everything’s there. My Maildirs, my Subversion repositories, and all the sites I was hosting. Holy shit!

I promptly copied all that important stuff down to my personal computer, over the course of a few hours, and allowed Rescue Mode to end, and the machine to restart into its broken state. All right, I think, this is my cosmic punishment for not upgrading the machine from Ubuntu Hardy LTS, and not keeping the security packages up to date. Reinstall a new OS, with the latest version of Ubuntu they offer, and keep the bastard thing up to date.

Except it doesn’t quite work out that way. On trying to rebuild the new OS image… it goes into an error state again.

Well and truly hosed.

I spun up a new machine, in a new DC, and I’m in the process of reinstalling all the software packages and restoring the databases. Subversion’s staying out; this is definitely the straw that broke the camel’s back in terms of moving my personal projects to Git. Mail comes last, because setting up mail is such a pain in the ass.

And monitoring this time! And backups! Oh my God.

Let this be a lesson: if you aren’t monitoring it, you aren’t running a service. Keep backups (and, if you have the infrastructure, periodically try to refresh from them). And keep your servers up-to-date. Especially security updates!

And, might I add, many many thanks to the Rackspace customer support team. They really saved my bacon here.

Tuesday, 16 September 2014

We need a role model

So Notch (along with all the other founders, it seems) has resigned his position from Mojang, and publicly distanced himself from anything to do with Minecraft, now that he's personally worth in excess of $1B USD.

Why? The screed he left is a bit rambly, but seems to boil down to not wanting the responsibility for the overall Minecraft community. Some of his other writings definitely convey that he’s grown pretty sick and tired of dealing with managing Mojang and all the extra issues that come with it. But he doesn’t want to have anything to do with Minecraft any more. He says, at the end, “Thank you for turning Minecraft into what it has become, but there are too many of you, and I can’t be responsible for something this big.”

I’m sorry, Notch, but you don’t have that choice. You are responsible for something as big as Minecraft, because you made it, and you put your name on it. You’re responsible for a one-hundred-million-strong user base, which means that over the course of the last five years, you are almost certainly responsible for inspiring more than a couple of people to become programmers. I don’t think Star Trek had an audience that big when it was in production, and it’s only too easy to find stories of the actors—not even Gene Roddenberry, but James Doohan and Nichelle Nichols—talking about how fans approached them, repeatedly, at conventions, telling them that the show had changed their lives and given them a career (and in one case I can recall, Doohan saved a young woman considering suicide).

You’ve established a subculture within gaming, Notch, and it’s one that seems to have a level playing field amongst genders and ages. Your game does what no other game does, and considering the fucking shit show that is gaming culture right now, you owe it to the world to show that, in fact, some gamers can treat people with mutual respect. You are a role model, and this is the time, of all times, to act like one.

Minecraft’s great. I have the Pocket Edition demo on my iPad, and I pop it up every now and then. I’ve been considering shelling out the $7 to be able to play it properly. But if you’re going to act like this… you know, maybe I don’t want to.

Come back to Minecraft, Notch. Your audience needs you.

Monday, 2 June 2014

I am not an engineer, and neither are you

I’m called a Software Engineer. It says so when you look me up in my employer’s global address list: “Matthew Coe, Software Engineer.”

I am not a software engineer.

The most obvious place to begin is the fact that I don’t hold an undergraduate engineering degree from a board-accredited program. While I’ve been working at my current job for just over four years now (and have otherwise been in the field for an additional three years before that), I’ve never worked for a year under a licensed professional engineer, nor have I attempted, let alone passed, the Professional Practice Exam.

I don’t even satisfy the basic requirements of education and experience in order to be an engineer.

But let’s say I had worked under a licensed professional engineer. It’s not quite completely outside the realm of possibility, since the guy who hired me into my current job is an engineering graduate from McGill… though also not a licensed professional engineer. But let’s pretend he is. That would be four years of what we call “engineering”, one of which was spent under an engineer. If we could pretend that my Bachelor of Computer Science is an engineering degree (and PEO does provide means to finagle exactly that, but I’d have to prove that I can engineer software to an interview board, among other things), then I’d be all set to take the exam.

Right?

There’s a hitch: what I do in this field that so many people call “software engineering”… isn’t engineering.

So what is engineering?

The Professional Engineers Act of Ontario states that the practice of professional engineering is

any act of planning, designing, composing, evaluating, advising, reporting, directing or supervising that requires the application of engineering principles and concerns the safeguarding of life, health, property, economic interests, the public welfare or the environment, or the managing of any such act.

There are very few places where writing software, on its own, falls within this definition. In fact, in 2008, PEO began recognising software engineering as a discipline of professional engineering, and in 2013 they published a practice guideline in which they set three criteria that a given software engineering project has to meet, in order to be considered professional engineering:

  • Where the software is used in a product that already falls within the practice of engineering (e.g. elevator controls, nuclear reactor controls, medical equipment such as gamma-ray cameras, etc.), and
  • Where the use of the software poses a risk to life, health, property or the public welfare, and
  • Where the design or analysis requires the application of engineering principles within the software (e.g. does engineering calculations), meets a requirement of engineering practice (e.g. a fail-safe system), or requires the application of the principles of engineering in its development.

Making a website to help people sell their stuff doesn’t qualify. I’m sorry, it doesn’t. To the best of my knowledge, nothing I’ve ever done has ever been part of an engineered system. Because I fail the first criterion, it’s clear that I’ve never really practised software engineering. The second doesn’t even seem particularly close: I’ve written and maintained an expert system that once singlehandedly DOS’d a primary database, but even that doesn’t really pose a risk to life, health, property, or the public welfare.

The only thing that might be left to even quasi-justify calling software development “engineering” would be if, by and large, we applied engineering principles and disciplines to our work. Some shops certainly do. However, in the wider community, we’re still having arguments about if, let alone when, we should write unit tests! Many experienced developers who haven’t tried TDD decry it as being a waste of time (hint: it’s not). There’s been recent discussion in the software development blogosphere about the early death of test-driven development, and whether it’s something that should be considered a stepping stone, or disposed of altogether.

This certainly isn’t the first time I’ve seen a practice receive such vitriolic hatred within only a few years of its wide public adoption. TDD got its start as a practice within Extreme Programming, in 1999, and fifteen years later, here we are, saying it’s borderline useless. For contrast, the HAZOP (hazard and operability study) engineering process was first implemented in 1963, formally defined in 1974, and named HAZOP in 1983. It’s still taught today in university as a fairly fundamental engineering practice. While I appreciate that the advancement of the software development field moves a lot faster than, say, chemical engineering, I only heard about TDD five or six years into my professional practice. It just seems a little hasty to be roundly rejecting something that not everybody even knows about.

I’m not trying to suggest that we don’t ever debate the processes that we use to write software, or that TDD is the be-all, end-all greatest testing method ever for all projects. If we consider that software developers come from a wide variety of backgrounds, and the projects that we work on are equally varied, then trying to say that thus-and-such a practice is no good, ever, is as foolish as promoting it as the One True Way. The truth is somewhere in the middle: Test-driven development is one practice among many that can be used during software development to ensure quality from the ground up. I happen to think it’s a very good practice, and I’m working to get back on the horse, but if for whatever reason it doesn’t make sense for your project, then by all means, don’t use it. Find and use the practices that best suit your project’s requirements. Professional engineers are just as choosy about what processes are relevant for what projects and what environments. Just because it isn’t relevant to you, now, doesn’t make it completely worthless to everyone, everywhere.
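For anyone who’s heard the arguments but never seen the practice, here’s the red-green-refactor rhythm in miniature, using Python’s built-in unittest. The slugify function and its tests are a toy example of my own, not from any real project:

```python
import unittest

# Red: write the tests first, describing behaviour that doesn't exist yet.
# Running them at this point fails, which is exactly what you want to see.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Trim me  "), "trim-me")

# Green: write just enough implementation to make those tests pass.
def slugify(title):
    # split() with no arguments also discards leading/trailing whitespace
    return "-".join(title.lower().split())

# Refactor: with the tests as a safety net, restructure as much as you like.
```

Run it with `python -m unittest` and you get a tiny, permanent specification of what slugify is supposed to do. The point isn’t the function; it’s that the tests existed before the code did.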

It’s not a competition

The debate around test-driven development reflects a deeper issue within software development: we engage in holy wars all the time, and about frivolous shit. Editors. Operating systems. Languages. Task management methods. Testing. Delivery mechanisms. You name it, I guarantee two developers have fought about it until they were blue in the face. There’s a reason, and it’s not a particularly good one, that people say “when two programmers agree, they hold a majority.” So much of software development culture encourages us to be fiercely independent. While office planners have been moving to open-plan spaces and long lines of desks, most of us would probably much rather work in a quiet cubicle or office, or at least get some good headphones on, filled with, in my case, Skrillex and Jean-Michel Jarre. Tune out all distractions, and just get to the business of writing software.

After all, most of the biggest names within software development, who created some of the most important tools, got where they are because of work they did mostly, if not entirely, by themselves. Theo de Raadt created OpenBSD. Guido van Rossum: Python. Bjarne Stroustrup: C++. John Carmack: id Software. Mark Zuckerberg. Enough said. Even C and Unix are well understood to have been written by two or three men each. Large, collaborative teams are seen as the spawning sites of baroque monstrosities like your bank’s back-office software, or Windows ME, or even obviously committee-designed languages like Ada and COBOL. It’s as though there’s an unwritten rule that if you want to make something cool, then you have to work alone. And there’s also the Open Source credo that if you want a software package to do something it doesn’t, you add that feature in. And if the maintainer doesn’t want to merge it, then you can fork it, and go it alone. Lone-wolf development is almost expected.

However, this kind of behaviour is really what sets software development apart from professional engineering. Engineers join professional associations and sit on standards committees in order to improve the state of the field. In fact, some engineering standards are even part of legal regulations—to a limited extent, engineers are occasionally able to set the minimums that all engineers in that field and jurisdiction must abide by. Software development standards, on the other hand, occasionally get viewed as hindrances, and other than the Sarbanes-Oxley Act, I can’t think of anything off the top of my head that’s legally binding on a software developer.

We, by contrast, collaborate only when we have to. In fact, the only things I’ve seen developers resist more than test-driven development are code review and pair programming. Professional engineers have peer review and teamwork whipped into them in university, to the extent that trying to go it alone is basically anathema. The field of engineering, like software development, is so vast that no individual can possibly know everything, so you work with other people to cover the gaps in what you know, and to get other people looking at your output before it goes out the door. I’m not just referring to testers here. This applies to design, too. I’d imagine most engineering firms don’t let a plan leave the building with only one engineer having seen it, even if only one put their seal on it.

Who else famously worked alone? Linus Torvalds, in whose honour Eric Raymond coined the maxim “given enough eyeballs, all bugs are shallow.” If that isn’t a ringing endorsement of peer review and cooperation among software developers, then I don’t know what is. I know how adversarial code review can feel at first; it’s a hell of a mental hurdle to clear. But if everyone recognises that it’s for the sake of improving the product first, and then for improving the team, and if you can keep that in mind when you’re reading your reviews, then your own work will improve significantly, because you’ll be learning from each other.

It’s about continuous improvement

Continuous improvement is another one of those things that professional engineering emphasises. I don’t mean trying out the latest toys, and trying to keep on top of the latest literature, though the latter is certainly part of it. As a team, you have to constantly reflect back on your work and your processes, to see what’s working well for you and what isn’t. You can apply this to yourself as well; this is why most larger companies have a self-assessment aspect to your yearly review. This is also precisely why Scrum includes an end-of-sprint retrospective meeting, where the team discusses what’s going well, and what needs to change. I’ve seen a lot of developers resist retrospectives as a waste of time, and if no one acts on the agreed-upon changes to the process, then yeah, they are a waste of time. But if you want to act like engineers, then you’ll do it. Debriefing meetings shouldn’t only be held when things go wrong (which is why I don’t like calling them post-mortems); they should happen after wildly successful projects, too, so you can discuss what was learned while working on the project, and how that can be used to make things even better in the future.

But software developers resist meetings. Meetings take away from our time in front of our keyboards, and disrupt our flow. Product owners are widely regarded as holding meetings for the sake of holding meetings. But those planning meetings, properly prepared for, can be incredibly valuable, because you can ask the product owners your questions about the upcoming work well before it shows up in your to-be-worked-on hopper. Then, instead of panicking because the copy isn’t localised for all the languages you’re required to support, or hastily trying to mash it in before the release cutoff, the story can include everything the developers need to do their work up front, and go in front of the product owners in a fairly complete state. And, as an added bonus, you won’t get surprised, halfway through a sprint, when a story turns out to be way more work than you originally thought, based on the summary.

These aren’t easy habits to change, and I’ll certainly be the first person to admit it. We’ve all been socialised within this field to perform in a certain way, and when you’re around colleagues who also act this way, there’s a great deal of peer pressure to continue doing it. But, as they say, change comes from within, so if you want to apply engineering principles and practices to your own work, then you can and should do it, to whatever extent is available within your particular working environment. A great place to start is with the Association for Computing Machinery’s Code of Ethics. It’s fairly consistent with most engineering codes of ethics, within the context of software development, so you can at least use it as a stepping stone to introduce other engineering principles to your work. If you work in a Scrum, or Lean, or Kanban shop, go study the literature of the entire process, and make sure that when you sit down to work, you completely understand what’s required of you.

The problem of nomenclature

Even if you were to do that, and absorb and adopt every relevant practice guideline that PEO requires of professional engineers, this still doesn’t magically make you a software engineer. Not only are there semi-legally binding guidelines about what’s considered software engineering, there are also regulations about who can use the title “engineer”. The same Act that gives PEO the authority to establish standards of practice for professional engineers also clearly establishes penalties for inappropriate uses of the title “engineer”. Specifically,

every person who is not a holder of a licence or a temporary licence and who,
(a) uses the title “professional engineer” or “ingénieur” or an abbreviation or variation thereof as an occupational or business designation;
(a.1) uses the title “engineer” or an abbreviation of that title in a manner that will lead to the belief that the person may engage in the practice of professional engineering;
(b) uses a term, title or description that will lead to the belief that the person may engage in the practice of professional engineering; or
(c) uses a seal that will lead to the belief that the person is a professional engineer,
is guilty of an offence and on conviction is liable for the first offence to a fine of not more than $10 000 and for each subsequent offence to a fine of not more than $25 000.

Since we know that PEO recognises software engineering within engineering projects, it’s not unreasonable to suggest that having the phrase “software engineer” on your business card could lead to the belief that you may engage in the practice of professional engineering. But if you don’t have your licence (or at least work under the direct supervision of someone who does), that simply isn’t true.

Like I said up top, I’m called a Software Engineer by my employer. But when I give you my business card, you’ll see it says “software developer”.

I am not a software engineer.

Tuesday, 29 April 2014

Upgrade your models from PHP to Java

I’ve recently had an opportunity to work with my team’s new developer, as part of the ongoing campaign to bring him over to core Java development from our soon-to-be end-of-lifed PHP tools. Since I was once in his position, only two years ago—in fact, he inherited the PHP tools from me when I was forcefully reassigned to the Java team—I feel a certain affinity toward him, and a desire to see him succeed.

Like me, he also has some limited Java experience, but nothing at the enterprise scale I now work with every day, and nothing especially recent. So, I gave him a copy of the same Java certification handbook I used to prepare for my OCA Java 7 exam, as well as any other resources I could track down that seemed to be potentially helpful. This sprint he’s physically, and semi-officially, joined our team, working on the replacement for the product he’s been maintaining since he was hired.

And just to make the learning curve a little bit steeper, this tool uses JavaServer Faces. If you’ve developed for JSF before, you’re familiar with how much of a thorough pain in the ass it can be. Apparently we’re trying to weed out the non-hackers, Gunnery Sergeant Hartman style.

So, as part of his team onboarding process, we picked up a task to begin migrating data from the old tool to the new. On investigating the requirements, and the destination data model, we discovered that one of the elements this task expects had not yet been implemented. What a great opportunity! Not only is he essentially new to Java, he’s also new to test-driven development, so I gave him a quick walkthrough of the test process while we tried to write a test for the new features we needed to implement.

As a quick sidebar, in trying to write the tests, we quickly discovered that we were planning on modifying (or at least starting with) the wrong layer. If we’d just started writing code, it probably would have taken half an hour or more to discover this. By trying to write the test first, we figured it out within ten minutes, because the integration points quickly made no sense for what we were trying to do. Hurray!

Anyway, lunch time promptly rolled around while I was writing up the test. I’d suggested we play “TDD ping-pong”—I write a test, then he implements it—and the test was mostly set up, so I said I’d finish up the test on the new service we needed, and stub out the data-access object and the backing entity so he’d at least have methods to work with. When I checked in, about an hour after sending it his way, he mentioned something that hadn’t occurred to me, because I had become so used to Java: he was completely unfamiliar with enterprise Java’s usual architecture of service, DAO, and DTO.

And of course, why would he be? I’m not aware of any PHP frameworks that use this architecture, because it depends on dependency injection, compiled code, and classes that persist between requests, all of which are pretty much anathema to the entire request lifecycle of PHP. For every request, PHP loads, compiles, and executes each class anew. So pre-loading your business model with the CRUD utility methods, and operating on them as semi-proper Objects that can persist themselves to your stable storage, is practically a necessity. Fat model, skinny controller, indeed.
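For comparison’s sake, here’s a rough sketch, in Java, of the active-record shape that typical PHP frameworks encourage. All the names are hypothetical, and a static map stands in for MySQL:

```java
import java.util.HashMap;
import java.util.Map;

// Active-record style, as a typical PHP framework encourages: the entity
// itself knows how to save and find itself. A static map stands in for MySQL.
class Article {
    private static final Map<Long, Article> TABLE = new HashMap<>();

    long id;
    String title;

    Article(long id, String title) {
        this.id = id;
        this.title = title;
    }

    // Persistence logic lives right on the entity...
    void save() {
        TABLE.put(id, this);
    }

    // ...and so do the finders.
    static Article find(long id) {
        return TABLE.get(id);
    }
}
```

It’s compact, and in PHP it spares you a round of class-loading, but the entity is now wedded to its storage engine.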

Java has a huge advantage here, because the service, the DAO, and the whole persistence layer never leave memory between requests; only request-specific context does. Classes don’t get reloaded until the servlet container gets restarted (unless you’re using JSF, in which case they’re serialised onto disk and rehydrated when you restart, for… some reason). So you can write your code so that your controller asks a service for records, and the service calls out to the DAO, which returns an entity (or a collection thereof), or a single field’s value for a given identifier.
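Sketched minimally, that layering looks something like this. All the names are hypothetical, and an in-memory map stands in for the real persistence layer:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The entity: pure data, with no idea of how or where it's stored.
class User {
    final long id;
    final String name;

    User(long id, String name) {
        this.id = id;
        this.name = name;
    }
}

// The DAO: the only layer that knows about storage. An in-memory map stands
// in here; a real one would wrap JPA, JDBC, or whatever else is underneath.
interface UserDao {
    Optional<User> findById(long id);
    void save(User user);
}

class InMemoryUserDao implements UserDao {
    private final Map<Long, User> store = new HashMap<>();

    public Optional<User> findById(long id) {
        return Optional.ofNullable(store.get(id));
    }

    public void save(User user) {
        store.put(user.id, user);
    }
}

// The service: business logic only. It asks the DAO for entities and never
// touches storage itself. In a container, the DAO is injected.
class UserService {
    private final UserDao dao;

    UserService(UserDao dao) {
        this.dao = dao;
    }

    String displayName(long id) {
        return dao.findById(id).map(u -> u.name).orElse("unknown");
    }
}
```

In a real application the service and DAO are container-managed singletons, wired together once by dependency injection and reused across every request.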

This is actually a really good thing to do, from an engineering perspective.

For a long time, with Project Alchemy, I was trying to write a persistence architecture that would be as storage-agnostic as possible—objects could be persisted to database, to disk, or into shared memory, and the object itself didn’t need to care where it went. I only ever got as far as implementing it for database (and even then, specifically for MySQL), but it was a pretty valiant effort, and one that I’d still like to return to, when I get the opportunity, if for no other reason than to say I finished it. But the entity’s superclass still had the save() and find() methods that meant that persistence logic was, at some level in the class inheritance hierarchy, part of the entity. While effective for PHP, in terms of performance, this unfortunately doesn’t result in the cleanest of all possible code.

Using a service layer provides a separation of concerns that the typical PHP model doesn’t allow for. There, the entity that moves data out of stable storage and into a bean contains both business logic and an awareness of how it’s stored. It really doesn’t have to, and shouldn’t. Talking to the database shouldn’t be your entity’s problem.

But it’s still, overall, part of the business model. The service and the entity, combined, provide the business model, and the DAO just packs it away and gets it back for you when you need it. These two classes should be viewed, within the context of a “Model-View-Controller” framework, as together forming the model. The controller needs to be aware of both classes, certainly, but they form two parts of a whole.

Besides, if you can pull all your persistence logic out of your entity, it should be almost trivial to change storage engines if and when the time comes. Say you needed to move from JPA and Hibernate to an XML- or JSON-based document storage system. You could probably just get away with re-annotating your entity, and writing a new DAO (which should always be pretty minimal), then adjusting how your DAO is wired into your service.
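Here’s a sketch of what that swap looks like, again with hypothetical names and stand-in storage; the point is that the service compiles and behaves identically whichever DAO it’s handed:

```java
import java.util.HashMap;
import java.util.Map;

// The service depends only on this interface, never on a storage engine.
interface OrderDao {
    String fetchStatus(long orderId);
}

// One implementation per engine; the service never has to change.
class DatabaseOrderDao implements OrderDao {
    public String fetchStatus(long orderId) {
        return "shipped"; // stand-in for a JPA/Hibernate query
    }
}

class DocumentStoreOrderDao implements OrderDao {
    private final Map<Long, String> documents = new HashMap<>();

    DocumentStoreOrderDao() {
        documents.put(42L, "shipped"); // stand-in for a JSON document store
    }

    public String fetchStatus(long orderId) {
        return documents.getOrDefault(orderId, "unknown");
    }
}

class OrderService {
    private final OrderDao dao;

    OrderService(OrderDao dao) {
        this.dao = dao; // the engine swap is just this wiring
    }

    boolean isShipped(long orderId) {
        return "shipped".equals(dao.fetchStatus(orderId));
    }
}
```

Whether the DAO talks to Hibernate or a document store, OrderService neither knows nor cares; only the wiring changes.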

Try doing that in PHP if your entity knows how it’s stored!

One of these days, I’ll have to lay out a guide for how this architecture works best. I’d love to get involved in teaching undergrad students how large organisations try to conduct software engineering, so if I can explain it visually, then so much the better!

Saturday, 26 April 2014

Women don't join this industry to fulfil your fantasies

It’s been another bad week for men in the technology industry being complete and total shitheads to women.

Admit it, guys, we didn’t have very far to fall to hit that particular point. But last Friday uncovered a few particularly horrible examples of just how poorly some men can regard women, and how quickly others will come valiantly leaping to their defense.

My day opened up with being pointed to codebabes.com (no link, because fuck that). Describing itself as a portal to “learn coding and web development the fun way”, CodeBabes pairs video tutorials with scantily-clad women, and the further you progress in a particular technology, the less the instructors are wearing. As if software development wasn’t already enough of a boys’ club that uses busty women to get people to buy things (whether it’s the latest technology at an expo, or the latest video game), now somebody’s had the brilliant idea of having PHP and CSS taught by women in lingerie.

The project doesn’t even pretend to have any particularly noble purpose in mind. Above the fold on their site, the copy reads, “The internet: great for learning to code [and] checking out babes separately, until now.”

The mind boggles.

First of all—even if we were going to ignore everything about how socially backwards this idea is—it just isn’t an effective way to accomplish anything semi-productive. Did they not learn anything from trying to get off playing video-game strip blackjack? Or any other game where the further you progress, the more you see of a nude or semi-nude woman? Because this idea is clearly derived from those. The problem with those games is that in order to reveal the picture you want, you have to concentrate on the challenge you’re presented with, but the more you concentrate on the challenge, the less aroused you are. The aims are in complete opposition to each other. If you’re trying to make learning a new technology a game, make sure that the technology is the aim, and the game is an extrinsic motivator. If you’re trying to look at naked girls… well, there are lots of opportunities to do that on the web.

But this problem is so much bigger than just being an unproductive way of trying to do two things at once.

What’s the catchphrase, again? “The internet: great for learning to code [and] checking out babes separately, until now.” These two things are implied to be mutually exclusive—that outside of the site, beautiful women and learning to write software have nothing to do with each other. That, were it not for these videos, beautiful women who look good in lingerie would have nothing to do with writing software.

So, if beautiful women who look good in lingerie would otherwise have nothing to do with writing software, we can further assume that the creators are implying—and, equally importantly, that their users will infer—that women know shit about computers (if I may coin a phrase).

Now, that being said, I’m quite sure that the women who present these videos actually know what they’re talking about. There’s nothing more difficult than trying to convincingly explain something you don’t understand, unless you’re a particularly talented actor with a good script (see: every use of technobabble in the last thirty years of science-fiction television). And I’m not trying to suggest that they should be accused of damaging the cause of feminism in technology. Why? Because there is nothing about this site that suggests the idea was conceived of by women. The whole premise is simply so sophomoric that I can’t come to any conclusion other than that it was invented by men, particularly when the site’s official Twitter account follows The Playboy Blog, Playboy, three Playboy Playmates… and three men.

Software development, in most English-speaking countries, suffers from a huge gender gap, both in terms of staffing and salary. Sites like this will not do anything to close that gap, because all this says to a new generation of developers is that a woman’s role is to help a man on his way to success, and to be an object of sexual desire in the process—their “philosophy” flatly encourages viewers to masturbate to the videos.

What makes it that little bit worse (Yep, it actually does get worse) is that, on reading their “philosophy”, the creators also make it quite clear that they recognise that what they’re doing will offend people, and they don’t care. They say, “try not to take us too seriously”, and “if we’ve offended anyone, that's really not our goal, we hope there are bigger problems in the world for people to worry about.” Where have I seen these arguments before? Ahh, yes, from sexist men who are trying to shut down criticism of their sexist bullshit. The next thing I expect to see is them crying “censorship”. I’m not trying to prevent them from saying their repugnant shit. I would, however, like to try to educate them about why it’s hurtful, and why it’s something that they should know better than to think in the first place.

As for the rest of us, we need to make it clear that there’s no room for these attitudes in today’s software industry. That when somebody suggests visiting sites like this, they get called out for promoting sexist points of view. That when someone posits that a woman only got an interview, or even her job, because of her anatomy, they get called out for thinking that she isn’t fully qualified to be there. If this industry has any hope of escaping the embarrassing reputation it’s earned, we have to do better. We have to have the guts to say, “that shit’s not cool,” to anyone who deserves it. Stand up for what you believe in.

Monday, 30 December 2013

How you speak reflects how you think

My good friend Audra pointed me to this list of “20 programming jargons” that is just full of awfulness. First of all—and this is the easy, facile complaint—I’ve never heard any of these terms (at least, with these definitions) used in the wild. Maybe it’s because I’ve generally worked for reasonably grown-up companies where we attempted to act like professionals, but who can say? I don’t quite know why these terms are so foreign to me, but some of them are just bad.

First, the good… or, at least, neutral:

Baklava/Lasagne Code
Okay, this one’s actually kind of good. Lasagne code evokes a particular variety of spaghetti code that got that way by adhering a little too closely to old “enterprise” techniques of organising class responsibilities, and by applying code patterns for the sake of applying code patterns. I’m hesitant to use baklava to describe this phenomenon because (a) baklava’s delicious, (b) there’s too good of a parallel to spaghetti code with lasagne, and (c) it feels like it’s making fun of Muslims, and I just don’t go in for that.
Banana Banana Banana
I’ve never heard of this, that I can recall. As an actor, I’m definitely familiar with rhubarb rhubarb rhubarb, the chorus equivalent: making it look like you’re having a conversation onstage without actually saying anything coherent enough to pull focus.
Claustrocodeia
Can’t say I’m familiar with this term. Granted, I don’t like moving away from my nice 24” screen at the office, and working on my laptop screen, but it’s not like it’s anything more than a minor inconvenience. Get over it. Or buy yourself a big screen and expense it.
Stringly Typed
Again, this one’s pretty good. I’ve seen code like this, and it’s always much more of a headache than it could possibly be worth.

There’s Already A Word For This

Bugfoot
Over here, where the professionals work, we call this an “intermittent bug”. Could be environmental, could be infrastructure, could be code. But while we can’t replicate it, we don’t want to dismiss it out-of-hand, because this also means we can’t gauge its impact on the end user.
Jenga Code
Also known as “tightly coupled”. If you don’t already know this, or for some reason refuse to use this industry standard terminology, I frankly worry for the quality of your code.
Shrug Report
The bug report with insufficient detail. This is just a bad bug report, and should get sent back to the reporter for more details.
Unicorny
Otherwise known as “the business’s problem”. Unless you’re being called in to provide swag work estimates. I’ve also heard this as a “future project”.

Your Professionalism Is Showing

Counterbug
Code review isn’t supposed to be an adversarial process. When another programmer is reviewing your code, this is supposed to be about everyone’s professional development. Your reviewer may learn techniques they weren’t previously familiar with, and will gain exposure to more areas of the codebase, and that’s always a good thing. And by the same token, you’ll have the opportunity to learn techniques you weren’t previously familiar with, because your reviewer may be able to suggest a better (or even just different) way of solving the problem at hand. By saying, “yeah, well, you did this wrong”, you aren’t helping anyone (beyond exposing additional bugs to be fixed).

That said, getting used to code review as a collaborative process, and no longer hearing a reviewer’s comments as a personal attack, takes a conscious effort at first. But it’s effort that’s worth it. So when a reviewer points out an error, first bite your tongue, then think about what you can do better the next time around.
Duck
The business is not your enemy. Distracting their attention away from one particular area of the software is staggeringly unprofessional. It suggests that either you didn’t do your job properly there, or you think you know better what the business, or customer, wants. You might. You probably don’t; the business holds all the cards, and knows things they haven’t bothered to tell you, because they didn’t consider it germane to the feature request. If you have an issue with their ideas, the best thing to do is to take it up with them. In the event that they didn’t consider your alternative approach, then you might get them to reconsider their approach, and create an ultimately better product.
Refactoring
There’s a great maxim about how to write software: write code as though the next maintainer is an axe-wielding psychopath who knows where you live. Vivid, I know, and pretty violent, but I think it kind of gets the point across. Another, less aggressive version, describes the next maintainer as someone less capable than you with no context. If you’re writing code, or refactoring, and you don’t leave it in such a state that any developer could read it and know immediately what it does, you’re doing it wrong.

I know, I know; I’ve written before about self-documenting code being a lie. That doesn’t mean your code should be incomprehensible; it means you shouldn’t eschew documenting your classes at a high level simply because another developer need only read the code to understand it. Being kind to the next person is what self-documenting code is supposed to be about.
Smug Report
I’ll be the first to admit that I’m guilty of having this reaction to an external bug report that suggests that the reporter knows more about what’s going on than the developer. I’ll also cop to the reality that I’ve probably filed a few of these bug reports, as well. However, instead of going into adversarial mode, or trying to belittle the reporter, acknowledge something: you probably have a particularly technically competent reporter on your hands. If they’re outside your organisation, then you basically have someone who will do free testing for you! Make use of this reality, because you have someone on your level, who you probably wouldn’t have to work very hard to convince to do black-box testing. These reporters are gifts in human form.

Unfortunately, there are some that betray an awful interpersonal culture. Presented in no particular order, and certainly not the original order:

Being A Good Person: You’re Doing It Wrong

Jimmy
This, frankly, reads more like an in-joke at one particular organisation. Probably Jimmy was a wet-behind-the-ears recent graduate who managed to get past the interview, only to flail wildly until he was put out of the rest of the team’s misery. It happens sometimes. It’s unfortunate for Jimmy that no one on his team was willing to mentor him into a good programmer, but if the rest of the list was written at the same company, there probably weren’t many qualified candidates.

Not to say that professionals don’t turn colleagues’ names into in-jokes. Never mind the foosball table, where I can think of at least four developers whose names are used to describe particular moves; one of our developers was, due to a fluke of seating assignments, regularly forgotten when impromptu meetings got pulled together. He’s now a verb, used when something (or, more typically, someone) is carelessly forgotten. However, this is done (a) in jest, and (b) aimed squarely at the forgetter, not the forgettee. The poor guy just got it named after him because we did it so often that we started using his name as a shorthand for “we forgot him!”

So why is it unfair to call the clueless newbie a Jimmy? Because it pokes fun at the clueless newbie for being just that. Instead of mocking the new kid, they should be welcomed with open arms and guided into being a better developer.
Chug/Drug Report
Yes, suggest that the bug reporter was, at best, in an altered mental state when they wrote the report. This seems fair. Or, you could be working with a language barrier. Perhaps two, depending on when you, as the reader of the bug report, learned the language it’s written in. Assume that the reporter did their best to convey what they encountered, and give them the benefit of the doubt.
Mad Girlfriend Bug
First of all, this kind of bug is pretty much any bug that isn’t completely straightforward. Second of all, this is an idiotic gender stereotype. If you aren’t willing to try to communicate with your significant other when you’re having an argument, your miserable relationship is your own problem. Finally, reinforcing idiotic gender stereotypes in the workplace is just one of the reasons that the gender balance both in the workforce and in school is so completely out to lunch. It’s one of the reasons that twice as many women as men exit programming as a profession. It creates a hostile work environment for your peers. Don’t do this.
Barack Obama
This one feels vaguely racist, and the fact that it’s the editors’ favourite of everything on the list gives a certain amount of credence to my worry in this regard. I want this to be a joke about Healthcare.gov, or about how a lot of liberal people had a lot of hope which has been repeatedly dashed, but the bit about “which would not otherwise get approval” bears a pretty strong suggestion that African Americans only voted for Obama because of the colour of his skin… and that turns a reference of questionable future usefulness into a racist joke.
Ghetto Code
While I was merely wondering whether the Barack Obama entry was racist, this one pretty obviously is. Inelegant code is just that. Inelegant, perhaps sloppy. Tightly coupled. Not cohesive. Spaghetti code. With so many descriptors for code that isn’t elegant available… why on Earth would you go for the one that takes advantage of a marginalised group of people? Other than the obvious explanation that you almost certainly grew up in a life of privilege, with no idea of what it’s like to live in a ghetto. Bear in mind that the word ghetto was first used to refer to the quarters of the European cities where Jews were effectively obligated to live, because the Christians in power viewed the Jews as something less than themselves. Ghettos are areas of both crippling poverty and, at least once (if not still), systematic oppression. Your inelegant code is not described by this. Just stop using this word entirely.

As a profession, we get a lot of flak for how we treat people who don’t look like us… and I can’t say it isn’t well-earned. By adopting, using, and promoting language like this, all we’re doing is saying that we think it’s okay to do exactly that. It isn’t. So other than the two phrases up at the top… this stuff all has to go. The writers and editors at EFY Times who wrote and approved this list should give some serious consideration to what they’re really saying when they publish lists like this.

Tuesday, 3 September 2013

Abusing dependency injection for fun and profit!

Using Spring for dependency injection can do a lot more than just put together your singleton services how you need them, optionally tweaking them for different environments based on a configuration file.

Oh, the DI in Spring can do things that are so much more fun than just that.

Let’s say, just as a “hypothetical” example, that you’re working with a controller and service interface in a prepackaged JAR that you either can’t modify, or would have to go through a fair amount of headache to change. Specifically, you’re implementing the service interface, and you know it will be called from the controller.

You take on a new task that requires access to the HttpServletRequest. Okay, you think, I’ll just get it from the FacesContext, until you remember that this module isn’t using JSF (and you can’t make it—not your controller, remember?). You can read the controller source and see that the method that’s calling your service uses the HttpServletRequest, but it’s not a parameter on the service method.

Barring modifying the JAR, there’s a surprising, and easy, way to get at it: inject it. I kid you not, this works.
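What makes it work is that Spring doesn’t hand your singleton the raw request; it injects a scoped proxy, and every call on that proxy is re-routed to whichever request is current on the calling thread. Here’s a rough pure-Java sketch of that mechanism (all the names are hypothetical, and a tiny Request interface stands in for HttpServletRequest):

```java
import java.lang.reflect.Proxy;

// A stand-in for HttpServletRequest, kept tiny for the sake of the sketch.
interface Request {
    String getParameter(String name);
}

class RequestHolder {
    // One "current request" per thread, as the servlet container maintains it.
    static final ThreadLocal<Request> CURRENT = new ThreadLocal<>();

    // What the container hands your singleton: a proxy that re-resolves the
    // current request on every single method call.
    static Request scopedProxy() {
        return (Request) Proxy.newProxyInstance(
                Request.class.getClassLoader(),
                new Class<?>[] { Request.class },
                (proxy, method, args) -> method.invoke(CURRENT.get(), args));
    }
}

// The singleton service: constructed exactly once, yet it always sees the
// request belonging to the thread that's calling it.
class MyService {
    private final Request request = RequestHolder.scopedProxy();

    String currentUser() {
        return request.getParameter("user");
    }
}

class ScopedProxyDemo {
    public static void main(String[] args) {
        MyService service = new MyService(); // built once, like a singleton bean

        RequestHolder.CURRENT.set(name -> "alice");
        System.out.println(service.currentUser()); // alice

        RequestHolder.CURRENT.set(name -> "bob");
        System.out.println(service.currentUser()); // bob
    }
}
```

In actual Spring, the sketch above collapses to a single autowired HttpServletRequest field on your service implementation; the container registers the current request as a web-scoped bean and supplies the proxy for you.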

That’s seriously all there is to it. You can inject anything higher on the call stack, though I suspect there needs to be only one instance of it. The HttpServletRequest gets updated with each request, even though MyService has been left in the default singleton scope. I wouldn’t recommend doing this simply because you can, though, just for the sake of not having to pass the HttpServletRequest as a parameter. Injection breeds reflection, which costs a crapton more logic, processing time, and memory than simply passing a reference down the call stack. It also opens you up to debugging hell, particularly if you inject something that you then modify.

Use this with care, is what I’m saying, only when you have no other choice.