Monday 18 October 2010

In which there is a first time for everything

So, this Thursday is the Toronto Facebook Developer Garage… This is actually going to be the first industry event I’ll have ever attended, and hopefully not the last. Should be interesting—I’m going with the development team from work (all three of us), and I know that a few of the guys from my first Facebook-related job will be there (naturally; that company is one of the hosts). I haven’t so much as touched the Facebook API since I left that job, so it’s certainly going to be interesting to see what cool stuff people are doing with the platform, and how it’s changed in the last two years.

I’m also, admittedly, really curious to see just what goes on at one of these events. Apparently they grew out of grassroots, community-organised gatherings of developers who just wanted to show off some cool stuff, but this one’s got a proper agenda, with keynote speeches and whatnot. I know that the Garage in NYC involved getting some swag (saw pictures; said former employer co-hosted that one too), so obviously it’s much more… regimented than just an informal gathering.

Should be fun, though. The biggest obstacle to deeper integration of Facebook into what my company does is monetisation: unless I’m much mistaken, you can’t serve your own ads inside a Facebook app canvas (probably because it would take eyeballs away from the ads on the side of the page). So the question becomes, how would we get Facebook users to buy the paid parts of the service? Or at least, how do we get Facebook users to come to our site (we’re already one of the most popular sites in Canada; can’t go wrong with getting some more eyes), so that they’ll…

  1. see the ads that our advertisers are paying for, and
  2. continue through the regular flow and maybe buy the paid parts of the service?

Therein lies the challenge… and maybe there will be answers at this Garage. At the very least, there should be ideas about what can be done with everything that’s been added to the API since 2008!

Friday 23 July 2010

In which work ethics are considered

Over the past couple of weeks, I’ve had two very clear indicators—to myself—of just how much I’m enjoying my current job. I say that because there’s a subtle, but important, difference between saying “I love my job because I love doing x” and observing your own behaviours and thought patterns, and noticing that the way you do and approach your job demonstrates how much you enjoy it.

The first way is something I realised about two weeks ago, when I met with the general manager (my boss’s boss, and an all-around good guy) for a quick catch-up chat. I’ve been working at this job for eleven weeks, and it feels like I’ve been there a lot longer; more importantly, the environment and the nature of the work are just naturally enjoyable, and it’s something I’m keenly interested in, so I like going to work on general principle. But when I was talking to him, it dawned on me that I like my work so much that, for the first time in years (at least two, if not four), I’ve been able to get so wrapped up in my work that I lose track of the time. At almost every other job I’ve had since I moved to Toronto, with possibly one exception, I tended to kind of check out around 4:30 or 4:45; I’d start trying to find something to do that would be productive work, but wouldn’t take too long to do, because I wanted to get out the door. Where I am now, as often as not, it’ll be almost 5:30 when I look at the clock and realise that I should probably go home.

The second, even clearer indication came this past Friday. I’ve been working on coming up with a way to better integrate two of the projects I’ve been working on, and particularly a way to do it across a subdomain divide (the products need to communicate on both the server and client sides). I managed to hack it on the pure client side by spawning some iframes, but getting a particular server-side action in Project A to trigger an action in Project B has been a little less clear-cut. So, I decided that a simple XML interface into Project B was needed, with wrappers for Project A and for Project C (which another developer is working on), and I spent Thursday and Friday afternoons working on this API.
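The rough shape of it, for the curious, is below. This is a minimal sketch rather than the real code, and every name in it (api.php, touchRecord, ProjectBClient) is a hypothetical stand-in:

```php
<?php
// A minimal sketch of the idea; all names are hypothetical stand-ins.

// --- Project B: api.php, a bare-bones XML endpoint ---
if (isset($_POST['action'])) {
    $response = new SimpleXMLElement('<response/>');
    if ($_POST['action'] === 'touchRecord') {     // made-up example action
        // ... perform the Project B side effect here ...
        $response->addChild('status', 'ok');
    } else {
        $response->addChild('status', 'error');
        $response->addChild('message', 'unknown action');
    }
    header('Content-Type: text/xml');
    echo $response->asXML();
    exit;
}

// --- Project A (or C): a thin wrapper so callers never see the XML ---
class ProjectBClient
{
    private $url;

    public function __construct($url)
    {
        $this->url = $url;  // e.g. 'http://projectb.example.com/api.php'
    }

    public function call($action, array $params = array())
    {
        $params['action'] = $action;
        $ch = curl_init($this->url);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $xml = curl_exec($ch);
        curl_close($ch);
        return $xml === false ? false : simplexml_load_string($xml);
    }
}
```

The wrapper is the important part: Project A just calls `$client->call('touchRecord', array('id' => 42))` and never has to think about the subdomain divide again.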

Two very cool things came from this.

  1. Apart from a few minor syntax errors (missing a parenthesis, putting a colon where a semicolon should’ve been, &c.), the stripped-down API worked perfectly right off the bat. A few hundred lines of code, and it Just Worked. I haven’t been that successful in a while.
  2. I ran the first test at about 4:40. I had other things that needed to be done that evening which necessitated my leaving as close to 5:00 as possible, but I had a thought I haven’t had in a long time: I wish I didn’t have to leave right away. I wanted to take my work home with me.

This hasn’t happened in ages, and I didn’t realise how much I missed that feeling until this past Friday. I’ve been telling people how much I like my job based on its perks: catered lunches on Fridays, stocked kitchen, and an amazing sense of community with my co-workers. But being able to say, “there are days that I don’t want to stop working”… that might be the surest sign that you’ve got a great job.

It’s kind of funny, because I normally try to fight against that typically Protestant work ethic which, when you boil it down, amounts to “living to work”. I can leave my work at the office; most days, it’s a case of losing track of the time because I’m so wrapped up in what I’m doing, so when I realise what time it is, I clean up what I was doing, get it to a state where I can leave, and I go home. And I think that’s what this is an extension of—I got so wrapped up in what I was doing that, had I not had other things to do, I almost certainly would have stuck around, ignoring the clock.

I think that might be the difference. Most days, I don’t care what time it is; it’s irrelevant to me how many hours I spend at the office, as long as I get done what I want to get done. When I compare that against the Toronto workaholics who not only work to live, but take it as a point of some perverse kind of pride that they work sixty- or eighty-hour weeks, I can see much better what the difference is. I’m doing what I love, and I take pride in the result of my work, whereas some people take pride in the amount of work that they do.

Thursday 3 June 2010

That thing you use? I made that.

Monday night (technically Tuesday morning, but who’s counting? Other than Blogger, that is) I mentioned that being able to say to somebody, “you know that thing that you use? I made that” is a great feeling. I just ran into a former colleague from my previous contract, who let me know just what kind of scale my stuff is operating on.

One of the projects I worked on—made, really; the requirements were small enough—was a carbon and cost savings calculator for Sears Canada’s website, so that people looking to replace one of their appliances could see roughly how much money they’d save by switching. Fairly simple to do; the worst of it was extracting the formulas from the Big Ugly Interactive Spreadsheet that Sears provided. I worked hard to provide the best little jQuery applet I could, complete with pretty transitions and everything. Fully translated into French, too, and I made sure it’d work acceptably well in IE6. It was kind of a focus of their recent/current Green promotion, but how many people, really, were going to wind up using it?

As it turns out, a lot. In the linked article, you can see that last Friday, they opened a six-kiosk booth in the Vancouver Robson store that runs that “little jQuery applet” in a touchscreen interface! I’m… speechless. To the best of my knowledge, nothing I’ve ever made has seen such a wide userbase. They’re adding more booths to more stores, too. I kind of hope that one will show up in Toronto so I can play with it and show people.

Tuesday 1 June 2010

In which a scale is found tipping

There’s a certain aphorism that I’ve been thinking about lately: “there are twenty-four useful hours in every day.” I don’t recall where I heard it first—as I recall, the version I heard growing up was “you’ve got the same amount of time in the day as the rest of us”—but I’ve realised two things about that first saying:

  1. There are decidedly not twenty-four useful hours in every day. Depending on your particular sleep needs, there are between, say, fifteen and eighteen waking hours in every day. Then when you factor in time spent in transit between work/school and home, and mealtimes, along with basic hygiene needs, your time-per-day number drops considerably. I’d estimate around twelve. Your mileage may vary, depending on, well, the mileage between home and what you do to make a living.
  2. I need more. I think this is why I’m reminded of my father’s different, truer version of the saying.

There are people in this industry, in this city, who love the freelancing life; people who love seeking out the Next Big Contract, and who get off on working late into the night to make a higher paycheque than the next guy. I’m not one of those people, as I’ve been discovering this year.

Don’t get me wrong. I like networking, and I like making things happen, and I like being able to say, “you know that thing that you use? I made that.” It’s a great feeling. But for the past few months, I’ve really felt like I’ve had at least twenty-four hours of work to do, every single day. So I perpetually feel like I’m behind. And that’s a pretty crappy feeling.

I read a book a few years back called The Hacker Ethic. It discusses the Protestant work ethic, which I think is a huge influence, in this city, on why people will willingly put in ten- or twelve-hour working days, five or six days a week. I don’t get it. I’ve been doing that for months, and it’s awful. All you’re thinking about is what you have to do. Your main focus is making more money. But why? So you can buy more things?

Seriously, everybody should read that book. I got into computers professionally because I love using them, and bending them to my will. But I also love my wife, and it’s important to find that work/life balance that keeps you sane and healthy.

This is becoming a bit of a rant, so I think I’d best cut it off. Too much work to do, anyway.

Friday 12 March 2010

In which an error is realised

I’ve been giving some deeper thought to the multiple-hash table I described in my last post, and I’ve figured out where the problem is. All the usual actions can still be performed in constant time on any specific table… the issue is how many tables each action has to touch. For n-dimensional tables, the number of operations for find increases linearly with n, but the number of operations required for add and delete increases factorially. For n=2, add requires three actions, but for n=3, add takes seven, and n=4 causes add to take twenty-five, unless I’ve mucked up my math.
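For what it’s worth, those counts do fit a tidy pattern: if every ordering of the n key components gets its own chain of nested index tables (so that any dimension can be the first lookup), an add has to touch the master table plus one leaf per permutation, for n! + 1 writes in total: 2! + 1 = 3, 3! + 1 = 7, 4! + 1 = 25. That would explain the factorial growth, assuming I’ve counted the same way twice.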

This is, of course, bad… unless you can magically prepopulate the hashtable and never have to modify it, and that’s exceedingly unlikely. I think I need to find a better way of storing the indices. I think what’s causing the factorial growth is the fact that I was originally thinking of storing the sub-hashtables (say, where n=2, the tables one dimension deep) as independent of each other, and relying on encapsulation. Didn’t I specifically mention that the annoying part about classic multidimensional arrays is that they’re based on encapsulation?!

Back to the drawing board.

Sunday 7 March 2010

In which an algorithm is partially puzzled out

This past week, I was working on a small project at work that seemed, at first glance, to need a unique type of multidimensional array—I needed to be able to store a collection of data that had a composite key, where the values of each component of the key were bounded sets. I realised after an hour or so of trying to figure it out that I was overthinking the problem (a regular multidimensional array would suffice, because as the programmer I can fully control which key component would be the primary axis of the array), but it left me with an interesting idea in my head that, after a brief bit of research, I've realised nobody's really approached... and while I thought of the idea at work, I also tossed it out as being unnecessary, so I don't think they can really lay claim to the intellectual property, because I'm not doing any development of the idea on their dime! Only part of what I'm going to get into was something done at the office, so the vast bulk of this is my own work, for my own amusement.

That idea is, in essence, an n-dimensional multi-hash table (which I'll just refer to as an MHT, for brevity's sake). The problem that I thought I had was that I would need to allow the user to specify either of the components, then update the available choices from the other component appropriately, or retrieve a subset of the data that had that key (like I said, it turned out to not be the case), but I've realised since then: what if, someday, I really need to do that, and I want to be able to do it quickly? My approach to the problem follows, but this is fairly off-the-cuff, and hasn't been implemented. I'm really just trying to think this out, and I want to make sure that, if this turns out to be Important, there's a published record of when I came up with it.

Let S be a set of data. Each item s in S contains (for the purpose of this example) a binary composite key, made of values j and k, where j and k are members of sets J and K. Each s also has a unique key value u that may or may not be exposed to the user, but is available to the software. Instead, the user wishes to access s by specifying j and k, but may need to obtain a list of all items in S with a specific value of either j or k. Since S may be of arbitrary size, maintaining a low order for the algorithm is important. Hash tables are particularly well-suited to this, as they can typically maintain O(1) performance for most access functions.

In order to accomplish this, multiple hash tables must be maintained (it's actually quite similar to maintaining hash indices in a DBMS; perhaps I should look into the specific algorithms used there). A hash table of all J values must be maintained, and each item in the J hash table must point to a hash table keyed by values from K. These relationships must in turn be transposed, so that the user can access all the s in S with a particular k.

The issue, of course, becomes one of implementation. We also want to maintain a low storage requirement, so each item in the J-to-K hash table should simply be a pointer into S; this is why I mentioned that each s in S has a unique key that is programmatically available, but not necessarily exposed to the user.

So, what's the best approach to doing this? It seems to me that n+1 hash tables are required, where n is the number of dimensions that the user needs to use to access a given s. A master S hash table will exist with all the items. Then each dimension will have its own hash table. For dimension J, each record would be a key-value pair with the key j, and the value being a hash table where the keys are values from K, and the values are pointers into S. Dimension K would exist similarly. When adding an item x to S, after calculating its unique key value, the add function would need to add an entry for xk under J[xj] and an entry for xj under K[xk]; the reverse would be necessary for deletions. So, for a 2-dimensional MHT, it seems that adds and deletes could be performed in 3 O(1) steps. It's not perfect, but it's better than O(n). Fortunately, the data storage requirements for the two-dimensional case are 3n (so still O(n))—each record would exist one time in each of the S, J, and K hash tables.
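Sketched in PHP, with plain associative arrays standing in for proper hash tables (the class name and unique-id scheme are invented for the purpose), the two-dimensional case looks something like this:

```php
<?php
// A sketch of the 2-dimensional MHT described above. Plain associative
// arrays stand in for the hash tables; all names are illustrative.
class MultiHashTable2D
{
    private $master = array();  // unique id => item (the S table)
    private $byJ    = array();  // j => array(k => unique id)
    private $byK    = array();  // k => array(j => unique id)
    private $nextId = 0;

    // Three O(1) steps: one write each to the S, J, and K tables.
    public function add($j, $k, $item)
    {
        $id = $this->nextId++;
        $this->master[$id] = $item;
        $this->byJ[$j][$k] = $id;
        $this->byK[$k][$j] = $id;
        return $id;
    }

    public function find($j, $k)
    {
        if (!isset($this->byJ[$j][$k])) {
            return null;
        }
        return $this->master[$this->byJ[$j][$k]];
    }

    // Every item sharing a particular j: the whole point of the exercise.
    public function findAllByJ($j)
    {
        $items = array();
        if (isset($this->byJ[$j])) {
            foreach ($this->byJ[$j] as $k => $id) {
                $items[$k] = $this->master[$id];
            }
        }
        return $items;
    }

    public function delete($j, $k)
    {
        if (isset($this->byJ[$j][$k])) {
            $id = $this->byJ[$j][$k];
            unset($this->master[$id], $this->byJ[$j][$k], $this->byK[$k][$j]);
        }
    }
}
```

findAllByJ is O(m) in the number of matching items, of course, but locating the right bucket is still a single O(1) hash lookup, which was the whole point.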

The difficulty that, I'm realising as I write this, could very easily come up is: what do you do when you need to create an n-dimensional multi-hash table where n>2? The mechanism I've described above works fine where each j in J refers to a simple hash table, but what happens when you have keys J, K, and L? What would each j in J refer to? A multidimensional array? Then updating J may necessitate updating multiple tables with the same data (which is something I'd like to avoid). I think the answer lies in what each j in J points to, but I haven't worked out how that's going to work, and I tend to do my best figuring-out of this type of thing when I can see the problem—and it being 12:30 in the morning, at home, I don't exactly have access to a gigantic whiteboard on which to work, or the opportunity to try hacking on the problem to see what works best.

But there's my approach to a 2-dimensional hash table. This is definitely something I want to try implementing, and that I want to continue working with in a bid to make an n-dimensional, any-addressable MHT. I don't know if it will win me any kind of accolades, but I think it'll be a fun problem to solve... and considering the fact that I can't find any evidence of anyone else trying to approach it this way on The Google, I'm starting to think that it might even be patentable... so I think this post would qualify as prior art!

Now I'm even more excited.

Wednesday 10 February 2010

In which the author retraces his steps… slowly.

I don’t think I’ve ever truly been ambivalent about anything before. In high school and university I garnered a reputation for being almost unflappable, for being able to let anything slide. Something went wrong, and it wasn’t a big deal. I never really experienced ambivalence, as it’s properly described—where the subject feels strongly about the object, but in opposite directions at once. Not until now, not until tonight.

I have never been simultaneously so happy to have largely abandoned Microsoft’s operating systems and Web browser for my daily use and first-line development testing, and so spectacularly frustrated. I’ve been having some difficulty with the Windows XP partition on my main computer at home for a while now, mainly because I was using a particularly poorly-designed firewall (Comodo Personal Firewall, for those keeping score—I’d advise you to never install it)… I’d attempted to upgrade the software, but the previous version refused to be removed first. Deleting it didn’t help; there were still innumerable system hooks that actually wound up making it impossible to use the Internet! I thought perhaps the firewall had rewritten the network stack, and hypothesised that upgrading the operating system to Service Pack 3 would help. Needless to say, it didn’t, but I eventually managed to excise the demonspawn firewall. Somewhere in here, I also upgraded to Internet Explorer 8—after even Microsoft started actively encouraging the switch, for safety reasons, I knew its time had finally come and I could stop supporting it.

Unwisely, I tried the new version of the firewall, and now explorer.exe has become quite possibly the least-responsive piece of software on the computer. I can’t be 100% certain that these are causally related, because I can’t find any trace of the firewall on the system, and I’ve followed the same removal steps as last time, yet Windows Explorer hasn’t improved any.

This is all just context… a secondary source of frustration, because it means my computer is running far slower than it ever did prior to all these changes. The real frustration is that I’m jumping through all these hoops (all the while trying to install Subversion, Apache and MySQL on a FreeBSD 6.0 server with a 500 MHz Celeron processor over PuTTY) so that I can track down an IE-specific rendering bug in my new contract. Unfortunately, the bug is so specific that it only appears in Internet Explorer 7! At this point, anyone who’s intimately familiar with how deeply tied Microsoft’s browser and operating system are knows exactly what’s become necessary: in order to install IE7, I have to first uninstall IE8. However, in order to uninstall IE8, I have to revert the SP3 upgrade. This is a large and significant problem, because the upgrade itself cost me more than two hours! That process is currently running in the background as I write this (as is a checkout of the current branch of the project to the small Celeron machine, dubbed mouse). I am, nonetheless, quite distinctly far from amused by Microsoft’s decision to make IE8 and SP3 so deeply integrated that, if IE8 is installed prior to SP3 (which I’m fairly sure is what happened; while Comodo had disabled the majority of my connectivity, Windows Update continued to operate), then removing the former requires removing the latter by design. It’s certainly further motivation to move to Europe!

Monday 1 February 2010

In which good news is announced

The past… roughly sixty-six hours have been pretty exciting in my neck of the woods, for a number of reasons:

  1. I met a recruiter downtown regarding a twelve-month contract situated (if memory serves) very near to my home, for a major classifieds website you’ve probably heard of. I was told that my resumé would be submitted, so hopefully I’ll be contacted for a phone interview.
  2. I heard from another recruiter, this one with disappointing, but not altogether surprising, news: the client that had been given my resumé, regarding a front-end position, said they’ll not be seeking an interview with me. Disappointing because it means no job there, but not surprising because the bulk of my skill set lies in back end development (as you’ve probably discovered, if you’ve been following my writing), with less emphasis on front end technologies. My Flash is rusty and creaky; I have no Flex or Air; I could keep going, but I’d rather not, because…
  3. Yet another recruiter got in touch with me about a brief, lucrative contract, working out of my home, picking up where another developer has had to bow out due to time pressures. He wanted to know how I’d feel about picking up a mostly-there project built on Symfony, with jQuery as the JavaScript framework: an MVC framework to which I have minimal (but positive) exposure, and the whipping-boy JS framework of comp.lang.javascript that I’ve never touched (I use Prototype in Project Alchemy and Project McLuhan). Recruiter knew this, passed on what I am familiar with and mentioned that I tested well in his company’s PHP skills test, and the client wanted to get in touch with me anyway. After a couple of hours to look at the source code, I was confident I’d be able to get myself up to speed with Symfony over the course of the weekend. I met with the client on Saturday over coffee, met with the previous developer on Sunday, and, well… I’ve already nailed down two of the end user’s major we-want-this-fixed-by-the-beginning-of-business-Wednesday bugs!

So yes, dear readers, I am (or rather, will be, once I’ve signed the contract) gainfully employed—at least for the month of February! I feel good; I feel energised… and I feel that by the middle of February I’ll have two new things I’ll be able to add to my resumé! I think my lack of previous jQuery experience has been a bit of a sticking point—everybody uses it! So now, I’ll be able to say I can use jQuery.

This gig is going to be a great stretch of my skills. I’ve already hit the ground running (thank God that Symfony’s interpretation of MVC is fairly similar to Zend’s!), so as long as I hit this Wednesday’s deadline, everything should be great… and I don’t see any reason why I won’t be able to hit it.

This was a good weekend, and one that I’ve been waiting for for quite a while now. With any luck, this contract will turn into further opportunities with this client, which will mean (fairly) regular work and more opportunities to expand what I do!

One closing note: Looking at Symfony, I have to say that I’m already pretty much in love with it. It’s got a very similar logic to Zend (and, near as I can tell, is similarly modular, which I like), but it’s got a lot of back-office automation, kind of like Cake and Rails, that unburdens the developers, and it doesn’t appear to be doing so in an ugly and inefficient way! I’m almost mad that I hadn’t found it in the summer of 2008—so many things would probably be different! Project Alchemy would probably be much farther along than it is (I’ve been having trouble with the ORM layer and articulating join logic as associative arrays). Also, it’s kind of a pity that I’ve found it now, rather than long before, because I’m also about to start learning Python, which I’ve been meaning to do, in order to move away from the Failboat that will be PHP 6.

Ahh well, can’t always do everything right the first time you do it, right? Otherwise, you’d never learn anything!

Wish me luck this month; I’m going to be awfully busy!

Sunday 24 January 2010

In which software that wasn't properly engineered is shown to have killed people... again.

My wife showed me an article in today’s online New York Times (yesterday’s print edition) that got my blood boiling—a radiation therapy machine manufactured and programmed by Varian Medical Systems wound up massively overdosing and killing two patients in 2005, just like Therac-25 did back in the eighties. The bad part is that Therac-25 is a standard case study for CS students throughout the continent (and probably worldwide), so this shouldn’t have happened in the first place. The worst is that Varian Medical Systems said it was pilot error. Now, as a software developer and a computer scientist, incidents like these are particularly poignant, as they demonstrate the urgent need for a cultural sea change within the industry.

A quick backgrounder: Therac-25 was involved in six known massive overdoses of radiation, at various sites, that killed two patients. A careful study of the machines and their operating circumstances was conducted and it was ultimately revealed that the control software was designed so poorly that it allowed almost precisely the same problem outlined in the Times’ article. In Therac-25’s case, a higher-powered beam was able to be activated without a beam spreader in place to disperse the radiation, much like how the Varian Medical Systems machine allowed the collimator to be left wide open during therapy. Furthermore, the Therac-25 user interface, like the Varian interface, was found to occasionally show a different configuration to the operator than was active in the machine, as the article mentions: “when the computer kept crashing, …the medical physicist did not realize that her instructions for the collimator had not been saved.” The same issue occurred with Therac-25, where operator instructions never arrived at the therapy machine. Finally, both interfaces failed to prevent potentially lethal configurations without some kind of unmistakable warning of the danger to the operator.

The Therac-25 incident quickly became a standard case study in computer science and software engineering programs throughout Canadian and American universities, so the fact that this problem was able to happen again is shocking, as is the fact that Varian Medical Systems and the hospitals in question deflected the blame onto the operators. The fact of the matter is that Varian’s machine is one that, when it fails to operate correctly, can maim or even kill people. In such a system, there is simply no room for operator error and it must be safeguarded against. Unfortunately, in shifting the blame, Varian Medical Systems has denied their responsibility in untold more tragedies.

Had a professional engineer been overseeing the machine’s design, these deaths could have been prevented. Unfortunately, most professional engineering societies do not yet recognise the discipline of software engineering, primarily because it is exceedingly difficult to define exactly what software engineering entails. For decades, software systems have been in positions where life and death are held in the balance of their proper operation, and it is critical, in these cases, that professional engineers be involved in their design. These tragedies underscore the need for engineering societies to begin licensing and regulating the proper engineering of such software. By comparison, an aerospace engineer certifies that an airplane’s design is sound, and when an airframe fails, those certified plans typically reveal that the construction of (or more often, later repairs to) the plane did not adhere to the design. Correspondingly, in computer-controlled systems that can fail catastrophically, as Varian’s has, it is imperative that a professional engineer certify that the design—and in a computer program’s case, the implementation—is sound.

Varian Medical Systems’ response—to merely send warning letters reminding “users to be especially careful when using their equipment”—is appallingly insufficient. Varian Medical Systems is responsible for these injuries and deaths, due to their software’s faulty design and implementation, and I urge them to admit their fault. I recognise that it would be bad for their business, but it is their business practices that have cost lives and livelihoods. I think the least they could do is offer a mea culpa with a clear plan for how they will redesign their systems to prevent these incidents in the future.

The IEEE, the ACM, and professional engineering societies need to sit up and take notice of incidents such as this. That they are still happening, even with the careful studies that have been performed of similar tragedies, is undeniable proof that software engineers are necessary in our ever more technologically dependent society, and that software companies must, without exception, be willing to accept the blame when their poor designs cause such incidents. Medical therapy technology must be properly engineered, or we will certainly see history continue to repeat itself.

If this reads like a submission to the Op-Ed page, there’s a very good reason for that! It was, but they decided not to publish it. Oh well.

Friday 22 January 2010

In which the author reflects on his career

Without trying to sound like a couple of cartoon nine-year-olds, I realised something today—I’ve done a lot of pretty cool stuff in this industry. I might be young, but there’s something to be said for starting early, and always trying to challenge yourself. Let’s see what I can remember… but I warn you, this is a long, long post.

  • Portico/Project McLuhan CMS
    Late last year, I delivered, to a freelance client, an early build of an original CMS that I called Portico. I refer to it as Project McLuhan when I’m coming up with new features, but it’s certainly not just a random exercise—it’s currently being used to serve up bellabalm.com, the website of one of my wife’s colleagues. Since delivery, I’ve been working on enhancements to the user interface (lots of use of AJAX) as well as the core functionality, to make it a little more usable, but it was pretty complete on delivery. It links to PayPal for purchase completion, and Canada Post for shipping cost calculation, and honestly, the shipping calculator was probably the hardest to write, because it's all XML, and I hadn't used XSLT for about six months before I started into it.
  • Project Alchemy toolkit
    In 2008 I worked for a company I’d describe as a “social advertising agency”. At the time, I was one of two developers, and we jointly decided to standardise the company on Zend Framework, based primarily (if my memory serves) on the fact that ZF was created by the same group of programmers who wrote PHP, so we had good reason to believe that it would be the most effective for the job. A few months later, a couple of other developers were brought on who knew Ruby on Rails and, with it, CakePHP. I’ll say this for Cake: it certainly speeds up development for developers familiar with its API, and it seems to be a bit more clearly documented than Zend. However, from my work with it, it’s not as efficient in terms of database usage, so I can’t really say I’m a fan (also, hearing one of these Ruby guys say, on about a weekly basis, “I HATE CAKE!” amused me). I did find really quickly, though, that the Zend API is somewhat lacking. Project Alchemy was born out of my desire to merge the two—to take everything I liked about Cake, and make it available in a Zend context, along with some other features that I’ve long thought were important. Of particular note is a full audit trail of all database changes (and optionally, queries), which is easier than you might think to write (there’s a sketch of the idea after this list), and will benefit you in the long run in case Something Bad happens. I’m also trying to decide on the best approach to implement model caching; a course in operating system design late last year brought the notion of shared memory into my mind, so now it’s just a matter of actually putting it together. Project Alchemy’s by no means complete, in terms of everything I want to do with it, but the essentials are all there, and it’s been an amazing learning process.
  • Innumerable changes to CIBC.com, including a core role in the October 2009 redesign
    I was on contract at CIBC for quite a while, but in a purely front-end role. Nevertheless, I got exposed to some fun technologies while I was there—XSLT, XPath, XForms—that I’ve been meaning to investigate but otherwise haven’t had the time to properly study. When the project to update the website’s layout and accessibility came up, I was glad to be involved in the project, because I got a chance to really stretch and refresh my front-end skills, including some neat CSS and JavaScript techniques. I refamiliarised myself with basic AJAX, and I’ve been using it a lot in Alchemy and McLuhan.
  • The back-end image manager for the “Radio Perez” mobile podcast site and iPhone app.
    While I was working at that ad agency, I was asked to write a pretty bare-bones CMS to serve up images and ads to an iPhone app, as well as mobile-ready pages. Two cool things came out of this: I got to use the GPS data from the iPhone to provide geographically-relevant content (specifically, serving up a particular button graphic depending on your proximity to one of the client’s radio transmitters), and I got to use the scaffolding technique to get the required functionality going early, before worrying about making it look right. I first saw that technique in use by the Cake developers at the agency, thought it was a good idea, and tried it out. Turns out that it’s a great idea, and I’ve been using it ever since. It helped me deliver bellabalm.com in time without worrying too much about making the back end look exactly perfect. It did what it needed to do well before launch; it just wasn’t too pretty.
  • An inter-social-network photo album (ad agency)
    This one was only partially completed before the Radio Perez thing came up, so it got shelved (there were only a few developers in the company at the time), but it was going to be a pretty cool photo album that users could get at from Facebook, MySpace, iGoogle… wherever, really. I did my best to make it work like Facebook’s photo albums, but I couldn’t figure out how to make photos draggable to reorder them (might have been because I hadn’t properly researched JavaScript frameworks at the time), so instead I implemented a priority queue in JS to update the sequence numbers correctly in real time. As I said, it never got completed while I was there.
  • An inter-social-network discussion board (ad agency)
    This was the first project the company did with the technique of opening up a social app to multiple social networks simultaneously. I think it’s gone on to become one of their flagship products, so I have to laugh that I came up with the original design, and figured out all the potential problems and issues in doing this kind of thing, and now they’re making a mint… and then I get sad, because I remember that I’m still bound to a noncompetition agreement and I probably can’t do anything with that knowledge for a while yet. RIM used it, MTV used it, and I think Astral Media used it before it got refactored from Zend to Cake. It doesn’t seem to be running in its original form anywhere anymore, so it looks like it’s been pretty thoroughly rewritten.
  • RMAtic Return Merchandise Authorisation tracker
    My first attempt at writing an issue tracker, and if I do say so myself, a successful one! This is where I first got the chance to write the audit trail I mentioned above with Project Alchemy, because if anything needs that kind of tracking, it’s an issue tracker. Some weird bit rot that I haven’t been able to isolate has set in, and it won’t let me update the RMAs in the database, but if I ever want to do an issue tracker again, I’d be rewriting it from scratch; this was done before I had a chance to use Zend, so all the links between things are procedural. It’s not super-pretty code, when you get down to it. But then, I honestly didn’t expect to ever have to see it again, so I wasn’t concerned with code re-use. I’m told it takes three times as much effort to write reusable code as it does to write a one-off, and that’s exactly what this became! If you’re curious, though, check it out. The username and password are both “admin”. It’s still branded with the client’s livery.
  • A client relationship and project manager
    This was the last project I worked on at a custom software company north of the city. Based in equal parts on the existing in-house CRM, their in-house CMS, and Basecamp, the notion was that I’d rewrite the CRM to be something that could be resold. I very carefully wrote up a specification for everything it was going to do, how everything would interact, and how the permissions model would operate, drew up a huge E-R diagram for it, and got to work. I was easily 80-90% of the way through it—and all the core stuff was there; I think I was down to making it easy to install add-ins, and maybe one or two other features that I can’t quite recall—when the contract ended, for a variety of reasons. It was hugely ambitious, I was the only person on the project, and I think I did a great job of it. It was able to handle all the different clients the company had, with infinitely-overlappable user groups, all the various projects, work tasks (it had a built-in issue-tracker), notes, phone calls, anything billable… invoicing. The whole shebang, with a two-dimensional access control scheme that depended on what permissions the user/user group had on individual items and their parent items. Hugely complex, and I’ve realised in retrospect that (a) role-based access control is a far better idea in terms of software complexity, database load, and usability, and (b) the database table that held the permissions really needed to be indexed and stored with a different storage engine. You live, you learn. I’ve been meaning to write myself a simple CRM for my freelance clients, and the little snippets of memory I’ve still got about that project should at least help me get started. It’ll be done in either Project Alchemy or even Django, if I’ve become familiar enough with Python by the time I get started.
  • A shopping cart for an existing CMS (custom software company)
    The client wanted to add a store to their website, so I had to learn the old version of the company’s CMS in a hurry and add a new store module to it. It wasn’t easy, particularly when trying to put the shopping cart together, and make sure that all the right information appeared on the right pages, but I did it, and it worked, and allowed customers to select from a bunch of different customisable options, as well as customising certain products. I realised, in the last month, that the user experience certainly left something to be desired, but again… you live, you learn. I think it was, more than anything, an issue of nomenclature. I used “option” to refer to, say, a specific T-shirt size, and “option group” to refer to offering size as an option. I just didn't make it very intuitive, and I think I know how I’d do it better in the future.
  • An inventory manager for a warehousing and shipping company (custom software company)
    This client stored materials for their clients, and needed a way to allow those clients to see how many boxes of each item they had warehoused, order some to be shipped, and be alerted when the level was dropping low so that new items could be made. Fairly straightforward, and it served as a basis for the issue tracker I built into the CRM later. Users could get things shipped to the address of their choice, and it would remember past addresses to make things easier. I got the opportunity to put together a PDF template that exactly matched their existing shipping labels, so that was fun. First project I did there.
  • A really elemental blogging tool for my own use.
    This was bad, hacky code. I put in only what features I needed, and it also served up static content stored in HTML files within a specific directory structure. I just wanted to write my own blog without having to edit the HTML every time. There were comments, but I started getting spammed, so I had to disable them. The only feature I actually built was writing new blog posts. Couldn’t even edit after the fact. But it was also the first database-driven thing I tried writing… back in first year. I wanted to learn what it took to do it. There may be a reason why I use Blogger for it now… it’s a more complex problem than I wanted to get into at the time. Maybe later, though.
  • While I was in high school, I came up with a way of handling a multi-page ticket-ordering process for a theatre that didn’t require sessions. I didn’t really have access to the server that hosted it, and I didn’t understand session cookies at the time anyway (not that I could use them without being able to get at the server), so I used a frameset, with an invisible frame full of hidden inputs. The hidden inputs were updated by JavaScript, and the last page emailed their contents to the box office. It was quite interesting, and it was superseded a while ago.
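Since I keep mentioning that audit trail: here’s the basic idea. This is a minimal sketch, not Project Alchemy’s actual code; the table and function names are invented for illustration:

```php
<?php
// A minimal sketch of the audit trail idea; names invented for illustration.
//
// CREATE TABLE audit_log (
//   id INT AUTO_INCREMENT PRIMARY KEY,
//   table_name VARCHAR(64), row_id INT, action VARCHAR(10),
//   changes TEXT, user_id INT, created_at DATETIME
// );
function auditLog(PDO $db, $table, $rowId, $action, array $before, array $after, $userId)
{
    // Record only the columns that actually changed.
    $changes = array();
    foreach ($after as $column => $value) {
        $old = isset($before[$column]) ? $before[$column] : null;
        if ($old !== $value) {
            $changes[$column] = array('from' => $old, 'to' => $value);
        }
    }
    if (!$changes && $action === 'update') {
        return;  // a no-op update isn't worth a row
    }
    $stmt = $db->prepare(
        'INSERT INTO audit_log (table_name, row_id, action, changes, user_id, created_at)
         VALUES (?, ?, ?, ?, ?, NOW())'
    );
    $stmt->execute(array($table, $rowId, $action, serialize($changes), $userId));
}
```

Hook something like that into the model layer’s save and delete paths, and when Something Bad happens you can see exactly what changed, when, and at whose hand.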

That’s about it for proper projects that really saw the light of day and that I did something interesting with. A few things I came up with that either didn’t go anywhere or just weren’t really a technical achievement follow:

  • You know how Jillian Michaels, one of the coaches on The Biggest Loser, now has a website where you can track your weight-training goals online, and interact with other people trying to do the same thing? I had that same idea while taking a strength training class in high school. Many, many moons ago. Used it as a project idea. Shelved it because I didn’t have the money to start it up as a business and advertise it. Now I’m kicking myself.
  • In grade school, I had the opportunity to make the school’s website. I wanted to coordinate with the teachers to put lesson plans up online, so that parents could keep aware of what their children were learning. Nobody really wanted to go along with it. In my final year of high school, I tried designing and implementing the same solution, at a high school level, in a way that would empower the teachers to put up whatever content they wanted… it was really more ambitious than I expected, and all I have left are a few design documents. Then I started university, and discovered that WebCT and Blackboard existed… they do exactly what I was trying to make for my school. D’oh!
  • Also in my last year of high school, I had the “brilliant” idea of trying to create a bash-like shell for MS-DOS… in Perl. Like the course management tool I just described, this was a far more ambitious idea than I ever expected it to be, probably because at the time, I didn’t have much experience with Unix, and didn’t understand the domain of the problem. But it serves as a good example of the level of idea I tend to come up with: BIG. Sometimes revolutionary, sometimes just complex, but I think big. I like big problems that require a lot of thought.
  • Back in the bad old days, just when DVDs had first come out, my cousin and I designed a website for a video store in our hometown. Nothing very revolutionary there, but we had an idea that no one else did: show the movie trailers on the website. It’s obvious now, sure, but it wasn’t in 1997; only a small fraction of movies even had websites to begin with!

So, yeah… I’ve been around. I’ve seen quite a bit, now that I think of it, and there’s probably a lot of stuff that I can’t properly remember that’s getting left out, fairly or unfairly. I should remember this.

Saturday 16 January 2010

In which opportunities are taken

The coming of the new year, like everyone is always told, has shown itself to be a time of new opportunities. However, what’s unusual this time around is that I’ve also been in a wonderful position to take the opportunities with which I’ve been presented. The software I’ve been slowly writing over the past roughly six months (dubbed, depending on which specific functions I’m working on, either Project Alchemy or Project McLuhan) has forced me into a position to really clean up some of the core things—that is, Project Alchemy—that I originally put off implementing for the sake of getting it out the door, and an outside project has finally given me an opportunity to make good on my threat to learn Python.

First things last.

Given the nature of this blog so far, and the fact that I’ve been using it to talk about technical matters from a reasonably professional point of view, I kind of wish this opportunity to learn Python had a more professional set of circumstances. On the other hand, it’s not as though I’ve ever tried to hide this particular circumstance; I’ve even discussed it in this very blog. In my continuing attempts to finish my BCSc, I’m required to take an introductory software engineering course, and I’m part of a development team that will be making a product virtually guaranteed to be delivered over the web. At first, I thought that I might be able to make an argument for integrating Project Alchemy into the codebase to speed up development, but then I discovered that I’m the only member of the team who has no experience with Python, and no one else seems to know PHP. As a result, it’s kind of a given that we’ll be using Python, so I have a fairly steep learning curve ahead of me to get caught up. The University of Toronto’s Computer Science department uses Python as the language of choice for its first-year classes, so everyone else has had at least a year of active (albeit toy) development with the language, and I have less than two-and-a-half months on this project in which to provide real, tangible support. No time to lose!

On the other hand, I think I have more project planning experience than the rest of the team—I remember how slapdash my programming practice was at Dalhousie, but this is a real-world product—so I think I might be able to make up for my gaps in language experience by taking care of more of the behind-the-scenes things in the management of the project. I’ve been kind of spearheading that already, but it hasn’t really been set in stone who’s doing it. I guess we’ll see how this works out. With any luck, our client will get a solid piece of code and I’ll get some realistic experience in project management, which can only ever serve to benefit me professionally, for reasons that seem fairly obvious.

In terms of Project Alchemy, like I said, I put off implementing Alchemy functions for the sake of getting Project McLuhan (and more specifically, my client’s website) out the door on schedule. As a result, fairly important architectural things like multi-model find calls were put on the back burner while I dealt with end-user issues like a shipping rate calculator. Now that it’s been delivered, I’m trying to clean up the McLuhan interface to improve its usability, and I’m realising just how critical that functionality is.

Admittedly, part of my initial hesitance in implementing it up-front was a certain degree of intimidation by the problem—a large part of the design involved coming up with a way of compartmentalising WHERE clauses: rather than simply requiring the find conditions to be input as a string, abstracting them out to better insulate against malformed data from the end user. As it turned out, it wasn’t exceedingly difficult (with two caveats: I still have yet to come up with a way of handling subqueries for the IN/NOT IN operators, and I’m not convinced that I’ve done it in a particularly efficient manner), but now I have to go back through all my controllers and models and rewrite the find calls to use the new schema. It’s a bit of a pain—yet another example of why Doing It The Right Way First will invariably save you time in the long run—but it’s ultimately worth it, because it’s also providing a good opportunity to review my own code and look for bad algorithms.
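To give a flavour of what I mean by compartmentalised conditions, here’s a simplified sketch (not Alchemy’s actual code; the function name and array grammar are just for illustration): conditions go in as nested arrays, and come out as a parameterised WHERE clause.

```php
<?php
// A simplified sketch of compartmentalised WHERE clauses: conditions are
// nested arrays instead of raw strings, and come out parameterised.
// This is illustrative, not Project Alchemy's actual code.
function buildWhere(array $conditions, array &$params, $glue = 'AND')
{
    $clauses = array();
    foreach ($conditions as $field => $value) {
        if ($field === 'OR' || $field === 'AND') {
            // A nested group, e.g. array('OR' => array('a' => 1, 'b' => 2))
            $clauses[] = '(' . buildWhere($value, $params, $field) . ')';
        } elseif (is_array($value)) {
            // A list of literal values becomes IN (...); subqueries are
            // exactly the part I haven't solved yet.
            $placeholders = implode(', ', array_fill(0, count($value), '?'));
            $clauses[]    = "$field IN ($placeholders)";
            $params       = array_merge($params, array_values($value));
        } else {
            $clauses[] = "$field = ?";
            $params[]  = $value;
        }
    }
    return implode(" $glue ", $clauses);
}

// Usage:
$params = array();
$where  = buildWhere(array(
    'type' => 'product',
    'OR'   => array('status' => 'active', 'featured' => 1),
), $params);
// $where:  "type = ? AND (status = ? OR featured = ?)"
// $params: array('product', 'active', 1)
```

The parameterised output is what buys the insulation against malformed end-user data: nothing the user types ever gets concatenated into the SQL itself.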

So, yeah, there have been a couple of great opportunities so far this year that I feel I’ve really been able to take advantage of. One great opportunity to learn a language I’ve been meaning to learn for more than a year, and another to finish up something important and learn a good deal more about what really goes into writing an MVC framework. This can only possibly be good for me, so stay tuned to see what develops!