Showing posts with label PHP. Show all posts

Tuesday, 29 April 2014

Upgrade your models from PHP to Java

I’ve recently had an opportunity to work with my team’s new developer, as part of the ongoing campaign to bring him over to core Java development from our soon-to-be end-of-lifed PHP tools. Since I was once in his position, only two years ago—in fact, he inherited the PHP tools from me when I was forcefully reassigned to the Java team—I feel a certain affinity toward him, and a desire to see him succeed.

Like me, he also has some limited Java experience, but nothing at the enterprise scale I now work with every day, and nothing especially recent. So, I gave him a copy of the same Java certification handbook I used to prepare for my OCA Java 7 exam, as well as any other resources I could track down that seemed potentially helpful. This sprint, he’s physically, and semi-officially, joined our team, working on the replacement for the product he’s been maintaining since he was hired.

And just to make the learning curve a little bit steeper, this tool uses JavaServer Faces. If you’ve developed for JSF before, you’re familiar with how much of a thorough pain in the ass it can be. Apparently we’re trying to weed out the non-hackers, Gunnery Sergeant Hartman style.

So, as part of his team onboarding process, we picked up a task to begin migrating data from the old tool to the new. On investigating the requirements, and the destination data model, we discovered that one of the elements this task expects has not yet been implemented. What a great opportunity! Not only is he essentially new to Java, he’s also new to test-driven development, so I gave him a quick walkthrough of the test process while we tried to write a test for the new features we needed to implement.

As a quick sidebar, in trying to write the tests, we quickly discovered that we were planning on modifying (or at least starting with) the wrong layer. If we’d just started writing code, it probably would have taken half an hour or more to discover this. By trying to write the test first, we figured this out within ten minutes, because the integration points rapidly made no sense for what we were trying to do. Hurray!

Anyway, lunch time promptly rolled around while I was writing up the test. I’d suggested we play “TDD ping-pong” (I write a test, then he implements it), and the test was mostly set up. I said I’d finish up the test on the new service we needed, and stub out the data-access object and the backing entity so he’d at least have methods to work with. After I sent it his way, I checked in about an hour later to see how he was doing, and he mentioned something that hadn’t occurred to me, because I had become so used to Java: he was completely unfamiliar with enterprise Java’s usual architecture of service, DAO and DTO.

And of course, why would he be? I’m not aware of any PHP frameworks that use this architecture, because it’s based on the availability of dependency injection, compiled code and persistent classes that is pretty much anathema to the entire request lifecycle of PHP. For every request, PHP loads, compiles, and executes each class anew. So pre-loading your business model with the CRUD utility methods, and operating on them as semi-proper Objects that can persist themselves to your stable storage, is practically a necessity. Fat model, skinny controller, indeed.

Java has a huge advantage here, because the service, the DAO, and the whole persistence layer never leave memory between requests; only request-specific context does. Classes don’t get reloaded until the servlet container gets restarted (unless you’re using JSF, in which case they’re serialised onto disk and rehydrated when you restart, for… some reason). So you can write your code so that your controller asks a service for records, and the service calls out to the DAO, which returns an entity (or a collection thereof), or a single field’s value for a given identifier.

This is actually a really good thing to do, from an engineering perspective.
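To make that layering concrete, here is a minimal, hypothetical sketch. All the names (Widget, WidgetDao, WidgetService) are invented, and an in-memory map stands in for the real persistence layer; the point is only where each responsibility lives.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The entity is a plain bean: no persistence logic at all.
class Widget {
    private final long id;
    private final String name;
    Widget(long id, String name) { this.id = id; this.name = name; }
    long getId() { return id; }
    String getName() { return name; }
}

// The DAO knows *how* things are stored (here, a map standing in for
// JPA/Hibernate) and nothing about business rules.
class WidgetDao {
    private final Map<Long, Widget> store = new HashMap<>();
    void save(Widget w) { store.put(w.getId(), w); }
    Optional<Widget> findById(long id) { return Optional.ofNullable(store.get(id)); }
}

// The service holds the business logic and delegates storage to the DAO.
class WidgetService {
    private final WidgetDao dao;
    WidgetService(WidgetDao dao) { this.dao = dao; }
    Widget create(long id, String name) {
        Widget w = new Widget(id, name.trim()); // a token bit of business logic
        dao.save(w);
        return w;
    }
    String nameFor(long id) {
        return dao.findById(id).map(Widget::getName).orElse("(unknown)");
    }
}

public class LayeringDemo {
    public static void main(String[] args) {
        WidgetService service = new WidgetService(new WidgetDao());
        service.create(1L, "flux capacitor");
        System.out.println(service.nameFor(1L)); // flux capacitor
    }
}
```

The controller would only ever talk to WidgetService; nothing above the DAO knows or cares what the storage engine is.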

For a long time, with Project Alchemy, I was trying to write a persistence architecture that would be as storage-agnostic as possible—objects could be persisted to database, to disk, or into shared memory, and the object itself didn’t need to care where it went. I only ever got as far as implementing it for a database (and even then, specifically for MySQL), but it was a pretty valiant effort, and one that I’d still like to return to, when I get the opportunity, if for no other reason than to say I finished it. But the entity’s superclass still had the save() and find() methods, which meant that persistence logic was, at some level in the class inheritance hierarchy, part of the entity. While effective for PHP, in terms of performance, this unfortunately doesn’t result in the cleanest of all possible code.
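For contrast, that fat-model shape, where the superclass gives every entity save() and find(), looks roughly like this. I’ve sketched it in Java for consistency with the examples above; ActiveRecord, Note, and the in-memory storage are all invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Persistence logic lives in the entity hierarchy itself: every subclass
// inherits the ability to persist and retrieve itself.
abstract class ActiveRecord {
    private static final Map<String, ActiveRecord> storage = new HashMap<>();

    abstract String key();

    // The entity persists itself -- convenient in PHP, messy at scale.
    void save() { storage.put(key(), this); }

    static ActiveRecord find(String key) { return storage.get(key); }
}

class Note extends ActiveRecord {
    final String id;
    final String body;
    Note(String id, String body) { this.id = id; this.body = body; }
    String key() { return "note:" + id; }
}

public class ActiveRecordDemo {
    public static void main(String[] args) {
        new Note("42", "remember the milk").save();
        Note n = (Note) ActiveRecord.find("note:42");
        System.out.println(n.body); // remember the milk
    }
}
```

Notice that Note can never not know about storage: the knowledge is baked into its ancestry, which is exactly the coupling the service/DAO split removes.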

Using a service layer provides a separation of concerns that the typical PHP model doesn’t allow for. In that model, the entity that moves data out of stable storage and into a bean also contains both business logic and an awareness of how it’s stored. It really doesn’t have to, and shouldn’t. Talking to the database shouldn’t be your entity’s problem.

But it’s still, overall, part of the business model. The service and the entity, combined, provide the business model, and the DAO just packs it away and gets it back for you when you need it. These two classes should be viewed as parts of a whole business model, within the context of a “Model-View-Controller” framework. The controller needs to be aware of both classes, certainly, but they should form two parts of a whole.

Besides, if you can pull all your persistence logic out of your entity, it should be almost trivial to change storage engines if and when the time comes. Say you needed to move from JPA and Hibernate to an XML- or JSON-based document storage system. You could probably just get away with re-annotating your entity, and writing a new DAO (which should always be pretty minimal), then adjusting how your DAO is wired into your service.
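A hypothetical sketch of why that swap is cheap: if the service only sees a DAO interface, a new storage engine means a new implementation and a wiring change, not a new model. The names and the toy "engines" below (maps standing in for a relational store and a JSON document store) are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

interface NoteDao {
    void save(long id, String body);
    String load(long id);
}

// Stand-in for the JPA/Hibernate-backed implementation.
class RelationalNoteDao implements NoteDao {
    private final Map<Long, String> rows = new HashMap<>();
    public void save(long id, String body) { rows.put(id, body); }
    public String load(long id) { return rows.get(id); }
}

// Stand-in for a JSON-document implementation: same contract, new storage.
class DocumentNoteDao implements NoteDao {
    private final Map<Long, String> docs = new HashMap<>();
    public void save(long id, String body) { docs.put(id, "{\"body\":\"" + body + "\"}"); }
    public String load(long id) {
        String doc = docs.get(id);
        // Naive extraction, fine for this sketch's un-escaped bodies.
        return doc == null ? null : doc.substring(9, doc.length() - 2);
    }
}

public class SwapDemo {
    // "The service": it has no idea which engine it was handed.
    static String roundTrip(NoteDao dao) {
        dao.save(1L, "hello");
        return dao.load(1L);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(new RelationalNoteDao())); // hello
        System.out.println(roundTrip(new DocumentNoteDao()));   // hello
    }
}
```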

Try doing that in PHP if your entity knows how it’s stored!

One of these days, I’ll have to lay out a guide for how this architecture works best. I’d love to get involved in teaching undergrad students how large organisations try to conduct software engineering, so if I can explain it visually, then so much the better!

Sunday, 13 January 2013

On learning good practices the hard way

At the beginning of December, I intended to write an article here about the perceived value of skills certification in the industry, in light of my own recent certification as an Oracle Certified Associate Java SE 7 Programmer. It’s something I’m very glad I did, and it was at my manager’s virtual insistence… but that same manager has also told me that, when perusing resumés to decide who to interview and who to pass over, he places no extra value on applicants with vendor certifications. It’s a bit of a paradox at first, and I promise I will actually publish that article.

The problem is that, as usual, life got in the way. Life, this year so far, has also strongly got in the way of any new releases of Project Seshat. There have been several bugs that I’ve discovered and fixed, with the help of a couple of good friends, but I haven’t really been able to deploy the most recent work, because of two issues—my son has been sick ever since we returned from our Christmas holiday, and one of the components of this release is turning out to be vastly more complicated than I originally expected.

In retrospect, it’s becoming apparent that I ought to leave out the new feature, deploy the bug fixes for version 0.1.3, and continue on the feature for 0.2.0.

That late realisation aside, like I said, a new component is somewhat complicating matters for me. I became dissatisfied with how I’ve been configuring request mapping a long time ago, and had left a mental note to clean up the technical debt; I was using Zend Framework’s static routes to associate this URL pattern with that controller method. That works reasonably well in most other applications I’ve written, but my desire to use the first part of the URL pattern to differentiate between UIs created a wrinkle large enough that I decided it would (eventually) be more convenient to write a URL decoder to create the route instead.
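A decoder along those lines might look like the following minimal sketch (in Java rather than PHP, for consistency with what I write at work). The Route shape, the defaults, and the rule "first segment selects the UI, the rest map to controller and action" are my own assumptions for illustration, not Zend Framework's API.

```java
public class UrlDecoder {
    static final class Route {
        final String ui, controller, action;
        Route(String ui, String controller, String action) {
            this.ui = ui; this.controller = controller; this.action = action;
        }
    }

    // "/mobile/note/view" -> ui=mobile, controller=note, action=view,
    // with sensible defaults when segments are missing.
    static Route decode(String path) {
        String[] parts = path.replaceAll("^/+|/+$", "").split("/");
        String ui = parts.length > 0 && !parts[0].isEmpty() ? parts[0] : "desktop";
        String controller = parts.length > 1 ? parts[1] : "index";
        String action = parts.length > 2 ? parts[2] : "index";
        return new Route(ui, controller, action);
    }

    public static void main(String[] args) {
        Route r = decode("/mobile/note/view");
        System.out.println(r.ui + " " + r.controller + " " + r.action);
    }
}
```

The appeal over a pile of static routes is that the UI prefix falls out of the decoding for free, instead of being repeated in every route definition.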

I worked on the decoder over the holiday, when I had an hour or two here and there after my son had gone to sleep. I implemented the whole thing using test-driven development principles (and fixed a few quirks in my homebrewed test framework while I was at it), and promptly discovered two things:

  1. My original understanding of Zend Framework’s Front Controller and dispatching process was flawed. This may be due to my ever-increasing familiarity with Spring Web. I also happen to disagree with how Zend is doing things, but then, that’s largely the point of Project Alchemy—to create my own PHP development framework, based on my needs, by replacing parts of Zend Framework as I find they either aren’t doing what I want, or I just don’t like the API and don’t want to deal with it any more.

  2. The URL decoder could easily be used to fix a hack that I put in place to answer the question of which Notebook to link back to in the “Back” button in the interface. Simply leaving it up to the browser’s Back button is insufficient; I want this button to really be an Up button, the way that the top-left button in iOS and Android apps works. Again, this is part of the point of how I’m writing Project Seshat; I want to write one set of back-end code, and apply an appropriate set of layouts, stylesheets and JavaScript to wrap the application in an idealised wrapper for the usage environment. Whatever device is used, it should work and feel like it was always intended to work on that device.

So, in trying to implement it there—by decoding the Referer URI—I thought I could really easily derive the Correct Value of the Up button. The problem now is that that isn’t remotely the case, given the navigation paradigm I’m currently using, and intend to keep using. For the desktop, and probably for more capable mobile interfaces, I want the user to be able to navigate to a Note either through a Notebook chain or through a Tag. Unfortunately, the way I’ve implemented it so far has created some circular navigation problems that also run somewhat counter to iOS and Android navigation recommendations. I’m still trying to decide what the best approach is, and it feels like there are several options available to me.

Naturally, this is a pretty big issue that needs some pretty particular and dedicated thought; I really don’t want to just wing it. So, between my son being sick and my needing to get some sleep myself, I haven’t especially been able to take the time I want to take. That, in turn, has cost me some development momentum. I also apologise to my testers for leaving a couple of bugs up there for them to deal with while they try it out. I promise, a deployment is coming with bug fixes. Clearly, I got ambitious.

So, at the end of the day, what’s the lesson to be learned here? I think there are several:

  1. Stop taking notes in my head, and focus on writing down my thought process, so I can come back to it easily later (this is valuable in any professional’s working life, particularly software developers’, and I feel like being on holiday dropped me off the wagon a bit).
  2. Don’t ever combine bug fix releases with feature releases; if it takes longer than anticipated to fix the bugs (assuming you aren’t on a short iteration and deployment schedule), the bugs will remain in production too.
  3. Have storyboards for your interface, and understand the style guide(s) you intend to adhere to when you plan out your interaction flow.
  4. When designing new features, write down what they will do and how they’ll be used before you get started, instead of making it up as you go; it’s too easy to code yourself into a corner that way.

There are probably others, but I also need to sleep. I’m accepting suggestions. If nothing else, if I can’t be a good example, I can at least be a hell of a warning!

Saturday, 27 October 2012

On monitors and error detection

Earlier today, a colleague and I were discussing monitoring tools for Web services. He recently joined our team as a systems administrator, and I was filling him in on a homebrew monitoring service I put together a couple of years ago, to cover a gap in our existing monitor’s configuration, done in the spirit of Big Brother. He had praise for its elegance, and we joked a bit about reusing it outside the company, the fact that it would need to be completely rebuilt in that case (since, though it wasn’t composed of original ideas, just a merger of Big Brother and Cacti, it remains the intellectual property of $EMPLOYER$), and whether or not I would even need such a service for Prophecy.

After thinking about it briefly, I realized that not only will Project Seshat deserve some kind of monitoring once I install it on my server—I guess I’ll just add that to the pile of TODOs—but I remembered that I have a WordPress instance running for the Cu Nim Gliding Club, in Okotoks, Alberta. Surely a production install of WordPress deserves monitoring, in order to make sure that Cu Nim's visitors can access the site.

So, while waiting at a restaurant for my wife and our dinner guests to arrive, I took to The Internet to look for any existing solutions for monitoring WordPress with, say, Nagios. I may not be familiar with many monitors, but I know enough about Nagios to know that it works well with heartbeats—URIs that indicate the health of a particular aspect of a service.

The first hit I found that wasn’t a plugin for one of the two was a blog entry describing how to manually set up a few monitors for a local WordPress instance. It explained how to configure Nagios to run a few basic service checks: that the host in question can serve HTTP, that it can access the MySQL server, and that WordPress is configured, via a single check on the homepage.

To me, this seems woefully incomplete. A single check to see that anything is returned by WordPress, even if you are separately checking on Apache and MySQL, strikes me as being little more than an “allswell” test. Certainly, success of this test can be reasonably inferred to indicate good health of the system, but failure of this test could mean any number of things, which would need to be investigated to determine what has gone wrong, and the priority of the fix.

When I use a monitoring system, I want it to be able to tell me exactly what went wrong, to the best of its ability. I want it to be able to tell me when things are behaving out of the ordinary. I want it to tell me that, even though the page loaded, it took longer than some threshold that I've set (which would probably warrant a different level of concern and urgency than the page not loading at all, which would be the case with a single request having a short timeout). In short, I want more than just the night watch to call out, “twelve o’clock and all’s well!”.
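The distinction I’m drawing between "down" and "slower than my threshold" can be sketched in a few lines. This is not Nagios’s API, just an invented three-state check; the Supplier-based page fetch and the threshold parameter are assumptions for illustration.

```java
import java.util.function.Supplier;

public class HealthCheck {
    enum Status { OK, SLOW, DOWN }

    // fetchPage stands in for an HTTP GET of the monitored page: it returns
    // true if the page loaded, false or throws if it didn't.
    static Status check(Supplier<Boolean> fetchPage, long thresholdMillis) {
        long start = System.nanoTime();
        boolean up;
        try {
            up = fetchPage.get();
        } catch (RuntimeException e) {
            return Status.DOWN; // request failed outright
        }
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        if (!up) return Status.DOWN;
        // Loaded, but slower than the configured threshold: a different
        // level of urgency than not loading at all.
        return elapsedMillis > thresholdMillis ? Status.SLOW : Status.OK;
    }

    public static void main(String[] args) {
        System.out.println(check(() -> true, 2000)); // OK for an instant fetch
    }
}
```

The point is that SLOW and DOWN should page you differently; a single pass/fail probe collapses them into one undifferentiated alarm.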

The options that I could take to accomplish this goal are myriad. First of all, yes, I want something in place to monitor the WordPress instance. But for original products, like Project Seshat, I would definitely like something not just more robust, but also more automatic. Project Alchemy is intended to create an audit trail for all edits without having to specifically issue calls to auditing methods in the controllers. I’d love to take a page from JavaMelody and create an aspect-oriented monitoring solution that can report request timing, method timing, errors per request, and perhaps even send out notifications the first time an error of a particular severity occurs, instead of the way Big Brother does it, where it polls regularly to gather data.

Don’t get me wrong, it’s probably a huge undertaking. I don’t expect to launch Project Seshat with such a system in place (as much as I’d love to). But it’s certainly food for thought for what to work on next. And when Seshat does launch, I will want to have a few basic checks to make sure that it hasn’t completely fallen over. After all, so far, I’ve been adhering to the principle of “make it work, then make it pretty.” May as well keep it up.

Saturday, 22 September 2012

In which the general fear of TDD is discovered

Since I last wrote about test-driven development, when we spent that time at work learning how to do it, I’ve been trying to make use of it in my off-time development. I’ve mentioned before that I’ve been writing an ORM from scratch, to satisfy an itch that I have. In its current incarnation, I haven’t really had many opportunities to write anything using it, other than an aborted attempt to create a tracker for the No-Cry Sleep Solution.

Earlier this year, the note-taking web app that I’ve been using for years made a major overhaul of their user interface…and left mobile web out in the cold. Seriously. If you aren’t accessing the site from something that can fully act like a full-scale desktop browser, then you’d better be on either an Android or iOS device, because otherwise, you’ve been left out in the cold.

At the time, I was well and truly in the cold. My mobile phone was, and still is, a Palm Centro. My only tablet-like device was my Kobo Touch (my wife owns a Nook Color, but I wasn't about to both commandeer it during the day and install a note taking app), though we’ve since also purchased an iPad with LTE. At work, at the time, I used my Kobo to present myself with my notes during scrums. Since then, I’ve been writing to a static HTML file on those days that I don't bring the iPad to the office, but there’s still a nontrivial issue of synchronisation. While I could probably use Dropbox and a reasonably simple PHP application to read and write to a single note file, that still just doesn’t do it for me.

So, I opted to begin writing my own, using Alchemy and Zend Framework on the back end. The initial progress wasn't so bad, and it isn’t as though I didn’t have alternatives that have worked reasonably well in the meantime. I decided to basically cater to my own use cases, since I could. Mobile Web would be reasonably fully featured, if a degraded experience. My Kobo Touch would get a good interface where I could edit notes, or write new ones, easily. It would all be there.

The problem is that it hasn’t always been smooth sailing. Ignoring the fact that I don’t often have the opportunity to work on it at home, having a toddler, it seems like with every model I implement, I find another thing about Alchemy that needs to be added or fixed. I’ve been trying to adhere to test-driven development to do that, but by God, I made it difficult to do that in some places. Doing the whole “TDD as if you meant it” thing can be particularly tricky when you’re working with an existing codebase that isn’t particularly (or even remotely) tested, and especially when you’re writing web application controllers. Controllers are notoriously hard to unit test, if for no other reason than that their very purpose is side effects, which runs somewhat contrary to many of the premises of test-driven development. I’m finding that it’s far more straightforward to perform acceptance testing on controllers, and actually go through the motions of the task you’re seeking to test.

Where I’ve been running into difficulty with Project Seshat,¹ though, is in code that I not only wrote a long time ago (somewhere on the order of three years), but also works perfectly well in isolation. The Model class, and its database-driven subclass, provide a parent class to all model-like activity in my application. It acts as entity, DAO, and service layer, mainly because that’s what made the most sense to me at the time I started writing it (this was well before I started working with enterprise Java. I still disagree with the notion of the DTO, but have yet to fully articulate why, to my own satisfaction). And that’s fine; it can still work reasonably well within that context. The problem is that, at some point when working with each of the last two Models I’ve added, the logic that stores the information in the database has both succeeded and failed in that regard at the same time.

Huh?

One of the core features of the ORM in Project Alchemy is that every change that’s written to the database with an expectation of long-term persistence (so, basically, everything that isn’t session data) also gets logged elsewhere in the database, so that complete change history is available. This way, if you ever need to figure out who did something stupid, it’s already there. As a developer, you don’t have to create and call that auditing layer, because it was always there, and done for you.

This audit trail, in its current form, is written to the database first—I decided to implement write-ahead logging for some reason that made perfect sense at the time. Not that it doesn’t make sense now, but there are a lot of features that still have to be implemented…like reading from this log and providing a straightforward function for reverting to any previous version. But at least the data will be there, if only, for now, for low-level analysis.
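The write-ahead shape of that audit trail can be sketched in a few lines. This is Java rather than PHP, and the names are invented; in Alchemy itself both writes go to the database. The essential property is only the ordering: the log entry is appended before the entity table is touched, so the change history exists even if the second write never happens.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AuditedStore {
    private final List<String> auditLog = new ArrayList<>();   // the change history
    private final Map<Long, String> entities = new HashMap<>(); // the entity "table"

    // The caller never touches the audit layer: logging rides along with
    // every save, which is the whole point of baking it into the ORM.
    void save(long id, String value) {
        auditLog.add(id + " -> " + value); // write-ahead: log first...
        entities.put(id, value);           // ...then commit to the entity table
    }

    List<String> history() { return auditLog; }
    String current(long id) { return entities.get(id); }

    public static void main(String[] args) {
        AuditedStore store = new AuditedStore();
        store.save(1L, "draft");
        store.save(1L, "final");
        System.out.println(store.current(1L));      // final
        System.out.println(store.history().size()); // 2
    }
}
```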

At any rate, because I can see these audits being written, I know that the ORM is at least trying to record the changes to the entities that I've specified; they’re available at the time that I call the write() method in the storage area for uncommitted data. The pain is that when it tries to create a new instance of the Model in the database, the model-specific fields aren’t being written to the entity table, only to the log. The yet-more painful part is that this doesn’t happen in testing, when I try to reproduce it in a controlled environment. This probably just means that these bug-hunting tests are insufficient; that they don’t fully reproduce the environment in which the failure is occurring.

So yeah. TDD, while it’s great for writing new code, is very difficult to integrate into existing code. I’ve had to do what felt like some strange things to shoehorn a test fixture in place around all this code I’ve already written. I recognise that the audit trail makes the testing aspect a little bit more difficult, since it’s technically a side effect. However, I don’t really want to refactor too much, if any, of the Model API, simply because my IDE isn’t nearly clever enough to be able to do it automagically, and because I still really, really want the audit trail to be something that doesn’t have to be specifically called.

I am, however, beginning to understand why so many developers who have never really tried TDD dismiss it, claiming that you end up writing your code twice. At first, you think that the test and the code are completely distinct entities, and that the structure of your tests will necessarily reflect your code. Yes, that would mean doing everything twice; but that’s not TDD done properly. When you get into it, you realise that it isn’t the new code you have to write twice, but all the existing code that has to be massively refactored (and in some cases, virtually rewritten, so dissimilar is the result from what you started with), and that’s always a daunting thought. You may even find yourself feeling compelled to throw out things you’ve spent a great deal of time and effort on, purely in order to get it testable.

I get that. That’s where I am right now. But there are two things to remember. First of all, your code is not you. If you want to work effectively in any kind of collaborative environment, whether at work or on an open-source project, you need to be able to write code and leave your ego at the door. Hell, the same thing goes for personal projects. The second is that refusing to make something better (whether better means more efficient, more maintainable, or whatever) simply because you invested x hours in it is foolish. You probably made something good, but you can always make it better if you’re willing to put in the effort.

And speaking of that persistence issue, I’m sure I’ll fix it eventually. I already did once, though I didn’t properly record how I did it. Gee, if only I had some kind of mechanism for taking notes!


¹ Seshat was the Egyptian goddess of wisdom, knowledge, and writing. Seems appropriate to use a name that means "she who scrivens" for the tool you're going to use for your own scrivening.

Wednesday, 15 June 2011

Coe's Second Law Of Software Development

Before I begin, let me just put it out there that I really like the Zend Framework. I know, I know, laying down your cards about your favourite editor/framework/OS/whatever is liable to set off holy wars, but I really like ZF. It’s clean, and I like the mix-and-match properties of it that allow me to use only as much of the framework as I need. It’s this very property that I’ve used to great advantage in Project Alchemy and Portico. But I’ve always had one complaint about it.

The documentation in the reference guide, and to a roughly equal extent, in the PHPDoc navigator, is really flaky. The full functionality isn’t properly described in the reference guide, and the PHPDoc doesn’t provide enough information about the API. I’ve found, on several occasions, that I have to dig into the code simply in order to figure out how to use some methods.

Anthony Wlodarski, of PHPDeveloper.org, sees this as a positive of Zend Framework; that when ZF community wonks tell you to RTFS, it really is for your own good; that ZF really is that self-documenting. He says,

One thing I learned early on with ZF was that the curators and associates in the ZF ecosystem always fall back to the root of “read the code/api/documentation”. With good reason too! It is not the volunteers simply shrugging you off but it is for your own good

Unfortunately, it’s been my experience that self-documenting code isn’t. Let’s call that “Coe’s Second Law Of Software Development” (the First being “when it happens, you’ll know”). This is how strongly I feel about the issue. Far and away, the code itself is always the last place you want to look to figure out how it works, and only ever if you have a fair amount of time on your hands, because deciphering another developer’s idiosyncrasies is harder than writing new code. And if you have to look through that code to figure out what the correct usage of the tool is, then someone isn’t doing their job properly, particularly when Zend Framework has the backing of Zend.

I’ve been working for $EMPLOYER$ for more than a year now, and I’ve worked with a number of our internal tools, and across the board, I keep getting bitten by the fact that our self-documenting code isn’t. Self-documenting code means that there’s no easily-accessible, central repository of information about how the tools are supposed to work, or about what inputs to provide and outputs to expect. Self-documenting code means that when something goes wrong, and the person who originally wrote the code is unavailable, the poor sap who has to correct the problem now has to figure out what the code is supposed to do. Self-documenting code means that when your prototypes (or even production tools!) fail without a clear error code, you have to either start shotgun debugging, trying to figure out what you’re doing wrong (or what changed); or you have to ask the project manager, who will ask a developer to dig through the code. This increases the turnaround time on all problems.

Self-documenting code is fine when you’re writing little noddy programs for first-, second-, and even some third-year classes, where the functionality is defined in the assignment description, and the problems are straightforward enough that what you’re doing is actually clear at a quick glance. When you’re writing tools for distribution to parties outside of the small group of people who are developing them, you owe it to yourself, to your QA team, to your present and future colleagues, and more importantly to your users, to write good, clear documentation, and to do so ahead of time, because then you only have to update the documentation to reflect what changed about your plan.

But when someone tells you excitedly that their code is self-documenting, remember: self-documenting code isn’t.

Friday, 23 July 2010

In which work ethics are considered

Over the past couple of weeks, I’ve had two very clear indicators—to myself—of just how much I’m enjoying my current job. I say that because there’s a subtle, but important, difference between saying “I love my job because I love doing x” and observing your own behaviours and thought patterns, and noticing that the way that you do your job, and approach your job, demonstrates how much you enjoy it.

The first way is something I realised about two weeks ago, when I met with the general manager (my boss’s boss, and an all-around good guy) for a quick catch-up chat. I’ve been working at this job for eleven weeks, and it feels like I’ve been there a lot longer; more importantly, the environment and the nature of the work are just naturally enjoyable, and it’s something I’m keenly interested in, so I like going to work on general principle. But when I was talking to him, it dawned on me that I like my work so much that, for the first time in years (at least two, if not four), I’ve been able to get so wrapped up in my work that I lose track of the time. At almost every other job I’ve had since I moved to Toronto, with possibly one exception, I tended to kind of check out around 4:30 or 4:45; I’d start trying to find something to do that would be productive work, but wouldn’t take too long to do, because I wanted to get out the door. Where I am now, as often as not, it’ll be almost 5:30 when I look at the clock and realise that I should probably go home.

The second, even clearer indication came this past Friday. I’ve been working on coming up with a way to better integrate two of the projects I’ve been working on, and particularly a way to do it across a subdomain divide (the products need to communicate on both the server and client sides; I managed to hack it on the pure client side by spawning some IFrames, but getting a particular server-side action in Project A to trigger an action in Project B has been a little less clear-cut). So, I decided that a simple XML interface into Project B was needed, with a wrapper for Project A (and Project C, which another developer is working on). I spent Thursday and Friday afternoons working on this API.

Two very cool things came from this.

  1. Apart from a few minor syntax errors (missing a parenthesis, putting a colon where a semicolon should’ve been, &c.), the stripped down API worked perfectly right off the bat. A few hundred lines of code, and it Just Worked. I haven’t been that successful in a while.
  2. I ran the first test at about 4:40. I had other things that needed to be done that evening which necessitated my leaving as close to 5:00 as possible, but I had a thought I haven’t had in a long time: I wish I didn’t have to leave right away. I wanted to take my work home with me.

This hasn’t happened in ages, and I didn’t realise how much I missed that feeling until this past Friday. I’ve been telling people how much I like my job based on its perks: catered lunches on Fridays, stocked kitchen, and an amazing sense of community with my co-workers. But being able to say, “there are days that I don’t want to stop working”… that might be the surest sign that you’ve got a great job.

It’s kind of funny, because I normally try to fight against that really, typically Protestant work ethic of, when you boil it down, “living to work”. I can leave my work at the office; most days, it’s a case of losing track of the time because I’m so wrapped up in what I’m doing, so when I realise what time it is, I clean up what I was doing, get it to a state where I can leave, and I go home. And I think that’s what this is an extension of—I got so wrapped up in what I was doing that, had I not had other things to do, I almost certainly would have stuck around, ignoring the clock.

I think that might be the difference. Most days, I don’t care what time it is; it’s irrelevant to me how many hours I spend at the office, as long as I get done what I want to get done. When I compare that against the Toronto workaholics who not only work to live, but take it as a point of some perverse kind of pride that they work sixty- or eighty-hour weeks, I can see much better what the difference is. I’m doing what I love, and I take pride in the result of my work, whereas some people take pride in the amount of work that they do.

Monday, 1 February 2010

In which good news is announced

The past… roughly sixty-six hours have been pretty exciting in my neck of the woods, for a number of reasons:

  1. I met a recruiter downtown regarding a twelve-month contract situated (if memory serves) very near to my home, for a major classifieds website you’ve probably heard of. I was told that my resumé will be submitted, so hopefully I’ll be contacted for a phone interview.
  2. I heard from another recruiter, this one with disappointing, but not altogether surprising, news: the client that had been given my resumé, regarding a front-end position, said they’ll not be seeking an interview with me. Disappointing because it means no job there, but not surprising because the bulk of my skill set lies in back end development (as you’ve probably discovered, if you’ve been following my writing), with less emphasis on front end technologies. My Flash is rusty and creaky; I have no Flex or Air; I could keep going, but I’d rather not, because…
  3. Yet another recruiter got in touch with me about a brief, lucrative contract, working out of my home, picking up where another developer has had to bow out due to time pressures. He wanted to know how I’d feel about picking up a mostly-there project built on Symfony, with jQuery as the JavaScript framework: an MVC framework to which I have minimal (but positive) exposure, and the whipping-boy JS framework of comp.lang.javascript that I’ve never touched (I use Prototype in Project Alchemy and Project McLuhan). Recruiter knew this, passed on what I am familiar with and mentioned that I tested well in his company’s PHP skills test, and the client wanted to get in touch with me anyway. After a couple of hours to look at the source code, I was confident I’d be able to get myself up to speed with Symfony over the course of the weekend. I met with the client on Saturday over coffee, met with the previous developer on Sunday, and, well… I’ve already nailed down two of the end user’s major we-want-this-fixed-by-the-beginning-of-business-Wednesday bugs!

So yes, dear readers, I am (or rather, will be, once I’ve signed the contract) gainfully employed—at least for the month of February! I feel good; I feel energised… and I feel that by the middle of February I’ll have two new things I’ll be able to add to my resumé! I think my lack of previous jQuery experience has been a bit of a sticking point—everybody uses it! So now, I’ll be able to say I can use jQuery.

This gig is going to be a great stretch of my skills. I’ve already hit the ground running (thank God that Symfony’s interpretation of MVC is fairly similar to Zend’s!), so as long as I hit this Wednesday’s deadline, everything should be great… and I don’t see any reason why I won’t be able to hit it.

This was a good weekend, and one that I’ve been waiting for for quite a while now. With any luck, this contract will turn into further opportunities with this client, which will mean (fairly) regular work and more opportunities to expand what I do!

One closing note: Looking at Symfony, I have to say that I’m already pretty much in love with it. It’s got a very similar logic to Zend (and, near as I can tell, is similarly modular, which I like), but it’s got a lot of back-office automation, kind of like Cake and Rails, that unburdens the developers, and it doesn’t appear to be doing so in an ugly and inefficient way! I’m almost mad that I hadn’t found it in the summer of 2008—so many things would probably be different! Project Alchemy would probably be much farther along than it is (I’ve been having trouble with the ORM layer and articulating join logic as associative arrays). Also, it’s kind of a pity that I’ve found it now, rather than long before, because I’m also about to start learning Python, which I’ve been meaning to do, in order to move away from the Failboat that will be PHP 6.

Ahh well, can’t always do everything right the first time you do it, right? Otherwise, you’d never learn anything!

Wish me luck this month; I’m going to be awfully busy!

Friday, 22 January 2010

In which the author reflects on his career

Without trying to sound like a couple of cartoon nine-year-olds, I realised something today—I’ve done a lot of pretty cool stuff in this industry. I might be young, but there’s something to be said for starting early, and always trying to challenge yourself. Let’s see what I can remember… but I warn you, this is a long, long post.

  • Portico/Project McLuhan CMS
    Late last year, I delivered, to a freelance client, an early build of an original CMS that I called Portico. I refer to it as Project McLuhan when I’m coming up with new features, but it’s certainly not just a random exercise—it’s currently being used to serve up bellabalm.com, the website of one of my wife’s colleagues. Since delivery, I’ve been working on enhancements to the user interface (lots of use of AJAX) as well as the core functionality, to make it a little more usable, but it was pretty complete on delivery. It links to PayPal for purchase completion, and Canada Post for shipping cost calculation, and honestly, the shipping calculator was probably the hardest part to write, because it’s all XML, and I hadn’t used XSLT for about six months before I started into it.
  • Project Alchemy toolkit
    In 2008 I worked for a company I describe as a “social advertising agency”. At the time, I was one of two developers, and we jointly decided to standardise the company on Zend Framework, based primarily (if my memory serves me) on the fact that ZF was created by the same group of programmers who wrote PHP, so we had good reason to believe that it would be the most effective for the job. A few months later, a couple of other developers were brought in who knew Ruby on Rails, and with it, CakePHP. I’ll say this for Cake: it certainly speeds up development for developers familiar with its API, and it seems to be a bit more clearly documented than Zend. However, from my work with it, it’s not as efficient in terms of database usage, so I can’t really say I’m a fan (also, hearing one of these Ruby guys say, on about a weekly basis, “I HATE CAKE!” amused me). That said, I did find really quickly that the Zend API is somewhat lacking. Project Alchemy was born out of my desire to merge the two—to take everything I liked about Cake, and make it available in a Zend context, along with some other features that I’ve long thought were important. Of particular note is a full audit trail of all database changes (and optionally, queries), which is easier than you might think to write, and will benefit you in the long run in case Something Bad happens. I’m also trying to decide on the best approach to implement model caching; a course in operating system design late last year brought the notion of shared memory into my mind, so now it’s just a matter of actually putting it together. Project Alchemy’s by no means complete, in terms of everything I want to do with it, but the essentials are all there, and it’s been an amazing learning process.
  • Innumerable changes to CIBC.com, including a core role in the October 2009 redesign
    I was on contract at CIBC for quite a while, but in a purely front-end role. Nevertheless, I got exposed to some fun technologies while I was there—XSLT, XPath, XForms—that I’ve been meaning to investigate but otherwise haven’t had the time to properly study. When the project to update the website’s layout and accessibility came up, I was glad to be involved in the project, because I got a chance to really stretch and refresh my front-end skills, including some neat CSS and JavaScript techniques. I refamiliarised myself with basic AJAX, and I’ve been using it a lot in Alchemy and McLuhan.
  • The back-end image manager for the “Radio Perez” mobile podcast site and iPhone app.
    While I was working at that ad agency, I was asked to write a pretty bare-bones CMS to serve up images and ads to an iPhone app, as well as mobile-ready pages. Two cool things I got to do with this: I used the GPS data from the iPhone to provide geographically-relevant content (specifically, serving up a particular button graphic depending on your proximity to one of the client’s radio transmitters), and I used the scaffolding technique on this project to get the required functionality going early, before worrying about making it look right. I first saw this technique in use by the Cake developers at that agency, thought it was a good idea, and tried it out. Turns out that it’s a great idea, and I’ve been using it ever since. It helped me deliver bellabalm.com on time without worrying too much about making the back end look exactly perfect. It did what it needed to do well before launch; it just wasn’t too pretty.
  • An inter-social-network photo album (ad agency)
    This one was only partially completed before the Radio Perez thing came up, so it got shelved (there were only a few developers in the company at the time), but it was going to be a pretty cool photo album that users could get at from Facebook, MySpace, iGoogle… wherever, really. I did my best to make it work like Facebook’s photo albums, but I couldn’t figure out how to make photos draggable to reorder them (might have been because I hadn’t properly researched JavaScript frameworks at the time), so instead I implemented a priority queue in JS to update the sequence numbers correctly in real time. As I said, it never got completed while I was there.
  • An inter-social-network discussion board (ad agency)
    This was the first project the company did with the technique of opening up a social app to multiple social networks simultaneously. I think it’s gone on to become one of their flagship products, so I have to laugh that I came up with the original design, and figured out all the potential problems and issues in doing this kind of thing, and now they’re making a mint… and then I get sad, because I remember that I’m still bound by a noncompetition agreement and I probably can’t do anything with that knowledge for a while yet. RIM used it, MTV used it, and I think Astral Media used it before it got refactored from Zend to Cake. It doesn’t seem to be running in its original form anywhere anymore, so it looks like it’s been pretty thoroughly rewritten.
  • RMAtic Return Merchandise Authorisation tracker
    My first attempt at writing an issue tracker, and if I do say so myself, a successful one! This is where I first got the chance to write the audit trail I mentioned above with Project Alchemy, because if anything needs that kind of tracking, it’s an issue tracker. Some weird bit rot that I haven’t been able to isolate has set in, and it won’t let me update the RMAs in the database; still, if I ever want to do an issue tracker again, I’d be rewriting it from scratch anyway: this was done before I had a chance to use Zend, so all the links between things are procedural. It’s not super-pretty code, when you get down to it. But then, I honestly didn’t expect to ever have to see it again, so I wasn’t concerned with code re-use. I’m told it takes three times as much effort to write reusable code as it does to write a one-off, and that’s exactly what this became! If you’re curious, though, check it out. The username and password are both “admin”. It’s still branded with the client’s livery.
  • A client relationship and project manager
    This was the last project I worked on at a custom software company north of the city. Based in equal parts on the existing in-house CRM, their in-house CMS, and Basecamp, the notion was that I’d rewrite the CRM to be something that could be resold. I very carefully wrote up a specification for everything it was going to do, how everything would interact, how the permissions model would operate, and a huge E-R diagram for it, and got to work. I was easily 80-90% of the way through it—and all the core stuff was there; I think I was down to making it easy to install add-ins, and maybe one or two other features that I can’t quite recall—when the contract ended, for a variety of reasons. It was hugely ambitious, I was the only person on the project, and I think I did a great job of it. It was able to handle all the different clients the company had, with infinitely-overlappable user groups, all the various projects, work tasks (it had a built-in issue tracker), notes, phone calls, anything billable… invoicing. The whole shebang, with a two-dimensional access control scheme that depended on what permissions the user/user group had on individual items and their parent items. Hugely complex, and I’ve realised in retrospect that (a) role-based access control is a far better idea in terms of software complexity, database load, and usability, and (b) the database table that held the permissions really needed to be indexed and stored with a different storage engine. You live, you learn. I’ve been meaning to write myself a simple CRM for my freelance clients, and the little snippets of memory I’ve still got about that project should at least help me get started. It’ll be done in either Project Alchemy or Django, if I’ve become familiar enough with Python by the time I get started.
  • A shopping cart for an existing CMS (custom software company)
    The client wanted to add a store to their website, so I had to learn the old version of the company’s CMS in a hurry and add a new store module to it. It wasn’t easy, particularly when trying to put the shopping cart together, and make sure that all the right information appeared on the right pages, but I did it, and it worked, and allowed customers to select from a bunch of different customisable options, as well as customising certain products. I realised, in the last month, that the user experience certainly left something to be desired, but again… you live, you learn. I think it was, more than anything, an issue of nomenclature. I used “option” to refer to, say, a specific T-shirt size, and “option group” to refer to offering size as an option. I just didn’t make it very intuitive, and I think I know how I’d do it better in the future.
  • An inventory manager for a warehousing and shipping company (custom software company)
    This client stored materials for their clients, and needed a way to allow the clients to see how many boxes of each item they had warehoused, order some to be shipped, and be alerted when stock was dropping low so that new items could be made. Fairly straightforward, and it served as a basis for the issue tracker I built into the CRM later. Users could get things shipped to the address of their choice, and it would remember past addresses to make things easier. I got the opportunity to put together a PDF template that exactly matched their existing shipping labels, so that was fun. First project I did there.
  • A really elemental blogging tool for my own use.
    This was bad, hacky code. I put in only what features I needed, and it also served up static content stored in HTML files within a specific directory structure. I just wanted to write my own blog without having to edit the HTML every time. There were comments, but I started getting spammed, so I had to disable them. The only feature I actually built was writing new blog posts. Couldn’t even edit after the fact. But it was also the first database-driven thing I tried writing… back in first year. I wanted to learn what it took to do it. There may be a reason why I use Blogger for it now… it’s a more complex problem than I wanted to get into at the time. Maybe later, though.
  • While I was in high school, I came up with a way of handling a multi-page ticket-ordering process for a theatre that didn’t require sessions. I didn’t really have access to the server that hosted it, and I didn’t understand session cookies at the time anyway (not that I could use them without being able to get at the server), so I used a frameset, with an invisible frame full of hidden inputs. The hidden inputs were updated by JavaScript, and the last page emailed their contents to the box office. It was quite interesting, and it was superseded a while ago.
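An aside on the audit trail mentioned under Project Alchemy and RMAtic above: it really is a small amount of code. A minimal sketch of the idea, with invented table and column names (this is not Project Alchemy’s actual implementation), wrapping each change and its log entry in one transaction:

```php
<?php
// Minimal audit-trail sketch (table and column names are invented):
// every field change also records who changed what, when, and the old
// and new values. $table and $field come from the model layer, never
// from user input, which is why they can be interpolated here.
function auditedUpdate(PDO $db, $table, $id, array $changes, $userId)
{
    $db->beginTransaction();

    // Capture the old values before touching anything.
    $select = $db->prepare("SELECT * FROM $table WHERE id = ?");
    $select->execute(array($id));
    $old = $select->fetch(PDO::FETCH_ASSOC);

    foreach ($changes as $field => $value) {
        $update = $db->prepare("UPDATE $table SET $field = ? WHERE id = ?");
        $update->execute(array($value, $id));

        $audit = $db->prepare('INSERT INTO audit_log
            (tbl, row_id, field, old_value, new_value, user_id, changed_at)
            VALUES (?, ?, ?, ?, ?, ?, ?)');
        $audit->execute(array($table, $id, $field, $old[$field], $value, $userId, date('c')));
    }

    $db->commit();  // the change and its audit row land together, or not at all
}
```

The transaction is the important part: a change without its audit row (or vice versa) is exactly the inconsistency the trail exists to prevent.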

That’s about it for proper projects that really saw the light of day and that I did something interesting with. A few things I came up with that either didn’t go anywhere or just weren’t really technical achievements follow:

  • You know how Jillian Michaels, one of the coaches on The Biggest Loser, now has a website where you can track your weight-training goals online, and interact with other people trying to do the same thing? I had that same idea while taking a strength training class in high school. Many, many moons ago. Used it as a project idea. Shelved it because I didn’t have the money to start it up as a business and advertise it. Now I’m kicking myself.
  • In grade school, I had the opportunity to make the school’s website. I wanted to coordinate with the teachers to put lesson plans up online, so that parents could keep aware of what their children were learning. Nobody really wanted to go along with it. In my final year of high school, I tried designing and implementing the same solution, at a high school level, in a way that would empower the teachers to put up the content they wanted… it was really more ambitious than I expected, and all I have left are a few design documents. Then I started university, and discovered that WebCT and Blackboard existed… they do exactly what I was trying to make for my school. D’oh!
  • Also in my last year of high school, I had the “brilliant” idea of trying to create a bash-like shell for MS-DOS… in Perl. Like the course management tool I just described, this was a far more ambitious idea than I ever expected it to be, probably because at the time, I didn’t have much experience with Unix, and didn’t understand the domain of the problem. But it serves as a good example of the level of idea I tend to come up with: BIG. Sometimes revolutionary, sometimes just complex, but I think big. I like big problems that require a lot of thought.
  • Back in the bad old days, just when DVDs had first come out, my cousin and I designed a website for a video store in our hometown. Not very revolutionary there, but we had an idea that no one else did: show the movie trailers on the website. It’s obvious now, sure, but it wasn’t in 1997; only a small fraction of movies even had websites to begin with!

So, yeah… I’ve been around. I’ve seen quite a bit, now that I think of it, and there’s probably a lot of stuff that I can’t properly remember that’s getting left out, fairly or unfairly. I should remember this.

Saturday, 16 January 2010

In which opportunities are taken

The coming of the new year, like everyone is always told, has shown itself to be a time of new opportunities. However, what’s unusual this time around is that I’ve also been in a wonderful position to take the opportunities with which I’ve been presented. The software I’ve been slowly writing over the past roughly six months (dubbed, depending on what specific functions I’m working on, either Project Alchemy or Project McLuhan) has forced me into a position to really clean up some of the core things—that is, Project Alchemy—I originally put off implementing for the sake of getting it out the door, and an outside project has finally given me an opportunity to make good on my threat to learn Python.

First things last.

Given the nature of this blog so far, and the fact that I’ve been using it to talk about technical matters from a reasonably professional point of view, I kind of wish this opportunity to learn Python had a more professional set of circumstances. On the other hand, it’s not as though I’ve ever tried to hide this particular circumstance; I’ve even discussed it in this very blog. In my continuing attempts to finish my BCSc, I’m required to take an introductory software engineering course, and I’m part of a development team that will be making a product virtually guaranteed to be delivered over the web. At first, I thought that I might be able to make an argument for integrating Project Alchemy into the codebase to speed up development, but then I discovered that I’m the only member of the team who has no experience with Python, and no one else seems to know PHP. As a result, it’s kind of a given that we’ll be using Python, so I have a fairly steep learning curve ahead of me to get caught up. The University of Toronto’s Computer Science department uses Python as the language of choice for its first-year classes, so everyone else has had at least a year of active (albeit toy) development with the language, and I have less than two-and-a-half months on this project in which to provide real, tangible support. No time to lose!

On the other hand, I think I have more project planning experience than the rest of the team—I remember how slapdash my programming practice was at Dalhousie, but this is a real-world product—so I think I might be able to make up for my gaps in language experience by taking care of more of the behind-the-scenes things in the management of the project. I’ve been kind of spearheading that already, but it hasn’t really been set in stone who’s doing what. I guess we’ll see how this works out. With any luck, our client will get a solid piece of code and I’ll get some realistic experience in project management, which can only ever serve to benefit me professionally, for reasons that seem fairly obvious.

In terms of Project Alchemy, like I said, I put off implementing Alchemy functions for the sake of getting Project McLuhan (and more specifically, my client’s website) out the door on schedule. As a result, fairly important architectural things like multi-model find calls were put on the back burner while I dealt with end-user issues like a shipping rate calculator. Now that it’s been delivered, I’m trying to clean up the McLuhan interface to improve its usability, and I’m realising just how critical that functionality is.

Admittedly, part of my initial hesitance in implementing it up-front was a certain degree of intimidation by the problem—a large part of the design involved coming up with a way of compartmentalising WHERE clauses, abstracting them out rather than simply requiring the find conditions to be input as a string, to better insulate against malformed data from the end user. As it turned out, it wasn’t exceedingly difficult (with two caveats: I still have yet to come up with a way of handling subqueries for the IN/NOT IN operators, and I’m not convinced that I’ve done it in a particularly efficient manner), but now I have to go back through all my controllers and models and rewrite the find calls to use the new schema. It’s a bit of a pain—yet another example of why Doing It The Right Way First will invariably save you time in the long run—but it’s ultimately worth it, because it’s also providing a good opportunity to review my own code and look for bad algorithms.
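The compartmentalised-conditions idea can be sketched roughly like this. This is a hypothetical illustration of the technique, not Project Alchemy’s actual find schema: conditions arrive as an associative array and are compiled into a parameterised WHERE clause.

```php
<?php
// Hypothetical sketch (not Project Alchemy's actual code): find conditions
// come in as an associative array, and we compile them into a parameterised
// WHERE clause, so no raw condition strings from the caller reach the SQL.
function buildWhere(array $conditions)
{
    $clauses = array();
    $params  = array();
    foreach ($conditions as $field => $value) {
        if (is_array($value)) {
            // e.g. 'status' => array('new', 'used') becomes: status IN (?, ?)
            $placeholders = implode(', ', array_fill(0, count($value), '?'));
            $clauses[] = "$field IN ($placeholders)";
            $params = array_merge($params, array_values($value));
        } else {
            $clauses[] = "$field = ?";
            $params[] = $value;
        }
    }
    return array('WHERE ' . implode(' AND ', $clauses), $params);
}

list($sql, $params) = buildWhere(array('type' => 'book', 'status' => array('new', 'used')));
// $sql is "WHERE type = ? AND status IN (?, ?)"; $params is array('book', 'new', 'used')
```

The values travel separately as bind parameters, which is exactly the insulation against malformed end-user data described above; subqueries for IN/NOT IN are precisely the case this simple shape doesn’t cover.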

So, yeah, there have been a couple of great opportunities so far this year that I feel I’ve really been able to take advantage of. One great opportunity to learn a language I’ve been meaning to learn for more than a year, and another to finish up something important and learn a good deal more about what really goes into writing an MVC framework. This can only possibly be good for me, so stay tuned to see what develops!

Thursday, 5 November 2009

In which the author learns that any great Web application developer needs to be half DBA

The latest section of study in my database tuning class has focused on transactions and crash recovery. As with table indices, I’ve been aware of transactions on, shall we say, a theoretical level: I knew what they were, and I knew that they were useful for grouping queries together for atomicity. However, it somehow didn’t occur to me how spectacularly important they are if you’re writing a web application that has relation constraints, in order to maintain database consistency.

Perhaps it was merely that my previous argument had been, “but each request will be executed in less than a second!” That doesn’t matter when you’re trying to serve, say, more than ten thousand requests in an hour. On average, that’s one request every 0.36 seconds. But we all know that it doesn’t work out like this:

  1. 0:00.00: Request
  2. 0:00.36: Request
  3. 0:00.72: Request
  4. 0:01.08: Request
  5. 0:01.44: Request
  6. &c.

That just doesn’t happen. At ten thousand requests per hour, you will almost certainly see overlap in your requests. This means, of course, that if you have an application that involves relational constraints, foreign keys, and really, anything where you have to reflect a change in table A in table B in such a way that every request is consistent, that change has to be applied to both tables before another request is allowed to query either. This is why you need transactions.

I never really thought about it in terms of even modest usage patterns like that. The textbook went over it in more depth, more in a context of what considerations a DBMS programmer has to be aware of, but it was a great reminder. Just two concurrent transactions, one writing two tables, and another reading those two, can display errors in the data if the read transaction reads table A after it’s been changed, but reads table B before its change has been applied.
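In PHP terms, the guard is just a transaction around the related writes. A minimal sketch using PDO with an in-memory SQLite database (the tables here are invented for illustration):

```php
<?php
// Two tables that must stay consistent: an order must never be visible
// without its matching balance change, and vice versa.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)');
$db->exec('CREATE TABLE orders (id INTEGER PRIMARY KEY AUTOINCREMENT,
           account_id INTEGER, amount INTEGER)');
$db->exec('INSERT INTO accounts (id, balance) VALUES (1, 100)');

try {
    $db->beginTransaction();  // stop autocommitting every statement
    $db->exec('INSERT INTO orders (account_id, amount) VALUES (1, 30)');
    $db->exec('UPDATE accounts SET balance = balance - 30 WHERE id = 1');
    $db->commit();            // both changes become visible at once
} catch (Exception $e) {
    $db->rollBack();          // on failure, neither change survives
    throw $e;
}

echo $db->query('SELECT balance FROM accounts WHERE id = 1')->fetchColumn(); // 70
```

A concurrent reader sees the order and the balance change together, or neither; it can never catch table A updated while table B still holds the old value.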

In retrospect, it’s so obvious. Unfortunately, I haven’t ever—and I mean ever, in my entire professional career—met a web application programmer who actually used transactions, rather than just blithely letting their database autocommit every update as soon as the query completed. Nobody does this, and why? At a guess, because it takes extra programmer time. Only one PHP framework I know of forces transactions by default (hint: it’s not Zend… something else I need to add to my Still Unnamed Project), but what about developers making their own frameworks? Not using frameworks? Using a framework that relies on the programmer to specifically start transactions? It won’t happen, because so much emphasis is put on reducing programmer time that no one really, properly, considers taking the extra hour, or day, or even, at worst, week, to plan ahead and properly insulate against failure from step one.

But it does go to show how important proper planning is to being able to do this kind of work properly. Proper analysis of your tools, and knowledge on the programmer’s part of what needs to go into the software, is what makes for better software in the long run. Seriously, take the little bit of extra time to do it right, and improve your product’s performance. How much does a day or two cost—once—relative to how much an extra server costs every month to compensate for inefficient code?

Thursday, 19 March 2009

You learn something new every day

I’m willing to admit when I’ve been wrong, mistaken, or just poorly informed about something. I’m always happy to learn new tricks, and useful ways to improve my productivity, and my code’s efficiency… but a lesson I just learned last night takes the cake for things I wish I’d known years ago.

After more than two years spent outside of studying, I’ve finally started to get things back in gear to complete my B.C.Sc. degree (brief explanatory digression: I had to suspend my studies a year before they were finished due to lack of funds, and then a series of layoffs after moving to Toronto have consistently interrupted my ability to resume). I’m currently taking two courses by correspondence: Small Business Management, and Distributed Database Systems and Database Tuning. The first topic in the DB Tuning course is a brief analysis of different database indexing methods, in terms of how the DBMS manages the pages and records under certain structures. I’ve been seeing indexing options for years now, fiddling with phpMyAdmin, but I’ve never really been clear on how the indexing worked, from an internal point of view. I never looked into it, just as I never looked into JOINing tables instead of relying on the WHERE clause… and when the light went on, reading the textbook, it nearly blinded me.

I’m not going to get into the details of it here, primarily because I don’t feel 100% confident in my understanding of the mechanisms used, but I realise now that I’ve been treating my databases as magical black boxes a little bit too much. It’s a pretty rich statement, coming from the guy who says he does his work better when he has a more thorough understanding of the whole system, from top to bottom. This kind of thing certainly demonstrates how true that is, though. For, literally, years, I’ve been treating DBMSes as though they just magically know how to organise my data to be able to get at the records I need quickly. I’ve strongly espoused letting the database do the hard work of returning a specific set of tuples, instead of pulling back a huge range of candidate tuples and letting the host language (PHP, in my case) decide yea or nay about each record, on the grounds that the DBMS has been designed to do exactly that kind of thing, really well. And I’d like to think that’s still true, but man alive, it’s clearer than ever that I still have a lot to learn about writing database-driven applications.
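The contrast in miniature, using PDO with an in-memory SQLite database (a toy schema, purely for illustration):

```php
<?php
// The point in miniature: ask the DBMS for exactly the tuples you need,
// instead of filtering candidates in the host language.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE posts (id INTEGER PRIMARY KEY, status TEXT)');
$db->exec("INSERT INTO posts (status) VALUES ('draft'), ('published'), ('published')");
$db->exec('CREATE INDEX idx_posts_status ON posts (status)'); // what the indexing chapter covers

// Anti-pattern: pull back every candidate tuple, decide yea or nay in PHP.
$all = $db->query('SELECT * FROM posts')->fetchAll(PDO::FETCH_ASSOC);
$published = array_filter($all, function ($p) { return $p['status'] === 'published'; });

// Better: let the database, and its index, do the hard work.
$stmt = $db->prepare('SELECT * FROM posts WHERE status = ?');
$stmt->execute(array('published'));
$alsoPublished = $stmt->fetchAll(PDO::FETCH_ASSOC);
// Both yield the same two rows; only one approach dragged the whole table
// across the wire and into PHP's memory first.
```

With three rows the difference is invisible; with three million, the WHERE clause (and the index behind it) is the whole ballgame.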

On that note, I unfortunately have to admit that my Zend enhancement/application template has stalled. But don’t worry, I already have an excuse lined up and ready to go: between my new job, going back to school, and a play that’s been in rehearsal since the beginning of the year, I’ve had precious little time to work on my own, personal projects, which is unfortunately what that project is. Educational, certainly. Valuable to the wider community? Possibly. An interesting problem to work on? Most certainly. But it’s also something that is simply not my highest priority, and my time feels pretty stretched most days. Besides, I’m still stuck on the logging functionality and integrating it better as a Model, and in turn, redesigning my Model superclass. I’ve been debating whether or not I should create an Object superclass as well, for better adherence to MVC, or whether that would just be unnecessarily reinventing the wheel (unlike the project itself, which I feel is a fairly necessary reinvention of the wheel).

Wednesday, 14 January 2009

Sometimes, reinventing the wheel becomes necessary.

I believe I’ve found that a previous entry, A Model! A Model! My kingdom for a Model!, was written, perhaps, a little hastily. Some further analysis, and review of another Zend-based project, has revealed that the Zend_Db functions do provide some Model functionality, with the Zend_Db_Table and Zend_Db_Table_Row classes.

However, I maintain that they’re ineffective as a Model, if for no other reason than that they restrict the user to thinking of Models as extensions of the database. This is simply not true. The Model should just exist as an abstraction, and if the application requires the majority of the Models to be stored in a database, then by all means, it should put them there. But the framework shouldn’t necessarily require it.

I’ve come to that opinion while working on my own ZF application template. I realised over the Christmas holidays that my own Model class is somewhat restrictive… because it requires the database! I’ve since forgotten exactly how I came up with the idea—I think it was while deciding to store uploaded images in the filesystem, rather than as binary blobs in a database—but I realised that a Model should really be more like an interface, with find(), set(), write(), &c. methods that need to be implemented by, say, a triad of DatabaseModel, FileModel and MemoryModel classes. Perhaps even a SessionModel or CookieModel, but I think that might be pushing it.
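That interface-style Model might look something like this. The names are illustrative only (not actual Project Alchemy code), and only the in-memory backend is fleshed out:

```php
<?php
// Sketch of the idea: the Model contract is storage-agnostic, and each
// backend supplies its own persistence. Names here are invented.
interface StorableModel
{
    public function find(array $conditions);
    public function set($field, $value);
    public function write();
}

// One possible backend: records live in an array. A DatabaseModel or
// FileModel would implement the same three methods against its own store.
class MemoryModel implements StorableModel
{
    private $fields  = array();  // the record being built up
    private $records = array();  // everything written so far

    public function find(array $conditions)
    {
        // Keep only the records whose fields match every condition.
        return array_values(array_filter($this->records, function ($r) use ($conditions) {
            return array_intersect_assoc($conditions, $r) === $conditions;
        }));
    }

    public function set($field, $value)
    {
        $this->fields[$field] = $value;
    }

    public function write()
    {
        $this->records[] = $this->fields;
        $this->fields = array();
        return true;
    }
}
```

The controller code never needs to know which backend it’s talking to, which is exactly the point: storing images in the filesystem stops being a special case.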

I’ve been giving this same notion a little thought over my recent hacking runs, as a solution to a problem I’ve been running into: I want to implement a combined flash messenger and logger. Why? Well, it’s (I hope) fairly simple.

During a given execution of the application, any number of messages may need to be delivered to the user. Say, for example, a user wants to upload five images to the application. Four of them work, and we want to tell the user this, but one of them failed, and we want to tell the user this, as well. ZF’s FlashMessenger can kind of do that, but either all the messages have to appear in the same context, or the messages have to be rendered on the way into the messenger. This isn’t ideal, for the simple reason that it’s putting View logic into a Model. Thanks, but no.
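What I’d rather do is store raw (severity, message) pairs and let the View decide how to present each group. A minimal sketch; the class and method names here are my own invention, not part of ZF:

```php
<?php
// Accumulate messages with a severity tag, render later.
class MessageBag
{
    protected $_messages = array();

    public function add($severity, $message)
    {
        $this->_messages[] = array(
            'severity' => $severity,
            'message'  => $message,
        );
    }

    // Let the layout pull out just the group it wants to render.
    public function bySeverity($severity)
    {
        $out = array();
        foreach ($this->_messages as $m) {
            if ($m['severity'] === $severity) {
                $out[] = $m['message'];
            }
        }
        return $out;
    }
}

// In a controller: $bag->add('error', 'upload 5 failed');
// In the layout:   foreach ($bag->bySeverity('error') as $msg) { ... }
```

No markup enters the bag, so the View logic stays in the View, where it belongs.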

So, I thought that, since I’m already using Zend_Log to capture runtime information that I might want displayed in a debugging situation, maybe I could pull the logged information out of it in the layout. No such luck; Zend_Log doesn’t let me access its writers. Well then, maybe I can write my own Zend_Log writer that writes to a Zend_View placeholder, right?
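In outline, at least, the writer itself doesn’t look too bad. This is a sketch against the ZF 1.x writer contract as I understand it (_write() receives the event array, with keys like ‘message’ and ‘priorityName’); the placeholder key and class name are my own, and depending on your ZF version you may also need to supply a static factory() method:

```php
<?php
// A log writer that appends events to a view placeholder,
// so the layout can render them wherever it likes.
class My_Log_Writer_Placeholder extends Zend_Log_Writer_Abstract
{
    protected $_view;

    public function __construct(Zend_View_Abstract $view)
    {
        $this->_view = $view;
    }

    protected function _write($event)
    {
        // $event carries 'timestamp', 'message', 'priority',
        // 'priorityName', &c.
        $this->_view->placeholder('log')->append(
            $event['priorityName'] . ': ' . $event['message'] . "\n"
        );
    }
}

// Wiring it up:
// $log = new Zend_Log(new My_Log_Writer_Placeholder($view));
// $log->info('four uploads succeeded');
// ...then echo $this->placeholder('log') in the layout.
```

The devil, as ever, is in how cleanly this plugs into the rest of the bootstrap.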

If only it were actually that simple. I don’t really want to think about how much extra effort that’s going to involve, wading through the verbose, but less-than-useful ZF Reference Guide, the less-verbose, sometimes-useful ZF APIs, and just digging through the actual code to figure out how all this stuff fits together.

Or, I could just go the route I’ve already been going with my Model and Database classes, and roll my own functionality that does what I need it to do. Part of the point of this project was to identify the parts of ZF that fall short of what I, and possibly other programmers, need, and fill in what’s missing. To have a skeleton of an application pre-built that offers a lot of highly-useful, rarely-requested features.

I’m starting to think that as time goes on, I might wind up having to replace some more core elements of any MVC framework so that things in my own stuff work together more smoothly. God knows, I might even wind up building my own framework out of this!

How that affects my desire to move away from PHP and towards Python is, as yet, unknown, but it might help. It’ll certainly provide a hell of a groundwork for a first project in a new language!

Friday, 7 November 2008

A model! A model! My kingdom for a model!

At my last position, I was introduced to the formal concept of the Model-View-Controller architecture, specifically by way of Zend Framework. It’s a good framework. I enjoy using it a great deal, if for no other reason than the fact that it isn’t a bondage-and-discipline framework; I don’t have to use ZF’s functionality if I don’t want to.

But, it bears saying that the one thing ZF needs, and doesn’t have, is a Model class. It has a Controller. It has a View. It has Layouts and Helpers, even a Paginator, for crying out loud, but it has no Model. I feel this is a key component of the Model-View-Controller architecture. It’s even mentioned in the name. But ZF has no Model.

This leads, if you’re not careful, or if ZF is your introduction to MVC, to a fairly significant error in design. The purpose of the Controller is to direct traffic between the user, the Model and the View. Make sure the user is allowed to do what they’re trying to do, check whether there was a transmission failure in the case of user uploads, but really, your Controller needs to be small. Skinny controller, fat model, is what an old co-worker of mine advised the team. And I happen to agree with him.

The model is where all the heavy lifting involving your data goes. Controller tells the model, “I want a set of your objects that satisfy these conditions,” and the model goes and gets it. Controller tells the model, “Make an instance of yourself from this data, then store it,” and the model does it. Controller Actions should be tiny. You should be able to see an average of three on a screen.
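That division of labour looks something like this. (The Articles class and its finder method are placeholders for your own fat Model; only Zend_Controller_Action is real ZF.)

```php
<?php
// A skinny controller action: all it does is direct traffic.
class ArticleController extends Zend_Controller_Action
{
    public function listAction()
    {
        $model = new Articles();
        // "I want a set of your objects that satisfy these conditions."
        // The query logic, filtering, &c. all live in the Model.
        $this->view->articles = $model->findPublishedSince('2008-01-01');
    }
}
```

Three or four lines per action, and every interesting decision happens inside the Model.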

But if you use ZF and you’re not fantastically careful about it, you’ll end up with a Controller that does all your work. That’s the one advantage I’ve seen in other frameworks like CakePHP: while you pretty much have to use Cake’s functionality, at least you end up using MVC correctly.

And that’s why I'm making a ZF application template that, by God, has a basic Model class to be extended.

Tuesday, 4 November 2008

So many job profiles to choose from..!

It’s really spooky, looking through jobs on Monster.ca, and noticing that every job in their “Featured Jobs” skyscraper box is listed in Ft. Meade. More so when you pick one at random and see that security clearance is a job requirement. As undeniably cool as it would be to work for the NSA, albeit indirectly, it would also be pretty stressful.

Besides, I’m Canadian. If I wanted to work for the spooks, I’d be applying at CSIS and CSE. I guess they’re not hiring.

What I am looking for, as of this point in time, is something in PHP development or system administration. While I can say pretty certainly that my days of writing PHP code are numbered, ever since I read about what the new namespace token is going to be, it’s also what I have the most expertise in, and frankly, it's what’s going to get me working. I’ll experiment with Python once I’m getting a steady paycheque again. If you happen to need a PHP guy, I’m your man. I like Zend as a framework; I'm not entirely sure that’s mentioned on my LinkedIn profile.