Some Thoughts on Testing Developers
For reasons I can't quite fathom, I've been thinking a lot about testing developers recently. That's testing developers as part of the hiring process, as opposed to developer testing (which I do bang on about rather a lot, to be fair).
I say I can't fathom the reasons, because we're not actively recruiting right now, nor am I looking to be recruited (though if you have your air conditioning switched on you may be in luck).
So anyway, it's fair to say that before you hire a developer, you want to find out if they're any good at developing, right? And therein lies the problem: how on earth do you measure the candidate's skill level?
I've seen, and used, a number of approaches myself, so I'll go over a few of them and see what drops out the other end.
Approaches I've Used
At PropertyMall we gave candidates a test with a few questions that they could do in their own time, with all the resources - books, internet, cups of tea - that they would have in a real developer role.
The test started off with some really simple stuff about printing out some numbers (so you knew that if they got those wrong you could stop reading). It then went through a few relatively straightforward OO PHP and design pattern examples, and ended with a very open question which gave the candidate a chance to just write some code.
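To give a flavour of those opening questions, here's a made-up example rather than the actual paper - something along the lines of "print the even numbers from 1 to 20, one per line", which any working PHP developer should knock out in seconds:

    <?php
    // Illustrative warm-up question only, not the real PropertyMall paper:
    // print the even numbers from 1 to 20, one per line.
    for ($i = 1; $i <= 20; $i++) {
        if ($i % 2 === 0) {
            echo $i, "\n";
        }
    }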
One of the questions I liked most gave the candidate a snippet of code and asked them which design pattern it exemplified. The question was multiple choice, the motivation being that we didn't necessarily need a candidate who could recognise a composite (or whatever it was) straight off, but if they could Google the four terms, most likely find a Java or C++ or Smalltalk example, and relate it to the PHP code in front of them, they probably had the makings of a decent developer.
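I don't have the original snippet to hand, so the following is a rough sketch of the general idea rather than the real question (the class names are my own invention). The candidate would be shown something like this and asked which of the four named patterns it exemplifies:

    <?php
    // Rough reconstruction of the flavour of the snippet, not the original.
    interface Renderable
    {
        public function render();
    }

    class Text implements Renderable
    {
        private $content;

        public function __construct($content)
        {
            $this->content = $content;
        }

        public function render()
        {
            return $this->content;
        }
    }

    class Page implements Renderable
    {
        private $children = array();

        public function add(Renderable $child)
        {
            $this->children[] = $child;
        }

        public function render()
        {
            // A Page is treated exactly like any single Renderable,
            // which is the giveaway for the pattern in question.
            $output = '';
            foreach ($this->children as $child) {
                $output .= $child->render() . "\n";
            }
            return $output;
        }
    }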
We hired some bloody good people at PropertyMall, so we were presumably doing something right.
I think the key thing is always to get the candidate writing code. At PlayPhone we just cut to the chase and give a fifty-or-so word written description of some behaviour, and send them home to write the code. If it doesn't suck, we invite them back to discuss it - and that's probably the most important part of the test: having the developer explain their approaches, thought processes and motivations.
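For flavour, here's an invented brief of roughly that length (not PlayPhone's actual test): given an array of scores keyed by player name, return the top three players, highest score first, with ties broken alphabetically. A candidate's answer might start life as something like:

    <?php
    // Sketch of a possible answer to the invented brief above; the brief and
    // the function name are my own, not PlayPhone's actual exercise.
    function topThreePlayers(array $scoresByPlayer)
    {
        // Sort keys by score descending, then player name ascending for ties.
        uksort($scoresByPlayer, function ($a, $b) use ($scoresByPlayer) {
            if ($scoresByPlayer[$a] === $scoresByPlayer[$b]) {
                return strcmp($a, $b);
            }
            return $scoresByPlayer[$b] - $scoresByPlayer[$a];
        });

        return array_slice(array_keys($scoresByPlayer), 0, 3);
    }

    print_r(topThreePlayers(array(
        'alice' => 120,
        'bob'   => 150,
        'carol' => 120,
        'dave'  => 90,
    )));
    // bob, alice, carol

What you'd actually be looking for, of course, is not the exact answer but what the follow-up discussion reveals about how they got there.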
Approaches I've Come Up Against
I've changed jobs a handful of times myself, so I've come across a few different approaches to testing developers.
Like absolutely every other PHP developer in London I've been put through Brainbench by Allegis, who recruit for IPC. Brainbench is taken online (so web access is a requirement rather than an option!) and is time-limited. I think it was about 45 minutes, and the questions come thick and fast. At the end you're left with a cold, hard numerical score, which is presumably for the benefit of managers and other folks who can't grasp anything complex or organic.
The vendors claim that the test is smart, so the better you're doing, the harder the questions get. I did the PHP and the Perl ones, and I seem to remember the results were quite complimentary about my PHP. However, the system is clearly flawed, as it put me in the top few percent of the nation's Perl developers despite me blindly guessing my way through most of it, grimacing occasionally as I struggled to recall the few dozen lines of Perl I wrote as a student.
Another approach I've come across: the company would email over a document with a bunch of questions or exercises, you'd have a crack at it, and 45 minutes later they'd ring you to discuss it all. That's quite a nice combination of time-limited and open-book; it incorporates the all-important discussion stage and doesn't waste too much of anyone's time.
Well, I say that, but it turned out that the company involved were such amateurs that they twice failed to keep the appointment, before I got fed up and told them to stop bloody wasting my time. But I liked the idea.
Closed-book can be interesting too. Before I started my current job I interviewed with a promising startup who were looking for a lead developer to head up the technical side of the company. That's quite a challenge and a lot of responsibility, so they were keen to make sure that they got the strongest developer they could find.
The result was a very challenging but rather satisfying time-limited, closed-book test covering everything from hardware performance to regular expressions, dependency injection and even DSLs - not your typical PHP web scripting stuff, for sure.
What surprised me was not my score, but the fact that it was expressed as a percentage. I think I managed something in the low 80s, but the implication that they thought any of that stuff had a right or wrong answer really set the alarm bells ringing.
It got worse though, and the post-mortem was painful as hell. Now, since the company didn't yet have a technical team (that would have been my job to remedy), they had hired a local "guru" (surely I'm not the only one who raises an eyebrow when that word crops up?) in a kind of consultancy role, to set the test.
In the post-test discussion and interview, it emerged that he'd marked me down for doing a command line svn merge using a syntax with which he was unfamiliar. He was also completely unaware of svnserve, and rather assumed I was making it up; another black mark there. I was given a dressing down for suggesting that PHP makes for a perfectly good templating language (that's only what it was invented for, mate) and again for once having written my own framework. Conversation later turned to the framework he was writing.
The whole thing left a bitter aftertaste, and after a good night's sleep I'd decided I really, really didn't want the job. Conveniently enough, I didn't get it.
But I digress. I don't want to turn this into a personal rant, because - believe it or not - I do actually have a point here, which I can best sum up as:
Who tests the testers?
Really, what reason does the candidate have to believe that the tester is any better than them, or even that they have a clue what they're doing? As you climb the experience ladder over the years, that question becomes more and more pertinent with every rung.
I think this is why I'm uncomfortable with handing out tests that are supposed to have a right and a wrong answer. On the other hand, I don't think that personal opinion (as in the templating example above) has a role to play in testing developers either.
Conclusions, If Any
So you can see what a minefield this can be, and why, after eight years of interviewing and being interviewed, this is a nut I've yet to crack.
I guess I've picked up a few crumbs of wisdom over the years though. I definitely feel that open-book tests are the way to go, since they more closely simulate the environment in which developers actually work. I think all tests should be done with an internet connection, at the very least.
I also think it's utterly vital that a test doesn't seek to trick a candidate, or catch them out. You want to know what they can do for you, after all, so give them space to show it.
I think you also want to find out if the candidate is bright, and so you'll want to throw something in there that tests their brain, rather than their experience of a specific programming language. Not for nothing is Joel's book titled "Smart and Gets Things Done".
So what about you? How does your company test developers, and what experiences have you had? Finally, what role does professional certification, such as Zend Certification, play in all of this?
Russell
This really struck a chord with me, not least because you hired me at PM, but also because I have been through the Allegis / Brainbench machine.
Surely the best option is to use the recruitment process as an opportunity for a bit of developer laziness and either:
1. Get the applicant to complete a mundane & routine task you have done a thousand times before, or
2. Ask them to solve an incredibly hard task and then pass it off as your own work to your boss.