How to decide whether a tool is right for you
We are only at the beginning of our journey in building software. Our discipline is barely a few decades old. We have only a little experience of how to write code correctly, and a limited range of tools and skills to do it with. We should be actively looking for new tools, not wasting our time either promoting our toolset exclusively or disparaging the toolsets of others.
Tools are tools
Test-driven development has been one of those tools that has proved useful for many people over a number of years. Do I use it? Yes, much of the time.
Using a refactoring IDE has also proved useful to many people over a number of years, especially in certain languages. Do I use one? No. Does that mean it’s not a useful tool to others? No, of course not.
The ability to decouple code to promote changeability is also a great tool to have in your toolbox. Do I try and do this? Yes, wherever I can, and I’m always trying to get better at it.
There are many more. The insight as to when to refactor, not just how, is an incredibly valuable skill to have. The understanding that all code is built for someone and we should ensure we talk to them about what they want is powerful. Being able to check into source control without touching the network is a real speed boost, and gives me a detailed history of progress.
The recent TDD storm
So why do we get so hung up on one particular tool? When something works for us, we’re compelled to proclaim it’s the One True Way and that it’ll work for every problem and solve every headache. This is a grievous error, but in avoiding it, we can make the opposite one: when we find a tool’s limitations, we discard it completely and move on, proclaiming it useless for all.
DHH has a point. Don’t listen to the people who say there’s only one way to do a job, or that one tool should be used for everything.
Gary Bernhardt has a point. Test speeds do matter - the faster the better. Fast tests are a powerful tool.
Uncle Bob has a point. It’s not just about fast tests: separating concerns in order to promote changeability in code is a useful skill to learn.
Tom Stuart has a point. TDD is a useful tool because it gives you another client for your code, encouraging you to think harder about what it’s doing.
Seb Rose has a point. We need to learn how to use TDD (or indeed any tool) well before giving up on it.
Cory House has a point. People display their own biases in their opinions and we should learn from them all.
How to decide whether a software tool is right for you
It doesn’t matter whether it’s TDD, Vim, Git, Refactoring, OO, Functional programming, JavaScript, RubyMotion, etc. Apply the following advice repeatedly, substituting your own values:
If you haven’t tried tool X, give it a go. Many have found it helpful in areas Y and Z. Some have also found it applicable in areas A and B, but your mileage may vary. Some don’t get on with it, and a few hate it and say no-one should use it. Learn it properly before making any final decision about its usefulness to yourself and others. This will take <an amount of days/months/years> to do. Continue using it as long as it’s helpful to you.
There are as yet very few absolutes with software tools - we’re still way too primitive in our discipline for many of those. Let’s learn how to use as many tools and skills as possible, and use the right ones for the job. Let’s not decry the tools, skills and techniques of others if they are useful to them: let’s instead spend our energy actively seeking new skills and tools to further our discipline.
More articles
Why I ditched all the build tools in favour of a simple script
Build tools are wonderful and impressive constructions. We developers invest colossal amounts of time, effort and code into their creation and maintenance.
Perhaps a lot of this work is unnecessary.
On Sol Trader, by ditching the complex dependency-checking compilation system I was using in favour of a simple homegrown script, I cut my build time from several minutes down to 5 seconds.
I’m not talking about continuous integration tools such as Jenkins, but tools such as CMake, Boost.Build and autotools. Perhaps these build tools are white elephants? They require endless maintenance and tinkering: does this outweigh their actual usefulness?
Incremental compilation: the end of the rainbow
One of the main aims of a compilation tool is to allow us to divide all the pieces of a system up into component parts to be built individually (another main aim is portability, which I’ll address below). This allows us to only build the part of the code which changed each time, and link all the compiled pieces together at the end.
However, every time we build a source file, we have to fetch that code and all of its dependencies from disk. The disk is probably the slowest thing in our machines, and we have to read everything from it again for each source file we’re building. If we’re building a lot of files, this gets very slow.
The second problem is that when we change an often-reused piece of code, such as a header file, we have to compile the whole lot again. To cut down the amount to rebuild, we can set up complex dependency-management systems to limit what gets built, or add a precompiled header that minimises disk access by building a lot of the code in advance, but more and more of our time ends up being spent handling the side effects of pushing for an incremental build system.
Trying to get a build tool set up is like searching for a pot of gold at the end of a rainbow, which gets further away no matter how much effort we put into finding it. Even when it’s working, it’s not that fast, and it requires constant tinkering to get it right.
How I build code now: the Unity build
How about instead of building incrementally, we build everything every time? Sounds counter-intuitive, doesn’t it? It’s actually faster, easier to maintain, and doesn’t require setting up a complicated build tool.
We create one Unity.cpp file, which includes all the source files and headers that we wish to build. We build that one file each time, and then link it with the third-party libraries. Done. It takes about 3-4 seconds to run, or 10 seconds on the Jenkins server.
Now, when I change a header, the script just builds everything again, so it never takes longer than a few seconds to see the effects of any change I want to make.
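As a rough sketch of what this looks like in practice (the file names below are invented for illustration, not Sol Trader’s actual layout), the unity file is nothing more than a list of includes:

```cpp
// Unity.cpp -- the single translation unit for the whole project.
// Every source file is pulled in here; the names are purely illustrative.
#include "engine/window.cpp"
#include "engine/renderer.cpp"
#include "game/ship.cpp"
#include "game/economy.cpp"
#include "main.cpp"
```

A single compiler invocation over this one file, plus the link flags for the third-party libraries, is the entire build.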
Caveats
“Strategy is about making choices, trade-offs; it’s about deliberately choosing to be different.”
– Michael Porter
There are a few caveats with Unity builds that we should be aware of:
One compilation unit means no code isolation
The static keyword will stop working as we expect: we won’t be able to constrain variables and functions to one file any longer. The power of good naming helps us out here. We also have to be disciplined about keeping our code modular and not referring to code that we shouldn’t.
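A minimal illustration of the collision (the helper function here is invented): two file-local helpers that coexist happily in a normal build clash once both files land in the same translation unit.

```cpp
// player.cpp
static int clamp_health(int value) { return value > 100 ? 100 : value; }

// enemy.cpp -- fine as a separate compilation unit, but in a unity build
// both definitions end up in one translation unit:
static int clamp_health(int value) { return value > 50 ? 50 : value; } // error: redefinition
```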
We still need to discover platform-specific properties
On an open source project which must be built everywhere, we’re never going to get away with something as simple as this: we’re going to need to check to see what headers exist and which libraries are available.
However, there’s no reason we can’t set up make to do a simple unity build such as this one.
Also, many of these portability problems we patch over with our build tools stem from the fact that our code wasn’t correctly written to be portable in the first place, and many build systems still in wide use today carry a lot of cruft left over from the 1980s - do we really still need to check for the presence of <stdlib.h>?
Additionally, in the case where we can control our build environment, it becomes even easier: we simply create a build script for each compilation platform we need to support (a build.bat for Windows, for example).
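To make that concrete, a lot of the platform handling can live in the source itself rather than in a configure step. Here’s a minimal sketch (the sleep_ms helper is invented for illustration; the headers and calls are standard):

```cpp
// Pick the right system call at compile time instead of probing for it
// with a configure script.
#ifdef _WIN32
  #include <windows.h>
  static void sleep_ms(unsigned ms) { Sleep(ms); }
#else
  #include <unistd.h>
  static void sleep_ms(unsigned ms) { usleep(ms * 1000); }
#endif
```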
Sol Trader’s Unity build setup
This is my current build setup for Sol Trader in its entirety.
This is working fine for me right now. It’ll need expanding in the future, but rather than spending endless time screwing with my build system now, I’m actually adding game features.
Want to hear the other side of the debate? Here’s a well-argued opposing point of view: the evils of unity builds.
4 questions to discover if you're *really* agile...
Here’s a challenge: how many of these are true for your team? (Be honest.)
- Does our team value processes and tools (i.e. our task tracker, source control program, our agile process, our meeting cadence, etc.) over conversations between team members?
- Does our team attempt to document everything (perhaps through long comprehensive ticket descriptions, or massively detailed cucumber features) before focusing on working software?
- Does our team think about SLAs, response times and formal release procedures before shipping something and having a conversation with the customer about it?
- Is following the plan we agreed in sprint planning more important than changing it in response to a customer?
If our projects sound like this, we’re doing exactly the opposite of the agile manifesto.
The anti-agile manifesto
Processes and tools over individuals and interactions
Comprehensive documentation over working software
Contract negotiation over customer collaboration
Following a plan over responding to change
Read both versions through. Which one sounds most like your project?
The toolchain of dreams
Seems like yesterday people were saying that it was difficult to host Ruby apps. It was around the time people were saying “Rails doesn’t scale”, which thankfully has been proved dramatically wrong.
For a while now Ruby apps have been unbelievably easy to run and host, especially when you’re getting started.
But it’s got even better than that in the last few months. I’ve now got a complete Continuous Delivery toolchain set up for my latest app, entirely in the cloud. It’s Continuous Delivery As A Service, and it’s dreamy. This is how to set it up, and how it works.
Source control: Github
I’m using Github for code hosting and source control. You probably are already too. Most of the other services integrate with it very well, so setting this toolchain up is so much easier if you’re using it.
Build server: Semaphore
Cloud-based build services have been around for a while now. I like Semaphore - the user interface is clean and easy to read, and it does automatic deploys of passing code:
Set up Semaphore by creating a trial account, connecting it with your Github account and picking the repository you’d like to build. It automatically analyses your project for a build script so if you have a standard Ruby or Rails project you probably won’t need to configure it much.
Deployment: Heroku
If you’re using Heroku to deploy your code, set Semaphore up to deploy passing builds for you. It takes a few seconds in the settings menu for your project to do so. You can also make it use a Capistrano deploy script instead.
Quality Analysis: Code Climate
Lastly, set up Code Climate to monitor the quality of your app’s code. Setting up Code Climate is similar to Semaphore: sign up for a trial, connect to Github, select the repository. It will automatically set up the Github commit hooks for you.
To get code coverage integration, you’ll need to install a gem, but it only takes a few minutes.
How the toolchain works
Out of the box, Github tells Semaphore to build every commit I push. If I push a branch, Semaphore builds that, too, and updates the build status of the commit so that everyone can see if the pull request is ready:
Merging code into master
When the pull request is merged, the code goes into master:
- Semaphore builds the master branch. If the tests pass, the code is deployed to Heroku.
- Code Climate automatically gets notified by Github and checks to see whether coverage has improved or decreased, whether I’ve introduced any Rails security problems, or whether my code is bad:
Logging
Builds, deploys and Code Climate notifications are all automatically posted to Hipchat, so I get a log of everything that’s happened without being inundated with emails:
Just set up a Hipchat account, get a Room API key from the settings page, and plug that into Github, Code Climate and Semaphore. Done.
The dream toolchain
This is what using the toolchain looks like:
Every time I push some code, it’s checked carefully and monitored for quality and security holes. The tests are run and coverage reports are generated and presented nicely. If all the tests pass, the code immediately gets deployed to production, and all of this activity is reported and logged in one central place.
This is the future. It really doesn’t get much better.
Time is valuable: and this didn’t take long
This took me about 40 minutes to set up. 30 minutes of that was fiddling with the settings of the various tools, but actually leaving them all set to their defaults does the right thing for me in all cases. Most of the tools simply connect to your Github account to set up all the access controls and keys for you.
The cost
For one project, this incredible toolchain will cost you the following:
- Github: $7 a month for the micro plan
- Semaphore: $14 a month for the solo plan
- Code Climate: $24 a month for the solo plan
- Hipchat: Free for one room
- Heroku: Free for a one dyno app.
That’s $45 a month. That’s next to nothing for such an amazingly powerful toolchain. Plus if you run more than one project, the per-project cost decreases dramatically.
I used to run one server to host one Rails app for $140 a month, for years, with no build server, deployment or code metrics built into the toolchain. Today I pay around a third of that for a much more sophisticated setup.
Admittedly, the hosting costs with Heroku will go up once your app becomes popular, but this is a good problem to have, and at that point you should have the cash to invest in a Chef-based cloud server deployment solution. I run one of those for an old SaaS service of mine to keep costs down. It’s still very easy to connect a different deployment strategy into this toolchain.
So: what are you waiting for?
Delegated tasks are a team anti-pattern
“Jane, I’d like you to phone up the recruiter, and tell them we need a new agency person. Don’t use Jim from Acme Recruitment again, you didn’t get very far with him last time. Make sure you book whoever it is in for a week to work with us as a trial, like last time. That worked well.”
“Jane, can you find us a great developer for the new website we mentioned in standup last week? Let me know if you need help.”
Which is better?
Goals, not tasks
How about we give our team goals, not tasks? Let them shoot for something, and work out their own tasks, rather than giving them a simple list of things to do. Goals allow people to apply their own creativity and their own flair to a solution, and the end result will be stamped with their individuality.
When learning a new skill, people need direction and tasks to follow. Matt Wynne recently reiterated the classic Shu-Ha-Ri model of learning, where we start with very clear forms to follow, then break those forms as we try new things, then advance to a place where we no longer need the forms at all. At first, we need to work closely with people, and show them the tasks we perform to get something done. Note that this is quite different to giving people a long list of tasks to complete to ‘learn something.’
Whenever we give something away, there’s a risk that it won’t be done in quite the way that we would like. The simple fact is: no, it won’t. But assuming we’ve not overstretched someone, there’s a good chance they’ll get the job done at least 80% as well as we could have. And good people will cope with being stretched much further than we think.
There’s delegation, then there’s abdication
When we take goal setting too far, we just tend to stop giving people goals altogether and let them figure out their own jobs. This is dangerous: the best people don’t need managing, but they do need leading. Our role as a leader is to paint an exciting vision of the future, and then let our team figure out how to get there.
Micromanagement has many levels
It’s quite possible to micro-manage without realising it. We might think we’re not micro-managing because we’re not telling people exactly how to do something. However, if we’re leaving little room for doubt in our own minds, and creativity in theirs, then our team will feel less able to apply their own skills and talents to the problem. They’ll end up feeling discouraged and insignificant.
Ultimately it comes down to trust, and fear. How much do we trust our people to get the job done? How much do we fear losing control?
The first step to fighting a task-oriented tendency is to realise it’s probably not a problem with our team members, but with us.
Job titles are a team anti-pattern
“We have two designers, two front-end developers, two back-end developers, and a tester.”
“Allie and Jim tend to lay out most of the pages, with help from the others. Joe, Alice, Bob and Alan tend to write most of the code, with Bob and Alan working mainly on the server side of things. Darren makes sure our work matches up to what’s expected.”
Which is better?
Job titles are labels
Labelling people with job titles as shorthand is one thing, but if we’re not careful our use of them can be dysfunctional:
- Labels limit people’s potential. Our labels will limit what people will work on: they’ll subconsciously start to stick to what their title says. This will happen even if they’re good people: it’s human nature to react to the culture which our team creates.
- People hide behind the label. “That’s designer work, that’s not what I’m good at.” This gets worse when we get more specific: “I’m a front end developer: I don’t write Ruby.” This stops techniques like Kanban working effectively as people are less likely to help each other, and creates silos of knowledge in the team.
- Labels reduce people to resources. “We need 4.2 developer days for this project, with 2.4 designer days per developer day.” Labels are interchangeable: people aren’t. Some developers are orders of magnitude more productive than others, for example. By homogenising the team, we’re extracting the soul from the company: we might as well be selling crude oil, not people’s expertise.
I’ve recently tried to stop using labels to describe myself: see my twitter bio for example. It’s been an interesting exercise, and I’d recommend it.
Selling services by team, not label
One problem arises when we run companies which sell client services by the hour. It’s easy to put together a rate card for different job titles, but this exacerbates the label problem and embeds it into the economics. I prefer selling whole team-weeks to the client, rather than individual developers: “This crack team of people will set you back £10,000 per week”, for example.
Remember: the team environment is perfectly designed to achieve the result we’re currently getting. How are our job titles and labels affecting the way our team works today?