Over the past few months I’ve very much been a fan-boy of the work done by @racheldavies around building learning in team life. The teams at Unruly seem to have evolved a great combination of learning practices that support different types of learning: team and individual, practical and conceptual, practice and technology, etc. It’s not only the combination of learning practices that impresses me but also the amount of time committed to learning with a baseline of Gold Card days at a rate of 1/week where team members can learn pretty much anything related to their job.
I suppose, as a former principal technologist in a training company where my job was essentially to find new things, learn about them and then teach them to other people, this shouldn’t really be a surprise. I’m a great believer in ongoing learning, team learning and reflective practice, especially in the software industry where there is a lot of activity to track in the technology space and a lot of learning to do in the design space. Even though my career in training ended over 10 years ago, learning is still one of my great interests and it re-surfaced last year when I ran a workshop on ongoing organisational learning with Chris Cooper-Bland at SPA Conference called When will we ever learn?.
One of the things about being a reflective practitioner is that, well, you reflect on things. While considering how I could shamelessly rip these ideas off and apply them in my own organisation, a question popped up: how do you measure the impact? Now, some people would be interested in measuring the impact for some form of cost-benefit analysis (although down this road lies madness, as people start to talk of the initiative costing the company 20% x number of developers x average daily cost…). However, I’m not interested in that sort of discussion. What interests me is getting feedback on what you’re doing. Although the set of practices at Unruly feels instinctively good, how could I know that they are actually achieving the objective of making our software development, and the artefacts produced thereby, better? Also, this set of practices works for Unruly, but would a similar set of practices suit our context? I asked Rachel about this but they don’t have any measures and are happy to take it on gut instinct that it works for them. That’s OK when you have a former Connextra person as your CTO, but I’ll probably need something more.
This question seems to be haunting me at the moment as I went to a BoF session at SPA Conference this year on how to measure the success of practices and ended up discussing exactly the same issue with Marina Haase and Soheir Ghallab.
So, thinking about it, what benefit should we get out of this sort of approach to learning? Basically I would expect the development organisation to improve in various ways: being better at building the right software to solve the customer’s problems, higher quality software, faster delivery of business value, etc. Ultimately, if this is working well then we should be delivering better software faster (to quote Dan North, or Andy Carmichael and Dan Haywood) and this should lead to more customers and happier customers. So, could we measure an increase in our revenues or profitability? Could we use the net promoter score from our customer surveys to see how well we are doing with team learning? It’s fairly obvious that this type of measurement would be too coarse-grained and disconnected from the learning, and would be too prone to noise as, for example, the impact of a particularly good or bad sales campaign could completely mask the effects of the team learning.
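As a reminder of what that survey metric actually computes, net promoter score is the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6); a minimal sketch, with made-up survey responses for illustration:

```python
def net_promoter_score(scores):
    """NPS from 0-10 survey scores: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 5 promoters, 3 passives, 2 detractors out of 10 responses -> NPS of 30
print(net_promoter_score([10, 10, 9, 9, 9, 8, 8, 7, 5, 3]))  # 30.0
```

Even at a glance the formula shows the problem: it aggregates everything a customer feels about the company into one number, so any signal from team learning would be drowned out by whatever else moved the scores that quarter.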
At the other end of the scale, I could go back to my years in training and writing educational materials and we could look to define some learning objectives for the different things people were interested in learning. However, this would be too fine-grained. Although it would tell us whether a particular person or team had gained a new capability, it would not give us feedback on whether that particular capability was actually having a positive impact on our overall business. We could be learning a whole bunch of stuff that made us look good but it might not be the stuff we need to deliver for our customers.
So, what’s the answer? Well, at the moment I don’t know. In search of some form of answer I’ve invested in a copy of a recommended book on How to Measure Anything and I’ll keep you posted. If anyone reading this has any ideas they are willing to share then please let me know.
There are two people whose work I generally bring into any discussion of this topic:
Tom Gilb https://flowchainsensei.wordpress.com/2012/10/01/quantification-vs-measurement/
Dave Snowden http://cognitive-edge.com/sensemaker/
Soheir mentioned a paper by Tom Gilb for which she was going to send me a reference. I watched one of his presentations which included his “how to quantify love” but my recollection is a little hazy so I was thinking of going back to this.