Thursday, October 20, 2011

On Admitting Failure, Otherwise Known as Learning

This post is part of this week's forum on admitting failure.


When I first joined the “aid world” from the private sector, one of the first things that struck me was how off-the-cuff calculations were presented as hard numbers rather than as what they were: estimates (for example, the gender breakdown of the families being assisted). When I suggested we call them "estimates" in the reporting, I was met with a look of both shock and horror:

-"then they’ll think we don't know what we are doing!"

Anyone who understands anything about development knows that in most situations it is almost impossible to know the exact gender composition of a family benefiting from some type of assistance. You can estimate from country statistics how large the family might be, as well as its likely gender composition, but it will always be just that: an estimate. So why the reluctance to state it as such?
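
To make the point concrete, here is a minimal sketch of what such a calculation typically looks like. All the figures are made up for illustration, not real country statistics, and the result is reported as what it is: a range derived from assumptions, not a hard count.

# A back-of-envelope estimate (all figures below are hypothetical, for illustration only)
families_assisted = 1200        # counted directly during implementation
avg_household_size = 4.8        # assumed national average household size
female_share = 0.51             # assumed share of women and girls in the population
uncertainty = 0.10              # +/- 10% to make the roughness of the figures explicit

beneficiaries = families_assisted * avg_household_size
female_beneficiaries = beneficiaries * female_share
low = female_beneficiaries * (1 - uncertainty)
high = female_beneficiaries * (1 + uncertainty)

print(f"Estimated female beneficiaries: ~{female_beneficiaries:,.0f} "
      f"(range {low:,.0f}-{high:,.0f}, based on national averages)")

The difference between reporting "2,938 women and girls reached" and "roughly 2,600-3,200 women and girls reached, based on national household averages" is exactly the difference between a number and an estimate.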

At least part of the problem is (methinks) that development is a relatively new social science. Like any new science, it learns by doing, which necessarily implies making mistakes. Like any social science, it has no hard and fast rules, no laboratory situations that can be replicated. 2+2 only equals four in a maths class; anywhere else it will depend on the context.

So why do we have this big hang-up about pretending to be something we are not? Didn’t medicine (actually, doesn’t medicine) learn by doing? Didn’t NASA lose a couple of spacecraft along the way? Why are we expected to be perfect from the start?

Donors and the public alike need to be educated about this, which really means we have to stop assuming they are dumb and can’t grasp these concepts. They can.

We also need to understand this ourselves. We need to have enough confidence to give ourselves the space to learn, to sit down and think:

“OK, what went wrong, how could I have done it differently, and how would I do it in the future?”

Which brings us back to the part where we educate donors and the public, because to do that we not only need them to understand, we also need them to support and fund that part of the process.

I’m constantly shocked to find projects without the support of a Monitoring & Evaluation (or reporting) officer. The requirement to monitor and report is always there, but the funds are assigned to implementation, and the implementing staff are expected to design the tools that will capture the relevant information, collect and document the data, and turn it into a report, all on top of implementing. None of this is work they are trained to do, and it usually translates into process monitoring:

“How many X did we do?”

as opposed to: “What do we want to achieve by doing X, and how can we measure whether we are having an impact on that?”

The requested report gets written and filed. The box gets ticked. No one has the time to analyse the information, to look for synergies, to see what other organizations are doing and how we could reinforce each other, or what we could learn from them. The lessons learned get filed away, hidden or camouflaged if possible, with no one really benefitting from them.


Admitting failure isn't about marketing or a mea culpa; it's about making learning from our mistakes part of the management process. Anything that gets brushed under the rug does double harm: first when you made the mistake, and then every time someone else has to repeat it because they couldn’t learn from you. Which, to me, sounds like a highly inefficient and expensive way to move forward.


4 comments:

Diana said...

Your reasoning could very well be applied to many other work situations: in different sectors, in government, and even in our private lives.
Very interesting post!

Stephen Jones said...

Thanks for the thoughts on this; they made me think of two other recent blog posts.

Firstly, David Week questions why evaluation tends to be assigned to distinct specialists rather than being seen as part of everyone's role (http://bit.ly/rnofsS). Perhaps you're suggesting that specialist M+E people are needed, but that their role should be making sure everyone else actually does their own M+E?

Secondly, Waylaid Dialectic makes the nice point that we should ask evaluators how often they are wrong and make mistakes too (http://bit.ly/o7jHow).

amanda said...

I think this post brings up one of the biggest problems in aid. Without proper M&E we just keep doing the same programs over and over in different locations and then scratching our heads when we fail over and over. If we don't know what went wrong, or why, then why do it again?

Unknown said...

Indeed, and let's not even get into why efforts are duplicated where they are not needed and missing where they are needed most. For as long as we keep acting in terms of "success" and "failure" as absolutes, this will continue to happen.