Thursday 24 October 2013

Problems about "Impact"

A lot is being written now about the upcoming REF and its associated demand for impact. At the same time, independently, our research centre ICLS has started its second five years with a great deal more attention to impact than we gave it when we started out in 2008. I had developed an idea about what used to be called "dissemination" during the ESRC Resilience Network that I co-ordinated between 2003 and 2007. I called this "targeted dissemination". It was perhaps closer to what then became "user engagement". I figured that if we as a research group were going to be any use to our non-academic partners, we needed to get to know each other. The project leaders needed to come to understand the interests and needs of the partners. The kind of thing I wanted to avoid was exemplified when one of my old friends from my Civil Service days said, "Oh, Mel, I would love to be able to help you but I am just snowed under with work". This was not the idea at all! I had not meant to ask her to help me by agreeing to "engage with me as a user". So I said, look, the idea is that I am supposed to help you out, not the other way around.

Nowadays I hear similar things from people in the third-sector groups that I still relate to. They are wise to the reason they suddenly receive a lot of messages from academics asking if they are interested in some project or other. They know this is because a call for proposals has gone out on a topic that is relevant to them (ageing, child health, etc.). And other civil service friends told me they had a kind of standard paragraph they could send out to importunate academics without taking too much trouble over it.

So over the years I have made myself available to our non-academic partners in whatever way they find useful. It might be advising on a tender they are drawing up to get some research done. It might be reviewing applications they have received. It might be a friendly chat. I always answered my own phone (I am retired now), which people used to like, though it surprised them. I always wrote personally to people.

One of our partners is a private firm. When I tried to involve them in an ESRC co-funding scheme, however, it did not work. They took one look at the forms we would all have to fill out and were horrified. "We don't pay you people to fill out forms," they said, "we pay you to do research on the questions we are interested in. Can't we just agree a task and pay you?" They could not understand that getting a joint project co-funded was a competition in which the ESRC had to judge who should get the award. They knew what they wanted and they wanted us to do it, end of. And "overheads"? Forget it. Now we do things their way.

But this is not the main point of this blog. The main point is rather more serious. In the race for impact, I do not think enough attention is paid to the quality of the science. The literature is, in any case, now filling up with stories about unreplicable research. What quality control is in place to make sure that "impact" is not being attained with poor science? You have a three- or even five-year research programme, and within that time impact must be demonstrated. What time does that allow for your results to be tested, replicated and critically discussed? Even clinical medicine finds itself under fire for prescribing useless drugs and procedures that have been thrust forward without full enough evaluation. This happens even in a field where clinical trials are supposed to stand as a guarantee; now we know that many negative results are hidden. One can see here where financial incentives play a powerful role. But do we want a situation in the social and policy sciences where, in the absence of the profit motive, the "impact motive" threatens to create a similar form of corruption?
