Tuesday, 12 June 2012

The Translation Fallacy

If you have ever been unlucky enough to attend a large gathering of strategy academics – as I have, many times – it may have struck you that at some point during such a feast (euphemistically called a “conference”), the subject matter would turn to talk of “relevance”. It is likely that the speakers were a mix of the senior and grey – in multiple ways – interspersed with aspiring Young Turks. A peculiar meeting of minds, where the feeling might have dawned on you that the senior professors were displaying a growing fear of bowing out of the profession (or life in general) without ever having had any impact on the world they spent a lifetime studying, while the young assistant professors showed an endearing naivety in believing they were not going to grow up like their academic parents.

And the conclusion of this uncomfortable alliance – under the glazed-over eyes of some mid-career associate professors, who could no longer, and not yet, care about relevance – will likely have been that “we need to be better at translating our research for managers”; that is, if we’d just write up our research findings in more accessible language, without elaborating on the research methodology and theoretical terminology, managers would immediately spot the relevance of our research and eagerly suck up its wisdom.

And I think that’s bollocks.

I don’t think it is bollocks that we – academics – should try to write something that practicing managers are eager to read and learn about; I think it is bollocks that all it needs is a bit of translation in layman’s terms and the job is done.

Don’t kid yourself – I am inclined to say – it ain’t that easy. In fact, I can think of three reasons why I never see such a translation exercise work.

1. Ignorance

It underestimates the intricacies of the underlying structure of a good managerial article, and the subtleties of writing convincingly for practicing managers. If you’re an academic, you might remember that in your first year as a PhD student you had the feeling it wasn’t too difficult to write an academic article such as the ones you had been reading for your first course, only to figure out, after a year or two of training, that you had been a bit naïve: you had been (blissfully) unaware of the subtleties of writing for an academic journal – how to structure the arguments; which prior studies to cite and where; which terminology to use and what to avoid; and so on. Well, good managerial articles are no different; if you haven’t yet developed the skill to write one, you likely don’t quite realise what it takes.

2. False assumptions

It also seems that academics, when writing their first managerial piece, immediately assume they have to be explicitly prescriptive and tell managers what to do. And the draft article – invariably based on “the five lessons coming out of my research” – will indeed be fiercely normative. Yet those messages are often either impractically precise (“take up a central position in a network with structural holes”) or too simple to be of any real use (“choose the right location”). What you need to do is capture a busy executive’s attention and interest, giving them the feeling that they have gained a new insight into their own world by reading your work. If that insight is prescriptive: fine. But often precise advice is precisely wrong.

3. Lack of content

And, of course, more often than not, there is not much worth translating… Because the research was done with solely an academic audience in mind – the desire to also tell the real world about it only came later – it has produced no insight relevant for practice. I believe that publishing your research in a good academic journal is a necessary condition for it to be relevant; crappy research – no matter how intriguing its conclusions – can never be considered useful. But rigour alone, unfortunately, is not a sufficient condition for research to be relevant and important in terms of its implications for the world of business.

Monday, 4 June 2012

“Can’t Believe It” 2

My earlier post – “Can’t Believe It” – triggered some polarised comments (and further denials), as well as the question of to what extent this behaviour can be observed among academics studying strategy. And, regarding the latter, I think: yes.

The denial of research findings obviously relates to confirmation bias (although it is not the same thing). Confirmation bias is a tricky thing: we are – largely without realising it – much more prone to notice things that confirm our prior beliefs. Things that go counter to them often escape our attention.

Things get particularly nasty – I agree – when we do notice the facts that defy our beliefs but still don’t like them. Even if they are generated by solid research, we’d still like to find a reason to deny them, and so you see people start to question the research itself vehemently (if not aggressively and emotionally).

It becomes yet more worrying to me – on a personal level – when even academic researchers themselves display such tendencies – and they do. What do you think a researcher in corporate social responsibility will be most critical of: a study showing that it increases firm performance, or a study showing that it does not? Whose methodology do you think a researcher on gender biases will be more inclined to challenge: a study showing no pay differences, or a study showing that women are underpaid relative to men?

It’s only human and – slightly unfortunately – researchers are also human. And researchers also act as reviewers and gatekeepers of the papers that other academics submit for possible publication in academic journals. They bring their biases with them when determining what gets published and what doesn’t.

And there is some evidence of that: studies showing weak relationships between social performance and financial performance are less likely to make it into a management journal than into a finance journal (where more researchers are inclined to believe that social performance is not what a firm should care about) – and perhaps vice versa.

No research is perfect, but the bar is often much higher for research generating uncomfortable findings. I have little doubt that reviewers and readers are much more forgiving of the methods of research that generates nicely belief-confirming results. Results we don’t like are much less likely to find their way into an academic journal. Which means that, in the end, the published body of research may turn out to be biased and misleading.