Any press freedom supporter should be familiar with Reporters Without Borders' Press Freedom Index. It's an annual survey of every country in the world that attempts to quantify the level of freedom the press enjoys in each one. The ranking is based on questionnaires sent to experienced correspondents in each country, covering several weighted factors, including the political, legal, and economic restraints that journalists face.
It's a powerful source because it's comprehensive and because a great deal of thought has gone into each factor in each country. It's thus a perfect resource for a project that aims to compare other variables against press freedom measures.
And as you read deeper, you realize there's a contradiction between the numbers and the goal. The questionnaire comprises well over a hundred questions, grouped into very different factors.
And then you read a line like this, “Africa’s newest country [South Sudan scored at 38.04] is torn by civil war and has an extremely polarized press. In Afghanistan [scored at 37.44], it is the state’s ability to guarantee media safety that is lacking.”
And then you realize that the RWB only publishes a single value.
It is indisputable that several different factors contribute to media freedom violations. Some countries suffer more from government interference in the press, some from non-state terrorism and intimidation. Yet the RWB's measure simply puts a red tag on any country whose score falls between 35 and 55 points.
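The problem with a single composite is easy to see in a toy sketch. The factor names, weights, and scores below are invented for illustration; RWB's actual formula is not public. Two countries with opposite problem profiles can land on the identical composite score, and therefore the identical red tag:

```python
# Hypothetical sketch of a composite press freedom score.
# Factor names, weights, and scores are invented, not RWB's methodology.
# In the index, higher scores indicate less press freedom.

def composite(scores, weights):
    """Weighted average of per-factor scores."""
    return sum(weights[f] * s for f, s in scores.items())

def band(score):
    """Color band; the 35-55 'red' range is the one described in the text."""
    return "red" if 35 <= score <= 55 else "other"

WEIGHTS = {"political": 0.25, "legal": 0.25, "economic": 0.25, "safety": 0.25}

# Country A: heavy state interference, but journalists are physically safe.
state_interference = {"political": 20, "legal": 40, "economic": 40, "safety": 60}
# Country B: free of state control, but journalists face non-state violence.
nonstate_violence = {"political": 60, "legal": 40, "economic": 40, "safety": 20}

for name, scores in [("state interference", state_interference),
                     ("non-state violence", nonstate_violence)]:
    c = composite(scores, WEIGHTS)
    print(f"{name}: {c:.2f} -> {band(c)}")
```

Both hypothetical countries come out at 40.00 and get the same red tag, even though the interventions each would need are entirely different. Publishing only the composite discards exactly the information a solution-oriented reader needs.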
On paper this may not matter at all. They're only numbers, derived from measures that are probably far too subjective.
But the organization declares itself focused on solutions to violations. It's difficult to solve problems when you think they're all the same; you end up approaching them all the same way. Government problems demand diametrically different solutions from internal press problems.
What caught my eye is that the RWB recognizes this but does not publish these findings. (Their analysis of the correlations between their measure and things like per capita GDP is also severely lacking in rigor.) Instead, we must trust that what they say and do is in the best interest of press freedom. I do not, of course, doubt their devotion.
But I asked one of the coordinators of the index whether each value could be broken down into its constituent components. After all, the methodology shows that an extensive equation lies behind each value.
The reply concerned me. They cited the index's nature as an advocacy tool as the reason they could not disclose the breakdowns, "to preserve its impact." Only last year did they release a simple measure of violence against journalists (without considering whom the violence came from).
In turning away a friendly fellow undergraduate advocate, they chose to let the tool do its one job: tell the world the simple story that press freedom is a problem. They chose to treat their index as advocacy alone, and as a tool to reel in more money. In this kind of work, it's naturally hard to ignore the constraints that money puts on what you do. But the reply concerned me beyond this; it made me question how concrete the numbers and questionnaires behind each final value were.
What gives me hope is that one broken-down score, for abuses, was released, even if it is too general to be of direct use in finding solutions. They also describe the question as simply "not decided yet."