I have discussed often how poor reporting on economics and social science can adversely shape social policy.
Salim Furth recently brought up salient examples of this type of creature in the wild.
What happens is that the political persuasion of the journalist often informs what type of stories they choose to cover. No one can avoid that tendency.
But quite often that leads journalists with unseasoned science literacy either to misunderstand the evidence or to fabricate an explanation altogether.
These are just the most recent examples.
It should also be noted that in empirical economics, results don’t come with their own interpretation. Scholars have to wade through them and describe what they see. That often means at least a modicum of speculation about the mechanism or the decision-making behind the behaviors they are measuring. Interpretations can be readily argued over.
I would venture to guess that, outside methodology, interpretation of results is the most fertile ground for quality control in social science.
The problem is that the science community can’t run quality control measures on journalists. When a journalist messes up the initial story, very few readers will ever see the retraction. Even fewer will understand why.
Maybe that’s why people generally distrust journalists.
Check your numbers, check your interpretations. That goes for the layman too!