“If we don’t evaluate our impact we risk becoming our own worst enemies.” Image: mollybob.

By Craig Cormick, Communication Adviser and Arwen Cross, Science Communicator.

Public concerns about issues such as wind farms and vaccines have led to a discussion about why some people have strong fears or adverse reactions, and why their perceptions of risk don’t align with those of scientists. As Janet McCalman wrote recently on The Conversation:

Their problem is a problem with science, and science has something of a problem with them.

Both sides – the scientists and the public – have a problem that could potentially be addressed by better science communication that works to include all sides of such debates rather than polarising them, and uses evaluation to measure impact and improve.

There are many good arguments for improving the general public’s understanding of science. These include a knowledge of science being useful in daily life (such as determining which medical advice is more sound); the economic benefits (a skilled workforce is good for the national economy); the cultural benefits (that it is fulfilling to know about science, history or music); or even democratic benefits (an informed society can make better decisions). We could call this 20th-century thinking.

More recent arguments suggest that people should be engaged early in the directions and outcomes of scientific research, as key stakeholders/taxpayers/beneficiaries. We could call this 21st-century thinking.

But is the question really a discrepancy between 20th- and 21st-century thinking, as Jenni Metcalfe suggested recently on The Conversation? Or is it more about better matching science communication strategies with different audiences, based on evidence? Because if we’re going to debate the best way to communicate science to the public, we must use that key tool of scientific research – evidence!

We should try to be a little clearer as to whether our goals are to increase scientific literacy, or organisational awareness, or science engagement. And what exactly does science literacy mean anyway?

Are we talking about knowing a certain set of facts or principles? Or are we talking about being able to think more critically about evidence, or even being able to actively take part in decisions about the directions and outcomes of scientific research?

So how can we communicate better?

Certainly the methods and the evidence of science need to come across more effectively in some of our communications. That means we can’t just talk about final outcomes and black-and-white results.

There are good examples of this, such as the public debate on embryonic stem cell research, where moderate and reasoned discussion smothered the voices of hysteria and hyperbole.

Public attitude research from Biotechnology Australia in 2005 showed that attitudes were driven by a complex value chain, influenced by an individual’s personal moral position, the source of the embryonic stem cells, the benefits of the technology and levels of social trust.

The data showed that in 2005 almost 80% of Australians were aware of the use of embryonic stem cells for medical research, and this research was supported by 63.5% of the public, compared to 24.5% against it, with the remainder undecided.

Using this knowledge, public debate by scientific organisations tended to focus more on the ethics and values of stem cell research, rather than explaining the science.

By 2007, when the dust had mostly settled, approval ratings for embryonic stem cell research had risen from 63.5% to 76% and those opposed had dropped from 24.5% to 20%. And those most in favour tended to be categorised as technophiles with a strong support for uses of science and technology.

Similar values-based communications, such as understanding differing world-views on the relationship between technological development and nature, have been applied in some instances to public debates on GM foods, nanotechnology and climate change. Effective communication efforts have sought to frame discussions in terms of the values the public is applying to the issues, rather than those of scientists. But such examples are too few and far between.

But are we making an impact?

The question that should drive any science communication activity is: is it making an impact? It’s very easy to believe it is when everybody cheers as you blow something up, or the science fans rave about how interesting it was. But that’s a long way from actually contributing to scientific literacy in a sustained way. And, being honest, we rarely measure the long-term impacts we really need to be measuring.

Yet if we don’t evaluate our impact we risk becoming our own worst enemies.

To quote Dan Kahan, Professor of Law at Yale University, who has done significant work on how our cultural biases shape our thinking:

Not only do too many science communicators ignore evidence about what does and doesn’t work. Way way too many also shoot from the hip in a completely fact-free, imagination-run-wild way in formulating communication strategies.

If they don’t rely entirely on their own personal experience mixed with introspection, they simply reach into the grab bag of decision science mechanisms (it’s vast), picking and choosing, mixing and matching, and in the end presenting what is really just an elaborate just-so story on what the ‘problem’ is and how to ‘solve’ it.

That’s not science. It’s pseudo-science.

This article was originally published at The Conversation. Read the original article.


  1. An insightful and thought provoking piece. I’m really interested in the measuring impact of science communication you touched on at the end of this article. I’m working in the digital space of science communication, and it is very easy to spit out numbers on reach, but measuring the qualitative impact of this requires more time and effort (and money!). What are your thoughts on measuring impact, both online and offline, and what tools might you suggest for successfully measuring this?

    1. Thanks for your comment Samuel. It’s certainly easier to measure bums on seats (real and digital) than impact.

      The topic of effective evaluation came up at the Big Science Communication Summit in June, and you can read some of the ideas from the workshops at http://sciencerewired.org/summit/2013/03/developingtheevidence/

      Inspiring Australia is also developing some standard evaluation tools. Watch their space at http://inspiringaustralia.net.au/toolkit/planning-your-events-and-activities-to-make-an-impact/
