Objectivity in Science

I recently had the good fortune to attend the Objectivity in Science conference, held June 17–20, 2010 at the University of British Columbia. The question attendees set out to examine was: what is objectivity and why does it matter? The conference was part of a SSHRC-funded network called Situating Science: Science in Human Contexts, which aims to connect those “engaged in the humanist and social studies of science examining the sciences in situ”. Major universities across the country have held their own Situating Science events in the past year, all relating to science and technology studies or the history and philosophy of science.

While there were many sessions to report on — in particular, plenaries by Ian Hacking and Peter Galison, which I found quite intriguing and inspiring — I’ll speak to the first two papers (apologies to Moira Howes) delivered in Session VI: Objectivity: Norms & Trust. On the whole, these papers addressed the facts and values that underlie science communication to the public.

What’s wrong with framing? How scientists’ norms of objectivity may (but need not) impede effective science communication — Scott Tanona et al., Philosophy, Kansas State.

Tanona and colleagues (I regret to confess that Tanona had a co-presenter but I didn’t catch her name…) conducted an interesting study in which they found that scientists tend to hold two norms around the objectivity of ‘proper methods’ within science that actually impair science communication. These norms are: 1) public inferences from the data should be autonomous and rational; and 2) empirical data is the arbiter of theoretical disputes, according to generally accepted standards of evidence (e.g., science vs. religion). In other words, communicating ‘just the facts’ (or rather, deciding what facts are relevant for communication) is a way to place normative emphasis on data as the objective basis for addressing disputes. However, presenting ‘just the facts’ never delivers just the facts: the individuals on the receiving end of the communication do not necessarily share the scientist’s expertise, and they bring their own values to interpretation, which may differ from the scientist’s. Tanona et al. ultimately argued that scientists should use morally appropriate ‘framing effects’ (cf. Tversky and Kahneman) to promote wider acceptance of scientific findings, since framing helps individuals overcome cognitive bias while preserving accuracy and audience autonomy. Now, I’m not entirely sure how this approach would eliminate barriers in the public understanding of the science itself (let alone democratize the knowledge translation process). In many ways, it still seems to be a transmitter–receiver form of communication, with those in positions of power still deciding what is worthy of public knowledge. Thoughts on this?

Objectivity as Trustworthiness — Naomi Scheman, University of Minnesota & Umeå Centre for Gender Studies.

So why does objectivity matter? Indeed, this was one of the underlying questions of the conference, but also one Scheman posed within the context of trust. Why do we trust experts? Specifically, what might make it rational for people not to believe what qualified scientists say? Such a situation is occurring in climate science: people are losing their trust in the science. Scheman argued that we trust experts because of our epistemic dependency – more specifically, we depend on various forms of expertise to find out ‘what we need to know’ (and assume that both who is saying it and what is said are trustworthy), but we rarely check the knowledge claims ourselves. The systems that produce these knowledge claims are embedded in various institutions such as university, government, or corporate research labs. Politics and science are thus intertwined, and this presents a dilemma. An example Scheman gave concerned the modern food industry: “do I trust the science or do I trust Monsanto?” The problem of epistemic dependency thus has implications for science communication. The question for Scheman then becomes: since we need to rely on others for information, who or what ought we to depend on? Since epistemic dependency demands epistemic trust, which in turn requires epistemic responsibility, Scheman argued for an epistemic democracy in which to ground concepts of expertise (that was a lot of words using epistemic!). This thinking is in line with values of social justice, in which objectivity is understood as a public good and a diversity of voices can be embraced.

All in all, I really enjoyed attending this conference, particularly the session I briefly summarized above. I look forward to seeing more from this group in the future.

Links

Objectivity: Vancouver 2010

Situating Science


2 thoughts on “Objectivity in Science”

  1. Pingback: Objectivity in Science | Neuroethics At the Core

  2. One might wonder why one needs to trust in the sciences when the sciences don’t deal in values.

    The bare facts are just the bare facts, to be passed over or employed. But certainly not to be “trusted”. No trust is required. If a scientist promotes a value on the back of a bare fact then that’s a good reason to distrust the sciences.

    The idea that we should trust the sciences is the best reason for not trusting them. It is values that establish facts. Only fraudsters would present a fact as a value to be trusted, or as a source of a value. Examples could be cited.
