What is bias when the opposite may also be true?

Audrey Watters ended 2019 with a long article, The 100 Worst Ed-Tech Debacles of the Decade, which she introduced with these words:

I’m sure you can come up with some rousing successes and some triumphant moments that made you thrilled about the 2010s and that give you hope for “the future of education.” Good for you. But that’s not my job. (And honestly, it’s probably not your job either.)

Around the time that Audrey’s article appeared, I had been discussing Iain McGilchrist’s book, The Master and his Emissary: The Divided Brain and the Making of the Western World, with two friends (separately), in terms of the following questions:

  1. Is McGilchrist biased towards the right hemisphere?
  2. Could the left hemisphere and the right hemisphere both serve as the Master and his Emissary at different times?
  3. Does McGilchrist work in an echo chamber?
  4. Is the importance of technological intelligence, and of advances in artificial intelligence, sufficiently accounted for in McGilchrist’s work?
  5. Does McGilchrist promote the superiority/primacy of the right hemisphere to the detriment of the left hemisphere?
  6. Does the left hemisphere also have a role in recognising the new?

(If you are unfamiliar with McGilchrist’s work, these questions won’t mean much. If you are interested in knowing more about his work, a good place to start is this video on YouTube.)

The fields of interest of these two authors are completely different, but they have both been accused of bias and both have robustly defended their positions.

Audrey Watters received many positive responses to her article, but some questioned whether she should have mentioned Ed-Tech successes as well as failures, a response that she clearly anticipated, given the quote above.


But Audrey comes back fighting in her HEWN newsletter:

I’m not sure why folks want me to tell them what’s praiseworthy. As I said on Twitter: get your own moral compass. Look at your own practices, at the practices of those around you. And do better.

But more importantly, let’s be clear: the technology industry — education technology or otherwise — does not need my validation. It needs criticism. It needs criticism that refuses to come with sugar-coating and a few plaudits. There are not “two sides” to this issue that deserve equal time. There are not “two sides” — some good and some bad ed-tech — that exist in any sort of equal measure.

Iain McGilchrist, who published his monumental book in 2009 (it took him 20 years to research and write), has also received his fair share of criticism. Unlike Audrey Watters, he does present ‘two sides’ – the side of the left hemisphere and the side of the right hemisphere – but, he says, the relationship between them is not equal:

If the two hemispheres produce two worlds, which should we trust if we are after the truth about the world? Do we simply accept that there are two versions of the world that are equally valid, and go away shrugging our shoulders? I believe that the relationship between the hemispheres is not equal, and that while both contribute to our knowledge of the world, which therefore needs to be synthesised, one hemisphere, the right hemisphere, has precedence, in that it underwrites the knowledge that the other comes to have, and is alone able to synthesise what both know into a usable whole. (The Master and his Emissary, p. 177)

This hasn’t prevented the criticisms of bias, but, like Audrey Watters, he is equally able to fight his corner. See, for example, the exchanges between him and Steven Pinker, between him and Kosslyn and Miller, and between him and Kenan Malik. It is not hard to find more exchanges like these.

These interesting examples from two different authors, writing about different subjects, which have serendipitously come to my attention at the same time, raise the question of when ‘taking a stand’ and fiercely stating a position amount to bias.

The Cambridge Dictionary defines bias as follows:

… the action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgement.

On Wikipedia, bias is defined as:

…  disproportionate weight in favor of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair.

So is bias a bad thing, and what constitutes ‘disproportionate weight’? And have these two authors been closed-minded, prejudicial, or unfair, allowing personal opinions to influence their judgement?

Or is ‘biased’ just a word that we level at people who don’t agree with us, or who we don’t agree with?

I’m not sure that thinking in terms of bias is helpful. For some questions, taking a strong position (hopefully an open-minded, fair and unprejudiced one) is needed to produce a better argument, but in my experience it is hard to judge what counts as such a position. Personal perspectives and contexts are influential.

There is always the potential for an alternative perspective, or “two sides”, to quote Audrey Watters.

As Iain McGilchrist says:

The model we choose to use to understand something determines what we find. If it is the case that our understanding is an effect of the metaphors we choose, it is also true that it is a cause: our understanding itself guides the choice of metaphor by which we understand it. The chosen metaphor is both cause and effect of the relationship. Thus how we think about ourselves and our relationship to the world is already revealed in the metaphors we unconsciously choose to talk about it. That choice further entrenches our partial view of the subject. Paradoxically we seem to be obliged to understand something – including ourselves – well enough to choose the appropriate model before we can understand it. Our first leap determines where we land. (The Master and his Emissary, p. 97)

He also says (and this is perhaps even more relevant to this discussion):

‘There is always a truth in the opposite of something’ (see a previous blog post, The Value and Limits of Reason).

So, whether or not as individuals we think that Audrey Watters and Iain McGilchrist have presented biased arguments, we can remember that, for some other people, the opposite could also be true.


Living in an Algorithmic World

Two recent conferences, re:publica 18 in Berlin (May 2-4) and Theorising the Web 2018 in New York (April 27th/28th), have featured the influence of algorithms on today’s world.

This week, ‘How an algorithmic world can be undermined’ was the title of danah boyd’s opening keynote for the re:publica 18 conference.

Algorithmic technologies that rely on data don’t necessarily support a social world that many of us want to live in. We must grapple with the biases embedded in and manipulation of these systems, particularly when so many parts of society are dependent on sociotechnical systems.


Over the course of an hour, danah boyd covered:

  1. Agenda setting – the safety of the internet and how this can be manipulated by online groups.
  2. Algorithmic influence – most systems are shaped by algorithms in the belief that algorithms are the solution to everything. Boyd asks how we can challenge this and how these systems can be made accountable. How are these systems manipulated at scale (e.g. political campaigns)? She says we are starting to see a whole new ecosystem unfold.
  3. Manipulating the media – there are plenty of examples of how this can be done to gain attention and amplify messages. Who is to blame for this? Twitter, journalists, news organisations, Wikipedia, reporters? We need to think about the moral responsibility of being an amplifier. What is it? She asks what strategic silence looks like and says: if you can’t be strategic, be silent. The process of amplification can cause harm; reporting on suicide, for example, can increase suicide numbers.
  4. Epistemological warfare – how doubt is fed into the system about how we produce knowledge. This is destabilising knowledge in a systematic way, creating false equivalencies that media will pick up on. “We are not living through a crisis of what’s true. We’re living through a crisis of how we know whether something is true” (Cory Doctorow).
  5. Bias everywhere. What biases are built into our systems and how are they amplified? Bias is everywhere, including in algorithms: society’s prejudices are built into the system. Machine learning systems are set up to discriminate, to segment information and create clusters, and they are laden with prejudice and bias. (A toy sketch of how this happens follows this list.)
  6. Hateful red pills. Gaming problems and data voids. Red pills are meant to entice you into something more – radicalising people. Where does this fit into broader sets of contexts?
  7. The more power a technical system has, the more intent people are on abusing it. We have to try to understand the dynamics of power and the alignments of context, particularly in relation to human judgement.
  8. The new bureaucracy. How do we think about accountability and responsibility? Who is setting the agendas, and under what terms? There is growing concern about the power of tech companies and power dynamics in society. It is not simply about platforms: algorithms are an extension of what we understand bureaucracy to be, and regulating bureaucracy has been difficult throughout history. It is not necessarily the intentions but the structures and configurations that cause the pain. Bureaucracy can range from mundanely awful to horribly abusive, and algorithmic systems are similar, introducing a wide range of challenges. Technology is not the cause of fear and insecurity; it is the amplifier, a tool for both good and evil. We are seeing a new form of social vulnerability. It comes back to questions of power: regulating the companies alone isn’t going to get us what we want.
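
boyd’s point in (5), that machine learning systems are laden with bias, can be made concrete with a small sketch. What follows is my own toy illustration in Python, not anything from the talk, and every name and number in it is invented: a naive model ‘trained’ on skewed historical hiring decisions simply learns the skew and replays it.

    # A toy illustration (mine, not boyd's) of how historical bias
    # propagates into an algorithmic decision: a naive model trained
    # on skewed past outcomes reproduces the skew at scale.
    from collections import defaultdict

    # Hypothetical historical hiring records: (group, was_hired).
    # Past human decisions favoured group "A" over group "B".
    history = ([("A", True)] * 80 + [("A", False)] * 20 +
               [("B", True)] * 30 + [("B", False)] * 70)

    # "Training": estimate the historical hire rate for each group.
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in history:
        counts[group][0] += int(hired)
        counts[group][1] += 1

    def recommend(group):
        # Recommend a candidate if their group's historical hire rate
        # exceeds 50% -- the old prejudice, now automated.
        hired, total = counts[group]
        return hired / total > 0.5

    print(recommend("A"))  # True
    print(recommend("B"))  # False: the bias is replayed, not removed

Nothing in this ‘algorithm’ is malicious; the harm comes from the structure of the data it learns from, which is exactly the kind of structural problem boyd describes.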

Similar themes were covered in the final keynote of the Theorising the Web conference:

#K4 GOD VIEW – https://livestream.com/internetsociety/ttw18/videos/174107565

This keynote took the form of a panel discussion between John Cheney-Lippold, Kate Crawford, Ingrid Burrington, Navneet Alang and Kade Crockford, moderated by Ayesha A. Siddiqi. It was a fascinating discussion, interesting not only for the content, but also for the format and the lack of point scoring between panel members. Mariana Funes describes this well in her notes on hypothes.is, where she writes: “This [] felt like a chat after dinner, exploring the implications of the spread of AI systems ……”

The underlying message, and the final question, of both keynotes was: What kind of a world do we want to live in?