Living in an Algorithmic World

Two recent conferences, re:publica 18 in Berlin (May 2-4) and Theorising the Web 2018 in New York (April 27-28), have featured the influence of algorithms on today’s world.

This week, ‘How an algorithmic world can be undermined’ was the title of danah boyd’s opening keynote at the re:publica 18 conference.

Algorithmic technologies that rely on data don’t necessarily support a social world that many of us want to live in. We must grapple with the biases embedded in these systems, and with how they are manipulated, particularly when so many parts of society depend on sociotechnical systems.


Over the course of an hour, danah boyd covered:

  1. Agenda setting – the safety of the internet, and how this agenda can be manipulated by online groups.
  2. Algorithmic influence – most systems are shaped by algorithms in the belief that algorithms are the solution to everything. Boyd asks how we can challenge this and how these systems can be made accountable. How are these systems manipulated at scale (e.g. political campaigns)? She says we are starting to see a whole new ecosystem unfold.
  3. Manipulating the media – there are plenty of examples of how this can be done to gain attention and amplify messages. Who is to blame: Twitter, journalists, news organisations, Wikipedia, reporters? We need to think about the moral responsibility of being an amplifier, and what that responsibility is. She asks what strategic silence looks like and says: if you can’t be strategic, be silent. The process of amplification can cause harm; reporting on suicide, for example, can increase suicide rates.
  4. Epistemological warfare – how doubt is fed into the systems through which we produce knowledge. This destabilises knowledge in a systematic way, creating false equivalencies that the media will pick up on. “We are not living through a crisis of what’s true. We’re living through a crisis of how we know whether something is true” (Cory Doctorow).
  5. Bias everywhere – what biases are built into our systems, and how are they amplified? Society’s prejudices are built into the system. Machine learning systems are set up to discriminate – to segment information and create clusters – and they are laden with prejudice and bias (the sketch after this list illustrates how this happens).
  6. Hateful red pills – gaming problems and data voids. Red pills are meant to entice you into something more, radicalising people. Where does this fit into broader contexts?
  7. The more power a technical system has, the more intent people are on abusing it. We have to try to understand the dynamics of power and the alignments of context, particularly in relation to human judgement.
  8. The new bureaucracy – how do we think about accountability and responsibility? Who is setting the agendas, and under what terms? There is growing concern about the power of tech companies and about power dynamics in society, but it is not simply about platforms: algorithms are an extension of what we understand bureaucracy to be, and regulating bureaucracy has been difficult throughout history. It is not necessarily the intentions but the structures and configurations that cause the pain. Bureaucracy can range from mundanely awful to horribly abusive, and algorithmic systems are similar, introducing a wide range of challenges. Technology is not the cause of fear and insecurity; it is the amplifier, a tool for both good and evil. We are seeing a new form of social vulnerability, and it comes back to questions of power. Regulating the companies alone isn’t going to get us what we want.
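boyd’s point about machine learning and bias (item 5 above) can be made concrete. The sketch below is not from either keynote: it is a minimal illustration, using an entirely hypothetical, synthetic ‘hiring’ dataset in which past decisions were prejudiced against one group. Even the simplest model trained on that history reproduces the prejudice – the bias is inherited from the data, not written anywhere in the code.

```python
import random

random.seed(42)

# Hypothetical, synthetic "historical hiring" records: both groups are
# equally qualified, but past gatekeepers hired qualified group-B
# candidates far less often. The prejudice lives in the data.
def make_record(group):
    qualified = random.random() < 0.5              # same for both groups
    hire_rate = 0.9 if group == "A" else 0.5       # biased past decisions
    hired = qualified and random.random() < hire_rate
    return group, qualified, hired

history = [make_record(g) for g in ("A", "B") for _ in range(10_000)]

# A minimal "learned model": estimate P(hired | group, qualified) from the
# historical records. A more sophisticated learner fitted to the same data
# would inherit the same disparity.
def predicted_hire_rate(group, qualified=True):
    outcomes = [h for g, q, h in history if g == group and q == qualified]
    return sum(outcomes) / len(outcomes)

for group in ("A", "B"):
    print(f"qualified group-{group} candidate -> score "
          f"{predicted_hire_rate(group):.2f}")
# Prints roughly 0.90 for group A and 0.50 for group B: the system
# faithfully reproduces, and automates, the prejudice it was trained on.
```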

Similar themes were covered in the final keynote of the Theorising the Web conference:

#K4 GOD VIEW – https://livestream.com/internetsociety/ttw18/videos/174107565

This keynote took the form of a panel discussion between John Cheney-Lippold, Kate Crawford, Ingrid Burrington, Navneet Alang and Kade Crockford, moderated by Ayesha A. Siddiqi. It was a fascinating discussion, interesting not only for the content but also for the format and the lack of point scoring between panel members. Mariana Funes describes this well in her notes on hypothes.is, where she writes: “This [] felt like a chat after dinner, exploring the implications of the spread of AI systems …”

The underlying message and final question of both keynotes was the same: what kind of world do we want to live in?