Living in an Algorithmic World

Two recent conferences, re:publica 18 in Berlin (May 2-4) and Theorising the Web 2018 in New York (April 27-28), featured the influence of algorithms on today’s world.

This week, ‘How an algorithmic world can be undermined’ was the title of danah boyd’s opening keynote at re:publica 18.

Algorithmic technologies that rely on data don’t necessarily support a social world that many of us want to live in. We must grapple with the biases embedded in, and the manipulation of, these systems, particularly when so many parts of society are dependent on sociotechnical systems.

Over the course of an hour, danah boyd covered:

  1. Agenda setting – the safety of the internet, and how this can be manipulated by online groups.
  2. Algorithmic influence – most systems are shaped by algorithms, in the belief that algorithms are the solution to everything. boyd asks how we can challenge this and how these systems can be made accountable. How are these systems manipulated at scale (e.g. by political campaigns)? She says we are starting to see a whole new ecosystem unfold.
  3. Manipulating the media – there are plenty of examples of how this can be done to gain attention and amplify messages. Who is to blame for this? Twitter, journalists, news organisations, Wikipedia, reporters? We need to think about the moral responsibility of being an amplifier, and what that responsibility is. She asks what strategic silence looks like, and says: if you can’t be strategic, be silent. The process of amplification can cause harm, e.g. reporting on suicide can increase suicide rates.
  4. Epistemological warfare – how doubt is fed into the systems by which we produce knowledge. This destabilises knowledge in a systematic way, creating false equivalencies that the media will pick up on. “We are not living through a crisis of what’s true. We’re living through a crisis of how we know whether something is true” (Cory Doctorow).
  5. Bias everywhere – what biases are built into our systems, and how are they amplified? Bias is everywhere, algorithms included: society’s prejudices are built into the system. Machine learning systems are set up to discriminate – to segment information and create clusters – and so they are laden with prejudice and bias. (A small illustrative sketch follows this list.)
  6. Hateful red pills – gaming problems and data voids. ‘Red pills’ are meant to entice you into something more, radicalising people along the way. Where does this fit into broader sets of contexts?
  7. The more power a technical system has, the more intent people are on abusing it. We have to try to understand the dynamics of power and the alignments of context, particularly in relation to human judgement.
  8. The new bureaucracy – how do we think about accountability and responsibility? Who is setting the agendas, and under what terms? There is growing concern about the power of tech companies and power dynamics in society, but it is not simply about platforms. Algorithms are an extension of what we understand bureaucracy to be, and regulating bureaucracy has been difficult throughout history. It is not necessarily the intentions but the structures and configurations that cause the pain: bureaucracy can range from mundanely awful to horribly abusive, and algorithmic systems are similar, introducing a wide range of challenges. Technology is not the cause of fear and insecurity; it is the amplifier, a tool for both good and evil. We are seeing a new form of social vulnerability, and it comes back to questions of power. Regulating the companies alone isn’t going to get us what we want.
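
To make the clustering point in 5 concrete, here is a minimal sketch of my own (an illustration, not anything shown in the keynote) of how a clustering algorithm that is never given a protected attribute can still rediscover it through correlated proxy data. The feature names and all the numbers are invented for the example.

    # A hypothetical illustration of boyd's point about clustering and bias:
    # the protected attribute is hidden from the model, but a correlated
    # proxy feature lets the clusters recreate the segregation in the data.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n = 1000
    group = rng.integers(0, 2, n)                    # protected attribute (never shown to the model)
    postcode = group * 5 + rng.normal(0, 1, n)       # proxy: historically segregated neighbourhoods
    income = 30 + group * 10 + rng.normal(0, 3, n)   # income gap inherited from past discrimination

    X = np.column_stack([postcode, income])          # the model only ever sees the proxies
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # The clusters line up almost perfectly with the protected attribute:
    # the 'neutral' segmentation has rediscovered the social divide.
    agreement = max((clusters == group).mean(), (clusters != group).mean())
    print(f"cluster/group agreement: {agreement:.0%}")

Nothing in the code ever mentions the group, yet the output reports near-total agreement between the clusters and the protected attribute, which is the point: segmentation faithfully reproduces whatever structure, prejudice included, is already in the data.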

Similar themes were covered in the final keynote of the Theorising the Web conference:

#K4 GOD VIEW – https://livestream.com/internetsociety/ttw18/videos/174107565

This keynote took the form of a panel discussion between John Cheney-Lippold, Kate Crawford, Ingrid Burrington, Navneet Alang and Kade Crockford, moderated by Ayesha A. Siddiqi. It was a fascinating discussion, interesting not only for the content but also for the format and the lack of point scoring between panel members. Mariana Funes describes this well in her notes on hypothes.is, where she writes: “This [] felt like a chat after dinner, exploring the implications of the spread of AI systems …”

The underlying message and final question from both keynotes was the same: What kind of world do we want to live in?

5 thoughts on “Living in an Algorithmic World”

  1. Lisa M Lane May 10, 2018 / 2:15 am

    When I was in high school (I don’t even remember which class) I learned about how advertising is designed to manipulate you into buying products and ideas. Since then, I’ve always assumed advertising was trying to manipulate me, and I don’t like being manipulated, so I’m rarely enticed to buy anything through advertising – instead I notice the tricks being used.

    So it should be with algorithms. #3 and #5 make it clear that we need to have greater awareness of the ways in which algorithms can skew data. Once we have that understanding, or even just an awareness that it happens (without understanding the technicalities), it should be possible to counter the ill effects with our humanity.

    However, the massive success of advertising at all levels speaks to the ignorance people have about being manipulated, and that is the same problem with algorithms. With so many people tied up in the emotions of the last year or so, it is unlikely that many will see the manipulation without it becoming a cultural meme. It’s hard to pick out the forest fires when so many little fires capture one’s attention.

  2. jennymackness May 12, 2018 / 6:16 am

    Many thanks for this comment, Lisa. I do wonder about the extent to which raising awareness can counteract this problem. I agree that the more we are educated about the issues surrounding algorithms the better, but at a recent course I attended, where we were thinking about the ethics of advertising, we discussed the subliminal effects of advertising, and that simple exposure is enough to manipulate you unconsciously. As you say, “With so many people tied up in the emotions of the last year or so, it is unlikely that many will see the manipulation without it becoming a cultural meme.”

  3. francesbell May 15, 2018 / 8:40 am

    Thanks Jenny. I saw re:publica and TTW on my horizon but haven’t had time to read about them, so your report is very welcome. I think about algorithms a lot, but it’s very difficult to come to any conclusions. It’s not surprising that algorithms will incorporate the assumptions and biases of their designers, and also of their commissioners, who will pose constraints that serve their own goals – e.g. venture capitalists may have an influence on the algorithms in social media platforms that need to monetise; Twitter is a relevant recent example, I think. In some ways this has also been the case with advertising over the last century or more. As Lisa says, we can resist manipulation when we have information, but like you, I think we are going beyond the limits of this.
    I think a lot about what’s different now. When I studied AI and expert systems in the late 1980s, I remember the presence of explanatory interfaces in expert systems (much other AI was opaque – black boxes generated by algorithms). Here’s an example (chosen because it’s open) of a system with an explanatory interface: https://pdfs.semanticscholar.org/a1f1/7e0866bd10f98a638fe8dc8bb625885132fc.pdf. As the article demonstrates, there was an acceptance that designs and decisions achieved by humans and machines together needed communication between human and machine designers, just as human designers communicate among themselves.
    So what’s different now? Systems have escaped into the wild in one sense, though commercial needs mean that openness is controlled by the platforms, who largely decide what is (in)visible to people, companies and other (non)human nodes in the network.
    For me, we need to look at algorithms, but the data viewpoint is also very important. A new algorithm might incorporate a new set of biases, but additional biases and errors may be incorporated in the data used by the new algorithm, and the path by which this piece of data emerged will very likely be lost due to the invisibility (and often shoddiness) of the network of commercial platforms that tweaked it along the way.
    I’d better stop now as this is a ridiculously long comment, but let’s look at the data perspective and, as Safiya Noble said in a talk I was privileged to attend last week, “Never give up hope”.

  4. jennymackness May 16, 2018 / 6:08 pm

    Thank you, Frances, for your detailed comment. I suspect you know far more about algorithms than I do. I did listen to the recording of Safiya Noble’s presentation on Algorithms of Oppression – https://youtu.be/Q7yFysTBpAo – which was interesting, but I found danah boyd’s presentation more interesting, particularly the reference to epistemological warfare. Having said that, I had not thought about e-waste before listening to Safiya Noble. I still have some presentations to follow up 🙂

  5. ggoldbergmd September 5, 2018 / 11:32 am

    I have been reading about Jean Gebser’s ‘structures of consciousness’ and the nature of our ‘moment’, and it seems that a purely linear, algorithmic mode of understanding is connected to McGilchrist’s hypothesis regarding the domination of our culture by a left-hemispheric mode of conceptualization, much to the detriment of our own humanity as a species. By the way, there is much to be learned in this general context from looking at Gebser’s book, ‘The Ever-Present Origin’. There is a nice overview of it in a paper written by UJ Mohrhoff…

    https://antimatters2.files.wordpress.com/2018/04/2-3-05-gebser-origin.pdf

    Be forewarned: while this paper provides a nice overview of Gebser’s concept of the evolution of ‘structures of consciousness’ and discusses his ‘solution’ to our current crisis – the emergence of ‘Integral Consciousness’ out of our current ‘Age of Perspectivity’, which is dominated by a spatially perspectival physical worldview (e.g. as typified by the ‘block universe’ conceptualization of the Newtonian-Einsteinian cosmological paradigm) that lines up with the linear, sequential, algorithmic approach to solving problems – it is a pretty heavy-duty projection of what will likely occur if we do not manage to figure this all out in due time.
