#NRC01PL Personal Learning Assistants

[Image: NAO (Aldebaran) robots – source of image, see Footnote]

This week I have lost my wonderful personal trainer, who had been coming to me twice a week for the past three years or so. She was wonderful because in all that time no two sessions were ever exactly the same; she talked throughout the sessions, making them seem like social events rather than gruelling exercise; and she knew exactly what I needed to keep me fit and what would motivate me. She personalized my fitness training. I will miss her, even though I will still have my twice-weekly circuit training sessions in our village hall. I might have to step that up to three times a week. The circuit training sessions are for a group; as such they are not individualised/personalized for me.

So it has been interesting to listen to Stephen Downes and George Siemens talking about personal learning assistants this week in the Personal Learning MOOC. Stephen illustrated the idea of a digital personal learning assistant very well by showing us how he uses an online food tracker, into which he records the details of all his food intake, and an online fitness tracker, into which he records his walking and cycling sessions. The point was that these are his personal learning assistants. He can set targets for how many calories he will eat each day or how much exercise he will do each week, and by inputting his data he receives feedback on how well he is meeting those targets. I have to say that my immediate reaction was that I wouldn’t want to spend the time inputting the data – and as for targets, being interested in emergent learning, ‘targets’ is not a word I easily relate to, although I can easily relate to the idea of challenge.
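A minimal sketch, in Python, of the target-and-feedback loop such trackers rely on – the user sets a target, logs entries, and gets feedback on progress. The function, targets and figures below are purely hypothetical illustrations, not Stephen’s actual trackers:

```python
def feedback(label, target, entries, goal="at_most"):
    """Compare logged entries against a target and return a short feedback message.

    goal="at_most"  -> stay under the target (e.g. daily calories)
    goal="at_least" -> reach the target (e.g. weekly minutes of exercise)
    """
    total = sum(entries)
    met = total <= target if goal == "at_most" else total >= target
    status = "target met" if met else "target not met"
    return f"{label}: {total} of {target} ({status})"


# Hypothetical daily calorie target and logged meals
print(feedback("Calories today", 2000, [450, 650, 700], goal="at_most"))

# Hypothetical weekly exercise target (minutes) and logged sessions
print(feedback("Exercise this week (mins)", 150, [40, 35, 50], goal="at_least"))
```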

Not so long ago I bought myself a Fitbit to count my daily steps – a target-driven device. My enthusiasm for it was very short-lived, and I didn’t even have to enter data for that – I just had to wear it and it recorded my steps automatically. The wonderful thing about my personal trainer was that I rarely had to provide her with any data, yet I received the exact sessions and feedback I needed, there were no targets, and it felt like a social event. This aligns with Stephen’s comment that automation should take out the drudgery.

Stephen and George discussed what automation of personal learning should and shouldn’t do (see the video). Automation should not only remove the drudgery of tasks (for me that would include inputting data!) but also enable choice, honour autonomy, respect human agency, provide appropriate support and, most importantly, provide feedback. Basically we are talking about what good teachers have always done. This is what my personal trainer did for me. She planned my training sessions, but I was always able to say to her ‘No, I don’t want to, or am not able to, do that today’, and then she would modify the activity. She listened and her plans were highly adaptable. She always left me with next steps, but it was up to me whether I took those steps, and she never judged me if I didn’t.

But my personal trainer was not an automaton. I learned as much about her as she did about me, and whilst I learned a lot from her, I know that she also learned from me. Could this be the case for a digital personal trainer? Yes, I expect so, but now that I have ‘lost’ her (she has moved on in her life and I am pleased for her), my gut feeling tells me that she could not be replaced by a machine.

Meanwhile, research seems to be turning towards investigating which aspects of a teacher’s role could be automated. See for example:

Bayne, S. (2015). Teacherbot: interventions in automated teaching. Teaching in Higher Education, 20(4), 455–467. doi:10.1080/13562517.2015.1020783 – http://www.tandfonline.com/doi/pdf/10.1080/13562517.2015.1020783

and

Lim, S. L., & Goh, O. S. (2016). Intelligent Conversational Bot for Massive Online Open Courses (MOOCs). doi:10.1017/CBO9781107415324.004 – http://arxiv.org/pdf/1601.07065v1.pdf

And at the end of his talk with Stephen, George said that he thought those working on developing machine learning will be the ones to become wealthy in the future, which for some reason feels a bit depressing at this point in time, but hopefully won’t prove to be so.

Footnote

A few years ago, whilst working as a consultant for the University of Birmingham, I saw these robots (the ones in the first image in this post) being used to support children on the autism spectrum in a forward-thinking Birmingham school. Research into this programme showed that these children were able to relate to the robots and that the robots helped them develop their communication skills.

Further resources related to Week 6 in the Personal Learning MOOC

Levy, D. M. (2007). No Time to Think. Ethics & Information Technology, 9(4), 1–24. doi:10.1007/s10676 – http://faculty.washington.edu/dmlevy/Levy_No_Time_to_Think.pdf

Halevy, A., Norvig, P., & Pereira, F. (2009). The Unreasonable Effectiveness of Data. IEEE Intelligent Systems, 24(2), 8–12. doi:10.1109/MIS.2009.36 – http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf 


Automating teaching and assessment

George Veletsianos gave an interesting and thought-provoking talk to the University of Edinburgh yesterday. This was live-streamed and hopefully a recording will soon be posted here. A good set of rough notes has been posted by Peter Evans on Twitter:

Peter Evans (@eksploratore): “My live and rough notes on #edindice seminar from @veletsianos on #moocs, automation & artificial intelligence at pj-evans.net/2014/06/moocs-…”

As he points out, there were three main topics covered by George’s talk:

  • MOOCs as sociocultural phenomenon;
  • automation of teaching; and
  • pedagogical agents and the automation of teaching.

George’s involvement with MOOCs started in 2011, when he gave a presentation to the Change11 MOOC, which I blogged about at the time.

During his talk to the University of Edinburgh, I found myself wondering whether we would be discussing automating teaching at all if he had started his MOOC involvement in 2008, as this presentation seemed to come from a background of xMOOC interest and involvement. Those first cMOOCs, with their totally different approach to pedagogy, were not mentioned.

I feel uncomfortable with the idea of automating teaching and having robotic pedagogical agents interact with learners. The thinking is that this would be more efficient, particularly when teachers are working with large numbers of learners, as in MOOCs, and would ‘free up’ teachers’ time so that they can focus on more important aspects of their work. I can see that automating some of the administrative processes associated with teaching would be welcome, but I am having difficulty seeing what could be more important, as a teacher, than interacting with students.

George pointed out that many of us already use a number of automated services, such as Google Scholar alerts, RSS feeds, IFTTT and so on, so why not extend this to automating teaching, or teaching assistants, through the use of pedagogical agents such as avatars?

What was interesting is that the audience for this talk seemed very taken with the idea of pedagogical agents: what gender they should be, what appearance they should have, what culture they should represent, and so on. For me the more interesting question is what we stand to lose and/or gain by going down this route of replacing teachers with machines.

For some of my colleagues, Karen Guldberg and her team of researchers at Birmingham University, robots have become central to their research on autism and their work with children on the autism spectrum. These children respond to robots in previously unimaginable ways. For some, there will be gains from interacting with robots.

But I was reminded, during George’s talk, of Sherry Turkle’s concerns about what we stand to lose by relying on robots for interaction.

And coincidentally, I was very recently pointed, by Matthias Melcher, to this fascinating article – Biology’s Shameful Refusal to Disown the Machine-Organism – which, whilst not about automating teaching through the use of avatars/robots, does consider the relationship between machines and living things from a different perspective and concludes:

The processes of life are narratives. The functional ideas manifested in the organism belong to the intrinsic inwardness of its life, and are not imposed from without by the mind of an engineer. (Stephen L. Talbott, 2014).

Finally, George Veletsianos’ talk was timely, as I am currently discussing with Roy Williams not how teaching and assessment should be automated, but rather whether, and if so how, they can be put in the hands of learners.

This topic will be the focus of a presentation we will give to the University of Applied Sciences, ZML – Innovative Learning Scenarios, FH JOANNEUM in Graz, Austria on September 17th 2014.