Beyond Assessment – Recognizing Achievement in a Networked World

(Slides: Beyond Assessment, on Slideshare)

This was the third in a series of 3 talks that Stephen Downes gave in London this week.

Jul 11, 2014
Keynote presentation delivered to the 12th ePortfolio, Open Badges and Identity Conference, University of Greenwich, Greenwich, UK.

ePortfolios and Open Badges are only the first wave in what will emerge as a wider network-based form of assessment that makes tests and reviews unnecessary. In this talk I discuss work being done in network-based automated competency development and recognition, the challenges it presents to traditional institutions, and the opportunities created for genuinely autonomous open learning.

For recordings of all three talks see OLDaily

Beyond Assessment – Recognizing Achievement in a Networked World
Jul 11, 2014. 12th ePortfolio, Open Badges and Identity Conference, University of Greenwich, Greenwich, UK (Keynote).

Beyond Institutions – Personal Learning in a Networked World
Jul 09, 2014. Network EDFE Seminar Series, London School of Economics (Keynote).

Beyond Free – Open Learning in a Networked World
Jul 08, 2014. 12th Annual Academic Practice & Technology Conference, University of Greenwich, Greenwich, UK (Keynote).

This was perhaps the most forward-thinking and challenging of the three talks. I wasn’t at the talk, but listened to the recording. What follows is my interpretation of what Stephen had to say, but it was a long talk and I would expect others to take different things from it and interpret the ideas presented differently.

Educators have been wrestling with the issue of assessment for many, many years: how to do it well, how to make it authentic, fair and meaningful, how to engage learners in the process, and so on.

Assessment has become even more of a concern since the advent of MOOCs, and MOOCs are symptomatic of the changes that are happening in learning. How do you assess thousands of learners in a MOOC? The answer is that you don’t – or not, at least, in the way that we are all accustomed to, which is testing and measurement to award credentials such as degrees and other qualifications. This has resulted in many institutions experimenting with offering a host of alternative credentials in the form of open badges and certificates.

Stephen’s vision is that in the future assessment will be based not on what you ‘know’ but on what you ‘do’ – what you do on the public internet. The technology now exists to map a more precise assessment of people through their online interactions. Whilst this raises concerns around issues of privacy and ethical use of data, it also means that people will be more in control of their own assessment. In the future we will have our own personal servers and will personally manage our multiple identities through public and private social networks. Prospective employers seeking a match for the jobs they want filled can then view the details of these identities. There is some evidence that learners are already managing their own online spaces. See for example Jim Groom’s work on A Domain of One’s Own.

Why might new approaches to assessment such as this be necessary? Here are some of the thoughts that Stephen shared with us.

It is harder and harder these days to get a job, despite the fact that employers have job vacancies. There is a skills gap. The unemployed don’t have the skills that employers need. We might think that the solution would be to educate people in the needed skills so that employers could hire them, but employers don’t seem to know what skills are needed, and although learning skills inventories help people to recognise what they don’t know, they don’t in themselves help people to acquire those skills.

Education is crucial for personal and skills development, and more education leads to happier people and a more developed society. The problem is that we confuse the outcomes of education with the process of education. We think that we can determine/control learning outcomes and what people learn (see Slide 14).

(Slide 14: instructional design)

But useful outcomes are undefinable (e.g. ‘understand that …’) and we need an understanding of understanding. Definable outcomes such as ‘recite’ and ‘display’ are simpler, but behaviourist (Slide 18). There is more to knowing than a set of facts that you need to pass the test. To know something is to recognise it, in the sense that you can’t unknow it. Stephen used ‘Where’s Wally’ as an example of this:

(Image: Where’s Wally?)

Knowing, according to Stephen, is a physical state – it is the organisation of connections in our brain. Our brain is a pattern recogniser. Knowing is about ‘doing’ rather than some mental state.

My understanding of what Stephen is saying is that if we believe that knowing is about pattern recognition, then achievement will be recognized in how good learners are at pattern recognition as evidenced by what they ‘do’ in their online interactions. ‘Assessors’ will also need to be good at pattern recognition.

Learners are increasingly sensitive to the patterns they see in the huge amount of data that they interact with on the internet, and machines are getting closer to being able to grade assignments through pattern recognition. As they interact online, learners leave digital traces. Big data is being used to analyse these internet interactions, and this can be used for assessment purposes. But it has, of course, raised concerns about the ethics of big data analysis, and the concern for privacy is spreading – as we have recently seen with respect to Facebook’s use of our data (Slide 55).

(Slide 55: Facebook research)
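To make the machine-grading point a little more concrete, here is a toy sketch of my own (not something from Stephen’s slides) of what ‘grading as pattern recognition’ might look like in practice, using the scikit-learn library. The sample answers, labels and outcome are entirely invented for illustration.

```python
# Toy illustration of grading-as-pattern-recognition (my own sketch, not from the talk):
# a classifier "learns" to label short answers from previously graded examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: previously graded answers and their pass/fail labels.
graded_answers = [
    "Dickinson's dashes create pauses that open the line to multiple readings.",
    "The poem is about a bird and it is nice.",
    "The dash suspends syntax, so meaning stays provisional rather than fixed.",
    "I liked this poem because it rhymes.",
]
labels = ["pass", "fail", "pass", "fail"]

# The 'pattern recogniser': word patterns (tf-idf) feeding a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(graded_answers, labels)

# A new, ungraded answer is scored by the patterns it shares with past answers.
new_answer = ["Her dashes interrupt the sentence and keep several meanings in play."]
print(model.predict(new_answer))  # e.g. ['pass'] -- pattern matching, not understanding
```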

A move to personally managed social networks rather than centrally managed social networks will enable learners to control what they want prospective employers to know about them and human networks will act as quality filters.

Stephen’s final word was that assessment of the future will redefine ‘body of work’.

(Slide: assessment of the future)

All these are very interesting ideas. I do wonder though whether it’s a massive assumption that all learners will be able to manage their own online identities such that they become employable. What are the skills needed for this? How will people get these skills? Will this be a more equitable process than currently exists, or will it lead to another set of hierarchies and the marginalisation of a different group?

Lots to think about – but I really like the move to putting assessment more in the control of learners.

26-09-2014 Postscript

See also this post by Stephen Downes – http://halfanhour.blogspot.co.uk/2014/09/beyond-assessment-recognizing.html – which provides full details of this talk.

Automating teaching and assessment

George Veletsianos gave an interesting and thought-provoking talk to the University of Edinburgh yesterday. This was live streamed and hopefully a recording will soon be posted here. A good set of rough notes has been posted by Peter Evans on Twitter:

Peter Evans @eksploratore: My live and rough notes on #edindice seminar from @veletsianos on #moocs, automation & artificial intelligence at pj-evans.net/2014/06/moocs-…

As he points out, there were three main topics covered by George’s talk:

  • MOOCs as sociocultural phenomenon;
  • automation of teaching and
  • pedagogical agents and the automation of teaching.

George’s involvement with MOOCs started in 2011 when he gave a presentation to the Change11 MOOC, which I blogged about at the time.

I found myself wondering, during his talk to the University of Edinburgh, whether we would be discussing automating teaching if he had started his MOOC involvement in 2008, as this presentation seemed to come from a background of xMOOC interest and involvement. Those first cMOOCs, with their totally different approach to pedagogy, were not mentioned.

I feel uncomfortable with the idea of automating teaching and having robotic pedagogical agents to interact with learners. The thinking is that this would be more efficient, particularly when teachers are working with large numbers as in MOOCs, and would ‘free up’ teachers’ time so that they can focus on more important aspects of their work. I can see that automating some of the administration processes associated with teaching would be welcome, but I am having difficulty seeing what could be more important, as a teacher, than interacting with students.

George pointed out that many of us already use a number of automating services, such as Google Scholar alerts, RSS feeds, IFTTT and so on, so why not extend this to automating teaching, or teaching assistants, through the use of pedagogical agents such as avatars?
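For what it’s worth, the kind of automation George describes is already easy to script yourself. Here is a small, purely illustrative sketch using the feedparser library – the feed URL and cut-off date are invented – that does the sort of thing a Google Scholar alert or an IFTTT recipe does behind the scenes:

```python
# Minimal sketch of the sort of automation already in everyday use:
# poll an RSS feed and report anything published since the last check.
# (The feed URL and cut-off date are invented for illustration.)
import datetime
import feedparser

FEED_URL = "https://example.org/course-blog/feed"   # hypothetical feed
last_checked = datetime.datetime(2014, 6, 1)        # when we last looked

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Assumes the feed supplies publication dates for its entries.
    published = datetime.datetime(*entry.published_parsed[:6])
    if published > last_checked:
        # In IFTTT terms, this is the "then that" step: email, tweet, log, etc.
        print(f"New post: {entry.title} -> {entry.link}")
```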

What was interesting is that the audience for this talk seemed very taken with the idea of pedagogical agents – what gender they should be, what appearance they should have, what culture they should represent, and so on. For me the more interesting question is: what do we stand to lose and/or gain by going down this route of replacing teachers with machines?

For some of my colleagues, Karen Guldberg and her team of researchers at Birmingham University, robots have become central to their research on autism and their work with children on the autism spectrum. These children respond in previously unimaginable ways to robots. For some there will be gains from interacting with robots.

But I was reminded, during George’s talk, of Sherry Turkle’s concerns about what we stand to lose by relying on robots for interaction.

And coincidentally I was very recently pointed, by Matthias Melcher, to this fascinating article – Biology’s Shameful Refusal to Disown the Machine-Organism – which, whilst not about automating teaching through the use of avatars/robots, does consider the relationship between machines and living things from a different perspective and concludes:

The processes of life are narratives. The functional ideas manifested in the organism belong to the intrinsic inwardness of its life, and are not imposed from without by the mind of an engineer. (Stephen L. Talbott, 2014).

Finally, George Veletsianos’ talk was timely, as I am currently discussing with Roy Williams not how teaching and assessment should be automated, but rather whether, and if so how, they can be put in the hands of learners.

This topic will be the focus of a presentation we will give to the University of Applied Sciences, ZML – Innovative Learning Scenarios, FH JOANNEUM in Graz, Austria on September 17th 2014.

 

Power and control in ModPo

I am now, three weeks into ModPo, very aware of the differences between the original cMOOCs (e.g. CCK08, the very first MOOC, run by Stephen Downes and George Siemens) and xMOOCs – and I think it relates to this slide that Stephen Downes recently talked us through at the ALT-C Conference:

(Slide from Stephen Downes’s ALT-C presentation, ‘What are Cultures of Learning?’ – http://www.slideshare.net/Downes/2013-09-12-altc)

xMOOCs might be either A) Centralised or B) Decentralised but they are not C) Distributed, i.e. not in the same sense that CCK08 and subsequent MOOCs such as Change 11, run by Downes and Siemens, were.  Although xMOOCs such as ModPo do have a Twitter stream and a Facebook group, they do not encourage people to find and create their own discussion groups in locations of their choice, as the original cMOOCs did.

ModPo for me is very centralized – with the centre being Al Filreis and to a certain extent his TAs. No Al Filreis – no ModPo. He is the ‘sage on the stage’. And it seems to be working well for most people. Al is charismatic. There are hundreds of discussion threads and Al Filreis and his team of TAs are very visible in there. They must be exhausted.

I am loving the poetry in ModPo – all new to me – and the video discussions which model and demonstrate how to close read these poems are very engaging. Even within one week I felt I had learned a lot, not least that some poets resonate and others do not.

But, despite this, there are elements of ModPo that I find disturbing, and they are mostly to do with the assessment process, which has concerned me on a professional level (as an educator).

I have already mentioned in a previous post  that I can’t see any value in having to post to discussion forums as an assessment requirement. Now there are three other points related to assessment that I find troubling.

1. The assessment criteria (peer review instructions) were not posted before people submitted their assignments and this does make a difference – because, for example, the reviewers were asked to judge whether assignment writers had understood Emily Dickinson’s use of dashes in her poetry. Whilst dashes were discussed at length in the videos, they were not mentioned in the assignment writing guidance. Participants/students should always know the criteria they are being assessed against.

2. All the assignments, once they have received one peer review, are automatically posted to one of the Coursera forums, i.e. all 30,000 participants can see the submitted assignment if they have the time and energy to wade through the 75 (at the last count) that have automatically been posted.

Assignment writers were not asked whether they would be willing for this to happen. In an announcement to the class they were told that “This enables everyone to participate, at least a little bit, in the reading and reviewing of essays” – but frankly all it does is load even more discussion threads onto the forums, which are already overloaded, and – more significantly – it takes the control and ownership of the assignment and learning process out of the hands of the learner more than is necessary.

For me a successful adult learning process relies on learners having as much autonomy as possible (another principle from the early cMOOCs, but also one backed up by research into adult learning). All it needed was consent from the assignment writer.

3. The third point is the worst. A participant has been publicly named and shamed for plagiarism in the assignment submission forum mentioned above. Her assignment was automatically posted as explained – so she had no choice in the matter. The reviewer had not noticed the plagiarism (a section copied from Wikipedia), but “(Note from Al: this essay has been plagiarized)” has been added to the title of her post. At the beginning of the course there was a stern warning in the initial announcement about plagiarism – although I can’t find it now – and participants submitting assignments are asked to tick a box saying that the work is their own.

It could be argued that the public naming and shaming of a participant serves as a warning to all other participants – but I think it is cruel and ultimately destructive. I know from experience that foreign students often have difficulty understanding what plagiarism means, and as far as I can see there is no advice on the site about citing sources. However you look at it, I don’t believe a student should ever be publicly named and shamed. She should have been contacted privately by email. That would have been enough – especially as she may not get the certificate anyhow, since she hasn’t made any discussion forum posts. Did anyone bother to check?

These exhibitions of power, control and centralization are a long way off the original conception of MOOCs.

Assessment of discussion forum posts in ModPo

Today I am disappointed in ModPo for the first time. Why? Because I realize that it has fallen into the trap of believing that requiring posts to a discussion forum can in some way measure the success of learning.

On checking, I see that it does say this on the Announcements page, in the very first post, ‘a thought on plagiarism’, but I failed to notice it until it was mentioned in the audio discussion between Al Filreis and Julia Bloch that was posted today.

To be considered a student who has “completed” the course, you need to have written and submitted the four short essays, commented on others’, submitted (and minimally ‘passed’) the quizzes, and participated in the discussion forum.

Evidently, to get a certificate of completion, a ModPo participant must make a post in each week of the course, in one of the staff-initiated weekly forums.

I completely fail to see the point of this. It is not as if ModPo is short of discussion in the forums; it is completely swamped with discussion. In addition, it is the kind of assessment requirement that tempts me to simply ‘play the game’ (if I were that keen to get a certificate, which I am not). I could put any meaningless post about any meaningless thing in each week’s forum and theoretically I would have fulfilled the requirement.

I have already accepted that ModPo is not completely open, simply because it is tied to the Coursera platform and therefore does not have ‘open’ resources in the original cMOOC sense of participants being able to aggregate, remix, repurpose and feed forward resources at will.

But I have otherwise been very impressed by the pedagogy – the standard of teaching is very high, the level of support from and engagement by the tutors is beyond the call of duty for a MOOC, and the content is so stimulating.  All credit to the tutors and TAs.

But this requirement to post to the forums is a definite blip, in my book. Why? Because it puts (in this context) an unnecessary constraint on the autonomy of those learners who would like to achieve a certificate of completion, and won’t necessarily add anything to the learner experience. It certainly wouldn’t add anything to mine.

I have listened to all the videos, read all the poems, completed the quizzes for Weeks 1 and 2, and written and submitted my first assignment, but this requirement to post to the forums is one hoop that I will not be jumping through. If there had been a meaningful activity around engagement in the discussion forums, then I would have been happy to comply. As it is, I don’t feel that I need to learn how to post to forums (I have done lots of this in the past), nor do I need to learn the value of social learning; I have been practicing and promoting it for years. If I feel that I can genuinely make a contribution in a forum, then I will.

Despite this disappointment, ModPo remains a highly stimulating experience, on a number of levels, and one that I would recommend to anyone interested in open learning, pedagogy and poetry.

 

 

Update on OldGlobeMOOC and Peer Assessment

OldGlobeMOOC is about to start its 4th week (following a week’s break for July 4th celebrations in the US), and the Week 3 assignment peer reviews are in. For me this assessment process is one of the most interesting aspects of this xMOOC. I have thought since the first MOOC in 2008 (CCK08 – Connectivism and Connective Knowledge), designed and run by Stephen Downes and George Siemens, that assessment may be the sticking point for MOOCs.

In my last post, I outlined some of the difficulties that OldGlobeMOOC is experiencing with the assessment and peer review process. It seems to me, once again, but this time for an xMOOC, that if MOOCs are going to be sustainable and successful, then the assessment process has to be ‘cracked’ and meaningful.

Some MOOCs have taken the approach of restricting the number of participants who can be assessed. CCK08 did this. I think the number was 25, and FSLT12 and 13 have done this with a similar number – the idea being that  a small number of participants can be assessed by a tutor. FSLT13 offers credit for this:

The course has been recently accredited (10 transferrable academic credits at level 7, postgraduate). FSLT is recognised towards the Oxford Brookes Postgraduate Certificate in Teaching in Higher Education (PCTHE) and Associate Teachers (AT) courses. (http://openbrookes.net/firststeps13/)

But these are cMOOCs.

OldGlobeMOOC has taken a different approach, as I described in my last post, and I understand from other participants that this is similar to a number of other Coursera MOOCs. This is my first xMOOC, but it is not for quite a few OldGlobeMOOC participants, who have taken numerous Coursera courses and in the forums have shared their experience of the peer review process.

I will add my experience to the mix, and just so you know what we are talking about, here are links to my assignments with their peer reviews.

Assignment 1 with peer review

Assignment 2 with peer review

Assignment 3 with peer review

If you read these, you will see that the assignments are not very different in their style and level to my blog posts, i.e. they are not academic pieces of work  – rather discussion pieces or personal reflection. And judging by the assignments I have reviewed, other participants’ assignments are of a similar level.

Which brings me to the review process, which I reflected on in my last post, but I will add a few things here.

  • The idea is that each participant submits an assignment and peer reviews five assignments each week, which I have done. If the peer reviews are not done, a 20% penalty is incurred.

All students wishing to obtain a Statement of Accomplishment must achieve 7 out of 12 points and submit 5 peer reviews each week. If a student fails to complete the 5 peer reviews, that week’s assignment will incur a 20% penalty.

Despite the fact that I definitely submitted five peer reviews for Assignment 3, I received a 20% penalty and therefore scored 1.6 instead of 2 (the maximum of 2, reduced by 20%; a rough sketch of this arithmetic follows at the end of this list). It’s very easy to know that you have completed the 5 peer reviews, by the way the Coursera system takes you through the 5 assignments allocated for review; and the system confirms for you at the end of the process that you have submitted 5 – so I know that I did. So there’s been a blip in the system somewhere. It’s not a big deal for me, as I’m only doing this to experience the process and because I like the assignments and find the discussions interesting. I am not doing the course for the Certificate – but I do wonder how a blip in the system affects people who are really keen to receive a Statement of Accomplishment.

  • There is no guarantee that you will receive 5 peer reviews. I received five in Week 1, three in Week 2 and four in Week 3. There has been some discussion in the forums about how this might affect the overall system and whether or not you have to review more than 5 assignments to receive 5 reviews.
  • I have no complaints about the quality of most of the peer reviews and so far no one has given me a score of less than 2 – but this peer review for Assignment 3 is indicative of how the game can be played to ensure that you get a Certificate. It made me smile 🙂

Peer 2: “I’m headed for an airplane so don’t have time to review, and I won’t be back until after evaluation time ends so I’m just giving everyone a 2.”
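For what it’s worth, here is the arithmetic behind that 20% penalty, written out as a rough sketch of my own (not Coursera’s code), based only on the rules quoted above:

```python
# Rough sketch of the OldGlobeMOOC scoring rules as I understand them
# (based on the quoted announcement; purely illustrative, not Coursera's code).

PASS_MARK = 7          # points needed out of 12 for the Statement of Accomplishment
REVIEWS_REQUIRED = 5   # peer reviews to complete each week
PENALTY = 0.20         # deducted from that week's assignment if reviews are missing

def weekly_score(assignment_score, reviews_done):
    """Apply the 20% penalty when fewer than five reviews are recorded."""
    if reviews_done < REVIEWS_REQUIRED:
        return assignment_score * (1 - PENALTY)
    return assignment_score

def passed(total_points):
    """Has the participant reached the pass mark for the Statement of Accomplishment?"""
    return total_points >= PASS_MARK

# My Assignment 3: scored 2, but the system recorded too few reviews -> 1.6
print(weekly_score(2, reviews_done=4))   # 1.6
print(weekly_score(2, reviews_done=5))   # 2 (what it should have been)
```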

Aside from this, here are some further reflections. The OldGlobeMOOC is a great experience in terms of the diversity of participants. Unfortunately the younger participants, in their teens, who signed up seem to have fallen out of the discussion forums. This does not mean that they are no longer participating through observation and reading – it’s difficult to know. But I have wondered how an 11-year-old might review the assignment of an academic professor, or how an academic professor might respond to a learner with special needs, or a very young participant, or someone whose first language is not English, and so on. The assignment submission is anonymous. Do these differences have implications for the equity of the peer review process?

Despite all this I am finding OldGlobeMOOC a fascinating and enjoyable experience and am looking forward to the start of Week 4.

 

SCoPE Seminar: Digital Badges Implementation

Peter Rawsthorne spoke to the SCoPE community about badge system design and implementation in a live webinar last night. See the SCoPE site for a recording.

Peter is a mine of information about this subject (see his blog). It seems that digital badges are probably here to stay. Some pretty heavyweight organizations appear to be investing in them – see Peter’s post An introduction to badge systems design. Some current key questions for those in the digital badges community seem to be around:

  • how to come up with a common international standard for badges
  • how to develop the technology to easily design and issue badges
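To give a flavour of what that ‘common standard’ question involves, here is roughly what a ‘hosted’ badge assertion looks like under Mozilla’s Open Badges specification, written out as a Python dictionary. The values and URLs are invented, and the field names reflect my reading of the 1.x spec, so they should be checked against the current version rather than taken as definitive.

```python
# Roughly what a hosted Open Badges assertion contains (field names per my reading
# of the Mozilla Open Badges 1.x spec; values and URLs are invented for illustration).
import json

assertion = {
    "uid": "scope-2012-0042",                    # issuer's unique id for this award
    "recipient": {
        "type": "email",
        "hashed": True,                          # earner's email stored as a hash
        "identity": "sha256$<hash-of-earner-email>",
    },
    "badge": "https://example.org/badges/participant.json",  # badge class: criteria, issuer
    "verify": {
        "type": "hosted",                        # anyone can fetch the assertion to verify it
        "url": "https://example.org/assertions/scope-2012-0042.json",
    },
    "issuedOn": "2012-12-05",
}

print(json.dumps(assertion, indent=2))
```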

What has been most interesting for me during this seminar is my own feeling of discomfort with all this discussion about badges. I have been reflecting on why.

First I was reminded in last night’s webinar of Etienne Wenger’s ‘purple in the nose’ story. When meeting a friend to share a glass of wine, he suddenly realized that his wine-tasting friend (who described wine using an unknown language – ‘purple in the nose’), was a member of a community to which Etienne did not belong. Etienne had to decide whether he wanted to belong to that community and learn that language. I have felt the same about this seminar. I feel ‘outside’ this community of digital badge enthusiasts.

Maybe those involved in designing and implementing badges have already been through the questions which remain for me; questions about the credibility of these badges, their value, their integrity, their status, what they represent, who they represent and so on.

A most telling comment for me in the SCoPE discussion forum has been

‘More hack, less yak!’

Our facilitator has clearly been frustrated that the group has been ‘yakking’ about the issues rather than getting on and completing the tasks. As he put it, with good humour, ‘Sheesh…. What a bunch of academics <big smile>’

So I still wonder whether the badge system will promote the ‘completion of tasks’ approach to learning, more than a focus on developing a depth of understanding.

The word that kept going through my head in last night’s webinar was ‘control’.  The discussion of the design and implementation of badge systems made me wonder whether this could ultimately disempower learners rather than empower them. Given that my current research interests are related to emergent learning, I am struggling to see where digital badges would fit with this.

There was a brief discussion at the end of the webinar about the possibility of individual self-directed learners designing their own badges and legitimizing them.  For me this was the most interesting aspect of the discussion. I would have liked more ‘yak’ on this 🙂 .

Finally, I wonder whether the earning of badges will be more important to some learners than to others and, if so, what the reasons for this might be. I say this because one member of my family is very keen to earn and collect badges, whereas I don’t seem to have much enthusiasm for it.

#digitalbadges: SCoPE seminar on Digital Badges

(Screenshot from Peter Rawsthorne’s presentation)

Peter Rawsthorne is facilitating a lively two-week seminar in the SCoPE community on the concept and implementation of Digital Badges. This is how he describes his intentions for the seminar:

During this two-week seminar we will explore digital badges from concept through to implementation. The seminar will focus on the possible pedagogies and technology required for implementing digital badges. We will also take a critical look at the current state of digital badges with discussion of the required and possible futures. If you have a few hours to read and discuss focused topics and participate in two mid-day webinars then please join us in this lively learning experience focused on digital badges.

As well as the discussion forums there are two web conferences – the first took place last night. Details of the seminar and conferences can be found here – http://scope.bccampus.ca/mod/forum/view.php?id=9010

The seminar has been designed to be task-driven, with the intention of awarding badges on completion, based on a three-badge system design (a rough code sketch of these criteria follows the list below):

  1. Learner badge – person introduces themselves to the group via the discussion forum and contributes to a couple of discussion threads. Mostly, they could be considered lurkers (much can be learned through lurking)
  2. Participant badge – person introduces themselves to the group via the discussion forum and actively contributes to 7 of the 12 primary discussion threads, also participates in one of the two lunch-and-learn sessions.
  3. Contributor badge – does everything the participant does, with the addition of contributing by:
    • designing badge images
    • creating a badge system design for another curriculum
    • blogging about their participation in this seminar series
    • other creative endeavours regarding digital badges
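
Out of curiosity, here is that sketch – my own rough encoding of the three badge criteria as checkable rules, not anything produced by the seminar itself; the thresholds are simply the ones listed above, and the function and parameter names are invented:

```python
# A sketch (mine, not the seminar's) of the three-badge design above as checkable rules.

def award_badge(introduced, threads_contributed, webinars_attended, extra_contributions):
    """Return the highest badge earned under the seminar's stated criteria."""
    if introduced and threads_contributed >= 7 and webinars_attended >= 1:
        if extra_contributions >= 1:          # badge images, new badge design, blog posts, etc.
            return "Contributor"
        return "Participant"
    if introduced and threads_contributed >= 2:   # "a couple of discussion threads"
        return "Learner"
    return None

# Example: someone who introduced themselves, posted in 8 of the 12 threads,
# joined one lunch-and-learn webinar, and blogged about the seminar.
print(award_badge(True, 8, 1, extra_contributions=1))   # Contributor
```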

The daily tasks that have been posted so far are:

Task 1  

  • Identify a merit badge you earned during your lifetime
  • Describe how you displayed the merit badges

Task 2   

  • Identify the digital and internet technologies best suited to create a digital merit badge
  • Describe the technologies that could be used to attach (reference or link) the learning to the digital badge

Task 3  

  • Identify the completion criteria for any badge you have earned (traditional or digital)
  • Describe the hierarchy or network of badges

Task 4

  • Identify a variety of sites that issue badges
  • Describe the skills, knowledge and curriculum the badges represent

Some sites that reference badges that have been mentioned in the forums…

In the synchronous webinar last night, Peter Rawsthorne made the point that there are 4-5 billion people on the planet who are not attending school. How will their achievements/accomplishments be recognized? I think the idea is that learning that happens outside traditional settings should be honoured and recognized.

(Screenshot from Peter Rawsthorne’s presentation)

At this point I feel a bit skeptical about the whole thing, but it is very early days. Three questions I have at this time are:

  • Will badges promote quality learning or will they simply encourage people to ‘jump through hoops’?

For example – I notice in the discussion forums that there is, in fact, very little discussion. The tasks are being completed but there is little discussion about them. Completing tasks does not necessarily lead to quality learning.

  • Will badges be ‘recognised/valued’ by employers – will they need to be?

Verena Roberts in last night’s webinar wrote ‘Do badges need to lead to something, or identify a person’s passion?’ For me, I don’t need a badge to identify a personal passion, but I might need one for my CV, depending on the context and my personal circumstances.

  • Will badges stifle creativity and emergent learning?

There has been discussion about how badges fit together, and Gina Bennett (in the webinar) thought that the ‘Scouts’ have the badge thing really figured out. But for me that model is based on a very ‘linear’ way of thinking about learning, whereas research has shown that even small children (for example, when learning mathematics) don’t learn in a linear way – they go backwards, forwards and sideways. Frogmarching children (and adults) through a curriculum has always been a problem for curriculum design, and the award of badges based on a linear approach might just reinforce this.