The Deceptive Allure of Clarity

You could be forgiven for thinking that a statement such as ‘The deceptive allure of clarity’ must have come straight from the mouth of Iain McGilchrist, author of ‘The Master and His Emissary: The Divided Brain and the Making of the Western World’. McGilchrist would align this statement with the way in which the left hemisphere attends to the world. In his book he explains that we are living in a left-hemisphere-dominated world. For the left hemisphere, the parts are more important than the whole. The left hemisphere values the known, familiar, certain, distinct, fragmentary, isolated and unchanging. It abstracts ideas from body and context, seeing things as inanimate and representational. In the left hemisphere’s view of the world, quality is replaced by quantity, and unique cases are replaced with categories.

But this statement, ‘The Deceptive Allure of Clarity’, did not come from McGilchrist, but from a Lancaster University online Department of Education Research Seminar that I attended this week, presented by Jan McArthur and Joanne Wood. The full title of their talk was ‘Towards Wicked Marking Criteria: the deceptive allure of clarity’, which is what drew me, and many others, in (the session was very well attended). This is how the session was advertised:

In this seminar we consider the dissonance between two major themes in the scholarship of teaching, learning and assessment in higher education: the engagement with complex and structured forms of knowledge and the development of increasingly precise marking criteria for assessment. We question what is lost when we aim to make assessment a more and more precise practice. We argue that academic knowledge cannot always be broken into manageable “bits” but often should be evaluated holistically. Finally we propose that students who perform “badly” in assessments have often not done this by accident or neglect but rather through diligent and conscientious following of implicit messages we send out as teachers, often in the name of clarity.

They started the session by asking the question: ‘What if the pursuit of clarity is part of the problem?’ By this they were making reference to what they called ‘The Monster Rubric’, which is so detailed and atomised that it loses all sense of what it is trying to achieve.

What follows is my reaction to this seminar and should not be attributed to either of the speakers.

It is easy to find examples of these rubrics online, through a simple search for rubric images. For example, here is one with an excessive level of granularity. I can’t imagine how much time it must have taken to develop this rubric – time that perhaps could have been better spent in the service of students?

Most institutions use rubrics for marking students’ work. Why? Well, principally for quality assurance reasons. The institution/tutor has to demonstrate that the marking is fair and equitable. But in reality, my experience is that for experienced tutors/markers the rubric is not helpful, and so they make the rubric fit their marking rather than the other way round. The rubric does not inform the marking. An experienced marker knows that the whole is greater than the sum of the parts. An experienced marker knows that complex knowledge can’t be broken down into bits. An experienced marker knows that there are qualities in assignments which contribute to the whole being greater than the sum of the parts, and which simply can’t be measured, but nevertheless contribute to the mark. An experienced marker can pick up an assignment, flip through it and know straight away roughly what mark it will receive. The marker then reads the assignment carefully to check this initial assessment and give critical feedback. Only finally does the marker make sure (for quality assurance purposes) that the rubric fits the given mark.

We do students a disservice by misleading them into thinking that their achievements can be broken into bits and that each bit is worth a certain percentage. Complex knowledge cannot be defined in these terms. A rubric cannot cross all the t’s and dot all the i’s. The rubric should not be so atomised that there is no room for students to move in.  As Iain McGilchrist says:

‘… the gaps in the structure are where the light gets in. If you tighten everything up, then you get total darkness’. (https://youtu.be/0Zld-MX11lA).

If we must have rubrics, then they should be guides rather than prescriptive, and students and staff should be encouraged to move beyond them.
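To see what the atomised model amounts to, it helps to write down the arithmetic it reduces an assignment to. This is a minimal sketch, with hypothetical criteria, weights and scores:

```python
# A hypothetical atomised rubric: each criterion carries a fixed weight,
# and the final mark is simply the weighted sum of the parts.
rubric = {
    "argument":    (0.30, 65),   # (weight, score out of 100)
    "evidence":    (0.30, 70),
    "structure":   (0.20, 60),
    "referencing": (0.20, 75),
}

final_mark = sum(weight * score for weight, score in rubric.values())
print(round(final_mark, 1))  # 67.5
```

Everything the seminar worries about, the qualities that make the whole greater than the sum of the parts, falls outside this calculation by construction.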

Badges are not sufficient

This week Stephen invited Viplav Baxi to join him in a discussion about this week’s topic – Recognition – for the E-Learning 3.0 MOOC.

They only mentioned badges briefly, but the task for participants this week has been to create a badge – see my last post.

I have been struggling to identify the key issues in this week’s topic.  I don’t think it is badges. As Stephen himself said at a keynote presentation in Delhi in 2012, to which he was invited by Viplav Baxi:

Badges are not sufficient, analytics are not sufficient, it’s the interactivity, it’s the relative position with everybody else in the network, that represents learning in this sort of environment. (Stephen Downes, 2012)

See – Stephen Downes. Education as Platform: The MOOC Experience and what we can do to make it better. Keynote presentation delivered to EdgeX, Delhi, India. March 14, 2012. Slides and audio available. http://www.downes.ca/presentation/293

See also Downes, S. (2012). Connectivism and connective knowledge: Essays on meaning and learning networks. National Research Council Canada, p.541 https://www.downes.ca/files/books/Connective_Knowledge-19May2012.pdf

So what are we to make of the topic this week? I have watched the conversation between Stephen and Viplav, checked out some of the resources for this week (which I have copied from the course site at the end of this post), read the Synopsis for this week, and explored my own ‘library’  that I have collected over the years, not specifically on badges, but on assessment in a digital world, and how this might be changing.

I have a terrible memory, so having a library and a blog to refer back to is essential. My blog reminded me that I travelled to Greenwich in 2014 to hear Stephen give this keynote.

I blogged about it at the time. Here is a quote from that blog post, which seems to identify the key issues as I interpreted them.

“Stephen’s vision is that in the future assessment will be based not on what you ‘know’ but on what you ‘do’ – what you do on the public internet. The technology now exists to map a more precise assessment of people through their online interactions. Whilst this raises concerns around issues of privacy and ethical use of data, it also means that people will be more in control of their own assessment. In the future we will have our own personal servers and will personally manage our multiple identities through public and private social networks. Prospective employers seeking a match for the jobs they want filled can then view the details of these identities.”

Viplav and Stephen discussed the role of Artificial Intelligence in tracking students and scaling up assessment, a real need for Viplav in India given the huge numbers of students requiring assessment and recognition.  Stephen has written this week:

…. we need to think of the content of assessments more broadly. The traditional educational model is based on tests and assignments, grades, degrees and professional certifications. But with activity data we can begin tracking things like which resources a person reads, who they spoke to, and what questions they asked. We can also gather data outside the school or program, looking at actual results and feedback from the workplace. In the world of centralized platforms, such data collection would be risky and intrusive, but in a distributed data network where people manage their own data, greater opportunities are afforded.

This paragraph immediately raised concerns for me about privacy. The thought of being constantly ‘observed’ in class and out of class feels very uncomfortable, and I wonder to what extent the ethics of these new forms of assessment have been considered.

And then there is the question of what information is being gathered, and, as Stephen asks, ‘How do we know what someone has learned?’ Further questions must also be: What is knowledge and how do we recognise it? Will a certificate or a badge accurately represent a learner’s knowledge?

Connectivism seems to be the learning theory most applicable to the distributed web, proposing that:

Knowledge is literally the set of connections between entities. In humans, this knowledge consists of connections between neurons. In societies, this knowledge consists of connections between humans and their artifacts. What a network knows is not found in the content of its entities, nor in the content of messages sent from one to the other, but rather can only be found through recognition of patterns emergent in the network of connections and interactions. [i.e. in what people ‘do’ – see above]

See Downes, S. (2012). Connectivism and connective knowledge: Essays on meaning and learning networks. National Research Council Canada, p.9
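The claim that knowledge is ‘literally the set of connections between entities’ can be made concrete with a toy sketch: store only the connections, then read a pattern (here, simple degree centrality) off them. The names and interactions are invented for illustration:

```python
from collections import defaultdict

# A toy network of learner interactions (names are hypothetical).
interactions = [("ana", "ben"), ("ben", "cam"), ("ana", "cam"),
                ("cam", "dee"), ("dee", "ana")]

# On the connectivist view, the knowledge lives in the connections
# themselves, not in the content of any single node.
graph = defaultdict(set)
for a, b in interactions:
    graph[a].add(b)
    graph[b].add(a)

# 'Recognition' is then reading an emergent pattern off the connections,
# e.g. who sits at the centre of the network.
centrality = {node: len(neighbours) for node, neighbours in graph.items()}
print(sorted(centrality.items()))
# [('ana', 3), ('ben', 2), ('cam', 3), ('dee', 2)]
```

Nothing here is in the messages themselves; the pattern only exists at the level of the network, which is exactly the point Downes is making.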

And on p.584 of this book Stephen quotes Rob Wall (2007) as saying:

“Literacy, of any type, is about pattern recognition, about seeing how art is like physics is like literature is like dance is like architecture is like …Literacy is not about knowing where the dots are. Literacy is not about finding dots about which you may not know. Literacy is about connecting the dots and seeing the big picture that emerges.”

Rob Wall. What You Really Need to Learn: Some Thoughts. Stigmergic Web (weblog). June 3, 2007. http://stigmergicweb.org/2007/06/03/what-you-really-need-to-learn-some-thoughts/ No longer extant.

This seems to describe how knowledge on the distributed web will be recognised, i.e. by trying to see the emergent big picture that a learner’s activity demonstrates. How this will be formalised to be able to award badges, certificates and the like is unclear to me.

I don’t know if Stephen still believes that ‘badges won’t be sufficient’. He sounds more optimistic in his Synopsis, writing “with trustworthy data from distributed networks we will be able to much more accurately determine the skills – and potential – of every individual.”

But it makes sense to me to be cautious about badges. As Viplav Baxi said in the video (relating this to his context in India, but relevant, I think, in many contexts), it’s not all about technology and pedagogy, but also about trust and identity. A change of mindset, culture and beliefs will be needed, if new approaches to assessment which take advantage of the distributed web are to be adopted.

Resources (provided by Stephen Downes for the E-Learning 3.0 MOOC)

Testing for Competence Rather Than for “Intelligence”
David McClelland, 2018/11/26

“…the fact remains that testing has had its greatest impact in the schools and currently is doing the worst damage in that area by falsely leading people to believe that doing well in school means that people are more competent and therefore more likely to do well in life because of some real ability factor.”

How did we get here? A brief history of competency‐based higher education in the United States
T.R. Nodine, The Journal of Competency-Based Education, 2018/11/26

Competency‐based education (CBE) programs have spread briskly in higher education over the past several years and their trajectory continues to rise. In light of the spread of competency‐based models, this article provides a brief history of CBE in the United States.

Competency & Skills System (CaSS)
Advanced Distributed Learning, 2018/11/26

The Competency and Skills System (CASS) enables collection, processing, and incorporation of credentials and data (“assertions”) about an individual’s competencies into accessible, sharable learner profiles. CaSS will create an infrastructure enabling competencies, competency frameworks, and competency-based learner models to be managed and accessed independently of a learning management system, course, training program, or credential. See also: CASS Documentation.

Knowledge as Recognition 
Stephen Downes, Half an Hour, 2018/11/27

In my view, knowledge isn’t a type of belief or opinion at all, and knowledge isn’t the sort of thing that needs to be justified at all. Instead, knowledge is a type of perception, which we call ‘recognition’, and knowledge serves as the justification for other things, including opinions and beliefs.

Beyond Assessment – Recognizing Achievement in a Networked World
Stephen Downes, 2018/11/27

ePortfolios and Open Badges are only the first wave in what will emerge as a wider network-based form of assessment that makes tests and reviews unnecessary. In this talk I discuss work being done in network-based automated competency development and recognition, the challenges it presents to traditional institutions, and the opportunities created for genuinely autonomous open learning. See also the transcript of this talk.

Beyond Assessment – Recognizing Achievement in a Networked World


This was the third in a series of 3 talks that Stephen Downes gave in London this week.

Jul 11, 2014
Keynote presentation delivered to the 12th ePortfolio, Open Badges and Identity Conference, University of Greenwich, Greenwich, UK.

ePortfolios and Open Badges are only the first wave in what will emerge as a wider network-based form of assessment that makes tests and reviews unnecessary. In this talk I discuss work being done in network-based automated competency development and recognition, the challenges it presents to traditional institutions, and the opportunities created for genuinely autonomous open learning.

For recordings of all three talks see OLDaily

Beyond Assessment – Recognizing Achievement in a Networked World
Jul 11, 2014. 12th ePortfolio, Open Badges and Identity Conference, University of Greenwich, Greenwich, UK (Keynote).

Beyond Institutions – Personal Learning in a Networked World
Jul 09, 2014. Network EDFE Seminar Series, London School of Economics (Keynote).

Beyond Free – Open Learning in a Networked World
Jul 08, 2014. 12th Annual Academic Practise & Technology Conference, University of Greenwich, Greenwich, UK (Keynote).

This was perhaps the most forward thinking and challenging of the three talks. I wasn’t at the talk, but listened to the recording. What follows is my interpretation of what Stephen had to say, but it was a long talk and I would expect others to take different things from it and interpret the ideas presented differently.

Educators have been wrestling with the issue of assessment, how to do it well, how to make it authentic, fair and meaningful, how to engage learners in the process and so on for many, many years.

Assessment has become even more of a concern since the advent of MOOCs, and MOOCs are symptomatic of the changes that are happening in learning. How do you assess thousands of learners in a MOOC? The answer is that you don’t – or not in the way that we are all accustomed to, which is testing and measurement to award credentials such as degrees and other qualifications. This has resulted in many institutions experimenting with offering a host of alternative credentials in the form of open badges and certificates.

Stephen’s vision is that in the future assessment will be based not on what you ‘know’ but on what you ‘do’ – what you do on the public internet. The technology now exists to map a more precise assessment of people through their online interactions. Whilst this raises concerns around issues of privacy and ethical use of data, it also means that people will be more in control of their own assessment. In the future we will have our own personal servers and will personally manage our multiple identities through public and private social networks. Prospective employers seeking a match for the jobs they want filled can then view the details of these identities. There is some evidence that learners are already managing their own online spaces. See for example Jim Groom’s work on A Domain of One’s Own.

Why might new approaches to assessment such as this be necessary? Here are some of the thoughts that Stephen shared with us.

It is harder and harder these days to get a job, despite the fact that employers have job vacancies.  There is a skills gap.  The unemployed don’t have the skills that employers need. We might think that the solution would be to educate people in the needed skills and then employers could hire them, but employers don’t seem to know what skills are needed and although learning skills inventories help people to recognise what they don’t know, these inventories don’t help them to get to what they do know.

Education is crucial for personal and skills development and more education leads to happier people and a more developed society. The problem is that we confuse the outcomes of education with the process of education. We think that we can determine/control learning outcomes and what people learn. See Slide 14


But useful outcomes are undefinable (e.g. understand that …..) and we need an understanding of understanding. Definable outcomes such as ‘recite’ and ‘display’ are simpler but behaviourist (Slide 18). There is more to knowing than a set of facts that you need to pass the test. Knowing something is to recognise it, in the sense that you can’t unknow it. Stephen used ‘Where’s Wally’ as an example of this.


Knowing, according to Stephen, is a physical state – it is the organisation of connections in our brain. Our brain is a pattern recogniser. Knowing is about ‘doing’ rather than some mental state.

My understanding of what Stephen is saying is that if we believe that knowing is about pattern recognition, then achievement will be recognized in how good learners are at pattern recognition as evidenced by what they ‘do’ in their online interactions. ‘Assessors’ will also need to be good at pattern recognition.

Learners are increasingly more sensitive to the patterns they see in the huge amount of data that they interact with on the internet, and machines are getting closer to being able to grade assignments through pattern recognition.  As they interact online learners leave digital traces. Big data is being used to analyse these internet interactions.  This can be used for assessment purposes. But this has, of course, raised concerns about the ethics of big data analysis and the concern for privacy is spreading – as we have recently seen with respect to Facebook’s use of our data. (Slide 55)


A move to personally managed social networks rather than centrally managed social networks will enable learners to control what they want prospective employers to know about them and human networks will act as quality filters.

Stephen’s final word was that assessment of the future will redefine ‘body of work’.


All these are very interesting ideas. I do wonder though whether it’s a massive assumption that all learners will be able to manage their own online identities such that they become employable. What are the skills needed for this? How will people get these skills? Will this be a more equitable process than currently exists, or will it lead to another set of hierarchies and the marginalisation of a different group?

Lots to think about – but I really like the move to putting assessment more in the control of learners.

26-09-2014 Postscript

See also this post by Stephen Downes – http://halfanhour.blogspot.co.uk/2014/09/beyond-assessment-recognizing.html – which provides all the details of this talk

Automating teaching and assessment

George Veletsianos gave an interesting and thought-provoking talk to the University of Edinburgh yesterday. This was live streamed and hopefully a recording will soon be posted here. A good set of rough notes has been posted by Peter Evans on Twitter:

Peter Evans (@eksploratore): ‘My live and rough notes on #edindice seminar from @veletsianos on #moocs, automation & artificial intelligence at pj-evans.net/2014/06/moocs-…’

As he points out, there were three main topics covered by George’s talk:

  • MOOCs as sociocultural phenomenon;
  • automation of teaching and
  • pedagogical agents and the automation of teaching.

George’s involvement with MOOCs started in 2011 when he gave a presentation to the Change11 MOOC, which I blogged about at the time.

I found myself wondering during his talk to the University of Edinburgh, whether we would be discussing automating teaching, if he had started his MOOC involvement in 2008, as this presentation seemed to come from a background of xMOOC interest and involvement. Those first cMOOCs, with their totally different approach to pedagogy, were not mentioned.

I feel uncomfortable with the idea of automating teaching and having robotic pedagogical agents to interact with learners. The thinking is that this would be more efficient, particularly when teachers are working with large numbers as in MOOCs, and would ‘free up’ teachers’ time so that they can focus on more important aspects of their work. I can see that automating some of the administration processes associated with teaching would be welcome, but I am having difficulty seeing what could be more important, as a teacher, than interacting with students.

George pointed out that many of us already use a number of automating services, such as Google Scholar alerts, RSS feeds, IFTTT and so on, so why not extend this to automating teaching, or teaching assistants, through the use of pedagogical agents such as avatars.

What was interesting is that the audience for this talk seemed very taken with the idea of pedagogical agents, what gender they should be, what appearance they should have, what culture they should represent etc. For me the more interesting question is what do we stand to lose and/or gain by going down this route of replacing teachers with machines.

For some of my colleagues, Karen Guldberg and her team of researchers at Birmingham University, robots have become central to their research on autism and their work with children on the autism spectrum. These children respond in previously unimaginable ways to robots. For some there will be gains from interacting with robots.

But I was reminded, during George’s talk, of Sherry Turkle’s concerns about what we stand to lose by relying on robots for interaction.

And coincidentally I was very recently pointed, by Matthias Melcher, to this fascinating article – Biology’s Shameful Refusal to Disown the Machine-Organism – which whilst not about automating teaching through the use of avatars/robots, does consider the relationship between machines and living things from a different perspective and concludes:

The processes of life are narratives. The functional ideas manifested in the organism belong to the intrinsic inwardness of its life, and are not imposed from without by the mind of an engineer. (Stephen L. Talbott, 2014).

Finally, George Veletsianos’ talk was timely as I am currently discussing with Roy Williams not how teaching and assessment should be automated, but rather whether, and if so how, it can be put in the hands of learners.

This topic will be the focus of a presentation we will give to the University of Applied Sciences, ZML – Innovative Learning Scenarios, FH JOANNEUM in Graz, Austria on September 17th 2014.


Power and control in ModPo

I am now, 3 weeks into ModPo, very aware of the differences between the original cMOOCs (e.g. CCK08 – the very first MOOC, run by Stephen Downes and George Siemens) and xMOOCs – and I think it relates to this slide that Stephen Downes recently talked us through at the ALT-C Conference:

What are Cultures of Learning – http://www.slideshare.net/Downes/2013-09-12-altc

xMOOCs might be either A) Centralised or B) Decentralised but they are not C) Distributed, i.e. not in the same sense that CCK08 and subsequent MOOCs such as Change 11, run by Downes and Siemens, were.  Although xMOOCs such as ModPo do have a Twitter stream and a Facebook group, they do not encourage people to find and create their own discussion groups in locations of their choice, as the original cMOOCs did.

ModPo for me is very centralized – with the centre being Al Filreis and to a certain extent his TAs. No Al Filreis – no ModPo. He is the ‘sage on the stage’. And it seems to be working well for most people. Al is charismatic. There are hundreds of discussion threads and Al Filreis and his team of TAs are very visible in there. They must be exhausted.

I am loving the poetry in ModPo – all new to me – and the video discussions which model and demonstrate how to close read these poems are very engaging. Even within one week I felt I had learned a lot, not least that some poets resonate and others do not.

But, despite this, there are elements of ModPo that I find disturbing, and they are mostly to do with the assessment process, which, on a professional level (as an educator), has concerned me.

I have already mentioned in a previous post  that I can’t see any value in having to post to discussion forums as an assessment requirement. Now there are three other points related to assessment that I find troubling.

1. The assessment criteria (peer review instructions) were not posted before people submitted their assignments and this does make a difference – because, for example, the reviewers were asked to judge whether assignment writers had understood Emily Dickinson’s use of dashes in her poetry. Whilst dashes were discussed at length in the videos, they were not mentioned in the assignment writing guidance. Participants/students should always know the criteria they are being assessed against.

2. All the assignments, once they have received one peer review, are automatically posted to one of the Coursera forums, i.e. all 30,000 participants can see the submitted assignments if they have the time and energy to wade through the 75 (at the last count) that have automatically been posted.

Assignment writers were not asked whether they would be willing for this to happen. In an Announcement to the class they were told that “This enables everyone to participate, at least a little bit, in the reading and reviewing of essays” – but frankly all it does is load even more discussion threads onto the forums, which are already overloaded, and – more significantly – it takes the control and ownership of the assignment and learning process out of the hands of the learner more than is necessary.

For me a successful adult learning process relies on learners having as much autonomy as possible (another principle from the early cMOOCs, but also one backed up by research into adult learning). All it needed was consent from the assignment writer.

3. The third point is the worst. A participant has been publicly named and shamed for plagiarism in the assignment submission forum mentioned above. Her assignment was automatically posted as explained – so she had no choice over the matter. The reviewer had not noticed the plagiarism (a section copied from Wikipedia) – but ‘(Note from Al: this essay has been plagiarized)’ has been added to the title of her post. At the beginning of the course there was a stern warning in the initial announcement about plagiarism – although I can’t find it now – and participants submitting assignments are asked to tick a box saying that the work is their own.

It could be argued that public naming and shaming of a participant serves as a warning to all other participants – but I think it is cruel and ultimately destructive. I know from experience that foreign students often have difficulty understanding what plagiarism means and as far as I can see there is no advice on the site about citing sources. However you look at it, I don’t believe a student should ever be publicly named and shamed. She should have been contacted privately by email. That would have been enough – especially since she may not get the certificate anyhow, since she hasn’t made any discussion forum posts. Did anyone bother to check?

These exhibitions of power, control and centralization are a long way off the original conception of MOOCs.

Assessment of discussion forum posts in ModPo

Today I am disappointed in ModPo for the first time. Why? Because I realize that it has fallen into the trap of believing that requiring posts to a discussion forum can in some way measure the success of learning.

On checking I see that it does say this on the Announcements page in the very first post ‘a thought on plagiarism’, but I failed to notice it until it was mentioned in the audio discussion between Al Filreis and Julia Bloch that was posted today.

To be considered a student who has “completed” the course, you need to have written and submitted the four short essays, commented on others’, submitted (and minimally ‘passed’) the quizzes, and participated in the discussion forum.

Evidently, to get a certificate of completion, a ModPo participant must make a post in each week of the course, in one of the staff-initiated weekly forums.

I completely fail to see the point of this. It is not as if ModPo is short of discussion in the forums. It is completely swamped with discussion. In addition it is the kind of assessment requirement that tempts me to simply ‘play the game’ (if I was that keen to get a certificate, which I am not). I could put any meaningless post about any meaningless thing in each week’s forum and theoretically I have fulfilled the requirement.
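The completion rule quoted above is just a conjunction of checks, which makes the ‘meaningless post’ loophole easy to see: any non-zero weekly count satisfies the forum condition. A sketch, with field names of my own invention (the thresholds, four essays, passing quizzes and a post in each weekly forum, come from the course announcement):

```python
def completed(essays_submitted, commented_on_others, quizzes_passed,
              weekly_forum_posts):
    """ModPo's stated completion rule expressed as a boolean check.
    Field names are my own shorthand, not Coursera's."""
    return (essays_submitted >= 4
            and commented_on_others
            and quizzes_passed
            and all(posts >= 1 for posts in weekly_forum_posts))

# A single empty week fails the whole rule, however meaningless the
# one post that would have satisfied it.
print(completed(4, True, True, [1, 1, 0, 1]))  # False
print(completed(4, True, True, [1, 1, 1, 1]))  # True
```

The check counts posts; it cannot see whether any of them contributed anything, which is exactly the objection above.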

I have already accepted that ModPo is not completely open, simply because it is tied to the Coursera platform and therefore does not have ‘open’ resources in the original cMOOC sense of participants being able to aggregate, remix, repurpose and feed forward resources at will.

But I have otherwise been very impressed by the pedagogy – the standard of teaching is very high, the level of support from and engagement by the tutors is beyond the call of duty for a MOOC, and the content is so stimulating.  All credit to the tutors and TAs.

But this requirement to post to the forums is a definite blip, in my book. Why? – because it puts (in this context) an unnecessary constraint on the autonomy of those learners who would like to achieve a certificate of completion, and won’t necessarily add anything to the learner experience. It definitely wouldn’t to mine.

I have listened to all the videos, read all the poems, completed the quizzes for Weeks 1 and 2, and written and submitted my first assignment, but this requirement to post to the forums is one hoop that I will not be jumping through. If there had been a meaningful activity around being engaged in the discussion forums, then I would have been happy to comply. As it is, I don’t feel that I need to learn how to post to forums, I have done lots of this in the past, nor do I need to learn the value of social learning. I have been practising and promoting it for years. If I feel that I can genuinely make a contribution in a forum, then I will.

Despite this disappointment, ModPo remains a highly stimulating experience, on a number of levels, and one that I would recommend to anyone interested in open learning, pedagogy and poetry.


Update on OldGlobeMOOC and Peer Assessment

OldGlobeMOOC is about to start its 4th week (following a week’s break for July 4th celebrations in the US), and the Week 3 assignment peer reviews are in. For me this assessment process is one of the most interesting aspects of this xMOOC. I have thought since the first MOOC in 2008 (CCK08, Connectivism and Connective Knowledge), designed and run by Stephen Downes and George Siemens, that assessment may be the sticking point for MOOCs.

In my last post, I outlined some of the difficulties that OldGlobeMOOC is experiencing with the assessment and peer review process. It seems to me, once again, but this time for an xMOOC, that if MOOCs are going to be sustainable and successful, then the assessment process has to be ‘cracked’ and meaningful.

Some MOOCs have taken the approach of restricting the number of participants who can be assessed. CCK08 did this. I think the number was 25, and FSLT12 and 13 have done this with a similar number – the idea being that a small number of participants can be assessed by a tutor. FSLT13 offers credit for this:

The course has been recently accredited (10 transferrable academic credits at level 7, postgraduate). FSLT is recognised towards the Oxford Brookes Postgraduate Certificate in Teaching in Higher Education (PCTHE) and Associate Teachers (AT) courses. (http://openbrookes.net/firststeps13/)

But these are cMOOCs.

OldGlobeMOOC has taken a different approach, as I described in my last post, and I understand from other participants that this is similar to a number of other Coursera MOOCs. For me this is my first xMOOC, but it is not for quite a few OldGlobeMOOC participants, many of whom have taken numerous Coursera courses and have shared their experience of the peer review process in the forums.

I will add my experience to the mix and, so you know what we are talking about, here are links to my assignments with their peer reviews.

Assignment 1 with peer review

Assignment 2 with peer review

Assignment 3 with peer review

If you read these, you will see that the assignments are not very different in their style and level to my blog posts, i.e. they are not academic pieces of work  – rather discussion pieces or personal reflection. And judging by the assignments I have reviewed, other participants’ assignments are of a similar level.

Which brings me to the review process, which I reflected on in my last post, but will add a few things here.

  • The idea is that each week each participant submits an assignment and peer reviews five assignments, which I have done. If the five peer reviews are not completed, a 20% penalty is incurred.

All students wishing to obtain a Statement of Accomplishment must achieve 7 out of 12 points and submit 5 peer reviews each week. If a student fails to complete the 5 peer reviews, that week’s assignment will incur a 20% penalty.

Despite the fact that I definitely submitted five peer reviews for Assignment 3, I received a 20% penalty and therefore scored 1.6 instead of 2. It’s very easy to know that you have completed the 5 peer reviews, from the way the Coursera system takes you through the 5 assignments allocated for review; and the system confirms for you at the end of the process that you have submitted 5 – so I know that I did. So there’s been a blip in the system somewhere. It’s not a big deal for me, as I’m only doing this to experience the process and because I like the assignments and find the discussions interesting. I am not doing the course for the Certificate – but I do wonder how a blip in the system affects people who are really keen to receive a Statement of Accomplishment.
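The scoring rule quoted above is simple enough to sketch in a few lines of code. This is purely my own illustration of the stated rule – the function name and parameters are mine, and this is not Coursera’s actual implementation:

```python
def weekly_score(assignment_score, peer_reviews_submitted,
                 required_reviews=5, penalty=0.20):
    """Apply the course's stated rule: if fewer than the required
    number of peer reviews are submitted, the week's assignment
    score loses 20%. My own reconstruction, not Coursera's code."""
    if peer_reviews_submitted < required_reviews:
        return assignment_score * (1 - penalty)
    return assignment_score

# A maximum weekly score of 2 with the penalty applied becomes 1.6 -
# exactly the drop described above.
print(weekly_score(2.0, peer_reviews_submitted=4))  # 1.6
print(weekly_score(2.0, peer_reviews_submitted=5))  # 2.0
```

On this rule, a participant who misses even one of the five reviews loses a fifth of that week’s marks, which is why a system blip that fails to record submitted reviews matters so much to those chasing the Statement of Accomplishment.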

  • There is no guarantee that you will receive 5 peer reviews. I received five in Week 1, three in Week 2 and four in Week 3. There has been some discussion in the forums about how this might affect the overall system and whether or not you have to review more than 5 assignments to receive 5 reviews.
  • I have no complaints about the quality of most of the peer reviews and so far no one has given me a score of less than 2 – but this peer review for Assignment 3 is indicative of how the game can be played to ensure that you get a Certificate. It made me smile 🙂

peer 2 I’m headed for an airplane so don’t have time to review, and I won’t be back until after evaluation time ends so I’m just giving everyone a 2. 

Aside from this, here are some further reflections. The OldGlobeMOOC is a great experience in terms of the diversity of participants. Unfortunately the younger participants, in their teens, who signed up, seem to have fallen out of the discussion forums. This does not mean that they are no longer participating through observation and reading – it’s difficult to know. But I have wondered how an 11 year old might review the assignment of an academic Professor, or how an academic Professor might respond to a learner with special needs, or a very young participant, or someone whose first language is not English, and so on. The assignment submission is anonymous. Do these differences have implications for the equity of the peer review process?

Despite all this I am finding OldGlobeMOOC a fascinating and enjoyable experience and am looking forward to the start of Week 4.

 

SCoPE Seminar: Digital Badges Implementation

Peter Rawsthorne spoke to the SCoPE community about badge system design and implementation in a live webinar last night. See the SCoPE site for a recording.

Peter is a mine of information about this subject (see his blog). It seems that digital badges are probably here to stay. Some pretty heavyweight organizations appear to be investing in them – see Peter’s post An introduction to badge systems design. Some current key questions for those in the digital badges community seem to be around:

  • how to come up with a common international standard for badges
  • how to develop the technology to easily design and issue badges

What has been most interesting for me during this seminar is my own feeling of discomfort with all this discussion about badges. I have been reflecting on why.

First I was reminded in last night’s webinar of Etienne Wenger’s ‘purple in the nose’ story. When meeting a friend to share a glass of wine, he suddenly realized that his wine-tasting friend (who described wine using an unknown language – ‘purple in the nose’), was a member of a community to which Etienne did not belong. Etienne had to decide whether he wanted to belong to that community and learn that language. I have felt the same about this seminar. I feel ‘outside’ this community of digital badge enthusiasts.

Maybe those involved in designing and implementing badges have already been through the questions which remain for me; questions about the credibility of these badges, their value, their integrity, their status, what they represent, who they represent and so on.

A most telling comment for me in the SCoPE discussion forum has been

‘More hack, less yak!’

Our facilitator has clearly been frustrated that the group has been ‘yakking’ about the issues rather than getting on and completing the tasks. As he put it, with good humour, ‘Sheesh…. What a bunch of academics <big smile>’

So I still wonder whether the badge system will promote the ‘completion of tasks’ approach to learning, more than a focus on developing a depth of understanding.

The word that kept going through my head in last night’s webinar was ‘control’.  The discussion of the design and implementation of badge systems made me wonder whether this could ultimately disempower learners rather than empower them. Given that my current research interests are related to emergent learning, I am struggling to see where digital badges would fit with this.

There was a brief discussion at the end of the webinar about the possibility of individual self-directed learners designing their own badges and legitimizing them.  For me this was the most interesting aspect of the discussion. I would have liked more ‘yak’ on this 🙂 .

Finally I wonder whether the earning of badges will be more important to some learners than others and if so, what the reasons for this might be.  I say this because one member of my family is very keen to earn and collect badges, whereas I don’t seem to have much enthusiasm for it.

#digitalbadges: SCoPE seminar on Digital Badges


(screenshot from Peter Rawsthorne’s presentation)

Peter Rawsthorne is facilitating a lively two week seminar in the SCoPE community on the concept and implementation of Digital Badges. This is how he describes his intentions for the seminar

During this two-week seminar we will explore digital badges from concept through to implementation. The seminar will focus on the possible pedagogies and technology required for implementing digital badges. We will also take a critical look at the current state of digital badges with discussion of the required and possible futures. If you have a few hours to read and discuss focused topics and participate in two mid-day webinars then please join us in this lively learning experience focused on digital badges.

As well as the discussion forums there are two web conferences – the first took place last night. Details of the seminar and conferences can be found here – http://scope.bccampus.ca/mod/forum/view.php?id=9010

The seminar has been designed to be task driven, with the intention of awarding badges on completion, based on a three-badge system design:

  1. Learner badge – person introduces themselves to the group via the discussion forum and contributes to a couple of discussion threads. Mostly, they could be considered lurkers (much can be learned through lurking)
  2. Participant badge – person introduces themselves to the group via the discussion forum and actively contributes to 7 of the 12 primary discussion threads, also participates in one of the two lunch-and-learn sessions.
  3. Contributor badge – does everything the participant does, with the addition of contributing:
    • by designing badge images
    • by creating a badge system design for another curriculum
    • by blogging about their participation in this seminar series
    • through other creative endeavours regarding digital badges

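To make the three tiers above concrete, the criteria could be sketched as a small function. The tier names and thresholds come from the seminar description; the function itself, and its parameter names, are entirely my own hypothetical illustration:

```python
def badge_for(introduced, threads_contributed, webinars_attended, contributions):
    """Return the highest badge earned under the three-tier design
    described above (my own reconstruction, not the seminar's code)."""
    if not introduced:
        # Introducing yourself in the discussion forum is the baseline
        return None
    # Participant: 7 of the 12 primary threads plus one lunch-and-learn
    if threads_contributed >= 7 and webinars_attended >= 1:
        # Contributor: everything the participant does, plus at least
        # one extra contribution (badge image, blog post, etc.)
        if contributions >= 1:
            return "Contributor"
        return "Participant"
    # Learner: a couple of discussion threads is enough
    if threads_contributed >= 2:
        return "Learner"
    return None
```

So, on this reading, someone who introduces themselves, posts in eight threads, attends one webinar and designs a badge image would earn the Contributor badge.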
The daily tasks that have been posted so far are:

Task 1  

  • Identify a merit badge you earned during your lifetime
  • Describe how you displayed the merit badge

Task 2   

  • Identify the digital and internet technologies best suited to create a digital merit badge
  • Describe the technologies that could be used to attach (reference or link) the learning to the digital badge

Task 3  

  • Identify the completion criteria for any badge you have earned (traditional or digital)
  • Describe the hierarchy or network of badges

Task 4

  • Identify a variety of sites that issue badges
  • Describe the skills, knowledge and curriculum the badges represent

Some sites that reference badges that have been mentioned in the forums…

In the synchronous webinar last night, Peter Rawsthorne made the point that there are 4-5 billion people on the planet who are not attending school. How will their achievements/accomplishments be recognized? I think the idea is that learning that happens outside traditional settings should be honoured and recognized.


(Screenshot from Peter Rawsthorne’s presentation)

At this point I feel a bit skeptical about the whole thing, but it is very early days. Three questions I have at this time are:

  • Will badges promote quality learning or will they simply encourage people to ‘jump through hoops’?

For example – I notice in the discussion forums that there is in fact, very little discussion. The tasks are being completed but there is little discussion about them. Completing tasks does not necessarily lead to quality learning.

  • Will badges be ‘recognised/valued’ by employers – will they need to be?

Verena Roberts in last night’s webinar wrote ‘Do badges need to lead to something, or identify a person’s passion?’ For me, I don’t need a badge to identify a personal passion, but I might need one for my CV, depending on the context and my personal circumstances.

  • Will badges stifle creativity and emergent learning?

There has been discussion about how badges fit together and Gina Bennett (in the webinar) thought that the ‘Scouts’ have the badge thing really figured out. But for me that model is based on a very ‘linear’ way of thinking about learning, whereas research has shown that even small children (for example when learning mathematics), don’t learn in a linear way – they go backwards, forwards and sideways. Frogmarching children (and adults) through a curriculum has always been a problem for curriculum design and the award of badges based on a linear approach might just reinforce this.

#FSLT12 Week 3 with Etienne and Bev Wenger-Trayner

We have had what feels like a bit of a pause over the weekend – many UK participants were perhaps taking a break for the Queen’s Diamond Jubilee celebrations. It’s not often we get two Bank Holidays in a row, Monday and Tuesday. But people are beginning to drift back now.

(Click on the diagram to see it more clearly)

Etienne and Beverly Wenger-Trayner

The Open Academic Practice thread of Week 3 features Etienne and Beverly Wenger-Trayner, who will be presenting in the live session on “Theory, Pedagogy, and Identity in Higher Education Teaching”, Wednesday 06 June 2012, 1500 BST. I am really looking forward to this session. I have been following Etienne’s work for quite a few years and now that he has married Bev, I will be following Bev too 🙂

Click here to enter the Blackboard Collaborate room.

Check your time zone

Feedback

The First Steps Curriculum this week is covering Feedback, i.e. how to give feedback to students. Research has shown that despite teachers’ best efforts many students are only concerned with the grade and don’t even read the feedback we give them; they jump through the necessary hoops to get their qualification, but don’t appear to be interested in learning for its own sake. See, for example, this paper:

Gibbs, G. & Simpson, C. (2004-05) Conditions Under Which Assessment Supports Students’ Learning. Learning and Teaching in Higher Education, Issue 1.

An internet search will turn up a PDF of the paper, and it is well worth reading.

Of course there are many students who are passionate about learning (and they are such a privilege to work with) – but also many do just need and want that piece of paper. As a teacher, it can be disappointing when this is the case, but never more so than when the student is a PhD student. A question for teachers is whether feedback can be used to engage students (not just PhD students) and leverage higher quality learning. Apostolos Koutropoulos has initiated a discussion about this in the #fslt12 Week 3 Moodle Forum

I interpret Apostolos’ comments as relating to feed forward. I have long felt that unless the student is ‘bone idle’, or clearly on the wrong course (i.e. their strengths simply do not align with course requirements), then if the student fails, the tutor has to carefully question their own failings. As Apostolos writes, ‘feed forward’, i.e. catching the student before they ‘go wrong’, can raise standards and make the learning experience more satisfactory for learners and teachers. Reading University has done some work on feed forward.

Activity 2 Collaborative Bibliography

Finally, Activity 2 is due to be completed this week. This collaborative bibliography wiki activity is beginning to yield some interesting outcomes. The purpose of the activity is to consider the requirements of a literature review and how to critically review a piece of scholarly literature. There is a link on Oxford Brookes’ own website which is a helpful starting point, but some other helpful resources have been posted on the Moodle site and I’m sure there are many more out there. It would be useful to gather some together. For example:

I like this blog, and The Thesis Whisperer is another great blog for PhD students or those working with PhD students.

And finally, another great source of information for PhD students is #phdchat on Twitter.

So there’s never a dull moment in FSLT12 🙂