#digitalbadges: SCoPE seminar on Digital Badges

[Image: screenshot from Peter Rawsthorne’s presentation]

Peter Rawsthorne is facilitating a lively two-week seminar in the SCoPE community on the concept and implementation of digital badges. This is how he describes his intentions for the seminar:

During this two-week seminar we will explore digital badges from concept through to implementation. The seminar will focus on the possible pedagogies and technology required for implementing digital badges. We will also take a critical look at the current state of digital badges with discussion of the required and possible futures. If you have a few hours to read and discuss focused topics and participate in two mid-day webinars then please join us in this lively learning experience focused on digital badges.

As well as the discussion forums, there are two web conferences – the first took place last night. Details of the seminar and conferences can be found here – http://scope.bccampus.ca/mod/forum/view.php?id=9010

The seminar has been designed to be task-driven, with the intention of awarding badges on completion, based on a three-badge system design (a sketch of the badge logic, in code, follows the list):

  1. Learner badge – person introduces themselves to the group via the discussion forum and contributes to a couple of discussion threads. Mostly, they could be considered lurkers (much can be learned through lurking).
  2. Participant badge – person introduces themselves to the group via the discussion forum, actively contributes to 7 of the 12 primary discussion threads, and participates in one of the two lunch-and-learn sessions.
  3. Contributor badge – does everything the participant does, and in addition contributes by:
    • designing badge images
    • creating a badge system design for another curriculum
    • blogging about their participation in this seminar series
    • undertaking other creative endeavours regarding digital badges
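
For readers who like to see rules as code, here is a minimal sketch of that three-tier logic. To be clear, this is my own illustration, not anything from the seminar materials: the function name and inputs are hypothetical, and only the thresholds (an introduction plus a couple of threads for Learner; 7 of the 12 threads plus one webinar for Participant; extra contributions for Contributor) come from the badge descriptions above.

```python
# A hedged sketch of the seminar's three-tier badge logic.
# All names here are illustrative; only the thresholds come from
# the badge descriptions above.

def highest_badge(introduced, threads, webinars, extra_contributions):
    """Return the highest badge earned under the stated criteria, or None."""
    if not introduced:
        return None  # every badge requires a forum introduction
    if threads >= 7 and webinars >= 1:
        return "Contributor" if extra_contributions else "Participant"
    if threads >= 2:
        return "Learner"
    return None

print(highest_badge(True, 8, 1, False))  # -> Participant
```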

The daily tasks that have been posted so far are:

Task 1  

  • Identify a merit badge you earned during your lifetime
  • Describe how you displayed the merit badges

Task 2   

  • Identify the digital and internet technologies best suited to create a digital merit badge
  • Describe the technologies that could be used to attach (reference or link) the learning to the digital badge (one such approach is sketched below)
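
One concrete answer to Task 2, current at the time of writing, is Mozilla’s Open Badges project, which bakes a metadata ‘assertion’ into the badge image; its evidence field is exactly the kind of reference or link between the learning and the badge that the task asks about. Below is a hedged sketch only: the field names loosely follow Mozilla’s early assertion format, and all the values and URLs are placeholders of my own.

```python
# A simplified, Open-Badges-style assertion. Field names loosely follow
# Mozilla's early (2012) assertion format; every value is a placeholder.
assertion = {
    "recipient": "learner@example.org",         # who earned the badge
    "evidence": "https://example.org/my-post",  # links the learning to the badge
    "badge": {
        "name": "Contributor",
        "description": "Active contribution to the SCoPE digital badges seminar",
        "criteria": "https://example.org/criteria",  # completion criteria (cf. Task 3)
        "issuer": {"name": "SCoPE", "origin": "https://scope.bccampus.ca"},
    },
}
```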

Task 3  

  • Identify the completion criteria for any badge you have earned (traditional or digital)
  • Describe the hierarchy or network of badges

Task 4

  • Identify a variety of sites that issue badges
  • Describe the skills, knowledge and curriculum the badges represent

Some sites that reference badges have been mentioned in the forums…

In the synchronous webinar last night, Peter Rawsthorne made the point that there are 4-5 billion people on the planet who are not attending school. How will their achievements/accomplishments be recognised? I think the idea is that learning that happens outside traditional settings should be honoured and recognised.

[Image: screenshot from Peter Rawsthorne’s presentation]

At this point I feel a bit skeptical about the whole thing, but it is very early days. Three questions I have at this time are:

  • Will badges promote quality learning or will they simply encourage people to ‘jump through hoops’?

For example, I notice in the discussion forums that there is, in fact, very little discussion. The tasks are being completed, but there is little discussion about them. Completing tasks does not necessarily lead to quality learning.

  • Will badges be ‘recognised/valued’ by employers – will they need to be?

In last night’s webinar Verena Roberts wrote, ‘Do badges need to lead to something, or identify a person’s passion?’ For me, I don’t need a badge to identify a personal passion, but I might need one for my CV, depending on the context and my personal circumstances.

  • Will badges stifle creativity and emergent learning?

There has been discussion about how badges fit together, and Gina Bennett (in the webinar) thought that the ‘Scouts’ have the badge thing really figured out. But for me that model is based on a very ‘linear’ way of thinking about learning, whereas research has shown that even small children (for example, when learning mathematics) don’t learn in a linear way – they go backwards, forwards and sideways. Frogmarching children (and adults) through a curriculum has always been a problem for curriculum design, and the award of badges based on a linear approach might just reinforce this.

#FSLT12 Week 3 with Etienne and Bev Wenger-Trayner

We have had what feels like a bit of a pause over the weekend – many UK participants were maybe taking a break for the Queen’s Diamond Jubilee celebrations. It’s not often we get two Bank Holidays in a row, Monday and Tuesday. But people are beginning to drift back now.

[Image: Etienne and Beverly Wenger-Trayner]

The Open Academic Practice thread of Week 3 features Etienne and Beverly Wenger-Trayner, who will be presenting in the live session on “Theory, Pedagogy, and Identity in Higher-Education Teaching”, Wednesday 06 June 2012, 1500 BST. I am really looking forward to this session. I have been following Etienne’s work for quite a few years, and now that he has married Bev, I will be following Bev too 🙂

Click here to enter the Blackboard Collaborate room.

Check your time zone

Feedback

The First Steps Curriculum this week is covering Feedback, i.e. how to give feedback to students. Research has shown that, despite teachers’ best efforts, many students are only concerned with the grade and don’t even read the feedback we give them; i.e. they jump through the necessary hoops to get their qualification but don’t appear to be interested in learning for its own sake. See, for example, this paper:

Gibbs, G. & Simpson, C. (2004-05) ‘Conditions Under Which Assessment Supports Students’ Learning’, Learning and Teaching in Higher Education, 1, 3-31.

An internet search will turn up a PDF of the paper, which is well worth reading.

Of course there are many students who are passionate about learning (and they are such a privilege to work with) – but many also just need and want that piece of paper. As a teacher, it can be disappointing when this is the case, but never more so than when the student is a PhD student. A question for teachers is whether feedback can be used to engage students (not just PhD students) and leverage higher-quality learning. Apostolos Koutropoulos has initiated a discussion about this in the #fslt12 Week 3 Moodle forum.

I interpret Apostolos’ comments as relating to feed forward. I have long felt that unless the student is ‘bone idle’, or clearly on the wrong course (i.e. their strengths simply do not align with course requirements), then if the student fails, the tutor has to question their own failings carefully. As Apostolos writes, ‘feed forward’ – i.e. catching the student before they ‘go wrong’ – can raise standards and make the learning experience more satisfactory for learners and teachers. Reading University has done some work on feed forward.

Activity 2: Collaborative Bibliography

Finally, Activity 2 is due to be completed this week. This collaborative bibliography wiki activity is beginning to yield some interesting outcomes. The purpose of the activity is to consider the requirements of a literature review and how to critically review a piece of scholarly literature. There is a link on Oxford Brookes’ own website which is a helpful starting point, but some other helpful resources have been posted on the Moodle site, and I’m sure there are many more out there. It would be useful to gather some together. For example:

I like this blog, and The Thesis Whisperer is another great blog for PhD students or those working with PhD students.

And finally, another great source of information for PhD students is #phdchat on Twitter.

So there’s never a dull moment in FSLT12 🙂

4th Networked Learning Hot Seat is underway

This year’s fourth Hot Seat discussion in the area of networked learning (in preparation for the 2012 conference) runs from January 9-13. Lone Dirckinck-Holmfeld, Vivien Hodgson, and David McConnell are facilitating a week-long asynchronous discussion, Exploring the Theory, Pedagogy and Practice of Networked Learning.

The Hot Seat discussion has 3 parts:

  1. History of Networked Learning in the UK and underpinning values (this thread has, so far, attracted the most discussion)
  2. The history of networked learning in a Danish context and its relationship to problem-based learning (PBL), the role of technology and Web 2.0, and the net generation and digital literacy
  3. Ontology, epistemology and pedagogy of networked learning, and relevance to mainstream higher education in the 21st century.

I arrived late for the discussion and it has been difficult to catch up with such a wealth of posts – but so far I have taken away two key ideas.

First, the definition of networked learning used for these Hot Seat discussions is quite narrow, relating only to networked learning in higher education courses. As such, David McConnell introduces Part 1 of the Hot Seat by saying that:

Networked Learning is based on:
Dialogue
Collaboration and cooperation in the learning process
Group work
Interaction with online materials
Knowledge production

With such a heavy emphasis on interaction, collaboration and group work, the ever-difficult question arises of whether or not participation should be assessed and, if so, how. In the Hot Seat, David McConnell shares his model for assessment, which is based on peer and self review. He writes:

The model is discussed, with examples of the process, in CHAPTER FOUR, “Assessing Learning in E-Groups and Communities” in the book: MCCONNELL, D. (2006) E-Learning Groups and Communities. Maidenhead, SRHE/OU Press (pp 209)

With respect to learner autonomy, the premise is the same as that expressed by Erik Duval in his presentation to ChangeMOOC (Week 10) – i.e. that if a learner chooses to take a particular course, then s/he must expect to abide by the conditions (such as collaboration, interaction, online participation) stipulated by that course and be assessed in line with these. This was discussed in a previous blog post – https://jennymackness.wordpress.com/2011/11/18/the-tyranny-of-sharing/

However, it is clear from the Hot Seat that a lot of thought has gone, and continues to go, into how assessment can best be designed to fit with principles such as learner autonomy, peer-to-peer learning and negotiation.

Autonomy, assessment and guiding forces

Lisa Lane has written a blog post – The Guiding Force – that has captured my interest. In her post she asks us to identify our ‘guiding forces’ in planning our work as teachers, or as she calls them, instructors. (As an aside, I find the use of language here an interesting cultural (?) difference – I assume it is a cultural difference – because I interpret ‘instruct’ differently to ‘teach’.)

For me, my guiding forces (as they stand now – but this has not always been the case) are informed by my involvement with MOOCs and connectivism. I cannot think of better guiding forces than autonomy, diversity, openness and connectedness – the four principles of learning in MOOCs (described by Stephen Downes) – with, for me, an emphasis on autonomy. If we understand what we mean by autonomy (which Carmen Tschofen and I have discussed as ‘psychological autonomy’ – autonomy as an expression of the self – in a paper we have had accepted by IRRODL, but not yet published), then diversity, openness and connectedness all fall into place.

I think assessment would also fall into place, because the control of assessment would be in the hands of the autonomous learners – but as yet I can’t see clearly how this would work, other than that it would need to be negotiated. So, if autonomy is the ‘guiding force’, and part of that autonomy is that students want their efforts to be validated and accredited, then students will need to have much more control over their assessment. But where does this leave ‘the expert’, and will students have the skills to take control of their assessment?

I think Lisa’s question about guiding principles highlights the changing role of the ‘teacher’, ‘educator’ or ‘instructor’ in relation to their students. Lots to think about in this – thanks Lisa 🙂

#PLENK2010 Assessment in distributed networks

I have been struggling to clearly identify the issues associated with assessment in PLEs/PLNs – which are probably similar to those in MOOCs or distributed networks.

There seem to be a number of questions.

  • Is it desirable/possible to assess learners in a course which takes place in a distributed network?
  • Is it possible/desirable to accredit learning in a course which takes place in a distributed network?
  • What assessment strategies would be appropriate?
  • Who should do the assessing?

Whether assessment is desirable in a PLENK/MOOC etc. will depend on the purpose of the course and the learning objectives that the course convenors had in mind when they designed it. PLENK2010 does not include formal assessment and yet has attracted over 1000 participants, many of whom are still active in Week 5; presumably these participants are not looking for their learning outcomes to be assessed. CCK08 attracted over 2000 participants and did include assessment for those who wished it, but the numbers were small (24 – I’m not sure whether the number who could take the course for credit was capped, or whether only 24 wanted it), so it was possible for the course convenors not only to assess these participants but also to offer accreditation.

Both assessment and accreditation are possible across distributed networks if the numbers are manageable. It is not the distributed network that is the problem, although this might affect the assessment strategies used; it is the numbers. Just as it is not possible for the convenors of a MOOC to interact with participants on an individual level, so it is physically impossible for them to assess such large numbers of individuals, and without this assessment no accreditation can be offered, other than perhaps a certificate of attendance – but even this would need to be monitored and would be contrary to the principles of autonomy expected in a MOOC.

So how to assess large numbers? Traditionally this has been done through tests and exams, which can be easily marked by assessors. Whilst these make the assessment process manageable for tutors, they offer little more than a mark or grade to students, since very often there is no feedback-feedforward loop associated with the grade. Also, tests and exams are not the best assessment strategy for all situations and purposes.

So what better assessment strategies would work with large numbers? Actually, this might be the wrong starting question. The starting point should be: what learning objectives do we have, what outcomes do we expect these objectives to lead to, and what assessment strategy will enable the learner to achieve the learning objective, as demonstrated through the outcome? There is a wealth of information now available on assessment strategies, both formative and summative. The focus in the UK has for many years been on formative assessment and the provision of effective feedback – from Black and Wiliam’s Inside the Black Box, through Gibbs and Simpson’s ‘Conditions Under Which Assessment Supports Students’ Learning’, to the REAP project and the work of JISC. In Higher Education there has been even more of a push on this recently, since students are demanding more and better feedback (National Student Survey) – so effective assessment strategies are there if we are aware of them and know how to use them. They include a whole range of possibilities: audio and video feedback-feedforward between students and tutors, students writing/negotiating their own assessment criteria, and peer, group and self-assessment. But how can these strategies be used with MOOC-like numbers whilst maintaining the validity, reliability, authenticity and transparency of assessment?

There appear to be no easy answers to this question. Alec Couros, in his open course, is experimenting with the use of mentors – is this a way forward? We know that there are many trained teachers in PLENK2010. Could they be possible assessors? How would their credentials be checked? Would they work voluntarily?

Peer assessment has been suggested. I have experience of this, but have always found that student peer assessment, whether based on the students’ own negotiated criteria or on criteria written by the tutor, often needs tutor moderation if a grade that could lead to a degree qualification is involved. Similarly with self-assessment: we don’t know what we don’t know, so we may need someone else to point this out.

The nearest thing I have seen to overcoming the problem of effectively teaching and assessing large numbers of students is Michael Wesch’s 2008 video, A Portal to Media Literacy, in which he shows how technology can support effective teaching and learning with large groups of students. But he is talking about hundreds, not thousands, of students, and he himself admits that the one thing that didn’t work was asking students to grade themselves. That was two years ago, so I wonder if he has since overcome that problem.

So, from these musings, it seems to me that:

  • Learning in large courses distributed over a range of networks is a worthwhile pursuit. Such courses offer the learner diversity, autonomy and control over their own learning environment, and extensive opportunities for open sharing of learning.
  • The purpose of these courses needs to be very clear from the outset – particularly with regard to assessment, i.e. course convenors need to be clear about the learning objectives, how learners might demonstrate that those objectives have been met through the outcomes they produce and whether or not those outcomes need to be assessed.
  • There has been plenty written about what effective assessment entails. The problem in MOOCs is how to apply these effective strategies to large numbers.
  • If we cannot rely on peer assessment and self-assessment (which we may not be able to do for validated/accredited courses), then we need more assessors.

Would a possibility be for an institution, or group of institutions, to build up a bank/community of trained assessors who could be called upon to voluntarily assess students in a MOOC (as Alec Couros has done with mentors)? Even if this were possible I can see a number of stumbling blocks, e.g. assessor credentials, subject expertise, moderation between assessors, and whether institutions would allow accreditation to be awarded when the assessment has been done by people who don’t work for the institution. What else?

#PLENK2010 More thoughts about evaluation and assessment

I have to admit to being confused about exactly what the focus of this week of the PLENK course has been. The title of the week has been ‘Evaluating Learning in PLE/Ns’, but the language used has been very confusing, starting with the words ‘assessment’ and ‘evaluation’ being used interchangeably – which, I was relieved to see from Heli’s comment, I am not alone in being concerned about. And then in today’s Elluminate session there was a lot of ‘talk’ about learning outcomes, when sometimes it seemed to me that what was being talked about was learning objectives.

I have also not been clear about whether the focus is on evaluating PLEs/PLNs, and therefore this PLENK as a learning environment, or on individual participants’ learning within this environment – two different things, which would require different processes despite being linked.

Then there is the confusion about whether we are talking about people assessing their own learning within the PLENK course, or trying to decide whether it is possible to assess people in these environments – again, two different processes.

The question was asked, ‘How do I know that I am making progress/have learned something?’ It would be interesting to collect people’s thoughts about that. My own quick response would be that I often don’t know until some time after the event, and that recognising I have made progress/learned something is very context-dependent – so for me there is no one answer to this question.

Also for me an important question from this week is whether or not personal learning environments enhance learning – it’s interesting to consider how this might be measured.

Another interesting question (not related directly to the assessment issue), raised in today’s Elluminate session, was ‘How do you get the balance right between providing course structure and allowing students the type of freedom that is characteristic of a MOOC?’ I loved the ‘Roots and Wings’ metaphor that someone posted – sorry not to have noted the name to be able to attribute this correctly.

And then Heli has thrown down the gauntlet in her comment in response to my last blog post:

Almost all questions are open .. are we afraid of measuring because we want to be up-to-date and postmodern or ..?

Now there’s an interesting question. My own feeling is that there does seem to be a tendency to move away from measurement (e.g. in the UK there has been resistance by teachers of young children to using standardised tests), mainly because it’s so difficult to get the measures right. This was implicit in the YouTube video posted in the Elluminate session tonight – RSA Animate, Changing Education Paradigms. But I don’t think this is because we are afraid – more because traditional modes of assessment just don’t seem to fit with learning that takes place in distributed personal learning networks, particularly if the learning is taking place in a MOOC.

Finally, my own thinking about assessment of learners in traditional settings has been influenced by Gibbs, G. & Simpson, C. (2004-05) ‘Conditions Under Which Assessment Supports Students’ Learning’, Learning and Teaching in Higher Education, 1, 3-31. http://resources.glos.ac.uk/shareddata/dms/2B70988BBCD42A03949CB4F3CB78A516.pdf

… but if learning is to take place in distributed networks and we want this learning to be accredited then how do we apply Gibbs and Simpson’s advice? Do we need to – or should we be trying to think outside the box and swim against the tide?

#PLENK2010 Evaluation and assessment

I am skating around the edges of this MOOC. I am not unduly worried about this. I have been involved in MOOCs before and know that you need a lot of time to be involved in, and make sense of, the chaotic mess that is nearer the ‘heart’ of it – and currently I don’t have that time; or, more likely, my priorities are elsewhere.

However, ever since CCK08 I have been thinking about the problematic issue of assessment when learning in a MOOC (see, for example, the final paragraph of this post – A Pause for Thought – from October 2008), so Helene Fournier’s presentation on Elluminate tonight attracted my attention (the recording is not yet posted, but it will eventually appear here – http://ple.elg.ca/course/moodle/mod/wiki/view.php?id=60&page=Recordings).

I will have to listen to the recording again, because I know there is lots of thought-provoking stuff in there, but I was distracted through a lot of it by the insistent thought in my head that there is a distinction between assessment and evaluation, terms which Helene said she used interchangeably. I’m not usually pedantic, but assessment has such an impact on so many people’s lives that I think it is important to ensure that we are all talking about the same thing (as much as is possible).

Many thanks to Viplav Baxi (who I ‘know’ from CCK08 :-)) for posting this link in the chat room – http://www.adprima.com/measurement.htm – which I think is really helpful in making the distinctions between measurement, assessment and evaluation. The further link within it – http://www.edpsycinteractive.org/topics/intro/sciknow.html – is also helpful.

In my job as an Education Consultant I don’t use measurement (according to the definitions in these links), but I do assess, in that I assess students’ work against given learning objectives and criteria – and even though I do this, I am very aware of how difficult it is and of the many associated contradictions. For example, as a tutor, what do you do when you know that a student’s work is better/more creative and innovative than the learning objectives set by the course? A dilemma for the tutor!

I also evaluate, but usually I don’t do this myself: I ask students to evaluate, for example, the course or my teaching, or I evaluate someone else’s course/teaching. This usually takes the form of a questionnaire or interviews. A questionnaire is more likely to tie me down to specific criteria; interviews less so. For evaluations I am not judging individual responses but looking at responses as a whole, whereas for assessment I am thinking about individuals. I am also aware of the problems of evaluation: how do you know that the right questions have been asked, or that the respondents have interpreted your questions as you intended? Not at all straightforward.

I am still not sure that I have these distinctions, or my understanding of these terms, completely clear in my head, but like the author of the first link, Dr. Bob Kizlik, I think they are important. I was not even clear about exactly what it is that people are trying to assess/evaluate in a MOOC/PLENK. According to Dave Cormier, in a MOOC we don’t know what the learning is supposed to be. If this is the case, then what are we supposed to be assessing? Is assessment even relevant in a MOOC or PLE/PLN?

Stephen ‘said’ in the chat room:

everybody wants me to be focused (and especially focused on outcomes). But I am the antithesis of focus

If this is what MOOCs are about – and one or two people in the chat room said that they thrive on chaos – then is it worth thinking about assessment at all in these circumstances?

A fascinating subject and I am still thinking/pondering/questioning 🙂