#PLENK2010 Assessment in distributed networks

I have been struggling to clearly identify the issues associated with assessment in PLEs/PLNs – which are probably similar to those in MOOCs or distributed networks.

There seem to be a number of questions.

  • Is it desirable/possible to assess learners in a course which takes place in a distributed network?
  • Is it possible/desirable to accredit learning in a course which takes place in a distributed network?
  • What assessment strategies would be appropriate?
  • Who should do the assessing?

Whether assessment is desirable in a PLENK/MOOC etc. will depend on the purpose of the course and the learning objectives that the course convenors had in mind when they designed the course. PLENK2010 does not include formal assessment and yet has attracted over 1000 participants, many of whom are still active in Week 5. Presumably these participants are not looking for their learning outcomes to be assessed. CCK08 attracted over 2000 participants and did include assessment for those who wished it – but the numbers were small (24 – I’m not sure if the number who could do the course for credit was limited or only 24 wanted it) – so it was not only possible for the course convenors to assess these participants but also to offer accreditation.

Both assessment and accreditation are possible across distributed networks if the numbers are manageable. It is not the distributed network that is the problem, although this might affect the assessment strategies that are used. It is the numbers. Just as it is not possible for course convenors of a MOOC to interact on an individual level with participants, so it is physically not possible for them to assess such large numbers of individuals, and without this assessment no accreditation can be offered other than perhaps a certificate of attendance – but even this would need to be monitored and would be contrary to the principles of autonomy expected in a MOOC.

So how do we assess large numbers? Traditionally this has been done through tests and exams, which can be easily marked by assessors. Whilst these make the assessment process manageable for the tutors, they offer little more than a mark or grade to the students – since very often there is no feedback-feedforward loop associated with the grade. Also, tests and exams are not the best assessment strategy for all situations and purposes.

So what better assessment strategies would work with large numbers? Actually this might be the wrong starting question. The starting point should be: what learning objectives do we have, what outcomes do we expect these objectives to lead to, and what assessment strategy will enable the learner to achieve the learning objective, as demonstrated through the outcome? There is a wealth of information now available on assessment strategies, both for formative and summative assessment. Focus in the UK has for many years now (from the time of Black and Wiliam’s article, Inside the Black Box, to Gibbs and Simpson’s article – Conditions Under Which Assessment Supports Students’ Learning – to the REAP project and the work of the JISC) been on formative assessment and providing effective feedback. In Higher Education there has been even more of a push on this recently, since students are demanding more and better feedback (National Student Survey) – so effective assessment strategies are there if we are aware of them and know how to use them. These include a whole range of possibilities, including audio and video feedback-feedforward between students and tutors, students writing/negotiating their own assessment criteria, and peer, group and self-assessment. But how can these strategies be used with MOOC-like numbers whilst maintaining the validity, reliability, authenticity and transparency of assessment?

There appear to be no easy answers to this question. Alec Couros – in his open course – is experimenting with the use of mentors – is this a way forward? We know that there are many trained teachers in PLENK2010. Could they be possible assessors? How would their credentials be checked? Would they work voluntarily?

Peer assessment has been suggested. I have experience of this, but have always found that student peer assessment – whether it is based on their own negotiated criteria or on criteria written by the tutor – often needs tutor moderation if a grade which could lead to a degree qualification is involved. Similarly with self-assessment. We don’t know what we don’t know – so we may need someone else to point this out.

The nearest thing I have seen to trying to overcome the question of effectively teaching and assessing large numbers of students is in Michael Wesch’s 2008 video – A Portal to Media Literacy – where he shows how technology can support effective teaching and learning of large groups of students – but he is talking about hundreds, not thousands of students and himself admits that the one thing that didn’t work was asking students to grade themselves. This was two years ago – so I wonder if he has overcome that problem.

So – from these musings it seems to me that

  • Learning in large courses distributed over a range of networks is a worthwhile pursuit. They offer the learner diversity, autonomy and control over their own learning environment and extensive opportunities for open sharing of learning.
  • The purpose of these courses needs to be very clear from the outset – particularly with regard to assessment, i.e. course convenors need to be clear about the learning objectives, how learners might demonstrate that those objectives have been met through the outcomes they produce and whether or not those outcomes need to be assessed.
  • There has been plenty written about what effective assessment entails. The problem in MOOCs is how to apply these effective strategies to large numbers.
  • If we cannot rely on peer assessment and self-assessment (which we may not be able to do for validated/accredited courses), then we need more assessors.

Would a possibility be for an institution/group of institutions to build up a bank/community of trained assessors who could be called upon to voluntarily assess students in a MOOC (as Alec Couros has done with mentors)? Even if this were possible, I could see a number of stumbling blocks, e.g. assessor credentials, subject expertise, moderation between assessors, and whether institutions would allow accreditation to be awarded when the assessment has been done by people who don’t work for the institution. What else?

#PLENK2010 More thoughts about evaluation and assessment

I have to admit to being confused about exactly what the focus of this week of the PLENK course has been. The title of the week has been ‘Evaluating Learning in PLE/Ns’. But the language used has been very confusing, starting with the words assessment and evaluation being used interchangeably – which, I was relieved to see from Heli’s comment, I am not alone in being concerned about. And then in today’s Elluminate session there was a lot of ‘talk’ about learning outcomes, when sometimes it seemed to me that what was being talked about was learning objectives.

I have also not been clear about whether the focus is on evaluating PLEs/PLNs and therefore this PLENK as a learning environment, or on individual participants’ learning within this environment, which are two different things and would require different processes despite being linked.

Then there is the confusion about whether we are talking about people assessing their own learning within the PLENK course or whether we are trying to decide whether it is possible to assess people in these environments – again two different processes.

The question was asked: ‘How do I know that I am making progress/have learned something?’ It would be interesting to collect people’s thoughts about that. My own quick response would be that I often don’t know until some time after the event, and that recognising that I made progress/learned something is very context dependent – so for me there is no one answer to this question.

Also, for me, an important question from this week is whether or not personal learning environments enhance learning – it’s interesting to consider how this might be measured.

Another interesting question (not related directly to the assessment issue) which was raised in today’s Elluminate session is ‘How do you get the balance right between providing course structure and allowing students the type of freedom that is characteristic of a MOOC.’ I loved the ‘Roots and Wings’ metaphor that someone posted – sorry not to have noted the name to be able to attribute this correctly.

And then Heli has thrown down the gauntlet in her comment in response to my last blog post

Almost all questions are open .. are we afraid of measuring because we want to be up-to-date and postmodern or ..?

Now there’s an interesting question. My own feeling is that there does seem to be a tendency to move away from measurement (e.g. in the UK there has been resistance by teachers of young children to using standardised tests) – mainly because it’s so difficult to get the measures right. This was implicit in the YouTube video link that was posted in the Elluminate session tonight – RSA Animate – Changing Education Paradigms – but I don’t think this is because we are afraid – more because traditional modes of assessment just don’t seem to fit with learning that takes place in distributed personal learning networks – particularly if the learning is taking place in a MOOC.

Finally, my own thinking about assessment of learners in traditional settings has been influenced by Gibbs and Simpson’s article – Gibbs, G. & Simpson, C. (2004) ‘Conditions under which assessment supports students’ learning’, Learning and Teaching in Higher Education, 1, 3-31 http://resources.glos.ac.uk/shareddata/dms/2B70988BBCD42A03949CB4F3CB78A516.pdf

… but if learning is to take place in distributed networks and we want this learning to be accredited then how do we apply Gibbs and Simpson’s advice? Do we need to – or should we be trying to think outside the box and swim against the tide?

#PLENK2010 The relevance of learning theories

I was interested to see what George would come up with re the relationship between learning theories and PLE/PLNs. The Moodle discussion forums have been much quieter – but perhaps this is because it is Week 4 of the course. Dave Cormier has posted somewhere – I think – that this is a hard week in a MOOC – probably made even harder by the subject of learning theories  🙂

I wouldn’t claim to know a lot about learning theories and certainly not all the learning theories that George mentioned, but I do know that they have strongly influenced my life as a teacher. For me, learning theories inform the way I teach. They are perspectives that I take according to the context/situation I find myself in;  I use them to inform my teaching according to my own and my learners’ needs. So for example:

I find myself usually opposed to behaviourism, e.g. I do not want my learners to ‘jump through hoops’. I do not want them to think only about the qualification, but to learn for its own sake. On the other hand I am realistic enough to know that their qualifications are important and that they need them – also I know that whilst I might do everything to encourage intrinsic motivation, they also need extrinsic motivation – particularly young children, who love those gold stars, but also adults who respond to those motivational strokes. With enough ‘rewards’, we can encourage even the most reluctant learners to reach their/our goal and they and we are satisfied and happy.

However, there are many occasions when I want my learners not only to achieve a given outcome, but also to think about how they arrived at it. An example would be to ask children to explain how they arrived at a given answer to a mathematical problem/calculation. I have always found this fascinating – if you ask a number of different children to explain how they each arrived at the solution to a given calculation/problem, they are all likely to have come to the answer differently. This cognitivist approach also helps children who get the answer wrong, as they begin to examine their own thinking.

As a science teacher (in the past) I was always interested in the constructivist approach to teaching and learning. This approach, for me, acknowledged that learners have prior experiences which influence how they think about new learning experiences. In the case of misconceptions, which are extremely prevalent in science education, learners need to deconstruct their misconceptions and reconstruct their thinking in the light of their new learning. In science teaching this usually involves a practical activity in which learners’ misconceptions are physically/mentally challenged by the evidence before them. For example, if a learner sees a metal ball and a polystyrene ball of the same shape and volume, dropped from the same height, reach the ground at the same time, then their misconception that heavier objects fall faster than lighter objects is challenged.
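The falling-ball demonstration can be backed up with a quick bit of school physics. A minimal sketch, assuming free fall from rest and neglecting air resistance (an idealisation – in reality drag does slow the polystyrene ball very slightly):

```latex
% Newton's second law for a ball of mass m in free fall:
% the force is the weight, F = mg, so
%   ma = mg  =>  a = g
% The mass cancels: every object accelerates at the same rate g.
\[ a = g \]
% Integrating twice from rest over a drop height h:
\[ h = \tfrac{1}{2} g t^{2} \]
% Solving for the fall time:
\[ t = \sqrt{\frac{2h}{g}} \]
% The time t depends only on h and g, not on m,
% so the metal and polystyrene balls land together.
```

Confronting learners with the observation, and then with the reasoning that the mass cancels out of the equation, is exactly the kind of deconstruction of a misconception described above.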

Behaviourism, cognitivism and constructivist approaches can all be used with individual learners. They apply to the individual’s behaviour or individual learning. But in all my teaching there has very often (but not exclusively) been an acknowledgement that people learn from each other. This has involved learners in communities of practice, group activities and collaborative learning and has been context dependent. These social constructivist approaches engage the learner in development of knowledge and personal identity as they grow as much through their relationships with others as they do through engagement with the concepts being taught/learned. As George said today in his presentation – Week 4: George Siemens – Complex Knowledge & PLE/Ns – learning is socially negotiated and developed.

So where does this leave connectivism? Again according to George – in his presentation today – connectivism is driven by network formation – growing and pruning connections. The spectrum of learning from a connectivist view involves resonance, synchronicity, wayfinding, amplification, and learning/knowledge symmetry. A while back I wrote another blog post about connectivism as a learning theory – https://jennymackness.wordpress.com/2010/07/02/some-notes-on-connectivism/ – in preparation for the Networked Learning Conference and in an attempt to understand connectivism as a learning theory and how it might be useful from a teaching perspective.

According to George a theory of learning should

  • Explain what’s happening
  • Predict what could happen
  • Be a foundation for action
  • Be a foundation for preparing for future needs

All the theories mentioned above seem to fulfil these requirements, including connectivism. They all seem to be useful for providing differing perspectives according to specific contexts. I definitely wouldn’t want to throw the baby out with the bathwater,  just because connectivism, PLEs and PLNs have come along.

#PLENK2010 – Open courses and the ‘Granny Cloud’ phenomenon

Thanks to Alec Couros for further information about his open course – EC&I 831: Social Media & Open Education – and for the link to his call for mentors for this course which made for very interesting reading – and has had me reflecting on the question of how to scaffold open courses further – or whether we need to scaffold them at all.

Voluntary mentoring of online courses is not a new idea. John Smith, Bron Stuckey and Etienne Wenger always use mentors on their Foundations of Communities of Practice online course in CPsquare. This is not an open course, but mentors work voluntarily, having first participated in the course themselves. I was privileged to be a mentor myself for one of the courses. As in Alec’s course, the mentor plays a different role to the course convenor. In CPsquare this is to support participants in finding their way in the course, to support them in their learning and interaction, to promote and encourage discussion and to support the course facilitators in their management of the course. This sounds similar to what happens in Alec’s course – the difference being that in the CPsquare course all mentors are already known to the course convenors and have been participants on the course for which they are a mentor.

Alec’s idea of a ‘call for mentors’ also struck me as very similar to Sugata Mitra’s ‘granny cloud’. Mitra is renowned for his ‘Hole-in-the-Wall’ experiments in India, which resulted in evidence that children can organise their own learning and teach each other – see http://www.ted.com/talks/sugata_mitra_the_child_driven_education.html for details. However, there was also evidence that the experiments did not always work – see Arora’s work and this blog post for an introduction. Following this critique by Arora, Mitra decided that children’s interest and motivation to learn in the absence of a teacher would be more likely if they were supported by what he has called a ‘granny cloud’. So he recruited hundreds of British grandmothers who are willing to voluntarily connect with the children online and answer their questions – a very similar idea to Alec Couros’ call for mentors.

These ‘experiments’ in learning with minimum intervention from a teacher raise all sorts of complex questions about the role of teaching, both in traditional settings and in open settings. The one that strikes me as being important is how the quality of mentoring is controlled. What sorts of checks do we need to have in place to ensure the safety of learners and that they get a ‘fair’ deal? Under what circumstances would it be worse to be ‘mentored’ than to be left to manage your learning on your own?

I need to read around a bit more (Mitra, Arora and Couros) and see whether these questions have already been addressed.

#PLENK2010 Scaffolding Open Courses

I have just attended the Friday Elluminate session of the Plenk2010 course (will post recording as soon as it is available).

I have been out of touch for more than a week trying to meet research and work deadlines, so it was great to be able to attend this session, and also that the session focussed on a topic which is of great interest to me. The question that I homed in on was around the role of educators in ‘massive/large’ open courses. I have missed more than one week’s content of the course, so I am unsure of the context in which this question arose, but since I have participated in at least one other large open course – notably CCK08 – I do have some thoughts about this.

It hit me today that in a MOOC, the massiveness is not a given, in that for an open course the facilitators/moderators/tutors (whatever you wish to call them) can have no idea of how many people the course will attract. CCK08 was massive – more than 2000 people signed up. The Critical Literacies course was less ‘massive’ in terms of numbers and definitely fairly small by the end. PLENK2010 is massive – more than 1000 participants – many of whom are very active.

But the ‘openness’ is a given. We can attend for ‘free’ – but – we are expected to work autonomously and openly ‘share’ our resources and thinking in a very diverse group. The expectation is that thinking and learning processes will be transparent – but despite these expectations, we can still choose not to – making the whole experience very flexible. This flexibility can be experienced as a double-edged sword.

We found in our research following CCK08 that the more massive the open course, the more difficult it becomes to function effectively as autonomous independent learners, and the more difficult it is to adhere to the expectation of openness. Also, the more likely it is that participants will congregate in small groups and therefore be liable to ‘group think’ – another problem that was mentioned today – although my personal experience has been that small group work does not necessarily lead to group think, but can instead lead to significant learning, as has been my experience with Matthias, John Mak, Roy and other f2f colleagues.

But if we stick with learning in a ‘massive’ network – as a number of people have already noted in PLENK2010 it is easy to feel lost, to find the open course lacking in terms of ‘tutor’ support and scaffolding, to experience the ‘dark side of networking’ as was mentioned in the chat room today.

It seems to me that it’s not possible to have it all ways. Evidently Alec Couros has managed to provide scaffolding in his open course (which I admit I know nothing about so this is second hand information) by ‘recruiting’ mentors to support his online learners. I would have to see this for myself to be able to judge it in action.

My feeling – during the session this evening (UK time:-)) – was that it’s a question of knowing what you have signed up for and what you can expect – and given that these open courses are free, then, as learners, we have a responsibility to check on what we have signed up for and what we can expect.

My expectations would be:

– for a small open course, there would be recognisable structure and ‘tutor’ input (small I would regard as anything under 30 – or possibly 50)
– for a medium-sized open course, I would expect less tutor interaction and more peer-to-peer support (not sure about the numbers here, but anything between 50 and 200)
– for a large open course, I would be thinking in terms of ‘networked learning’ rather than a course, and would not expect any personal interaction with the ‘tutor’ at all, relying totally on my peers for support.

The numbers I have put in here are arbitrary – and just to give an idea of what I mean.

However, if I was paying for the course I would expect significant tutor interaction and support, but not to ‘have my hand held’ by the tutor. I would hope that even on a paying course a tutor would be encouraging independent autonomous learning.

I think it’s rather a shame that convenors of a ‘MOOC’ have to justify their approach when they are giving freely of their time and effort. That’s not to say that we and they don’t have a lot to learn about the management of open courses – but it is something we can do together rather than being an ‘us’ and ‘them’ situation.

Learner Autonomy and Teacher Intervention

I am sorry that I missed Paul Bouchard’s talk. I see that the recording has finally been posted today, but I have not yet had a chance to listen to it (so I am more than a week behind now in the Crit Lit course) – but I had to make a long train journey today, so had the time to  read his article…

Bouchard, P. (2009). Some factors to consider when Designing Semi-autonomous Learning Environments. European Journal of e-Learning, Volume 7, (2), June 2009, pp. 93-100. Available from http://www.ejel.org/Volume-7/v7-i2/v7-i2-art-2.htm

… which offered some perspectives on learner autonomy that I haven’t previously thought about.

For the most part his article reflects my own experience. Online/distance learners often equate the flexibility offered by the environment with ‘easier’, ‘less-time consuming’ etc.  which of course isn’t true – and – as he says/writes, it’s up to the course convenors to make this explicit at the start.

However, I was surprised by the generalisation that distance education equates with excessive teacher control. My experience is that instructional designers may tend to do this, not with the intent of controlling the learner, but because they get carried away with the design and technology and lose sight of the learner. Also from my experience, online teachers/course designers can get carried away with the possibilities offered by online resources/information/technology. Again, they don’t necessarily want to control their learners. Rather it may be that they think that the web offers their learners increased choice in the resources available and therefore increased autonomy in choosing which resources to select, so they overload the course with resources and hyperlinks. In doing this they assume that the learners have the skills to filter and select from the wide range of resources that they upload, or even understand that that is what they are supposed to do.

Alternatively online teachers may be concerned that they need to support their learners and they cannot do this unless they can see them visibly online, so unwittingly subject them to the tyranny of participation in discussion forums, in the belief that this is a form of support.

So I suspect it is not so much a matter of teachers/instructors wanting to control the learning, but more that they may lack understanding of how learning occurs in an online environment, what learner autonomy means and that learner autonomy can be a double-edged sword.

One thing I am having difficulty with in Bouchard’s article is where he writes that the teacher/instructor should not participate on a level with the students. Bouchard doesn’t explain or justify this. For me this assumes a distinction (possibly hierarchical) between teacher and learner, and that the teacher can’t learn from discussion with the learners, or that the learners wouldn’t know, understand or want this. This does call into question, again, who is the teacher and who has the expertise?

Jean Lave, in her article Teaching as Learning in Practice (1996), Mind, Culture, and Activity, Vol. 3, No. 3, about apprenticeship learning, gives us lots to think about. She discusses the case of apprentice tailors in Liberia and apprentice lawyers in a mosque in Egypt, where there is a lot of self-directed learning, but it is clear who the ‘experts’ and ‘masters’ are – and the experts/masters do intervene. Jean Lave emphasises the benefits of situated and social learning. Is it possible to have an apprenticeship model of learning at a distance, and does having a clearly identified ‘master/expert/teacher’ militate against learner autonomy?

So the two key questions that come out of Paul Bouchard’s article for me are:

1. Is there a common understanding of what learner autonomy means?

2. Does teacher intervention militate against learner autonomy?

Uncertainty and learning

This week the Critical Literacies course bears the title ‘Change’ and Stephen has made a great post about ‘Patterns of Change’. Whilst a lot of this was not new to me (down to having a science background), I was really impressed by the lucidity with which the information was presented.

I have had a good look at the Capacity, change and performance report, as it relates to some research that I am currently involved with, and I sent the link about 50 ways to foster a culture of innovation to my eldest son, who is an entrepreneur – although if you are an entrepreneur you probably don’t need to read articles like this. And I have lightly skimmed this Globe and Mail article – Technology, complexity, economy, catastrophe. But I haven’t yet had time to check out the other readings.

I’m going to be very interested to hear what Dave Snowden has to say this week (assuming that I can hear the presentation – I wasn’t able to hear Grainne’s last week) – because it seems to me that the critical literacy that is being addressed this week is an ability to cope with uncertainty. I don’t know enough about this to comment about it any further at this stage.

Related to this is Heli’s post today in which I was struck by her comment:

The Basic Message is that learning and development is not linear, it has individual phases, it goes up and down or straight foreward.

I would add to this that it can also go sideways – and diverge into areas that teachers do not expect. In thinking about this I was reminded of a course I went on a very long time ago about teaching mathematics to young children. We were asked to carry out an action research project about how children progressed through the National Curriculum (UK) for mathematics – and what were our findings? Well, that the National Curriculum expected children to follow a linear course through prescribed stages – but did they? No – they certainly did not. They jumped all over the place – forwards in leaps instead of a nice linear sequence, sideways and even backwards.

This would suggest that a good teacher needs to be able to cope with this unpredictability in students’ learning – this uncertainty as to how learners are going to learn.