4th Networked Learning Hot Seat is underway

This year’s fourth Hot Seat discussion in the area of networked learning (in preparation for the 2012 conference) runs from January 9-13. Lone Dirckinck-Holmfeld, Vivien Hodgson, and David McConnell are facilitating a week-long asynchronous discussion, Exploring the Theory, Pedagogy and Practice of Networked Learning.

The Hot Seat discussion has 3 parts:

  1. History of Networked Learning in the UK and underpinning values (this thread has, so far, attracted the most discussion)
  2. The history of networked learning in a Danish context and its relationship to problem-based learning (PBL), the role of technology and Web 2.0, and the net generation and digital literacy
  3. Ontology, epistemology and pedagogy of networked learning, and relevance to mainstream higher education in the 21st century.

I arrived late for the discussion and it has been difficult to catch up with such a wealth of posting – but so far I have taken away two key ideas.

First, the definition of networked learning used for these Hot Seat discussions is quite narrow and relates only to networked learning in higher education courses. As such, David McConnell introduces Part 1 of the Hot Seat by saying that

Networked Learning is based on:
Dialogue
Collaboration and cooperation in the learning process
Group work
Interaction with online materials
Knowledge production

Such a heavy emphasis on interaction, collaboration and group work raises the ever-difficult question of whether or not participation should be assessed and, if so, how. In the Hot Seat David McConnell shares his model for assessment, which is based on peer and self-review. He writes:

The model is discussed, with examples of the process, in CHAPTER FOUR, “Assessing Learning in E-Groups and Communities” in the book: MCCONNELL, D. (2006) E-Learning Groups and Communities. Maidenhead, SRHE/OU Press (pp 209)

With respect to learner autonomy, the premise is the same as that expressed by Erik Duval in his presentation to ChangeMooc (Week 10) – i.e. that if a learner chooses to take a particular course, then s/he must expect to abide by the conditions (such as collaboration, interaction, online participation) stipulated by that course and be assessed in line with these. This was discussed in a previous blog post – https://jennymackness.wordpress.com/2011/11/18/the-tyranny-of-sharing/

However, it is clear from the Hot Seat that a lot of thought has gone into, and continues to go into, how assessment can best be designed to fit with principles such as learner autonomy, peer-to-peer learning and negotiation.

Autonomy, assessment and guiding forces

Lisa Lane has written a blog post – The Guiding Force – that has captured my interest. In her post, she asks us to identify our ‘guiding forces’ in planning our work as teachers – or, as she calls them, instructors. (As an aside, I find the use of language here an interesting cultural (?) difference – I assume it is a cultural difference – because I interpret ‘instruct’ differently to ‘teach’.)

For me, my guiding forces (as they stand now – but this has not always been the case) are informed by my involvement with MOOCs and connectivism. I cannot think of better guiding forces than autonomy, diversity, openness and connectedness – the four principles of learning in MOOCs (described by Stephen Downes) – with, for me, an emphasis on autonomy. If we understand what we mean by autonomy (which Carmen Tschofen and I have discussed as ‘psychological autonomy’ – autonomy as an expression of the self – in a paper accepted by IRRODL but not yet published), then diversity, openness and connectedness all fall into place.

I think assessment would also fall into place, because control of assessment would be in the hands of the autonomous learners – but as yet I can’t see clearly how this would work, other than that it would need to be negotiated. So, if autonomy is the ‘guiding force’, and part of that autonomy is that students want their efforts to be validated and accredited, then students will need to have much more control over their assessment. But where does this leave ‘the expert’, and will students have the skills to take control of their assessment?

I think Lisa’s question about guiding principles highlights the changing role of the ‘teacher’, ‘educator’ or ‘instructor’ in relation to their students. Lots to think about in this – thanks Lisa 🙂

#PLENK2010 Assessment in distributed networks

I have been struggling to clearly identify the issues associated with assessment in PLEs/PLNs – which are probably similar to those in MOOCs or distributed networks.

There seem to be a number of questions.

  • Is it desirable/possible to assess learners in a course which takes place in a distributed network?
  • Is it possible/desirable to accredit learning in a course which takes place in a distributed network?
  • What assessment strategies would be appropriate?
  • Who should do the assessing?

Whether assessment is desirable in a PLENK/MOOC etc. will depend on the purpose of the course and the learning objectives that the course convenors had in mind when they designed the course. PLENK2010 does not include formal assessment and yet has attracted over 1000 participants, many of whom are still active in Week 5. Presumably these participants are not looking for their learning outcomes to be assessed. CCK08 attracted over 2000 participants and did include assessment for those who wished it – but the numbers were small (24 – I’m not sure if the number who could do the course for credit was limited or only 24 wanted it) – so it was not only possible for the course convenors to assess these participants but also to offer accreditation.

Both assessment and accreditation are possible across distributed networks if the numbers are manageable. It is not the distributed network that is the problem, although this might affect the assessment strategies that are used. It is the numbers. Just as it is not possible for course convenors of a MOOC to interact on an individual level with participants, so it is not physically possible for them to assess such large numbers of individuals, and without this assessment no accreditation can be offered other than perhaps a certificate of attendance – but even this would need to be monitored and would be contrary to the principles of autonomy expected in a MOOC.

So how to assess large numbers? Traditionally this has been done through tests and exams, which can be easily marked by assessors. Whilst these make the assessment process manageable for the tutors, they offer little more than a mark or grade to the students, since very often there is no feedback-feedforward loop associated with the grade. Also, tests and exams are not the best assessment strategy for all situations and purposes.

So what better assessment strategies would work with large numbers? Actually this might be the wrong starting question. The starting point should be: what learning objectives do we have, what outcomes do we expect these objectives to lead to, and what assessment strategy will enable the learner to achieve the learning objectives, as demonstrated through the outcomes? There is a wealth of information now available on assessment strategies, both formative and summative. The focus in the UK has for many years now been on formative assessment and providing effective feedback – from the time of Paul Black’s article, Inside the Black Box, to Gibbs and Simpson’s article, Conditions Under Which Assessment Supports Students’ Learning, to the REAP project and the work of the JISC. In Higher Education there has been even more of a push on this recently, since students are demanding more and better feedback (National Student Survey) – so effective assessment strategies are there if we are aware of them and know how to use them. These include audio and video feedback-feedforward between students and tutors, students writing or negotiating their own assessment criteria, and peer, group and self-assessment. But how can these strategies be used with MOOC-like numbers whilst maintaining the validity, reliability, authenticity and transparency of assessment?

There appear to be no easy answers to this question. Alec Couros – in his open course – is experimenting with the use of mentors – is this a way forward? We know that there are many trained teachers in PLENK2010. Could they be possible assessors? How would their credentials be checked? Would they work voluntarily?

Peer assessment has been suggested. I have experience of this, but have always found that student peer assessment – whether it is based on their own negotiated criteria or criteria written by the tutor – often needs tutor moderation if a grade which could lead to a degree qualification is involved. Similarly with self-assessment. We don’t know what we don’t know – so we may need someone else to point this out.

The nearest thing I have seen to an attempt to overcome the problem of effectively teaching and assessing large numbers of students is Michael Wesch’s 2008 video – A Portal to Media Literacy – where he shows how technology can support effective teaching and learning of large groups of students. But he is talking about hundreds, not thousands, of students, and he himself admits that the one thing that didn’t work was asking students to grade themselves. This was two years ago – so I wonder if he has overcome that problem.

So – from these musings it seems to me that

  • Learning in large courses distributed over a range of networks is a worthwhile pursuit. Such courses offer the learner diversity, autonomy and control over their own learning environment, and extensive opportunities for open sharing of learning.
  • The purpose of these courses needs to be very clear from the outset – particularly with regard to assessment, i.e. course convenors need to be clear about the learning objectives, how learners might demonstrate that those objectives have been met through the outcomes they produce and whether or not those outcomes need to be assessed.
  • There has been plenty written about what effective assessment entails. The problem in MOOCs is how to apply these effective strategies to large numbers.
  • If we cannot rely on peer assessment and self-assessment (which we may not be able to do for validated/accredited courses), then we need more assessors.

Would a possibility be for an institution or group of institutions to build up a bank or community of trained assessors who could be called upon to voluntarily assess students in a MOOC (as Alec Couros has done with mentors)? Even if this were possible, I can see a number of stumbling blocks, e.g. assessor credentials, subject expertise, moderation between assessors, and whether institutions would allow accreditation to be awarded when the assessment has been done by people who don’t work for the institution. What else?

#PLENK2010 More thoughts about evaluation and assessment

I have to admit to being confused about exactly what the focus of this week of the PLENK course has been. The title of the week has been ‘Evaluating Learning in PLE/Ns’, but the language used has been very confusing, starting with the words assessment and evaluation being used interchangeably – which, I was relieved to see from Heli’s comment, I am not alone in being concerned about. And then in today’s Elluminate session there was a lot of ‘talk’ about learning outcomes, when sometimes it seemed to me that what was being talked about was learning objectives.

I have also not been clear about whether the focus is on evaluating PLEs/PLNs and therefore this PLENK as a learning environment, or on individual participants’ learning within this environment, which are two different things and would require different processes despite being linked.

Then there is the confusion about whether we are talking about people assessing their own learning within the PLENK course or whether we are trying to decide whether it is possible to assess people in these environments – again two different processes.

The question was asked: ‘How do I know that I am making progress/have learned something?’ It would be interesting to collect people’s thoughts about that. My own quick response is that I often don’t know until some time after the event, and recognising that I made progress or learned something is very context-dependent – so, for me, there is no one answer to this question.

Also for me an important question from this week is whether or not personal learning environments enhance learning – it’s interesting to consider how this might be measured.

Another interesting question (not related directly to the assessment issue) which was raised in today’s Elluminate session is ‘How do you get the balance right between providing course structure and allowing students the type of freedom that is characteristic of a MOOC?’ I loved the ‘Roots and Wings’ metaphor that someone posted – sorry not to have noted the name to be able to attribute this correctly.

And then Heli has thrown down the gauntlet in her comment in response to my last blog post

Almost all questions are open .. are we afraid of measuring because we want to be up-to-date and postmodern or ..?

Now there’s an interesting question. My own feeling is that there does seem to be a tendency to move away from measurement (e.g. in the UK there has been resistance by teachers of young children to using standardised tests), mainly because it’s so difficult to get the measures correct. This was implicit in the YouTube video link posted in the Elluminate session tonight – RSA Animate – Changing Education Paradigms – but I don’t think this is because we are afraid; more because traditional modes of assessment just don’t seem to fit with learning that takes place in distributed personal learning networks – particularly if the learning is taking place in a MOOC.

Finally, my own thinking about assessment of learners in traditional settings has been influenced by Gibbs and Simpson’s article – Gibbs, G. & Simpson, C. (2004) ‘Conditions under which assessment supports students’ learning’, Learning and Teaching in Higher Education, 1, 3- 31 http://resources.glos.ac.uk/shareddata/dms/2B70988BBCD42A03949CB4F3CB78A516.pdf

… but if learning is to take place in distributed networks and we want this learning to be accredited then how do we apply Gibbs and Simpson’s advice? Do we need to – or should we be trying to think outside the box and swim against the tide?

#PLENK2010 Evaluation and assessment

I am skating around the edges of this MOOC. I am not unduly worried about this. I have been involved in MOOCs before and know that you need a lot of time to be involved in and make sense of the chaotic mess that is nearer the ‘heart’ of it – and currently I don’t have that time – or more likely, my priorities are elsewhere.

However, ever since CCK08 I have been thinking about the problematic issue of assessment when learning in a MOOC – (see for example the final paragraph of this post – A Pause for Thought – in October 2008)  – so Helene Fournier’s presentation on Elluminate tonight attracted my attention (recording not yet posted – but eventually it will be posted here – http://ple.elg.ca/course/moodle/mod/wiki/view.php?id=60&page=Recordings )

I will have to listen to the recording again – because I know there is lots of thought-provoking stuff in there – but I was distracted through a lot of it by the insistent thought in my head that there is a distinction between assessment and evaluation, terms which Helene said she used interchangeably. I’m not usually pedantic, but assessment has such an impact on so many people’s lives that I think it is important to ensure that we are all talking about the same thing (as much as is possible).

Many thanks to Viplav Baxi (whom I ‘know’ from CCK08 :-)) for posting this link in the chat room – http://www.adprima.com/measurement.htm – which I think is really helpful in making the distinctions between measurement, assessment and evaluation. The further link within it – http://www.edpsycinteractive.org/topics/intro/sciknow.html – is also helpful.

In my job as an Education Consultant, I don’t use measurement (according to the definitions in these links), but I do assess – in that I assess students’ work against given learning objectives and criteria – and even though I do this, I am very aware of how very difficult it is and of the many associated contradictions. For example, as a tutor, what do you do when you know that the students’ work is better – more creative and innovative – than the learning objectives set by the course anticipated? A dilemma for the tutor!

I also evaluate – but usually I don’t do this myself but ask students to – for example – evaluate the course, or my teaching – or I evaluate someone else’s course/teaching.  This usually takes the form of a questionnaire or interviews. The questionnaire is more likely to tie me down to specific criteria but the interviews less likely. For evaluations I am not judging individual responses but looking at responses as a whole. For assessment I am thinking about individuals. I am also aware of the problems of evaluation. How do you know that the right questions have been asked or that the respondents have interpreted your questions as you intended? Not at all straightforward.

I am still not sure that I have these distinctions, or my understanding of these terms, completely clear in my head, but like the author of the first link – Dr. Bob Kizlik – I think they are important. I was not even clear about exactly what it was that people were trying to assess/evaluate in a MOOC/PLENK. According to Dave Cormier, in a MOOC we don’t know what the learning is supposed to be. If this is the case, then what are we supposed to be assessing? Is assessment even relevant in a MOOC, PLE or PLN?

Stephen ‘said’ in the chat room –

everybody wants me to be focused (and especially focused on outcomes). But I am the antithesis of focus

If this is what MOOCs are about – and one or two people in the chat room said that they thrive on chaos – then is it worth thinking about assessment at all in these circumstances?

A fascinating subject and I am still thinking/pondering/questioning 🙂

Teachers talk too much

The CCK08 round up was an interesting meeting. It seems like it was held at a difficult time for some and clashed with their teaching commitments, so a few familiar faces were not present.

There was quite a lot of talk about assessment. I think it will be worth listening to the recording again to capture this conversation. 

Most intriguing was George’s apparent frustrations with lurkers. He thought that in the next run of the course they would have to do more to encourage participation – in his view people need to participate more to make the course work. ‘Lurking is not appropriate.’ George expects everyone to be transparent in their learning and by default become a teacher in the course.

There seemed to me to be loads of participation – both in the blogs and in the discussion forums. I personally would not have coped with any more. I’m not sure what percentage of people were participating in blogs and forums – probably not the 10% that Nancy White recommends should be active in an online course – but then this wasn’t a course in the true sense of the word – or was it? This question of whether the word course should be used to describe the CCK08 experience was also discussed.

Perhaps Stephen and George need to be really clear about whether they are running a course or not; they do appear to have different views on it. If they are just establishing and managing a learning community or network, then I think participants would view their responsibilities differently. In a community or network, peripheral participation is legitimate (Wenger), as George acknowledged. Lurking (I prefer to think of it as reading or observing) is legitimate. People get drawn into conversations as and when they need them. Stephen seems happier with this than George.

However, in both a community and on a course, (but maybe not a network), there are leaders who try to draw in participants and increase levels of interactivity. This requires skills and ‘teacher-type’ interventions, whereas I think Stephen and George’s model was more – let them (the participants) get themselves organised into groups, decide for themselves where they want to communicate and get on with it.

So it seems to me that you can’t really have it both ways. Either you let participants just get on with it, in which case you leave them to lurk if they want to, don’t worry about it and are happy with whoever, however small the number, actively participates. Or you go for skilled teacher intervention. George stated that he wanted a less didactic style for the CCK08 ‘course’ where he and Stephen would become less prominent as the course progressed – but this ‘hands off’ approach isn’t something that just happens. It has to be cultivated by skilled facilitators/online teachers. In my experience as an online tutor, I have to work very hard at the beginning of a course, helping participants to make appropriate relationships, establishing an ethos of security, trust and mutual respect, and that once this is set up I can withdraw. But it doesn’t just happen. It depends on my initial interventions (and I don’t necessarily equate interventions with ‘talking’).

I agree that very few participants took the mic in the synchronous Elluminate sessions, but I don’t think that is necessarily down to a lack of willingness to speak; maybe more to the teaching style adopted for these sessions. It occurred to me yesterday that maybe it was a case of ‘the teacher talks too much’. It was very noticeable in yesterday’s session that at the beginning, when there was only one tutor (George), participants took the mic a lot more than they did at the end, when both George and Stephen were present and tended to talk to each other.

There’s plenty of research around about teachers talking too much and there has been for many years. The original research showed that teachers are really surprised when they are observed and are given the evidence of exactly how much they do talk. Student teachers also always struggle to see that their job is not so much about their teaching, but about their learners’ learning and that if the focus is on learners’ learning, then the learners need more time to talk, even if this means tolerating silence while learners gather their thoughts.

So what I am saying is that if George and Stephen want more people to speak in Elluminate sessions, then perhaps the way in which the sessions are organised needs a rethink. Again, if they want more participation in the various communication groups, Ning, Facebook, Second Life, blogs, wikis, Moodle, then there might need to be more teacher intervention at the beginning of the course to establish this – but it seems to me that more teacher intervention is the antithesis to what the CCK08 experience (or whatever you want to call it) is all about.

So we are back to the tension between CCK08 being a course and the type of open learning experience it is trying to achieve. Not an easy tension to resolve, and there are no easy answers. Would a discussion about these very issues right at the beginning of the CCK08 course help?

I’ll be very interested to see how the course is run next time round at the end of this year.

A pause for thought

Before starting on this week’s readings – I just want to draw breath a little. Getting the balance between action and reflection is not always easy and I have come to realise that writing a ‘public’ blog does not necessarily equate to reflection.

I have spent the weekend busy on other things (connecting with my life away from this course), but also trying to refocus on what it is I am really interested in. It is so easy, with the wealth of information that is available on this course, to go off at tangents, or to think that because the content has been provided on this course it must be significant and therefore I must spend some time on it, even if it isn’t of direct relevance to my area of interest. It’s difficult to know what to let go. So this course, and perhaps networking in general, and, I think, active reflection, require a degree of self-discipline that needs to be practised – or at least they do for me.

Despite my wanderings and straying off down a multitude of paths, I always come back to the questions -‘How does all this apply to teaching and learning?’ and ‘Do I need to change my current practice?’

I have spent some time today searching for blogs to check whether others have the same questions or interest in teaching and learning, and of course most people do to a degree, because everyone is a learner, but some people stand out for me as being particularly interested in the practicalities of teaching and learning. I’m sure there are more, whom I haven’t yet connected with.

Pierfranco has years of experience, which comes shining through his posts. The fact that learning is chaotic is no surprise to him, but he is still looking for verifiable results.

Carmen’s post is a wonderful description of the complex interactions that take place in a classroom.

Tom gives us a window into his classroom/s, which reinforces for us how unpredictable learning can be.

John’s blog has many posts related to his deep interest in teaching and learning. He asks whether the curriculum should be negotiated between teachers and learners (see his post of Oct 16th).

Dave Pollard has made a great post this week where he writes: ‘How much of what senior people know will never be learned by younger workers, simply because the networks of trust necessary for valuable conversations will not have been forged.’ This is not the first potential digital divide that has cropped up in this course.

Eugene Wallingford in the post Social Networks and the Changing Relationship Between Students and Faculty writes about ‘…. the newly transparent wall between me and my students…’ The question of transparency must be one being considered by many online teachers.

Adrian Hill draws our attention to the need for creativity in teaching and learning and asks how creativity should be understood in connectivist terms.

Lani has made a wonderfully reflective post this week about the potential of thinking about teaching and learning in terms of chaos and complexity.

Matthias always has something thought-provoking to say about teaching and learning: ‘Certainly teaching influences learning in some way, but we don’t really know in which way (deterministic unpredictability), and certainly it is not that simple and controllable that teaching them neat concepts (input) will enable them (output) to make the world neat.’

And Maru (see her post of October 14th) brings up the question of feedback – how do we get the type of feedback that we need for our learning in an online network? This resonates with all my thinking about where assessment fits into a connectivist model, and takes us back round to Pierfranco’s post, where he says he is still looking for verifiable results despite wanting to encourage autonomous and open learning.

So there is still a lot to think about and I have more questions than answers, but I think the biggest question for me (which I have raised in previous posts) still lies around assessment. I think teachers can develop their understanding of teaching and learning and change their practice to meet current learners’ needs, but ultimately their best efforts may be constrained by assessment requirements.

Going with the flow of non-linear learning

I have just read Renata Phelps’ article – Developing Online From Simplicity toward Complexity: Going with the Flow of Non-Linear Learning.

It is interesting from a variety of perspectives and has certainly made me think.

1. I don’t find all aspects of the article very clear. The development of a non-linear course structure is described. The author presents a non-linear curriculum as one that is not presented in a linear format, that can be accessed in a non-linear way by the learners and that is open to choice about how much and what is studied.

2. The article describes the development of a teacher training course – ICT in primary and secondary education. I don’t think enough is made of the fact that the context is ICT education. When talking about non-linear learning, going with the flow, and the idea that the ‘curriculum becomes a process of development rather than body of knowledge to be covered and learned’, the context is important. I suspect that some subjects can have a more flexible curriculum and course structure than others. I’m not so sure how selective a trainee medic can be about curriculum.

3. The article doesn’t really evaluate the success of changing the curriculum from a linear to a more complexity-based model, other than to quote two positive remarks from students. In the 60s it was very fashionable to ‘go with the flow’ in school classrooms in the UK. I remember, on being appointed to a new job and asking for the maths syllabus (so that I would have some idea of what we should cover in the term), being told by the headteacher that they didn’t teach in that way in his school – they followed the children’s interests, so if the children wanted to talk about birds’ nests all week, they could. The very strictly linear National Curriculum was introduced in the UK to combat the massive gaps that were becoming apparent in children’s knowledge as a result of ‘going with the flow’ and ‘discussing birds’ nests for a week’ at the expense of time spent on the 3 Rs. My experience suggests that a curriculum is actually a good thing, so long as you don’t expect learners to learn in a linear way. You only have to observe young children learning mathematics to know that they don’t and won’t.

4. The article then equates learning objectives with domination, control, reductionism and an undermining of emergent learning. I have always thought of learning objectives as being about clarity of forward thinking and about knowing what to assess. I don’t see that learning objectives need to control or undermine emergent learning. Assessment isn’t mentioned in the article, and that seems to me to be a big omission.

5. There is a lot in the article about ‘authentic’ and ‘problem-based’ learning that encourages reflective and self-directed learners. This is not new. Donald Schön’s book on the reflective practitioner was published at least 10 years before this article was written, and my teaching colleagues have been discussing how to encourage learners to become independent, motivated, self-directed and reflective since the 60s – and I’m sure previous generations of teachers did the same.

So although any article which promotes this way of working is welcome, I don’t think the ideas presented in terms of learning are particularly new. However, it is interesting to consider to what extent you want your curriculum to be ‘flexible, open, disruptive, uncertain and unpredictable … accepting … tension, anxiety and problem creating as the norm’.

I would be interested in knowing whether a course structure such as the one described in the article would work for a curriculum such as medicine.

Concept/Mental Mapping

I’m interested that this has been chosen as an assessment tool for this course. If my understanding is correct, people who have signed up to be assessed have to produce a mental/concept map on a fairly regular basis.

I’m interested in this because I first began to explore concept maps at least 15 years ago. At the time I was a primary school teacher (ages 5-11 in the UK) and a science graduate with an interest in developing science knowledge and skills in the children I was teaching. Concept mapping seemed the ideal tool. I read up on Novak, and although this work was never originally intended to be applied to young children, the primary science journals at that time were full of it. So I tried it out on the 5- and 6-year-old children in my class.

I won’t go into how I set it up, but what I found extremely interesting was that it wasn’t the children I expected to be good at it who were. I had some extremely bright boys in my class that year who just could not get their heads around it. And then there was Valerie – the youngest in the class, who wouldn’t say ‘boo to a goose’ and was never noticeable in any other way – who was a complete whizz at concept mapping.

So is Valerie brighter than Ian and Stephen (the bright boys who couldn’t do it)? No, I don’t think so. Valerie just learned, and internally organised her knowledge and information, in a different way. She and the boys simply learned differently.

So this worries me a bit about using concept/mental mapping as an assessment tool, as some people simply don’t learn this way. It doesn’t make them any less able as a learner. They just learn differently.

Where have they been?

I have just finished listening to the UStream session, and the very last 5 or 10 minutes made me prick up my ears. The question was put to SD and GS: give one simple, practical suggestion for implementing connectivism in classrooms (with children). The suggestions were:

  1. Connect classrooms with people around the world.
  2. Encourage children to work together to participate in a real way to produce something real of benefit to society.

Neither of these ideas is new. My first experience of networking across schools was when I was at school myself, in about 1962 or 63, when a group from my school in the North of England linked with a group from a school in London (which in those days might as well have been in a different country) to work on a project. Since then I have experienced this kind of activity many times, both nationally and internationally, and both as a learner and as a teacher. The same is true of working collaboratively on ‘real’ projects to produce a recognisably useful outcome. Interesting, though, that collaborative group work doesn’t seem to have been built into this course. Not yet, anyhow.

No – I think Dave Cormier is much nearer what the change might need to be, and that is a negotiated curriculum. We need to start encouraging children to negotiate their own curriculum. Even this is not new. I remember that at least 15 years ago, when teaching 5- and 6-year-old children, I once started the half term’s work by asking the class to plan their own work for the 8-week period. They were perfectly able to do this and planned a wonderful topic based on a nursery rhyme, in which they were able to say what maths, English, science, geography, etc. we would need to work on that term.

What is new for me – but not completely new – is allowing students to negotiate their assessment. I have done this in the past as well, i.e. asked children to work together to determine assessment criteria and then peer assess, but there has always been a limit to how far I have been able to go with this because of quality assurance standards.

It seems to me that for connectivism to be useful to education, some of the issues surrounding assessment and a negotiated curriculum need to be resolved. In particular, I do believe it is very important to determine whether it can be applied to young children’s education.