Future directions for the Footprints of Emergence framework

This is the last in a series of five posts written in preparation for an e-learning conference keynote that Roy Williams and I will be giving on September 17 in Graz, Austria.

First slide in presentation

Previous posts relating to this presentation are:

  1. Evaluation of Open Learning Scenarios
  2. Characteristics of Open Learning Environments
  3. Emergent Learning in Open Environments
  4. Theoretical influences on the characteristics of open learning environments

Our research [1] [2] focuses on how learners experience complex, unpredictable, uncertain environments, such as MOOCs, where their learning is likely to be emergent.

Over the last two or three years the amount of research into learning in MOOCs has grown. See, for example, the MOOC Research Initiative reports from the Gates Foundation-funded projects and the proceedings from the European MOOCs Stakeholders Summit 2014.

Some researchers, like George Veletsianos [3], have questioned whether there is enough emphasis in recent research on the ‘learner voice’. This is a question that also concerns us. We believe that it is essential to encourage and listen to the ‘learner voice’ (whoever that learner might be), if we are to understand the epistemic and ontological shifts and transformational learning that can happen in open learning environments.

The Footprints of Emergence framework is a tool [2] (see also previous posts in this series) which can be used by learners to surface the deep, tacit knowledge and understanding that is associated with these transformational shifts in open learning environments such as MOOCs. We are interested in learning more about the impact of open learning environments on these shifts by encouraging learners to be researchers of their own experience.

The Footprints of Emergence framework [2] [4] can be regarded as a probe for evaluating learning in open environments. It engages learners in deep reflection, supports them in taking control of their own reflection and evaluation, can be used to encourage discussion and collaboration between learners, teachers and designers, and can be used to visualise the dynamic changes that occur in learning over time.

A difficulty that we have encountered with the framework and drawing tool is that they require explanation and practice in use, i.e. they require time and effort to engage with – sometimes more time than people have and more effort than people want to make. Our aim is to simplify the process without losing the depth of reflection that the current process leads to. To this end we are, through a colleague, hoping to develop some software that will make the drawing process more straightforward. This would leave the user freer to concentrate on the meaning and use of the factors (see the second post in this series for more information about the factors) and on the interpretation of the final footprint visualisation. Such software would also potentially make it easier to work with larger groups of learners.
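As an indication of what such software might do: a footprint is essentially a set of factor scores plotted around a circle, so the drawing step lends itself to automation. Below is a minimal, hypothetical sketch in Python using matplotlib’s polar axes. The factor names, the 0–30 scale and the single filled polygon are illustrative assumptions rather than the framework’s actual factor set or palette; a real tool would also need the concentric zones that distinguish prescribed from emergent and chaotic learning.

```python
# A minimal, hypothetical sketch of automating the footprint drawing with
# matplotlib. Factor names and the 0-30 scale are placeholders, not the
# published factor set.
import numpy as np
import matplotlib.pyplot as plt

# Illustrative factor scores (one spoke per factor).
factors = {
    "Risk": 22, "Liminality": 18, "Ambiguity": 25, "Unpredictability": 20,
    "Diversity": 27, "Experiential": 15, "Open affordances": 24,
    "Self-correction": 12, "Trust": 17, "Theory of mind": 19,
}

labels = list(factors)
scores = list(factors.values())

# Spread the factors evenly around the circle and close the polygon.
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
angles += angles[:1]
scores += scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, scores, linewidth=2)
ax.fill(angles, scores, alpha=0.3)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels, fontsize=8)
ax.set_ylim(0, 30)  # assumed scale, from prescribed (centre) outwards
ax.set_title("Draft footprint (illustrative scores)")
fig.savefig("footprint.png", bbox_inches="tight")
```

The rendering is the easy part; the value of the exercise lies in choosing and justifying the scores, which is exactly the reflection we want learners to keep.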

Such a development would enable us to focus on the meaning of evaluation of learning in open learning environments, a question that has been challenging us for some time. If learning in these environments is emergent, surprising and unpredictable, how can we ‘capture’ it and value it? The common response in current MOOC research has been to try to scale up traditional assessment methods through the use of big data, automated assessment or peer review. Our current thinking is that new paradigms such as open learning may require new ways of thinking about assessment. The Footprints of Emergence framework enables a move away from traditional approaches and puts the emphasis on reflection and self-assessment. This aligns with the view expressed recently by Stephen Downes [5] that we need to move beyond assessment [6] as we know it and put it in the hands of learners.

To summarise the directions in which we are moving, we are interested in:

  1. Exploring further the characteristics of open learning environments that result in transformative learning
  2. Increasing our understanding of how learners learn in open learning environments
  3. Finding new approaches which go beyond assessment and put learning and assessment in the control of the learner
  4. Exploring the notion of probes for assessment and learning design
  5. Developing the footprints of emergence drawing tool so that it can be used more easily with larger groups of learners.

References

  1. Williams, R., Karousou, R. & Mackness, J. (2011) Emergent Learning and Learning Ecologies in Web 2.0. International Review of Research in Open and Distance Learning. Retrieved from: http://www.irrodl.org/index.php/irrodl/article/view/883
  2. Williams, R., Mackness, J. & Gumtau, S. (2012) Footprints of Emergence. International Review of Research in Open and Distance Learning, Vol. 13, No. 4. Retrieved from: http://www.irrodl.org/index.php/irrodl/article/view/1267
  3. Veletsianos, G. (2014) ELI 2014, learner experiences, MOOC research, and the MOOC phenomenon. Retrieved from: http://www.veletsianos.com/2014/02/10/mooc-research-mooc-phenomenon/
  4. Footprints of Emergence open wiki – http://footprints-of-emergence.wikispaces.com/
  5. Downes, S. (2014) Beyond Assessment – Recognizing Achievement in a Networked World. Keynote, 12th ePortfolio, Open Badges and Identity Conference, University of Greenwich, Greenwich, UK, 11 July 2014. Retrieved from: http://www.downes.ca/presentation/344
  6. Mackness, J. (2014) Blog post – Beyond Assessment – Recognizing Achievement in a Networked World. Retrieved from: https://jennymackness.wordpress.com/2014/07/13/beyond-assessment-recognizing-achievement-in-a-networked-world/

Evaluation of Open Learning Scenarios


In September Roy Williams and I will be giving the keynote for this conference in Graz, Austria, at the invitation of Jutta Pauschenwein and her colleagues. The title of the conference, for those who do not speak German, is Evaluation of Open Learning Scenarios.

The title of our keynote is:

Surfacing, sharing and valuing tacit knowledge

This is the first blog post in a series that we hope to write between now and September 17th. The aim is that these posts will act as advance organizers. We know from experience that some of the ideas that we will discuss in our presentation need more time and reflection to take in than will be possible at the conference itself. We also know that we won’t have time at the conference to cover everything we have thought about in relation to this presentation and all the work we have done on the Footprints.

This is a small annual conference (usually about 100 people). Last year the conference topic was very popular – Learning with Videos and Games; 150 delegates attended.

Jutta has told us that this is the 13th year this conference has been offered. It attracts a loyal group of delegates – university teachers, school teachers and company trainers from Austria, Germany and Switzerland, some of whom attend year after year. She has also told us that, unlike many of the German-speaking conferences, which focus on scientific articles and presentations, this conference takes a more pragmatic approach and attracts an audience who ‘want to know how to do something’. Jutta has therefore invited us to speak about how we use our work on Footprints of Emergence to evaluate learning in open learning environments. She herself has been using our Footprints of Emergence drawing tool extensively since 2012.

Jutta and her colleagues recently used the Footprints for an assignment in their MOOC – Competences for Global Collaboration (cope14) – and have often used them in their work. Jutta blogs about them and has, with her colleagues, written articles and presented papers at conferences that make reference to the Footprints.

The conference presenters will also submit papers for review. Here is the programme for the conference – Programme for Graz e-Learning Conference

… and here is the abstract of our paper:

Surfacing, sharing and valuing tacit knowledge in open learning

Roy Williams

Jenny Mackness

Abstract:

This paper is situated within the paradigm of open, emergent learning, which exploits the full range of social and interactive media, and enables independent initiative and creativity. Open, emergent environments change the way we experience learning, and this has implications for the way we design and manage learning spaces, and describe and analyse them. This paper explores the ways we have engaged with these issues, as participants, designers, researchers, and as facilitators, and how we have reflected on, visualized, shared, and valued the rich dynamics of collaborative discovery. In particular, we explore how emergent learning can be enabled by using uncertain probes rather than predictable outcomes, by emphasizing tacit rather than explicit reflection, and by seeking ways to give the learners back a real voice in a collaborative conversation about the value of learning and teaching.

Key words: probes, Footprints, emergent learning, tacit knowledge, MOOCs

This paper will ultimately be published, along with all the other papers, in an open e-book. For last year’s e-book see the FH/Joanneum Website.

I don’t know how often the keynote for this conference has been given in English. Unfortunately neither Roy nor I speak German, but we welcome comments on this blog in either German or English. Most of the papers for the conference will be presented in German, but Jutta and I will run a workshop at the end of the day in both German and English.

It goes without saying that we are very much looking forward to meeting Jutta and all her colleagues and are grateful for this opportunity to present our work in Austria.

Evaluating and reflecting on OldGlobeMOOC

The Old Globe MOOC has formally ended. Sarah Kagan has sent out her ‘wrap-up’ email and the final peer reviews for Assignment 6 are in (I passed!). To evaluate and reflect on the learning experience of a colleague and myself in the Old Globe MOOC, I have used the Footprints of Emergence framework, developed by Roy Williams, Simone Gumtau and myself, to explore the relationship between open and prescribed learning in open learning environments. Details of the framework are published here, and below are the first drafts of two footprints for the OldGlobe MOOC. Further reflection might result in changes to these initial responses.

Figure 1 reflects my own experience of Old Globe. Figure 2 reflects the experience of my colleague. Details of the scoring of the Footprints with associated comments are posted in two documents below the Figures.

Old Globe Footprints 1 and 2

Footprint scoring sheet for Figure 1 – first draft

Footprint scoring sheet for Figure 2 – first draft

As would be expected, I did not experience Old Globe in the same way as my colleague. Figure 2 (my colleague’s footprint) shows that for this participant there was plenty of ‘sweet’ emergent learning in Old Globe and nothing problematic. The course was experienced as an open, interactive environment with plenty of opportunity for developing personal capabilities and for exploring, articulating and networking personal ideas and feelings. For my colleague, OldGlobe was the sixth Coursera MOOC.

For myself, the footprint shows that I experienced more tension between prescription and emergence than did my colleague. This might be due to the fact that this was my first Coursera MOOC and that my reflections are influenced by past experience with the many connectivist MOOCs which I have participated in, researched and, in one case, helped to design and run. In comparison, Old Globe felt like a much safer, less disruptive experience than the connectivist MOOCs.

These are some of my take-aways from the Old Globe MOOC.

Strengths of the course:

1. The design of the MOOC and structure of the syllabus. There was little designated content. There were weekly video interviews with experts on the topics but participants were left to suggest video resources and readings. Early in the course, one or two participants bemoaned the lack of suggested readings, but I surprised myself by how much I have learned about ageing around the globe in the last couple of months, simply by watching weekly webcasts, participating in the discussion forums and completing six assignments. It wasn’t until the end of the course that I fully realized this.

2. The design of the assignments, which each week asked us to respond to a different question. We were encouraged to be creative in the way we responded, and the responses were then peer reviewed. This simple approach to assignments meant that anyone at any level, from a teenager to an octogenarian, could complete them. Reviewers were asked to be generous in their feedback. The point was to engage and try to answer the question as creatively as possible, rather than to produce an academic piece of work.

3. The diversity was wonderful to experience. Over 9000 people registered for the course, and over 6000 remained active throughout the six weeks, with 700-800 posting to the forums. It would be interesting to know how many people completed assignments.

4. Leadership. The MOOC was led (but not dominated) by a strong and impressive team, who were sympathetic and responsive to participants’ concerns. Two changes were made to the assessment requirements in response to participant concerns, and polite, respectful interaction was very effectively modeled by the course leaders. This was important given the diversity of the participant group and the sensitivity of the subject matter. The weekly emails from Sarah Kagan, which pulled together key points from the week and the discussion forums, were very helpful and quite an achievement. I was impressed!

What would I change?

I would like to do away with the scoring of assignments, but would that mean that people wouldn’t bother to do the peer reviews? For some participants the certificate seems all-important – more important than the learning experience?

Personally I would prefer shorter webcasts. I think it would be possible to cover the same content, but perhaps in three chunks of 20 minutes each.

I would have the final webcast on the final day of the final week, to give more of a sense of celebration and closure.

Anonymous posting has caused a few problems, since those who have wanted to be aggressively critical have resorted to it (though very few have done so). But I can also see the advantages of being able to post anonymously on very sensitive subjects. I’m not sure how this can be resolved.

I wonder if there is a way to allow for greater participant interaction in the ‘live’ sessions. I didn’t feel that I got to know any of the participants on this course, whereas I am still working with people that I met on the first MOOC in 2008.

But overall I wouldn’t change much. My perception is that Old Globe was a very successful MOOC.

There will be a survey to complete in due course, which will hopefully confirm this success, but in the meantime, Sarah has asked us to post a description to Facebook of ‘how you used OldGlobe, with whom you shared it, what you’ve talked with others about, and perhaps even what projects, programs, or connections came of it for you’.

Congratulations to the Old Globe team!

#fslt12 Final Week – Microteaching

This week the focus is teaching and the evaluation of teaching.

This #fslt12 course is based on a course which runs face-to-face at Oxford Brookes University. The First Steps course is an element of the Oxford Centre for Staff and Learning Development’s (OCSLD) HEA-accredited Postgraduate Certificate in Teaching in Higher Education (PCTHE).

#fslt12 has been aimed at new lecturers, people entering higher education teaching from other sectors and postgraduate students who teach. But in true MOOC spirit we have also had some very experienced ‘teachers’ join us who have openly shared their experience.


In the face-to-face course the key activity is to ‘microteach’ – i.e. teach a short 10-minute session to a small group of peers and receive feedback from that group. In order to ensure alignment between the face-to-face course and what is offered online, we are trying out this activity in #fslt12. On Wednesday and Friday of this week, #fslt12 participants will showcase in the live sessions the teaching sessions they have prepared, and receive feedback from their peers.

Click here to enter the Blackboard Collaborate room. (See time zones below)

Wed 20 June – Check your time zone

Fri 22 June – Check your time zone

I will be able to reflect further on this activity at the end of this week, but it has already raised some interesting challenges. These include:

  • feelings of exposure. I think it’s fair to say that it’s one thing to practise your teaching in front of a small face-to-face group, but quite another to practise openly online in front of anyone and everyone
  • 10 minutes. This will also be a challenge face-to-face, but how do you demonstrate your teaching skills in just 10 minutes?
  • technology. I also think it would be fair to say that, however this activity is presented, it will involve a greater degree of technology than its face-to-face equivalent.

Finally, this activity also demands the skills of evaluation from those involved in peer review.

Greg Benfield from Oxford Brookes University has provided some excellent resources this week, which include two audio-visual presentations in which he introduces the topic of evaluation, references to key readings, and some sample videos for us to use to try out our evaluation skills.

The microteaching activities are beginning to be posted, both by participants who are being assessed and by others, and we expect some more over the next few days. Have a look in the Moodle wiki and on people’s blogs.

It promises to be another interesting week.

Value Creation in Communities of Practice

A key focus of BEtreat was to discuss Etienne Wenger et al.’s most recent work on value creation in communities of practice.

Wenger, E., Trayner, B. & de Laat, M. (2011) Promoting and assessing value creation in communities and networks: a conceptual framework. Ruud de Moor Centrum. Retrieved from: http://wenger-trayner.com/resources/publications/evaluation-framework

This was a highlight of the workshop for me. Our discussion focussed on two points:

  1. Levels at which we see value creation
  2. The genre of storytelling

So to start we discussed ‘value creation as a donut’. (The slides below are reproduced with the kind permission of Etienne Wenger, Beverly Trayner and Maarten de Laat.)

You can start anywhere in this loop, which means that there is no top-down or bottom-up ordering. At a certain level of maturity a community takes responsibility for practice – and is forward-looking to strategy, which in turn can influence the community. Communities are responsible to each other and for the domain. If you are not covering the full circle then you are not doing knowledge management, but the points of the ‘donut’ can be covered in any order.

Communities are caught between day-to-day strategy and what they want to achieve. Unlike in a team, where the task is defined in advance, in a CoP the narrative evolves and is constantly reviewed. Communities are focussed on capability development rather than on a task.

Value in a CoP can be thought of as value for time, i.e. return on investment. This value can be measured quantitatively, through the collection of data such as that offered by Google Analytics, but also through individual and collective narratives. Individual narratives become part of the collective one. Narratives can represent both what is happening in the current life of the community (ground narratives) and the aspirations of the community. CoPs need to develop narratives of aspiration.

Wenger et al. have suggested that the tension between ‘ground’ and ‘aspirational’ narratives can be explored through five cycles of value creation. These cycles give you a notion of indicators – things that can be measured at that point. The cycles do not necessarily have to be followed through in this order, but they should all be considered. As the community matures, it is able to do more.

Cycle 1 – considers the immediate value (activities and interactions) that people get when they enter a community, e.g. having fun. A lot of communities/people stop here.

Cycle 2 – considers the potential value (knowledge capital), i.e. something you get from the CoP that has the potential to change something you do. Knowledge capital can take different forms (see p.20 of the paper).

Cycle 3 – considers applied value (changes in practice). In this cycle stories are collected about how people use knowledge capital to change their practice. It was mentioned that data is most difficult to collect in this cycle.

Cycle 4 – considers realised value (performance improvement) – i.e. the effect of knowledge capital and changes in practice on people outside the CoP – value that can be quantified. This data is often already in the institution.

Cycle 5 – considers reframing value (redefining success). At this stage a CoP may realise that what they have been thinking of as measures of success may need to change – what they are doing might need changing. It may not be enough to realise value in the terms that have been defined. This is where it becomes evident that voices from the ‘bottom’ can change the direction of the community.

As communities mature they are able to do more with the evaluation process, and the process moves from evaluation as return on investment to the value of engagement and so on. As communities move through the cycles they have to touch on more qualitative data, but stories are about causality, not about whether data is quantitative or qualitative, and stories are needed from all levels of the framework. In addition, stories can point to indicators just as indicators can point to stories – so, for example, if a document is known to have been downloaded 19,000 times, then this calls for stories, but stories might also point to the need to know how often a document has been downloaded.
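To make the pairing of indicators and stories concrete, below is a minimal sketch of how a community might record its evidence against the five cycles. The cycle names follow Wenger, Trayner and de Laat; everything else – the data structure, the gap check and the example entry – is invented for illustration and is not part of the published framework.

```python
# A minimal sketch of recording evidence against the five value creation
# cycles. Cycle names follow Wenger, Trayner and de Laat (2011); the data
# structure, gap check and example entry are invented for illustration.
from dataclasses import dataclass, field

CYCLES = [
    "Cycle 1: immediate value (activities and interactions)",
    "Cycle 2: potential value (knowledge capital)",
    "Cycle 3: applied value (changes in practice)",
    "Cycle 4: realised value (performance improvement)",
    "Cycle 5: reframing value (redefining success)",
]

@dataclass
class CycleEvidence:
    cycle: str
    indicators: list = field(default_factory=list)  # quantitative measures
    stories: list = field(default_factory=list)     # narrative evidence

    def gaps(self):
        """Report which kind of evidence is still missing for this cycle."""
        return [kind for kind, items in
                (("indicators", self.indicators), ("stories", self.stories))
                if not items]

evidence = [CycleEvidence(c) for c in CYCLES]
# An indicator that 'calls for stories', as in the downloads example above.
evidence[1].indicators.append("Framework document downloaded 19,000 times")

for e in evidence:
    print(e.cycle, "- missing:", ", ".join(e.gaps()) or "nothing")
```

Even a crude tally like this makes gaps visible quickly, which matches our experience of the gap-finding exercise described below.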

The framework (p.25) provides examples of indicators for each cycle and of questions that can be asked to guide the collection of data. In the group I was working in at BEtreat, we used these to examine the work of communities of school leaders in Singapore, to identify gaps in their data collection and to ask whether their picture of what the communities are achieving is complete. This was a useful and interesting exercise, as the gaps became evident fairly quickly and easily, but we only had time to look at cycles 1 and 2, and, from what people at the workshop with experience of using the framework said, the process becomes more difficult with cycles 3, 4 and 5.

My first impressions are that this will be a very useful framework for evaluating the work of CoPs and maybe for thinking about evaluation in general. My big question would be around how to use the data once it has been collected. What I have written here is a description of what I heard (in brief) at the workshop and how we briefly had a go at using the framework – but I wonder whether we can assume that people have the skills to accurately interpret the stories that have been collected. Does accuracy matter – or is the process the key ingredient?

#PLENK2010 More thoughts about evaluation and assessment

I have to admit to being confused about exactly what the focus of this week of the PLENK course has been. The title of the week has been ‘Evaluating Learning in PLE/Ns’, but the language used has been very confusing, starting with the words assessment and evaluation being used interchangeably – a concern which, I was relieved to see from Heli’s comment, I am not alone in having. And then in today’s Elluminate session there was a lot of ‘talk’ about learning outcomes, when sometimes it seemed to me that what was being talked about was learning objectives.

I have also not been clear about whether the focus is on evaluating PLEs/PLNs and therefore this PLENK as a learning environment, or on individual participants’ learning within this environment, which are two different things and would require different processes despite being linked.

Then there is the confusion about whether we are talking about people assessing their own learning within the PLENK course or whether we are trying to decide whether it is possible to assess people in these environments – again two different processes.

The question was asked, ‘How do I know that I am making progress/have learned something?’ It would be interesting to collect people’s thoughts about that. My own quick response would be that I often don’t know until some time after the event, and recognising that I have made progress/learned something is very context-dependent, so for me there is no one answer to this question.

Also for me an important question from this week is whether or not personal learning environments enhance learning – it’s interesting to consider how this might be measured.

Another interesting question (not related directly to the assessment issue), which was raised in today’s Elluminate session, is ‘How do you get the balance right between providing course structure and allowing students the type of freedom that is characteristic of a MOOC?’ I loved the ‘Roots and Wings’ metaphor that someone posted – sorry not to have noted the name so as to be able to attribute it correctly.

And then Heli threw down the gauntlet in her comment in response to my last blog post:

Almost all questions are open .. are we afraid of measuring because we want to be up-to-date and postmodern or ..?

Now there’s an interesting question. My own feeling is that there does seem to be a tendency to move away from measurement (e.g. in the UK there has been resistance by teachers of young children to using standardised tests), mainly because it is so difficult to get the measures right. This was implicit in the YouTube video that was posted in the Elluminate session tonight – RSA Animate – Changing Education Paradigms – but I don’t think this is because we are afraid; it is more that traditional modes of assessment just don’t seem to fit with learning that takes place in distributed personal learning networks, particularly if the learning is taking place in a MOOC.

Finally, my own thinking about assessment of learners in traditional settings has been influenced by Gibbs and Simpson’s article: Gibbs, G. & Simpson, C. (2004) ‘Conditions under which assessment supports students’ learning’, Learning and Teaching in Higher Education, 1, 3-31. http://resources.glos.ac.uk/shareddata/dms/2B70988BBCD42A03949CB4F3CB78A516.pdf

… but if learning is to take place in distributed networks and we want this learning to be accredited then how do we apply Gibbs and Simpson’s advice? Do we need to – or should we be trying to think outside the box and swim against the tide?