A key focus of BEtreat was to discuss Etienne Wenger et al.’s most recent work on value creation in communities of practice.
Wenger, E., Trayner, B., & de Laat, M. (2011). Promoting and assessing value creation in communities and networks: a conceptual framework. Ruud de Moor Centrum.
This was a highlight of the workshop for me. Our discussion focussed on two points:
- Levels at which we see value creation
- The genre of storytelling
So to start we discussed ‘value creation as a donut’. (The slides below are reproduced with the kind permission of Etienne Wenger, Beverly Trayner and Maarten de Laat.)
You can start anywhere in this loop, which means there is no fixed top-down or bottom-up direction. At a certain level of maturity a community takes responsibility for practice and looks forward to strategy, which in turn can influence the community. Communities are responsible to each other and for the domain. If you are not covering the full circle then you are not doing knowledge management, but the points of the ‘donut’ can be covered in any order.
Communities are caught between day-to-day strategy and what they want to achieve. Unlike a team, where the task is defined in advance, in a CoP the narrative evolves and is constantly reviewed. Communities are focussed on capability development rather than on a task.
Value in a CoP can be thought of as value for time, i.e. return on investment. This value can be measured quantitatively through the collection of data, such as that offered by Google Analytics, but also through individual and collective narratives. Individual narratives become part of the collective one. Narratives can represent both what is happening in the current life of the community (ground narratives) and the aspirations of the community. CoPs need to develop narratives of aspiration.
Wenger et al. have suggested that the tension between ‘ground’ and ‘aspirational’ narratives can be explored through five cycles of value creation. These cycles give you a notion of indicators – things that can be measured at that point. The cycles do not necessarily have to be followed in this order, but they should all be considered. As the community matures, it is able to do more.
Cycle 1 – considers the immediate value (activities and interactions) that people get when they enter a community, e.g. having fun. A lot of communities/people stop here.
Cycle 2 – considers the potential value (knowledge capital), i.e. something you get from the CoP that has the potential to change something you do. Knowledge capital can take different forms (see p.20 of the paper).
Cycle 3 – considers applied value (changes in practice). In this cycle stories are collected about how people use knowledge capital to change their practice. It was mentioned that data is most difficult to collect in this cycle.
Cycle 4 – considers realised value (performance improvement) – i.e. the effect of knowledge capital and changes in practice on people outside the CoP – value that can be quantified. This data is often already in the institution.
Cycle 5 – considers reframing value (redefining success). At this stage a CoP may realise that what they have been thinking of as measures of success may need to change – what they are doing might need to change. It may not be enough to realise value in the terms that have been defined. This is where it becomes evident that voices from the ‘bottom’ can change the direction of the community.
As communities mature they are able to do more with the evaluation process, and it moves from evaluation as return on investment to the value of engagement and so on. As communities move through the cycles they have to draw on more qualitative data, but stories are about causality, not about whether data is quantitative or qualitative, and stories are needed from all levels of the framework. In addition, stories can point to indicators just as indicators can point to stories. For example, if a document is known to have been downloaded 19,000 times, this calls for stories – but stories might also point to the need to know how often a document has been downloaded.
The framework (p.25) provides examples of indicators for each cycle and of questions that can be asked to collect data. In the group I was working in at BEtreat we used these to examine the work of communities of school leaders in Singapore, to identify gaps in their data collection and to ask whether their picture of what the communities are achieving is complete. This was a useful and interesting exercise, as the gaps became evident fairly quickly and easily, but we only had time to look at cycles 1 and 2, and from what people at the workshop with experience of using the framework said, the process becomes more difficult with cycles 3, 4 and 5.
My first impression is that this will be a very useful framework for evaluating the work of CoPs, and maybe for thinking about evaluation in general. My big question would be around how to use the data once it has been collected. What I have written here is a brief description of what I heard at the workshop and how we had a go at using the framework – but I wonder whether we can assume that people have the skills to accurately interpret the stories that have been collected. Does accuracy matter – or is the process the key ingredient?