Crafting Research

The seminar on crafting research that I attended last week was very interesting. It was organised by the Department of Organisation, Work and Technology at Lancaster University, UK, and delivered by Professor Hugh Willmott from City University London. Hugh Willmott has been working with Professor Emma Bell from the Open University. His talk was based on a paper they are working on, in which they explore the significance of crafting research in business and management, although, having heard the talk, the ideas presented seem relevant to social sciences research in general.

The essence of the work lies in an interest in how to produce well-crafted research and avoid Baer and Shaw’s (see reference list) criticism:

As editors, we are often surprised by the lack of “pride and perfection” in submitted work, even when there is a kernel of a good idea somewhere in the manuscript. Submitted manuscripts that report results from research designs in which many shortcuts have been taken are rather commonplace. In addition, many papers seem to have been hastily prepared and submitted, with obvious rough edges in terms of grammar and writing style.

In their article Baer and Shaw quote C.W. Mills as follows:

Scholarship is a choice of how to live as well as a choice of career; whether he knows it or not, the intellectual workman forms his own self as he works toward perfection of his craft; to realize his potentialities, and any opportunities that come his way, he constructs a character which has at its core the qualities of the good workman. —C. W. Mills, 1959

The seminar started with a look at the Online Etymology Dictionary, where we can see that the meaning of the word craft has, over time, shifted from ‘power, physical strength, might’ to ‘skill, dexterity’.

The thrust of the argument made was that researchers should shift towards being craftsmen who are dedicated to the community, have a social conscience and are aware of and acknowledge the ethical and political dimensions of their research. Such an approach would also openly acknowledge uncertainty and bias in research and the role of embodiment and imagination.

The image that ran through this presentation was Simon Starling’s art work ‘Shedboatshed’.

Starling was the winner of the Turner Prize in 2005. For this work he dismantled a shed and turned it into a boat; loaded with the remains of the shed, the boat was paddled down the Rhine to a museum in Basel, dismantled and re-made into a shed. See http://www.tate.org.uk/whats-on/tate-britain/exhibition/turner-prize-2005/turner-prize-2005-artists-simon-starling for further information.

I’m not sure that I fully understand the significance of using Starling’s work in relation to crafting research unless it’s that his work has been described as research-based and clearly involves research and craft. Maybe it’s simply that Starling deconstructs familiar things to recreate them in different forms?

Interestingly, in the questions that followed the seminar, the thorny issue of having to write in a prescriptive way to be accepted in high-ranking journals was discussed. Some members of the audience seemed to accept this as a given constraint which cannot be surmounted, i.e. we need to write and present research in a way which will not only be accepted by the given journal, but will also meet the requirements of the University’s REF. For some of the seminar participants there seemed to be no room for embodied, imaginative research which embraces uncertainty. My suggestion that we should perhaps look for alternative publishing outlets, blogs being one example, was met with an outcry of protest from one or two in the audience. ‘No one reads blogs,’ they said, and besides, ‘Blogging is cowardly.’ I neither understood this nor agreed. The conversation seemed to endorse these sentences from Baer and Shaw’s paper:

Our goal was to reaffirm the notion that scholarly pursuit in the management sciences is a form of craftsmanship—we are craftsmen! Some may dismiss our arguments as idealistic or romantic. The realities of life as an academic, the pressures we are under—to publish in order not to perish—offer an all-too-convenient excuse to dismiss our ideas.

What a sorry state of affairs. I do know from experience that many journals are not prepared to take a chance on non-conventional styles of presentation; Introduction, Literature Review, Method, Results, Conclusion remains the format most likely to be accepted, and to suggest that the research endeavour might have failed, or that there is a degree of uncertainty around the results, is unlikely to lead to a favourable response. It seems there’s a long way to go before the idea of crafting research in the terms presented by Hugh Willmott is widely accepted.

A wide range of literature was referred to in this seminar, which will be interesting to follow up on. See the references below.

References

Adamson, G. (2013). The Invention of Craft. Bloomsbury Academic.

Alley, M. (2018). The Craft of Scientific Writing (4th ed.). Springer.

Baer, M. & Shaw, J.D. (2017). Falling in love again with what we do: Academic craftsmanship in the management sciences. Academy of Management Journal, 60(4), 1213–1217.

Bell, E., Kothiyal, N. & Willmott, H. (2017). Methodology-as-Technique and the Meaning of Rigour in Globalized Management Research. British Journal of Management, 28(3), 534–550.

Burrell, G. & Morgan, G. (1979). Sociological Paradigms and Organisational Analysis: Elements of the Sociology of Corporate Life. Heinemann.

Cunliffe, A. (2010). Crafting Qualitative Research: Morgan and Smircich 30 Years On. Organizational Research Methods, OnlineFirst.

Delamont, S. & Atkinson, P.A. (2001). Doctoring uncertainty: mastering craft knowledge. Social Studies of Science, 31(1), 87-107.

Frayling, C. (2017). On Craftsmanship: Towards a New Bauhaus (reprint ed.). Oberon Books.

Kvale, S. & Brinkmann, S. (2008). InterViews: Learning the Craft of Qualitative Research Interviewing (2nd ed.). SAGE Publications.

Wright Mills, C. (2000). The Sociological Imagination. Oxford University Press.

#SOCRMx Week 4: ‘half-way’ tasks and reflections


This post should have been made at the end of last week. We are now at the end of Week 5 and I am attempting to catch up.

We are now half-way through this 8-week Introduction to Social Research Methods course. I continue to be impressed by the content, but the course doesn’t really lend itself to much discussion. I am grateful that it is open and that I have access to the excellent resources, but the course has been designed for Edinburgh University Masters and PhD students – the rest of us must fit in where we can.

There are two tasks for Week 4. I have completed one – rather hurriedly – but will report on both.

The first task for Week 4 was to consider one of the research methods we explored in Weeks 2 and 3 and answer the following questions in a reflective blog post.

  • What three (good) research questions could be answered using this approach?
  • What assumptions about the nature of knowledge (epistemology) seem to be associated with this approach?
  • What kinds of ethical issues arise?
  • What would “validity” imply in a project that used this approach?
  • What are some of the practical or ethical issues that would need to be considered?
  • And finally, find and reference at least two published articles that have used this approach (aside from the examples given in this course). Make some notes about how the approach is described and used in each paper, linking to your reflections above.

So far, I have explored the resources related to Surveys, Working with Images, Discourse Analysis and Ethnography. All have been extremely useful and I have written posts about the first three. I will move on to Interviews next and hope to explore the remaining methods (Focus groups, Experimental interventions, Social network analysis and Learning Analytics) before the end of the course.

I have decided not to do this week’s reflection task which requires answering the questions above. For me these questions will be useful when I am working on an authentic research project, but I don’t want to spend time working through them for a hypothetical project. As I mentioned in a previous post, I tend to work backwards on research, or at least backwards and forwards, i.e. I get immersed and see what happens rather than plan it all out ahead of time. That doesn’t mean that the questions above are not important and useful (they are), but for me they are ongoing questions rather than up-front questions. This approach to research doesn’t really fit with traditional Masters or PhD research. I did do a traditional Masters but felt I was ‘playing the game’ in my choice of dissertation topic. My PhD by publication was a much better fit with the way I work, but even that was playing the game a bit! My independent research has never felt like ‘playing the game’. It has always stemmed from a deep personal interest in the research question.

The second task for Week 4 was to review a “published academic journal article, and answer a set of questions about the methods employed in the study”. I have completed this task, but not submitted it for assessment, since I am not doing this course for assessment. The assessment is a set of multiple-choice questions.

At this point it’s worth mentioning that there are a lot of multiple-choice quizzes in this course and that I am hopeless at them! I rarely get a full score, although I think I have answered these Week 4 task questions correctly. Most of the quizzes in this course allow you to have multiple attempts, and sometimes I have needed them. Thank goodness for a second computer monitor, where I can display the text being tested while trying to answer the quiz. Having two monitors is essential to the way I work, and even more essential for my research work. I’m not sure that multiple-choice quizzes do anything for my learning, other than confirm that I have completed a section. I would prefer an open, controversial question for discussion, but in this course there is so much content to cover that there would be no time for this.

But again, some excellent resources have been provided for this week. Particularly useful is the reference to this open textbook: Principles of Sociological Inquiry – Qualitative and Quantitative Methods, with specific reference to Sections 14.1 and 14.2.

I am copying this helpful Table (from the open textbook) here for future reference: Table 14.2 Questions Worth Asking While Reading Research Reports

  • Abstract: What are the key findings? How were those findings reached? What framework does the researcher employ?
  • Acknowledgments: Who are this study’s major stakeholders? Who provided feedback? Who provided support in the form of funding or other resources?
  • Introduction: How does the author frame his or her research focus? What other possible ways of framing the problem exist? Why might the author have chosen this particular way of framing the problem?
  • Literature review: How selective does the researcher appear to have been in identifying relevant literature to discuss? Does the review of literature appear appropriately extensive? Does the researcher provide a critical review?
  • Sample: Was probability sampling or nonprobability sampling employed? What is the researcher’s sample? What is the researcher’s population? What claims will the researcher be able to make based on the sample? What are the sample’s major strengths and major weaknesses?
  • Data collection: How were the data collected? What do you know about the relative strengths and weaknesses of the method employed? What other methods of data collection might have been employed, and why was this particular method employed? What do you know about the data collection strategy and instruments (e.g., questions asked, locations observed)? What don’t you know about the data collection strategy and instruments?
  • Data analysis: How were the data analyzed? Is there enough information provided that you feel confident that the proper analytic procedures were employed accurately?
  • Results: What are the study’s major findings? Are findings linked back to previously described research questions, objectives, hypotheses, and literature? Are sufficient amounts of data (e.g., quotes and observations in qualitative work, statistics in quantitative work) provided in order to support conclusions drawn? Are tables readable?
  • Discussion/conclusion: Does the author generalize to some population beyond her or his sample? How are these claims presented? Are claims made supported by data provided in the results section (e.g., supporting quotes, statistical significance)? Have limitations of the study been fully disclosed and adequately addressed? Are implications sufficiently explored?

Finally, some of the course participants have completed the first task and posted their reflections on their blogs.

Now to see if I can make a start on Week 5 which finished today!

#openedMOOC Week 5: Research on OER Impact and Effectiveness


I have not done any research into OER Impact and Effectiveness, and I don’t, in my career as a teacher, remember ever relying heavily on a textbook that students would have to buy. I do remember having to buy them myself during my own undergraduate studies, but that was back in the mid-1960s. When I was teaching in Higher Education in the late 1990s and early 2000s, we would recommend textbooks which the students could buy if they wished, or take out of the library (we tried to ensure multiple copies were in the library), but mostly we wrote our own materials and gave students handouts in the sessions. I must have written many textbooks’ worth of handouts during my career. It never occurred to us at that time to share these online, but even if we had wanted to, it would not have been possible, because all the materials we produced belonged to the institution. Of course, we also referred students to open online sites where they could explore further materials and dig deeper. So I haven’t had a lot of experience of this heavy reliance on expensive textbooks for teaching, although in my own research I have had difficulty accessing materials behind paywalls (see below).

It seems that, at least in the US, there is this reliance on expensive textbooks, and that explains the push for further research that David Wiley talks about in this week’s video. He tells us that whereas in the early days of this research the focus was on surveys, finding out which OERs were being used and what happens when you use OERs, there is now a need for more nuanced research into what difference they make to student outcomes. According to David Wiley, research into OER adoption is still at an early stage and there is a need for further research into how OERs are produced, and how they are used in teaching.

Stephen Downes in his video for this week once more gets to the nub of the issue when he questions what we mean by impact and effectiveness. He tells us that research has shown that the medium makes no difference to student outcomes, i.e. it makes no difference whether the student learning environment is open or closed. The obvious difference that OERs will make to the student is cost.

As an aside here, from my own perspective, I doubt I would be a researcher if there weren’t OERs. I remember when we submitted our first paper on the CCK08 learner experience in 2009, the reviewers criticised the number of blog posts we referenced. There were two reasons we referenced blog posts: first, at that time there were no research papers on MOOCs to reference; second, even if there had been, if they were in closed journals (and there were not many open access journals back then), as independent researchers we would not have been able to access them. I still have these issues, particularly in relation to books, which I often cannot afford; there are not enough open access e-books.

Returning to Stephen’s video, the point where I think he really nails it is his discussion of what we mean by impact. He thinks, and I agree, that impact is more than grades, graduation and course completion. For him, we should be looking at a person’s ability to:

  • Play a role in society
  • Live a happy and productive life
  • Be healthy
  • Engage in positive relationships with others
  • Live meaningfully
  • Have a valuable impact as seen through their own eyes and through the eyes of society

He asks how using open content changes any of these. If OERs are only used by the teacher then there won’t be much change. He says open is how you do things: open is when you share how you work with other people; open is when you take responsibility for ensuring that knowledge is carried forward into the next generation. This is the long-term impact of your value and worth in society. Stephen asks where the research on this is – we need research on how open resources help society. This seems to me like the big picture and quite a challenge for research.

#openedmooc participants have responded to this week’s resources in different ways.

Matthias Melcher has questioned what we mean by research effectiveness in his blog post for this week.

Geoff Cain also reminds us not to forget the role of Connectivism and the role of history in open education (see also Martin Weller’s recent post; Katy Jordan has done some amazing work in relation to this – I wish I had her technical skills!).

Merle Hearns has done a great job of commenting on this video, A review of the Effectiveness & Perceptions of Open Educational Resources as Compared to Textbooks, and further discusses Martin Weller’s paper The openness-creativity cycle in education, as well as sharing her own work.

Benjamin Stewart keeps plugging away asking for a more critical perspective, both in his tweets, e.g. https://twitter.com/bnleez/status/924735752336039936 and in his blog.

For more blog posts, see the course site.

Finally there are some great resources provided this week, which I have copied here for future reference. These are links to publication lists. For anyone doing research into OER, they would be a great help.

#SOCRMx: Week 4 – Discourse Analysis


In Week 4 the Introduction to Social Research Methods course requires participants to move on and 1) reflect on a chosen method, and 2) test our ability to identify specific information about methods in a given research paper. I hope to get round to this, but I am behind and not ready to do it yet. I still want to explore some of the methods that I haven’t yet had time to engage with, and take advantage of the resources provided.

In this post I will share my notes from watching Sally Wiggins’ video introducing Discourse Analysis. I have not attempted to complete the associated task, or to synthesise the other resources and information provided by the course. There are many more resources in the Week 2/3 materials on the course site. And some participants have tackled this as a course task. See for example these blog posts:

http://lizhudson.coventry.domains/general-blog-posts/research-method-option-1-discourse-analysis/

https://screenface.net/week-3-socrmx-discourse-analysis/

http://www.brainytrainingsolutions.com/discourse-analysis-facebook-conversation/#.WfL87hNSxTY

http://focusabc.blogspot.co.uk/2017/10/discourse-analysis-in-focus-example.html

Discourse analysis is not a method I have used, but it seems to be relevant to the research I have done and my interests.

My notes

Discourse analysis is a method for analysing qualitative data in the form of talk and text. It works from the premise that talk constructs rather than reflects reality: talk is a form of social action, and talk and writing are never neutral.

Sally Wiggins in her video introducing discourse analysis tells us there are 5 types:

  1. Conversation analysis
  2. Discursive psychology
  3. Critical discursive psychology
  4. Foucauldian discourse analysis
  5. Critical discourse analysis

She explained that conversation analysis and discursive psychology approaches look at the detail of discourse (with a zoom lens), whilst critical discursive psychology and Foucauldian discourse analysis are interested in a broader perspective (wide angle lens). Critical discourse analysis is between these two. Before using discourse analysis as a method, we must decide which lens to use.

Conversation analysis (CA): uses tape recorders and other technologies to capture the detail of conversation. All aspects are captured, including body language, to explore how social interactions work. CA is all about illuminating the things we take for granted, all those intricate everyday social actions, and exploring them in great detail.

Discursive psychology (DP): examines the detail of interaction but also explores issues such as identities, emotions and accountabilities. Like CA it uses technologies, such as video, to record interactions, but uses them to explore how psychological states are invoked.

Critical discursive psychology (CDP): seeks a perspective which is somewhere between the zoom and wide angle lenses, blending the detail of interaction with broader social issues. It can’t be reduced to a line-by-line analysis, but instead examines patterns in the data in terms of culturally available ways of talking (interpretative repertoires). It explores familiar ways of talking about issues that shape and structure how we understand concepts in a particular culture. It uses interviews and focus groups to explore everyday, common-sense ways of understanding issues as they are produced in everyday talk.

Foucauldian discourse analysis (FDA): emerged from post-structuralism. It takes a wide angle perspective on how discourses are connected to knowledge and power. It draws on textual and visual material, such as advertisements, as well as conversations, interviews and focus groups. FDA is interested in the implications of discourse for our subjective experience, how discourse and knowledge change over time, and how this affects people’s understanding of themselves.

Critical discourse analysis (CDA): takes a wide angle perspective and is the most critical form of discourse analysis. Its foundations lie in critical linguistics, semiotics and sociolinguistics. CDA seeks to reveal hidden ideologies underlying particular discourses, and how discourses are used to exert power over individuals and groups. CDA is used when we want to focus on a social problem of some kind. It draws heavily on semiotics and on how words and images are combined to convey meaning in particular ways. It tries to unpack layers of meaning. CDA has a political vision, e.g. it is used to explore how individuals or groups are marginalized or dominated by other groups in society. It uses a broad range of texts and images and seeks to expose the ideologies that underpin a particular discourse. CDA sheds light on social inequalities and how these are produced through certain discourses, but it also illuminates ways to challenge these discourses.

Just a minimal amount of wider reading around discourse analysis reveals there to be a wealth of literature related to this research method. I suspect it is not a method to be taken up lightly. I would have liked further examples of research questions that have been addressed using each of the five types of discourse analysis. Of the five types, I am most drawn to critical discourse analysis and critical discursive psychology.

#SOCRMx: Week 3 – Working with images

I have found the working with images resources in the Introduction to Social Research Methods MOOC very stimulating. According to the information provided in this course, visual methods are becoming increasingly popular.  I have always been interested in images, knowing that they can elicit ideas and feelings that words cannot. John Berger in his series of programmes on “Ways of Seeing” showed that the relation between what we see and what we know is never settled.

There are three kinds of visual data:

  • researcher created, e.g. diagrams, maps, videos, photos
  • participant created, e.g. video diaries
  • researcher curated, e.g. a photo essay, cultural anthropology

Digital technologies have greatly increased the possibilities for working with each of these kinds of data. Images can also be used to elicit information in interviews.

Key considerations when working with visual images for research are: Why use this method? How can it address the research question? What are the best images for the given question? How can the image/s be accessed? What are the ethical implications of using images, e.g. research participant anonymity and right to privacy?

With respect to photos, further considerations relate to how a photo is conceptualised. Is it a copy or is it a more complex construction? Does the camera never lie or do the eye and brain perceive differently to the camera? Do we accept that the photo is evidence or do we consider how the photo was produced, what choices were made, what is included/excluded, what was around the photo that cannot be seen?

The strengths of visual research methods are thought to be that they can:

  • Generate more talk
  • Evoke sensory, affective and emotional responses
  • Encourage reflection on what is taken for granted, what is hidden, what is visible, what is not visible
  • Engage with people who find talk challenging
  • Reduce power differentials
  • Be inherently collaborative and interpreted through communication

This week’s task

The task for this method is to spend an hour or two engaging in a small-scale image-creation research activity. I have not taken a photo specifically for this task, but have trawled back through my own photos to find one that might fit the task and raise some of the issues that need to be addressed.

I have selected this photo that was taken in 2012. I could envisage this photo being used for example with Indian tourism students to explore perceptions of inequality.

Source of photo – here

We have been asked to consider six questions.

  • What is depicted in the image(s)?

I think this would be an interesting question to ask the tourism students. For me the image shows an Indian woman carrying a small child apparently unaffected by a white woman sunbathing. This appears to be a normal situation and each appears oblivious of the other, maybe indicating that they live in separate worlds even though they are inhabiting the same space.

  • What were you trying to discover by creating your image(s)?

At the time I was on holiday in Mamallapuram, south of Chennai in India. This photo was not planned, but I noticed the incongruity suggested by the scene, probably because I am a white woman and was a tourist. Neither subject was aware of me taking the photo. I don’t think there were any ethical issues involved in taking the photo – lots of unknown people appear in my holiday photos. I’m not sure what the ethics would be of using this photo for a real research project, given that there is no way that I could identify or contact either of the subjects.

  • What did the process of image creation involve?

I was in the right place at the right time with my camera ready. This photo was not staged. It was a snapshot in time, but nevertheless I was aware at the time that it conveys a message beyond a beach scene.

  • What is not seen, and why?

The photo is as it was taken. It might have been cropped and sharpened – I don’t remember – but just looking at it through this frame makes it appear that there are just two people on the beach. In fact I was sitting in a restaurant full of tourists at the edge of the beach, and the beach itself was full of people, both Indians and tourists from around the world. There were also fishermen with their boats on the beach. It was a lively location, within walking distance of the exquisite Mahabalipuram stone carvings. Does knowing this change how the photo is perceived?

  • How is meaning being conveyed?

Through the proximity of the two subjects who are so near but so far from each other. They are back to back, facing in opposite directions, but don’t appear concerned, or even to have noticed this ambiguity. Further opposites are conveyed through their clothing and through their posture – one is walking and the other lying.

  • With respect to the photograph, how might the image convey something different to your experience of ‘being there’?

The image appears still and quiet – there is no sound, not even the sound of the sea – but in reality it was busy and there was plenty of sound: chatter, laughter, shouting, music, the sound of the sea and so on. Indian tourism students may have seen this type of scene so often that they do not notice it, or if they do, it may not concern them. Alternatively it may concern them greatly. As tourism students, are the contradictions evident in this photo something they should be concerned about? What issues are raised?

#SOCRMx End of Week 3 Reflections

This is the third week of the Introduction to Social Research Methods MOOC, which I am finding both very useful and frustrating at the same time. It is very useful because the resources provided (as mentioned in a previous post) are really excellent, but unfortunately some of them are locked down in closed systems and so only accessible to course participants. I wish there was more time to engage with them all properly. Their high quality has left me wondering whether I should spend time making sure I have seen them all, or whether I should focus on the weekly tasks and on trying to follow other participants.

The course is frustrating because there is little social interaction – or have I missed it? The majority of participants seem to be doing a Masters or a PhD at the University of Edinburgh, so completing the tasks and getting feedback from a tutor on those tasks must be a high priority for them, and the tasks take quite a bit of time, not leaving much time for discussion. In addition, it’s difficult to respond to the task requirements in short posts, leading to long pages of text which are demotivating in terms of discussion. I find the design of the edX discussion forums terrible – very time-consuming and difficult. I feel as though I have wasted time trying to follow what little discussion there is in these forums.

I wondered whether there was more discussion on participants’ blogs than in the forums, so I have spent some time collating all the blogs I could find. If blogs are going to be used in MOOCs, then my view is that it’s essential that they are centrally aggregated. This was realized as long ago as 2008 in the first MOOC, CCK08. This is the list of bloggers I have found.

There are probably more than this. I am finding it very difficult to get a sense of who is doing this MOOC, from where and why. The map that we were all asked to add our names to in the first week no longer seems to be on the site (or if it is, I can no longer find it), so I have no sense of how many people are on the course. From the forum posts that I have read, there seem to be people from the States, Latin America, Australia and Europe, but I’m not clear whether they are students of Edinburgh University or not.

I am going to persevere with the MOOC because of the high quality of the resources and I will also try and follow the blogs I have found, although I suspect that not all participants are blogging that much.
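As an aside on what I mean above by central aggregation, here is a minimal sketch, not anything this course provides, of pulling participants’ blog feeds into one chronological list. It assumes the feed URLs have been collected by hand; the URLs and the use of the feedparser library are my own illustrative choices. The CCK08 aggregation mentioned above did essentially this, on a much larger scale.

```python
# Minimal sketch of centrally aggregating course participants' blogs.
# The feed URLs below are placeholders, not real participant blogs.
import time

import feedparser  # pip install feedparser

FEEDS = [
    "https://example-participant-one.wordpress.com/feed/",
    "https://example-participant-two.blogspot.com/feeds/posts/default",
]


def aggregate(feed_urls):
    """Collect posts from each feed into one list, newest first."""
    posts = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            published = entry.get("published_parsed")  # time.struct_time or None
            posts.append({
                "blog": parsed.feed.get("title", url),
                "title": entry.get("title", "(untitled)"),
                "link": entry.get("link", ""),
                # Undated posts sort to the end
                "timestamp": time.mktime(published) if published else 0,
            })
    posts.sort(key=lambda p: p["timestamp"], reverse=True)
    return posts


if __name__ == "__main__":
    for post in aggregate(FEEDS)[:20]:
        print(f"{post['blog']}: {post['title']} ({post['link']})")
```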

However, on reflection I have decided that I probably won’t engage fully with the tasks. My response to last week’s task on Surveys was, I acknowledge, quite half-hearted, whereas I can see that some participants made a really good job of it. One participant has commented that it is difficult to engage in tasks for which there doesn’t seem to be a real purpose. I agree. I find it difficult to get motivated to write survey questions or complete some of the other tasks with no intention of doing this for an actual research project. This is not helped by the fact that I am actually, at this very time, finishing writing a research paper, so my ‘head’ is in another zone.

Nevertheless this process and reflection have been helpful, because I have realized, even more clearly than before, that in all my research I have worked backwards rather than forwards. This means that I haven’t decided ‘I am going to go out and research that; these are my questions; this is the methodology I will adopt; and these are the methods I will use.’ All my research has emerged, almost serendipitously, from my experience – mostly experience of participating in MOOCs. At the end of the MOOC (or equivalent experience) I find I have met people who, like me, have unanswered questions and want to probe further, and then it goes from there. It is messy. The questions keep changing, the data is difficult and messy to gather, and it takes months and months to make sense of. The survey we designed to research the use of blogs and forums in the CCK08 MOOC took months and months of convoluted discussion. We didn’t concoct those questions from thin air; we drew them from our data, from endless hours of trawling blogs and forums for what participants had said. We then spent further endless hours debating these statements, their language and whether they made sense, and yet we have been asked in this MOOC to write a set of hypothetical survey questions in one week. In addition, all my research has been collaborative, so it feels strange to be working on the methods tasks in isolation, however half-heartedly.

To end on a more positive note, I have thoroughly enjoyed going through all the Visual Methods and Ethnography resources this week, which have been very informative.

And to end on a fun note, one of the participants, Helen Walker (@helenwalker7), has just posted an infographics quiz on her blog. The ‘Who old are you?’ quiz shows me to be at the limits of my creative zenith, career and worldly success. Maybe that accounts for this post 🙂

PhD by Publication – Selection of Papers

In her book, PhD by Published Work, Susan Smith writes that one of the disadvantages of this route to a PhD is that ‘it is tricky to retrospectively shoe-horn diverse papers into a post hoc theme’ (p.34).

This statement seems to suggest that researchers jump from project to project with no direct links between them. Maybe this is the case for researchers associated with universities, who may have to work on projects which are not their principal area of interest, either because these projects bring in funding, or because papers from these projects will contribute to their university’s research excellence framework (REF). I can see that this might lead to diverse papers that are difficult to pull together, but neither of these constraints applied to me, since I have always worked as an independent researcher.

Despite this, it wasn’t immediately apparent to me which papers I should select for this PhD by Publication or what the focus of my supporting statement should be. I think there were at least three possible routes I could have gone down, depending on which and how many papers I selected for submission and which papers I left out. As Iain McGilchrist says on p.133 of his book The Master and His Emissary: The Divided Brain and the Making of the Western World:

It is not just that what we find determines the nature of attention we accord to it, but that the attention we pay to anything also determines what it is we find.

Perhaps not surprisingly, in order to select papers, I first had to refresh my memory about these publications. Once a paper has been published I tend not to go back and reread it multiple times, but instead move on to the next research project. Although I knew the general gist of all the papers, I didn’t remember all the detail. So I started by working on a mini literature review of my own papers, critiquing them, summarising them, checking the number of citations and how and where the work has been disseminated. Looking back at my journal, I can see that I didn’t find this process particularly easy. It was time consuming and my first summaries were streams of consciousness rather than summaries. Ultimately, I ended up with the summaries of the papers I selected that are in Appendix 3 of the thesis – Jenny Mackness PhD (Pub) 2017.

To decide which papers to select, I used Matthias Melcher’s Think Tool, which allows you to enter text into a mapping tool and look for links between the entered texts.

Since 2009, I have published 20 papers and one book chapter. I entered the abstracts of all these publications into the Think Tool and as a result was able to create six groups of papers and identify cross-paper themes.

Interrelationships between all publications by group and keyword. (Figure 1 in the thesis, on p.16)
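For readers who like to see the idea in code: the sketch below is not Matthias Melcher’s Think Tool, just a minimal illustration of a comparable keyword-linking exercise, grouping abstracts by shared vocabulary and surfacing the terms that hold each group together. The example abstracts, the choice of six groups and the use of scikit-learn are all my own illustrative assumptions.

```python
# Rough sketch: group paper abstracts by shared vocabulary and list the
# terms that link each group. The abstracts dict is illustrative only.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = {
    "Paper A": "Learners' experiences of autonomy and connectedness in a cMOOC ...",
    "Paper B": "Footprints of emergence: describing emergent learning in open courses ...",
    "Paper C": "Surveying blog and forum use by participants in CCK08 ...",
    # ... one entry per publication
}

titles = list(abstracts.keys())

# Weight words by how distinctive they are across the set of abstracts
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts.values())

# Force the abstracts into a fixed number of groups (six mirrors the
# grouping described above, capped by the number of papers available)
n_groups = min(6, len(titles))
labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(X)

# For each group, list its papers and the highest-weighted shared terms
terms = vectorizer.get_feature_names_out()
for group in range(n_groups):
    members = [t for t, lab in zip(titles, labels) if lab == group]
    rows = [i for i, lab in enumerate(labels) if lab == group]
    centroid = X[rows].mean(axis=0).A1
    keywords = [terms[i] for i in centroid.argsort()[::-1][:5]]
    print(f"Group {group}: {', '.join(members)} | keywords: {', '.join(keywords)}")
```

KMeans is simply the easiest way to force a fixed number of groups in a sketch like this; any clustering over the TF-IDF vectors would serve the same illustrative purpose.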

I blogged about this process at the time – A new mapping tool: useful for research purposes. From this process it became clear to me that, whilst a large body of work was related to emergent learning and I could have focussed solely on that, even those papers resulted from participation in MOOCs and from a deep interest in how learning occurs in these open environments at the level of the individual learner. I felt there was only one group of papers that diverged from this: the group that looks at whether and how learning design can be influenced by an embodied view of the world, and by a view of perception and action as enactive perception using all the senses. But even these papers originated from an interest in the design of learning environments.

Having decided which groups to focus on, there still remained the question of how many papers to select. For Lancaster University, there was no advice on the number of papers to be submitted other than that the material submitted must be “sufficiently extensive as to provide convincing evidence that the research constitutes a substantial contribution to knowledge or scholarship.” At this stage I went into the department to look at the PhDs by Published Work already awarded, to discover that there had only been three since 1999 (1999, 2003, 2010), and that each of these was awarded to a member of staff in the department, who submitted 9, 11 and 10 published works respectively, together with a supporting statement of around 40 pages, although I have seen other examples from Lancaster University considerably shorter than this. Ultimately, I submitted 13 papers and a supporting statement of 101 pages. I mention this not to suggest that the number of pages is in any way significant, but to illustrate that there seems to be a wide variety of practice at Lancaster University. I wouldn’t be surprised if this is the case across universities. The uncertainty associated with this was not easy to work with, but on the other hand it seemed to mirror the unpredictable learning environments I have researched, where I have worked with no externally imposed rules or expectations.

Throughout this process I felt I was working in the same way I have always worked, i.e. working it out as I went along, and letting the process and structure emerge. One of my ‘critical friends’, who gave me feedback on the thesis after I had submitted it but before the viva, thought that my important work was related to the ‘Footprints of Emergence’ framework and emergent learning rather than the empirical papers, and I think that my colleague Roy Williams probably thinks the same, although he hasn’t said this. But the analysis of my papers, using Matthias Melcher’s Think Tool, revealed my ‘golden thread’ (as Susan Smith calls it) to be ‘learners’ experiences in cMOOCs’, so that is what I focussed on.

On reflection and given the open structure of the PhD by Publication, I can see that in different circumstances at a different time, I might have selected a different set of papers and ended up with a different thesis. Now there’s a thought! But I’m not going to test out this idea  🙂