#SOCRMx: Week 3 – Working with images

I have found the 'Working with images' resources in the Introduction to Social Research Methods MOOC very stimulating. According to the information provided in this course, visual methods are becoming increasingly popular. I have always been interested in images, knowing that they can elicit ideas and feelings that words cannot. John Berger, in his television series “Ways of Seeing”, showed that the relation between what we see and what we know is never settled.

There are three kinds of visual data:

  • researcher created, e.g. diagrams, maps, videos, photos
  • participant created, e.g. video diaries
  • researcher curated, e.g. a photo essay, cultural anthropology

Digital technologies have greatly increased the possibilities for working with each of these kinds of data. Images can also be used to elicit information in interviews.

Key considerations when working with visual images for research are: Why use this method? How can it address the research question? What are the best images for the given question? How can the image/s be accessed? What are the ethical implications of using images, e.g. research participant anonymity and right to privacy?

With respect to photos, further considerations relate to how a photo is conceptualised. Is it a copy or is it a more complex construction? Does the camera never lie or do the eye and brain perceive differently to the camera? Do we accept that the photo is evidence or do we consider how the photo was produced, what choices were made, what is included/excluded, what was around the photo that cannot be seen?

The strengths of visual research methods are thought to be that they can:

  • Generate more talk
  • Evoke sensory, affective and emotional responses
  • Encourage reflection on what is taken for granted, what is hidden, what is visible, what is not visible
  • Engage with people who find talk challenging
  • Reduce power differentials
  • Be inherently collaborative, with interpretation emerging through communication

This week’s task

The task for this method is to spend an hour or two engaging in a small-scale image-creation research activity. I have not taken a photo specifically for this task, but have trawled back through my own photos to find one that might fit the task and raise some of the issues that need to be addressed.

I have selected this photo, which was taken in 2012. I could envisage it being used, for example, with Indian tourism students to explore perceptions of inequality.

Source of photo – here

We have been asked to consider six questions.

  • What is depicted in the image(s)?

I think this would be an interesting question to ask the tourism students. For me the image shows an Indian woman carrying a small child, apparently unaffected by a white woman sunbathing nearby. This appears to be a normal situation, and each seems oblivious of the other, maybe indicating that they live in separate worlds even though they are inhabiting the same space.

  • What were you trying to discover by creating your image(s)?

At the time I was on holiday in Mamallapuram, south of Chennai in India. This photo was not planned, but I noticed the incongruity suggested by the scene, probably because I am a white woman and was a tourist. Neither subject was aware of me taking the photo. I don’t think there were any ethical concerns with taking the photo – lots of unknown people appear in my holiday photos. I’m not sure what the ethics would be of using this photo for a real research project, given that there is no way that I could identify or contact either of the subjects.

  • What did the process of image creation involve?

I was in the right place at the right time with my camera ready. This photo was not staged. It was a snapshot in time, but nevertheless I was aware at the time that it conveys a message beyond a beach scene.

  • What is not seen, and why?

The photo is as it was taken. It might have been cropped and sharpened – I don’t remember – but looking at it through this frame makes it appear that there are just two people on the beach. In fact I was sitting in a restaurant full of tourists at the edge of the beach, and the beach itself was crowded with people, both Indians and tourists from around the world. There were also fishermen with their boats on the beach. It was a lively location, within walking distance of the exquisite Mahabalipuram stone carvings. Does knowing this change how the photo is perceived?

  • How is meaning being conveyed?

Through the proximity of the two subjects, who are so near and yet so far from each other. They are back to back, facing in opposite directions, but neither appears concerned, or even to have noticed the other. Further contrasts are conveyed through their clothing and their posture – one is walking and the other lying down.

  • With respect to the photograph, how might the image convey something different to your experience of ‘being there’?

The image appears still and quiet, but the scene was busy and full of sound: chatter, laughter, shouting, music, the sound of the sea and so on. Indian tourism students may have seen this type of scene so often that they do not notice it, or if they do, it may not concern them. Alternatively it may concern them greatly. As tourism students, should they be concerned about the contradictions evident in this photo? What issues are raised?

#SOCRMx End of Week 3 Reflections

This is the third week of the Introduction to Social Research Methods MOOC, which I am finding both very useful and frustrating at the same time. It is very useful because the resources provided (as mentioned in a previous post) are really excellent, but unfortunately some of them are locked down in closed systems, so they are only accessible to course participants. I wish there was more time to engage with them all properly. Their high quality has left me wondering whether I should spend time making sure I have seen them all, or whether I should focus on the weekly tasks and on trying to follow other participants.

The course is frustrating because there is little social interaction – or have I missed it? The majority of participants seem to be doing a Masters or a PhD at the University of Edinburgh, so completing the tasks and getting feedback from a tutor must be a high priority for them, and the tasks take quite a bit of time, not leaving much time for discussion. In addition, it’s difficult to respond to the task requirements in short posts, leading to long pages of text which are demotivating in terms of discussion. I find the design of the edX discussion forums terrible – very time-consuming and difficult. I feel as though I have wasted time trying to follow what little discussion there is in these forums.

I wondered whether there was more discussion on participants’ blogs than in the forums, so I have spent some time collating all the blogs I could find. If blogs are going to be used in MOOCs, then my view is that it’s essential that they are centrally aggregated. This was realized as long ago as 2008 in the first MOOC, CCK08. This is the list of bloggers I have found.

There are probably more than this. I am finding it very difficult to get a sense of who is doing this MOOC, from where and why. The map that we were all asked to add our names to in the first week no longer seems to be on the site (or if it is, I can no longer find it), so I have no sense of how many people are on the course. From the forum posts that I have read, there seem to be people from the States, Latin America, Australia and Europe, but I’m not clear about whether they are students of Edinburgh University or not.

I am going to persevere with the MOOC because of the high quality of the resources and I will also try and follow the blogs I have found, although I suspect that not all participants are blogging that much.

However, on reflection I have decided that I probably won’t engage fully with the tasks. My response to last week’s task on Surveys was, I acknowledge, quite half-hearted, whereas I can see that some participants made a really good job of it. One participant has commented that it is difficult to engage in tasks for which there doesn’t seem to be a real purpose. I agree. I find it difficult to get motivated to write survey questions or complete some of the other tasks with no intention of using them for an actual research project. This is not helped by the fact that I am, at this very time, finishing writing a research paper, so my ‘head’ is in another zone.

Nevertheless this process and reflection have been helpful, because I have realized, even more clearly than before, that in all my research I have worked backwards rather than forwards. That is, I haven’t decided ‘I am going to go out and research that, these are my questions, this is the methodology I will adopt, and these are the methods I will use.’ All my research has emerged, almost serendipitously, from my experience – mostly experience of participating in MOOCs. At the end of the MOOC (or equivalent experience) I find I have met people who, like me, have unanswered questions and want to probe further, and then it goes from there. It is messy. The questions keep changing, the data is difficult and messy to gather, and it takes months and months to make sense of. The survey we designed to research the use of blogs and forums in the CCK08 MOOC took months and months of convoluted discussion. We didn’t concoct the questions out of thin air; we drew them from our data – endless hours of trawling blogs and forums for what participants had said. We then spent further endless hours debating these statements, their language and whether they made sense, and yet we have been asked in this MOOC to write a set of hypothetical survey questions in one week. In addition, all my research has been collaborative, so it feels strange to be working on the methods tasks in isolation, however half-heartedly.

To end on a more positive note, I have thoroughly enjoyed going through all the Visual Methods and Ethnography resources this week, which have been very informative.

And to end on a fun note, one of the participants, Helen Walker (@helenwalker7), has just posted an infographics quiz on her blog – the ‘Who old are you?’ quiz shows me to be at the limits of my creative zenith, career and worldly success. Maybe that accounts for this post 🙂

SOCRMx Week 2 Surveys

I have divided this post into two parts. The first part reports on information from the course site. The second part is an attempt to complete one of the suggested activities.

Surveys: Part 1
Source of image here

The Social Research Methods MOOC (SOCRMx) being run by Edinburgh University on edX is a treasure trove of content and resources. For Weeks 2/3 of the course we have been asked to ‘choose our own adventure’ and an adventure it is certainly proving to be. We have been given a list of methods with extensive associated resources for each (videos, reading, examples of research) and it has been suggested that we choose two and write a blog post about our understanding of that approach. There are also specific activities suggested for each method. This is the list of methods:

  • Surveys
  • Discourse analysis
  • Learning analytics
  • Interviews
  • Focus groups
  • Ethnography – Participant observation
  • Social Network Analysis (SNA)
  • Experimental intervention
  • Working with images

The videos associated with these resources are really excellent, especially the SAGE videos, but they are only available to course participants or to people who subscribe to SAGE, and they come with clearly and strongly stated restrictions:

You shall not:

  • print, email, publicly post, display, sell, distribute or otherwise share the content with non-registered users without further permission from SAGE.
  • transfer, sell or share login details for the express purpose of providing access to content for non-registered users.
  • translate, adapt, delete, alter or apply any other treatment to the content without further permission from SAGE.
  • be guaranteed access to the content after completion of the course.

This doesn’t at all fit with my aspirations for open education, but they are really good videos, so I am going to watch as many as I can, while I have the chance.

I started at the top of the list with ‘Surveys’ simply out of curiosity, with no real initial intention of choosing this as a method to follow up on, but I quickly found myself drawn in by the excellent videos. As one of the presenters said, there are now 30–40 years of research into questionnaire design to draw on, and plenty of information out there about using surveys for research. I first used a survey in my MA research, which was 20 years ago. At that time we were encouraged to use Judith Bell’s book ‘Doing Your Research Project’, and looking at it now I can see that the information it provides is very similar to the information provided by this course. There are a couple of topics that Judith Bell doesn’t discuss:

  1. The difficulties associated with Agree/Disagree questions, which require respondents to go through extra cognitive steps and can lead to acquiescence response bias. This post from the SurveyMonkey blog explains the problem well – https://www.surveymonkey.com/blog/2016/12/21/lets-agree-not-use-agreedisagree-questions/

The same post also mentions recent developments in questionnaire design which use technology to collect paradata, e.g. measuring how long it takes a respondent to answer a question, or using eye-tracking technology ‘to track respondents’ attention to each type of question’.

  2. The difficulty of getting honest responses when dealing with sensitive topics. Two techniques are used, with varying degrees of success, to address this: the item count technique and randomised response (a rough simulation of the latter is sketched below).
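
Purely as an illustration of how randomised response works, here is a minimal Python sketch. It is my own, not from the course materials, and the function name, probabilities and sample size are invented. It simulates respondents following the Warner procedure and then recovers an estimate of the true prevalence from the anonymised ‘yes’ rate.

```python
import random

def simulate_randomised_response(true_prevalence, n_respondents, p_truth=0.7, seed=42):
    """Rough simulation of Warner's randomised response technique.

    Each respondent privately spins a device: with probability p_truth they
    answer the sensitive question truthfully, otherwise they answer its
    negation. The researcher only ever sees the yes/no answers, not which
    question each respondent actually answered.
    """
    rng = random.Random(seed)
    yes_count = 0
    for _ in range(n_respondents):
        has_trait = rng.random() < true_prevalence    # respondent's true (hidden) status
        answers_directly = rng.random() < p_truth     # outcome of the private randomising device
        answer = has_trait if answers_directly else not has_trait
        yes_count += answer
    observed_yes = yes_count / n_respondents
    # P(yes) = p*pi + (1-p)*(1-pi), so solve for pi, the estimated prevalence
    estimated_prevalence = (observed_yes - (1 - p_truth)) / (2 * p_truth - 1)
    return observed_yes, estimated_prevalence

observed, estimate = simulate_randomised_response(true_prevalence=0.2, n_respondents=2000)
print(f"Observed 'yes' rate: {observed:.3f}; estimated prevalence: {estimate:.3f}")
```

The point is that no individual answer reveals anything about that respondent, yet the aggregate still yields an estimate of the sensitive behaviour.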

I have found all this fascinating, but the truth of the matter is that I prefer collecting qualitative data to quantitative data. Whilst surveys are usually used to generate quantitative data – large amounts of data in a standardized format from which generalizable statements about a population can be made – they can also be used to generate qualitative data through open-ended questions.

Surveys: Part 2

In the few surveys I have conducted, I have always ended up being more interested in the responses to the open-ended questions that provide a text box for the answer, but what is the point of asking open-ended questions in a survey? I found some answers to this question in this blog post. What the blog post doesn’t say is that asking open-ended questions in a big survey, e.g. a survey sent out to hundreds of MOOC participants, is going to lead to a massive amount of analysis work!

Reflecting on the resources provided about surveys has made me think about the number of times we ask survey-type questions without actually having formal research intentions, e.g. in student course evaluations. The same guidelines about good questions apply. I have also been reminded of how we used to agonise over how to ask our primary trainee teachers questions about their knowledge of science concepts in a way that would reveal their misconceptions.

For this social research methods course, we have been asked to design a simple survey which will explore some aspect of the use of Web 2.0 tools by a certain group of people, but I’m going to do my own thing and instead I’m going to revisit the problems we had in eliciting students’ science misconceptions. (I already know this will probably not lead to a satisfactory conclusion!).

Some of the issues already identified in Part 1 of this post, relating to honesty and the sensitivity of the subject, are relevant here. It is important not to make students feel stupid; on the other hand, it is extremely important to make them aware of their own misconceptions, so that they do not pass these on to the children they teach. It will also be important to try to avoid receiving responses which are guesses.

By asking the questions below I want to elicit students’ misconceptions. I am imagining that I am asking a cohort of approx. 200 students to respond to these questions. For this task I have simply focused on the questions and at this stage have not thought about presentation, introductory information etc. Questions could be asked about each scientific concept in the primary school curriculum. I have settled for just two here.

Questions:

Which of the following three statements is correct? Put a tick in the box beside the correct statement.

1. Putting a coat on a snowman will keep it cold and stop it melting so fast.
2. Putting a coat on a snowman will warm it up and make it melt faster.
3. Putting a coat on a snowman will make no difference to how fast it melts.

Using your knowledge of materials and their properties, explain your choice of correct answer.

 

 

 

Which of the following three statements is correct? Put a tick in the box beside the correct statement.

1. A book standing on a table does not move because there are no forces acting on it.
2. A book standing on a table does not move because the forces acting on it are balanced.
3. A book standing on a table does not move because of the friction between the book and the table.

Using your knowledge of forces and motion, explain your choice of correct answer.

 

 

 

Practical issues

The objectives of this questionnaire are to find out:

  • How many students still have science misconceptions at the end of the course (I could do this for any scientific concept).
  • Which misconceptions students still hold at the end of the course.
  • Whether there are any patterns to these misconceptions.
  • Whether students can use their knowledge and understanding to explain their responses.

The main issue with this survey is that if the student doesn’t explain their response in the text box provided then they could simply have guessed the answer and there would be no way of knowing this.
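
To make the analysis workload and the ‘guessing’ problem a little more concrete, here is a minimal sketch of how the responses might be tallied once collected. The response records, field names and code are entirely hypothetical – just an illustration of the kind of first-pass check I have in mind.

```python
from collections import Counter

# Hypothetical records: which statement each student ticked for the snowman
# question, plus the free-text explanation they gave (empty if none).
responses = [
    {"student": "S001", "choice": 2, "explanation": ""},
    {"student": "S002", "choice": 1, "explanation": "The coat insulates the snow, slowing heat transfer in."},
    {"student": "S003", "choice": 2, "explanation": "A coat always warms whatever is wearing it."},
]

CORRECT_CHOICE = 1  # the coat keeps the snowman cold and slows melting

choice_counts = Counter(r["choice"] for r in responses)
misconceptions = sum(1 for r in responses if r["choice"] != CORRECT_CHOICE)
unexplained = [r["student"] for r in responses if not r["explanation"].strip()]

print("Choice distribution:", dict(choice_counts))
print(f"Students holding a misconception: {misconceptions} of {len(responses)}")
print("Answers with no explanation (possible guesses):", unexplained)
```

A tally like this would go some way towards the first three objectives above; the fourth – judging the quality of the explanations – still needs a human reader.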

The epistemological assumption here is that there is a correct answer, which the student should know if the course has been effectively taught and if the student has engaged with it. I don’t think there would be any serious ethical issues associated with these questions; they could be answered anonymously. A possible concern might be whether the students find them threatening to their self-esteem, but this is unlikely if the taught course has focused on misconceptions.

Perhaps of more concern is validity. Will the questions really give me the information I need? Perhaps there needs to be a statement up front explaining why an explanation for a response is so important.

Finally, if this questionnaire depends on the open-ended question in which students are asked to give an explanation for their choice of statement, then there is clearly the practical issue of the workload of reading and analyzing all those explanations once collected.

If this type of questionnaire were to scale up, there would be the potential to find out a lot about primary school trainee teachers’ scientific misconceptions, but I’m not sure that it would scale up – and I suspect it might need a follow-up questionnaire.

I have not done what was asked this week, but I have learned more about how not to design a survey 🙂

Update 131017

Of course I should have mentioned that this type of survey may need to be run face-to-face. Otherwise, students could simply look up their responses on the internet or in books. This was a problem we always had when trying to gather information about the knowledge of our distance learning students. As mentioned above, one way round this these days would be to measure the student response time to each question, but presumably that needs specialist knowledge and technology.
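
Presumably the eye-tracking side does need specialist kit, but simple response timing can be captured with very little code. The sketch below is purely illustrative (the abbreviated questions and the script are my own, not part of the course) and records how long a respondent takes on each item – the kind of paradata mentioned in Part 1.

```python
import time

# Hypothetical, abbreviated versions of the two questions above.
questions = [
    "Snowman question - which statement is correct? (1/2/3): ",
    "Book-on-table question - which statement is correct? (1/2/3): ",
]

records = []
for question in questions:
    start = time.monotonic()
    answer = input(question)           # wait for the respondent's choice
    elapsed = time.monotonic() - start
    records.append({"answer": answer.strip(), "seconds": round(elapsed, 1)})

for number, record in enumerate(records, start=1):
    print(f"Q{number}: answered '{record['answer']}' in {record['seconds']} seconds")
```

In an online survey the platform would do this invisibly, but the principle is the same: an implausibly quick (or slow) answer is another hint that a response may be a guess or a lookup.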

Acknowledgement

The snowman statements come from the following sources:

Concept Cartoons

Image from: http://slideplayer.com/slide/4332605/ (Slide 18)

For the original cartoon, see also Keogh, B. & Naylor, S. (1996). Teaching and learning in science: a new perspective. Retrieved from: http://www.leeds.ac.uk/educol/documents/000000115.htm

#SOCRMx – Introduction to Social Research Methods MOOC. Week 1

 

Source of image

I am currently participating in two MOOCs – the #openedMOOC run by David Wiley and George Siemens, and this one – Introduction to Social Research Methods, run by Jeremy Knox and his team at Edinburgh University.

I have made a late start on both MOOCs because I was away for the first half of the week. I am hoping that working on two MOOCs won’t be too ambitious.

I have already written a blog post about the #openedMOOC and have now worked through the materials for Week 1 of the Introduction to Social Research Methods. To end the week we have been asked to think about and blog our responses to a series of questions. Here are the questions and my thoughts:

What kind of topics are you interested in researching?

Most of my research to date has centred on learners’ experiences in connectivist massive open online courses, or open online learning environments. I have been particularly interested in how the espoused principles for learning in these environments – autonomy, diversity, openness and connectedness – are experienced by participants. I have also been interested in how learning emerges in such environments, which can be experienced as chaotic. Whilst there is now a lot of published research on MOOC learner experiences, we still don’t know enough about how these environments, in which learners determine their own learning paths, impact on their identity and development as learners. What is the quality of learning that occurs? How would we define or recognise that? I am also interested in the role of the teacher in these environments. How do these environments impact on the role of the teacher?

What initial research questions might be starting to emerge for you?

Currently I am working collaboratively on a position paper. Does a position paper count as research? It feels like research to me – research of the literature and of the positions that others have taken in relation to the research question that interests us. So far we have been working on this paper for a year, and the direction of the paper continues to shift as we become exposed to new literature.

I like the fact that this question refers to research questions starting to emerge. For nearly all my research the question has initially been very vague – more like a piqued interest, or a sense that something has happened in the learning environment that is not fully understood. And sometimes a more interesting question arises when the data starts to be analysed. This has happened to me more than once.

Currently, since I am participating in two MOOCs, I am already intrigued by how they differ from each other and why. Whether or not this will lead to any interesting questions I don’t know as yet, but I am keeping an open mind.

What are you interested in researching – people, groups, communities, documents, images, organisations?

I like working with people. I am fascinated by how people behave in online environments and I am particularly interested in more vulnerable learners – those for whom the environment is not easy, those who can easily get shouted down by the louder voices.

Do you have any initial ideas for the kinds of methods that might help you to gather useful knowledge in your area of interest?

To date I have always preferred a qualitative approach. I like talking to people or reading what they write. I like to see alternative perspectives emerge. I agree with Stephen Downes (2014), when he says that lots of research ‘sees what it expects to see’ and with Scott, Williams and Letherby (2014) when they say (in the Week 1 video) that we have to beware of confirmation bias. So I am interested in “speculative research approaches, which recognise the potential impact of uncertain futures on education and the need for alternative approaches to research (Ross, 2015; Ross, 2016; Wilkie, Savransky & Rosengarten, 2017).” (Mackness, 2017). How might we do things differently?

What initial questions do you have about those methods? What don’t you understand yet?

I have always manually coded data. I have never used a tool like NVivo for data analysis, principally because I am an independent researcher, not affiliated to an institution, who doesn’t get paid to do research, and these tools are not free. Having said that, I enjoy manual data analysis, but it is very slow. It would be interesting to try out a tool like NVivo.
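
As an aside, a very crude first pass at coding can even be scripted. The sketch below is purely illustrative (the codebook, keywords and comments are invented, and simple keyword matching is no substitute for careful manual or NVivo-style coding), but it shows the kind of free, lightweight tooling an independent researcher could experiment with.

```python
# Hypothetical codebook: code names mapped to keywords that suggest them.
codebook = {
    "autonomy": ["own pace", "choose", "my own path"],
    "connectedness": ["network", "other participants", "community"],
    "overwhelm": ["too much", "lost", "chaotic"],
}

# Invented example comments of the kind a MOOC survey might return.
comments = [
    "I felt lost at first; there was just too much content.",
    "Being able to choose my own path through the course really helped.",
]

# Tag each comment with every code whose keywords appear in the text.
for comment in comments:
    matched = [code for code, keywords in codebook.items()
               if any(keyword in comment.lower() for keyword in keywords)]
    print(f"{matched or ['uncoded']}: {comment}")
```

A script like this only surfaces candidate codes; the interpretive work of refining the codebook and reading each response in context is still manual.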

Do you perceive any potential challenges in your initial ideas: either practical challenges, such as gaining access to the area you want to research or the time it might take to gather data; or conceptual challenges, such as how the method you are interested in can produce ‘facts’, ‘truths’, or ‘valuable knowledge’ in your chosen area?

I am not doing this course for a certificate – and I won’t be starting a research project during these 8 weeks – but I am keen to learn from others how they are approaching their research projects, and I would be very willing to work with someone who would like a research partner for the duration of this course.

References

Downes, S. (2014, May 26). Digital Research Methodologies Redux. Seminar presentation delivered to ESTeaching.org, Tübingen, Germany, online via Adobe Connect. http://www.downes.ca/presentation/341

Scott, J., Williams, M. & Letherby, G. (2014). Objectivity and Subjectivity in Social Research. London: SAGE Publications Ltd.

Ross, J. (2015, April 13). ‘Not-yetness’ – Research and teaching at the edges of digital education. http://jenrossity.net/blog/?p=12935

Ross, J. (2016). Speculative method in digital education research. Learning, Media and Technology, 1–16. doi:10.1080/17439884.2016.1160927

Wilkie, A., Savransky, M. & Rosengarten, M. (2017). Speculative Research: The Lure of Possible Futures. Routledge.

Mackness, J. (2017). Learners’ experiences in cMOOCs (2008–2016). PhD thesis.