This post is part of a task on the SOCRMx course.
My first choice of research method, discourse analysis, was a bit of a no-brainer; I’m a Writing graduate, and I like studying texts and contexts. I wasn’t sure which way to go with the second choice, but I think experimental intervention could work quite well with discourse analysis. The third example question I wrote for discourse analysis might demonstrate this: How does online disability awareness training affect the language used by academic staff when discussing disabled student experiences? The intervention in this case would involve two groups of academic staff, neither of which would have recently undergone disability awareness training of any sort. The independent variable would be the provision (or not) of e-learning awareness training, while the dependent variable would be the conversational language used by academic staff from both experimental and control groups. I would need to be more specific about what I was looking for, of course, but I liked the idea that discourse analysis could study the dependent variable of an experimental intervention. That is, assuming I haven’t confused or betrayed the logic of either method by mixing them together.
The Stroop Effect experiment demonstrates a more straightforward example of experimental intervention. Participants are asked to state the colour of each word on two lists, all of them colour nouns. In the first list, the text colour of each word matches the colour it names (e.g. “red” is written in red text). In the second list, the text colour differs from the colour named (e.g. “red” might be written in blue text). Participants have to work as quickly as possible while avoiding mistakes, and the time it takes them to read each list aloud is compared. It is expected that participants will take longer to read the second list, supporting the theory that naming a colour requires more cognitive processing than reading, which comes more naturally to us. In this experiment, the congruence of colour words and text colour creates the experimental conditions. There are lots of ways this could be varied and adapted; the words need not be colour nouns, for example. Images of objects with congruent and incongruent text labels could be an interesting variation.
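As a minimal sketch of how the two Stroop conditions might be generated (in Python, with a hypothetical colour set and function names of my own invention), the congruent and incongruent lists could look like this:

```python
import random

COLOURS = ["red", "green", "blue", "yellow"]

def make_trials(n, congruent):
    """Build n Stroop trials; each trial pairs a colour word with an ink colour.

    In the congruent condition the ink matches the word; in the
    incongruent condition the ink is drawn from the other colours.
    """
    trials = []
    for _ in range(n):
        word = random.choice(COLOURS)
        if congruent:
            ink = word
        else:
            ink = random.choice([c for c in COLOURS if c != word])
        trials.append((word, ink))
    return trials

# The experimental comparison would then be between the mean naming
# times recorded for each list (timing itself is not sketched here).
congruent_list = make_trials(10, congruent=True)
incongruent_list = make_trials(10, congruent=False)
```

The interesting design point is that congruence is the only thing that differs between the two lists, which is what lets the timing difference be attributed to the extra cognitive processing rather than to the words themselves.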
As my research interests are concerned with the existing and potential mediation performed by technology in the relationship between social inequality and higher education, quantitative data alone is unlikely to satisfy the depth and nature warranted by such an inquiry. There’s plenty of discourse analysis in the sociology and education fields, however, so I’m interested in the potential benefits of using alternatives like experimental intervention. Some examples I’ve seen do seem to raise immediate ethical questions. For example, a recent study by Herodotou, Heiser and Rienties (2017), which involved three support interventions, could mean that some students received more support than others. And so I am wary that experimental interventions with “active” students might not only study disadvantage, but could create or contribute to it.
I have been thinking about the validity issue of confounding variables and artefacts, and examples I might encounter in my particular research area. I introduced my topic of interest, a few posts ago, with reference to my own status as a first-generation graduate. I have been concerned that these types of definitions, however well-intentioned, might not be as helpful as they seem. The social class and the educational advantages and disadvantages of individuals within this category could vary dramatically, as my own background illustrated. The quality of education received by the graduate and non-graduate parents of first-generation graduates could still vary greatly. I think this is one area where confounding variables and artefacts might come into play. The “Hawthorne Effect”, too, may be something to be wary of in this context. How does awareness of being a first-generation applicant affect determination and confidence, for example? If an intervention drew students’ attention to their advantages or disadvantages, might they behave differently?
Herodotou, C., Heiser, S. and Rienties, B. (2017) ‘Implementing randomised control trials in open and distance learning: a feasibility study’, Open Learning: The Journal of Open, Distance and e-Learning, 32(2).