Methods for investigating target grade use in English secondary schools

If you’ve read any of my previous posts about my research (such as the proposed methods poster I produced, or the posts about some of the data I’ve collected), you may already be familiar with the methods I used. In this second short post in the run-up to my viva, I’ve summarised those methods. This post won’t address the use of self-determination theory (SDT) as a framework.

I used three methods in my research. In the end, two of them were used for drawing conclusions, with the third providing context and background for the study and the other findings.

Method 1: Psychometric testing

I used the Self-Regulation Questionnaire – Academic (SRQ-A, originally designed by Ryan & Connell (1989)) to quantify the extent to which students were autonomously motivated, and to measure their motivation types according to SDT. I chose this test because it applies SDT specifically to an education/academic context, and because it links to the wider motivation students experience and, more importantly, why they may experience it. Given that target grades affect students’ motivation, and may influence the type of motivation students largely experience, it felt well suited to answering my research questions.

Pros: I successfully gathered helpful and reliable quantitative data about students’ general academic motivation, and about their affinity toward target grade use. I based my calculation of students’ ‘target grade affinity index’ explicitly on the relative autonomy index (RAI) frequently generated using this test. This helped me draw conclusions about students’ experiences in general.
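For readers unfamiliar with the RAI: it combines the four SRQ-A subscale scores into a single autonomy score using the standard +2/+1/−1/−2 weighting. The sketch below is illustrative only (in Python, not the R I actually used, and with made-up subscale means); it is not my affinity index calculation, just the underlying RAI idea.

```python
# Illustrative sketch, NOT the author's R code: the relative autonomy
# index (RAI) from SRQ-A subscale means, using the standard weighting
# of +2 intrinsic, +1 identified, -1 introjected, -2 external.

def relative_autonomy_index(intrinsic, identified, introjected, external):
    """Weighted combination of the four SRQ-A subscale scores.

    Positive values indicate more autonomous motivation; negative
    values indicate more controlled motivation.
    """
    return 2 * intrinsic + identified - introjected - 2 * external

# Hypothetical subscale means for a single student:
rai = relative_autonomy_index(intrinsic=3.2, identified=3.5,
                              introjected=2.1, external=2.8)
print(round(rai, 1))  # 2.2
```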

Cons: I found that a minority of students experienced multiple motivation types simultaneously, and it was challenging to decide how to distribute these students between them. Eventually, I distributed students with tied scores randomly between their motivation types, since a chi-square test, run across three fixed random seeds, indicated that this did not change the results to a statistically significant degree. That is not to say the results are perfect, though; more on this in another post.
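To make the tie-breaking step concrete, here is a minimal sketch of the idea (again in Python rather than the R I actually used, with a hypothetical student and hypothetical scores): a student whose highest subscale score is shared between types is assigned randomly among the tied types, under a fixed seed so the assignment is reproducible.

```python
# Illustrative sketch, NOT the author's R code: seeded random
# tie-breaking between a student's highest-scoring motivation types.
import random

MOTIVATION_TYPES = ["intrinsic", "identified", "introjected", "external"]

def assign_type(scores, rng):
    """Assign a student to their highest-scoring motivation type,
    breaking ties randomly with the supplied seeded RNG."""
    top = max(scores.values())
    tied = [t for t in MOTIVATION_TYPES if scores[t] == top]
    return rng.choice(tied)

# Hypothetical student tied between intrinsic and identified:
student = {"intrinsic": 3.4, "identified": 3.4,
           "introjected": 2.0, "external": 1.8}

# Repeating the whole assignment under a few fixed seeds lets you
# compare the resulting distributions of types (e.g. with a
# chi-square test) to check the findings aren't seed-sensitive.
for seed in (1, 2, 3):
    rng = random.Random(seed)
    print(seed, assign_type(student, rng))
```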

Method 2: Focus groups with students

To contextualise students’ results on the SRQ-A, I conducted focus groups after the first and last rounds of data collection, with a stratified purposive sample of students representing all four motivation types, plus amotivation (i.e. a complete lack of motivation). This provided a range of rich data, with relatively consistent findings across the four focus groups (two for each participating year group). Prompted by questions, students gave their views on target grade use and on the impacts of the key strands of SDT (autonomy, competence and relatedness), giving further detail on why they felt the way they did about target grade use. You can find some of these findings here: views on autonomy & views on competence.

Pros: The data generated by this method were rich and helpful, and aligned well with the SRQ-A data. Some of the students’ responses also went some way to explaining the ties between motivation types described a couple of paragraphs ago. Using Braun & Clarke’s (2006) thematic analysis framework meant I could analyse the data in a structured way, while leaving space for deduction, induction and abduction.

Cons: At times it was challenging to ensure this didn’t turn into a group interview, but the students consistently took the experience seriously and provided well-thought-out perspectives. I don’t think I would have been successful here if I hadn’t run two focus groups as part of a pilot study.

Method 3: Teacher semi-structured interviews

I interviewed five members of the teaching staff, including two with leadership responsibilities. These interviews were incredibly helpful for providing context and evidence for students’ perspectives from the focus groups. While I didn’t actively draw conclusions from them, they were valuable. For example, students said that some of their teachers used targets and some didn’t, and this was supported by the teachers’ contributions. Students also said that those teachers who did use target grades used them differently, and this too was corroborated by teaching staff.

Why I chose those methods

The first two methods allowed me to collect data in different formats, allowing for triangulation, and both aligned well with the theoretical basis (SDT) for the study. Drawing conclusions from two different methods meant the findings could be considered trustworthy, provided they aligned (which thankfully they did). I was also confident in my ability to analyse these data effectively.

Would I do anything differently?

In choosing the methods for data generation and analysis: generally, no. I am confident that the methods I chose were valid for the data I aimed to generate, and that they returned data rich enough to be analysed well and to support concrete conclusions applicable to the context of the study school.

I would, however, decide ahead of time what to do about ties in motivation types. I don’t think this issue harmed the trustworthiness of my conclusions, but having settled on a solution beforehand would have been helpful. Trying different ways of addressing the issue was quite stressful, and heavily tested my ability to code in R. Other literature using the SRQ-A, whose authors must necessarily have come across this issue, made no mention of it. However, I am confident that my solution was reasonable and acceptable.

More to come.

