Consistency in Dental Clinical Feedback to Students: Clinical Teachers’ Perspectives

Judith L. Werner1, Graham D. Hendry2

Abstract

Purpose: In dental education, feedback from clinical teachers is critical for developing students’ clinical competence. However, students have identified inconsistency of clinical feedback from clinical teachers as a major area of concern. Compared to research on the student perspective of consistency in clinical feedback, dental clinical teachers’ own views of the consistency of their feedback are not as thoroughly researched. The purpose of this study is to redress that balance.

Methodology: This qualitative study explored dental clinical teachers’ views of the clinical feedback process during the 2017 academic year, with a focus on their perceptions of consistency of their own feedback.

Findings: Our results show that clinical teachers used a number of parameters in judging students’ performance and giving feedback, and were aware that their feedback may not be consistent with other clinical teachers’ feedback. Teachers also recognised that this inconsistency could adversely affect students’ learning and clinical competence.

Research implications: To improve the consistency of their feedback and calibrate their judgement of students’ performance, clinical teachers recommended that their Dental School should provide opportunities for them to engage in collegial discussion and interactive, case-based teaching development programs. They also believed clinical teaching and its significance to dental student learning and competence should be recognised and valued more highly by the School.

Practical implications: Implementation of professional development initiatives endorsed by clinical teachers has the potential to improve the consistency of teachers’ feedback and the quality of clinical dental education, and ultimately the quality of oral health care.

Originality: This is the first study to explore clinical teachers’ views of how they judge students’ performance and the consistency of their feedback.

Limitations: A limitation of this study is that clinical teachers who volunteered to participate may have different opinions compared to teachers who did not participate.

Keywords: Teaching, Clinical skills, Feedback, Faculty Development

1 Sydney Dental School, University of Sydney, Australia.

2 Centre for Educational Measurement and Assessment, University of Sydney, Australia.

Corresponding author: Dr Graham Hendry, Room 385, Education Building A35, University of Sydney, NSW, 2006, Australia. Email [email protected] Telephone +61 2 9351 6383.


INTRODUCTION

It is now widely accepted that feedback is integral to teaching and critical for enhancing students’ achievement (Hattie & Timperley 2007; Hounsell 2003; Ramsden 2003; Shute 2008). As Hounsell (2003, p. 67) stated:

… feedback plays a decisive role in learning and development … we learn faster, and much more effectively, when we have a clear sense of how well we are doing and what we might need to do in order to improve.

Feedback is particularly important in professional education programmes aimed at developing students’ skills. For example, a synthesis of research meta-analyses of pre-service teacher education courses, involving over 2.5 million participants, found that supervisors’ ‘performance feedback’ was positively related to optimal outcomes for beginning teachers (Dunst et al. 2020). In dental education, once students enter the dental clinic and commence clinical practice on patients, feedback is critical to developing students’ clinical competence (Ende 1983; Manogue, Brown & Foster 2001; Youngson et al. 2008). Clinical teachers need to provide students with feedback that is timely, constructive (focused on what students can do to improve), consistent and supportive of students’ self-assessment (Boud 2000; Hattie & Timperley 2007; Nicol & Macfarlane-Dick 2006).

However, dental students have identified inconsistencies in clinical feedback as a major area of concern and deficiency in clinical teaching (Henzi et al. 2006; Strohschein, Hagler & May 2002; Wilson, Sweet & Pugsley 2015). As Henzi et al. (2006, p. 376) state, ‘students viewed their clinical education as being a positive experience with some notable exceptions … including inconsistent and all too often inconsiderate feedback by faculty’. One student noted that inconsistent feedback occurs in ‘situations in which two instructors would look at the same work performed by the students and each would give dramatically different feedback and assign different grades’ (Henzi et al. 2006, p. 372). In a follow-up study by Henzi et al. (2007), only 53% of dental students were satisfied with the consistency of their clinical instruction, and 20% of students perceived that their dental programme’s weaknesses revolved around faculty inconsistency in teaching.

Extensive research has been conducted on students’ perspectives of consistency in clinical feedback; however, only limited research has been conducted on dental clinical teachers’ views of the consistency of their own feedback. Clinical teachers are generally recruited from private practice; in addition to having different treatment philosophies and varied educational and professional experiences (Henzi et al. 2006), they also have different opinions as to what is clinically acceptable (Park et al. 2009). Clinical teachers have clinical experience and expertise; however, this does not necessarily equate to expertise in clinical education.

The main aim of this study was to explore dental clinical teachers’ views of the clinical feedback process, focusing on their perceptions of the consistency of their feedback. The key research questions were:

  1. What are clinical teachers’ views on how they make judgements about students’ performance?
  2. What are clinical teachers’ views on the consistency of their feedback?
  3. What strategies do clinical teachers believe their school should develop and implement to engage them in enhancing the consistency of their feedback?

METHODS

This study was approved by the Human Research Ethics Committee of the University of Sydney (protocol number 2016/624). The research method adopted for this study was qualitative: in-depth individual interviews were conducted with clinical teachers. The use of qualitative methods to explore clinical teachers’ perceptions of the clinical feedback process is in line with an emerging perspective that research in the dental sphere needs to be widened to include qualitative as well as quantitative methodologies (Kairuz, Lawrence & Bond 2015). Fugill (2005, p. 135) has suggested that:

… the emphasis on quantitative methodology [in research in dentistry] has resulted over time in a relative neglect of the social and interactive aspects of Dentistry and may go some way to explain the lack of discussion in the dental literature of clinical teaching.

The present study was designed to address the lack of qualitative research focusing on clinical teaching in dentistry.

The clinical teachers who participated in this study taught dental clinical teaching sessions for the four-year Doctor of Dental Medicine degree course in the University of Sydney School of Dentistry (the School). Most of the clinical teaching sessions for this course are held at two metropolitan hospital locations that are separated by a considerable distance. The clinical teachers may work at either one or, more rarely, both locations. Clinical teachers are all registered dentists who practise in private practice or in hospital/government-based clinics; a small number are also appointed as university academic staff. Each clinical teacher is responsible for the supervision and clinical guidance of patient treatment by approximately six students in each clinical teaching session. All teachers are provided with the School’s clinical teaching guidelines, which outline the School’s clinical protocols and rationales for dental procedures.

Clinical teachers with more than one year of teaching experience who taught fourth-year dental students were invited by email to participate in an individual, face-to-face, semi-structured interview about their views on how they make judgements about students’ performance, the consistency of their feedback, and how their School could help them enhance their feedback practices. This cohort of clinical teachers (N = 30) was solely responsible for teaching final-year students and providing feedback on students’ performance at every clinical teaching session they supervised. Thus, the teachers in this cohort were in an ideal position to provide their views and perspectives on the feedback process, and they were asked to focus on their perceptions of the consistency of their feedback.

Within the literature on qualitative methods, a sample of at least eight interview participants is considered satisfactory (Baker & Edwards 2012; McCracken 1988). A total of nine clinical teachers participated in this study. This number of participants was also considered adequate because the time required to conduct the interviews was manageable and the research questions were tightly focused (Guest, Bunce & Johnson 2006). The average duration of each interview was 45 minutes. Each interview was digitally audio recorded with the participant’s consent. Participation in the study was entirely voluntary, and any clinical teacher who agreed to participate had the option to withdraw from the study at any time.

The digital audio recordings of each interview were professionally transcribed. The qualitative technique of thematic analysis (Braun & Clarke 2006; Miles & Huberman 1994) was used to analyse each interview transcript. In the initial phase of the analysis, the authors independently read the transcripts to gain familiarity with and become immersed in the data. In the second phase, the initial codes were generated by ‘coding interesting features of the data in a systematic fashion across the entire data set, collating data relevant to each code’ (Braun & Clarke 2006, p. 87). The generation of the initial codes also involved the concurrent writing of analytic memorandums to ‘document and reflect on the coding processes and code choices … and the emergent patterns … themes and concepts in the data’ (Saldaña 2013, p. 41). In the third phase, the patterns and relationships between the codes were recognised and potential and emerging themes were identified. In the fourth phase, the main themes and sub-themes were defined and reviewed, and it was confirmed that the developed themes were representative of the codes and the entire data set. In the final phase, the essence of each theme was clarified and distilled, and each theme was named.

RESULTS

Nine themes emerged, three in relation to each of our three research questions. Each theme is described below under its research question. The nine themes are also listed against each research question in Table 1.

Table 1. Research questions and associated themes.

Research Question 1. What are clinical teachers’ views on how they make judgements about students’ performance?

Themes: Complexity of influences on judgement; Personal concerns in judgement; Poor use of teaching guidelines

Research Question 2. What are clinical teachers’ views on the consistency of their feedback?

Themes: Good intra-reliability in the consistency of the feedback; Poor inter-reliability in the consistency of the feedback; Adverse effects of inconsistent feedback on student learning

Research Question 3. What strategies do clinical teachers believe their School should develop and implement to engage them in enhancing the consistency of their feedback?

Themes: Valuing commitment to teaching; Facilitating collegiality and communication; Interactive teaching development programme

Research question 1. What are clinical teachers’ views on how they make judgements about students’ performance?

COMPLEXITY OF INFLUENCES ON JUDGEMENT

Clinical teachers were of the view that they use a number of parameters to judge students’ performance in clinical teaching sessions, and that making a judgement is a complex process influenced by more than one parameter. They indicated that the most predominant parameter was the clinical teacher’s knowledge of their own clinical practice; however, they noted that this often goes hand-in-hand with other influences, including other students’ performances and the teacher’s experience when they were a student. Clinician Four (p. 2) stated:

I suppose you base it … on what you would expect, honestly you expect the student to have a standard where you go through a procedure, how you would do it but not at the level that you do it — obviously, we have way more experience than them and I do try … and think back to yes, this is what I expect I would do as a student.

PERSONAL CONCERNS IN JUDGEMENT

Clinical teachers perceived that their judgement of students’ performance was influenced by personal and specific concerns. These concerns were not based on clinical performance but were more aligned with overall patient care, how a student presents, and the student’s personality. Clinician One (p. 1) stated:

I’m very much more conscious of their people skills and their patient management. That’s one of the things that I like to give more feedback on. If I like their personality as well, I also judge them on how they are with the patient, so I probably will give them a higher score even if technically they are not that good.

POOR USE OF TEACHING GUIDELINES

When prompted and asked if they used the School’s teaching guidelines as a basis for their judgement of student performance and provision of feedback, the clinical teachers indicated that they did not explicitly use the guidelines. This was noteworthy, as the teaching guidelines did not appear to play as significant a role in the judgement of a student’s performance as the teacher’s clinical experience did. Clinician Five (p. 2) stated:

Ah well roughly, you read it but beyond that once you get to [the clinical teaching session], it is somewhere in the back of your mind, but you mainly use your experience.

Research question 2. What are clinical teachers’ views on the consistency of their feedback?

GOOD INTRA-RELIABILITY IN THE CONSISTENCY OF THE FEEDBACK

Without exception, the clinical teachers believed that they were consistent in the feedback they gave to their own students, or that they at least made a strong effort to be consistent. However, the majority had no method of monitoring whether this belief was correct. As Clinician Two (p. 2) stated:

I hope [my feedback is consistent]. That’s what I strive for. I’ve never thought about how I monitor it. I don’t know … I have been conscious of it [being fair] but sometimes the perception [of students] can be different. I’m not sure why.

POOR INTER-RELIABILITY IN THE CONSISTENCY OF THE FEEDBACK

On the question of inter-reliability, most clinical teachers knew without doubt that their feedback was not consistent with that of other teachers. At no stage in any of the interviews did any of the participants comment that this lack of consistent feedback concerned them, even though they knew it was a concern to the students and resulted in student dissatisfaction. As Clinician Four (p. 8) stated:

We are all starting at different points and then we expect to be consistent because we have some bits of paper … you know, we are all 10 miles apart as far as consistency is concerned.

Similarly, Clinician Eight (p. 5) stated:

No, I think I am different to others — well that’s what they [the students] tell me. There is a broad range of clinical feedback and it doesn’t matter as long as you explain why to the students.

ADVERSE EFFECTS OF INCONSISTENT FEEDBACK ON STUDENT LEARNING

Clinical teachers recognised that the lack of consistency of clinical feedback had adverse effects on student learning, which resulted in both confusion and a perceived lack of fairness from students’ perspectives. As Clinician Seven (p. 4) stated:

It’s unfair you get unhappy … you want everyone to be treated equally. Well if one person gets feedback for the same work and another person gets a different feedback then they won’t know what they are supposed to be doing then and it starts to become confusing about how exactly they should be doing it.

Similarly, Clinician Five (p. 3) stated:

If there is no consistency in feedback then you can’t learn at all and then they [the students] just get confused … how can they possibly learn what they should be doing?

Research question 3. What strategies do clinical teachers believe their school should develop and implement to engage them in enhancing the consistency of their feedback?

VALUING COMMITMENT TO TEACHING

Overall, clinical teachers were of the view that their colleagues should be committed to teaching, and that the School should value and support clinical education. This theme emerged not in direct response to the third interview question about strategies for engaging teachers, but from a stream of persistent, unsolicited comments made by clinical teachers throughout the interviews. Clinical teachers thought that the School should provide training programmes to help them develop proficiency in their teaching practice, including in the areas of feedback and assessment. As Clinician One (p. 8) stated:

I do think that there is a lack of recognition of teaching. I think the major problem is getting everyone to consistently teach the same thing. The Faculty needs to provide training for us.

FACILITATING COLLEGIALITY AND COMMUNICATION

Clinical teachers thought that to improve the consistency of their feedback, and their teaching practices generally, the School should open up avenues for communication and collegiality among the diverse group of teachers. As Clinician Nine (p. 5) stated:

We need to be able to communicate with other tutors on a regular basis to really gain a sense of what we are trying to achieve on a daily basis.

Similarly, Clinician Four (p. 12) stated:

Wouldn’t it be a nice concept where we could get together and communicate and learn?

INTERACTIVE TEACHING DEVELOPMENT PROGRAMME

Given their desire to communicate with each other, it is not surprising that clinical teachers also felt that, to improve the consistency of their feedback, the School should provide an interactive teaching programme based on face-to-face, small-group discussions focused on the concepts of effective clinical teaching and feedback. Regular collegial meetings, case-based discussions and calibration exercises for clinical teachers were also mentioned as ways to improve teachers’ depth of knowledge about how to provide consistent feedback. As Clinician Five (p. 5) stated:

I think courses to encourage people to come and then you could have case studies with clinical slides, photographs, aiming for some calibration. I think a lot of the inconsistency is also to do with the private practitioners who have done their own thing in private practice for x number of years and that’s what they do and that makes for a little bit of confusion.

Discussion

This qualitative study explored clinical teachers’ views about how they judge student performance and their perceptions of the consistency of their feedback. This study also explored clinical teachers’ ideas and recommendations for faculty development strategies to help them improve their feedback practices. Our qualitative results showed that clinical teachers may combine knowledge of their specific clinical practice, their own past student experiences, and even individual students’ personalities to inform their feedback at any given clinical session. Notably, the School’s teaching guidelines played only a minor role in determining teachers’ judgements. These results are consistent with Park et al.’s (2009) findings that while the Faculty had provided written guidelines, they were not used for evaluation and feedback, and in fact, ‘a dentist brings [to the teaching situation] his own clinical bias consisting of his own clinical experience’ (Park et al. 2009, p. 37).

In the present study, despite the range of influences on clinical teachers’ decisions that inform the feedback they provide, most teachers thought they gave consistent feedback (intra-reliability). However, they were aware that their feedback was not consistent with other clinical teachers’ feedback (inter-reliability). These findings align with previous research on inconsistency in clinical teaching and student dissatisfaction (Bloxham et al. 2016; Park et al. 2009). In a major review of research on intra- and inter-reliability in clinical teaching, Taylor, Grey and Satterthwaite (2013) concluded that there is a high degree of variability between different clinical teachers’ practices and less variability within individual teachers’ practices. Clinical teachers in the study also recognised that this inconsistency across students’ learning experiences could lead to student confusion and dissatisfaction and have an adverse effect on students’ learning and achievement. Students’ concerns about inconsistency in feedback and its effects have also been confirmed by research studies in different dental schools worldwide (Bloxham et al. 2016; Hendricson et al. 2007; Henzi et al. 2007; Jahangiri et al. 2013; Wiley & Gardner 2010).

Our study indicated that clinical teachers strongly believe that clinical teaching should be valued and supported by their School, and that the School should introduce strategies to facilitate communication and learning with colleagues. Recognition of the value of teaching has been identified as a concern of dental clinicians in previous research. In a study of chair-side teaching, Wilson, Sweet and Pugsley (2015, p. 187) found that the ‘general observation is that research is everything, and teaching counts for little seems to prevail at most universities’. As Clinician Three in our study stated, ‘If we value the education and we value what we are trying to achieve then we need to do more for clinical education’ (p. 9).

The clinical teachers were of the view that the School should provide teaching development programmes to help improve their feedback practice. This need is consistent with research on the teaching development needs of clinical teachers across a wide range of health professions, which found that ‘feedback’ was the most frequently reported area for improvement (Bearman et al. 2018). To engage teachers, teaching development programmes should be interactive and case-based in design to stimulate discussion among colleagues. We already know that faculty development programmes can play a major role in improving the quality of clinical education (Haden et al. 2006; Manogue, Brown & Foster 2001; Masella & Thompson 2004; Wilson, Sweet & Pugsley 2015). In their evaluation of the available evidence, Hendricson et al. (2007, pp. 1529–1530) concluded that some of the critical design elements consistently associated with programme effectiveness include the:

… use of experiential learning (hands-on practice of teaching skills, case study analysis), use of a diversity of learning experiences, use of peers to model exemplary teaching behaviours, and programs designed to facilitate peer interaction and the building of [collegial] relationships.

Faculty development programmes alone may not necessarily address the issues related to the provision of inconsistent feedback by clinical teachers. In the clinical teaching environment, many significant outside influences come into play: one-to-one teaching is often patient-defined, it may be ad hoc depending on patients’ needs, and it can be stressful and time-limited, requiring that students perform at a high level at all times while focusing on patients’ wellbeing. It is thus possible that students’ expectations of feedback cannot accommodate the change from the relative simplicity of the traditional classroom to the complexity of the clinic. As Price et al. (2012, p. 115) argue, to improve clinical teachers’ feedback we may also need to ‘align student and staff expectations of feedback’, so the teachers ‘share the same understanding of why they are giving feedback and the students share the same understanding in the receipt of feedback and the ways in which it can help them’. Thus, in addition to instigating effective and targeted faculty development programmes, we also need to develop strategies to acknowledge, encourage and facilitate a shared understanding and awareness of the nature and effects of clinical judgement, feedback and expectations for both students and staff. Such strategies could include involving both students and staff in the joint development of rubrics for evaluating students’ clinical skills (Chan & Ho 2019), and helping students and staff to develop a ‘learning goal orientation’ to feedback (Farrell et al. 2017) based on the co-developed rubrics.

One limitation of this study is that clinical teachers who volunteered to participate were accepted on a first-come, first-served basis. Thus, it could be assumed that the more highly interested or motivated clinical teachers may have volunteered earlier in the recruitment process and may have different opinions compared to clinical teachers who did not participate. However, the findings of this study are substantially corroborated by previous research, which suggests that while the participant group was small, the findings have validity. The first author is a clinical teacher and was primarily inspired to undertake this research after she became aware from student feedback that there was a lack of consistency in clinical teachers’ feedback. This awareness may have influenced the direction of questioning in the interviews. However, every attempt was made to reflect on and evaluate the direction of the interviews, monitor them for biases and modify the interview techniques as required.

Future research could focus on testing the validity of the themes identified in this study by surveying entire cohorts of clinical teachers within and across dental schools. This study involved clinical teachers in a metropolitan area; however, future research could also focus on the consistency of health professionals’ feedback in rural and remote locations, and on ‘transformative’ student placements in Aboriginal health (McDonald et al. 2018). Research could also be undertaken to develop strategies and educational initiatives that promote and engage students effectively and actively in the feedback and assessment process, to align students’ and teachers’ expectations in relation to feedback.

Conclusions

In dental education, clinical teachers’ feedback is crucial to dental students’ development and to their becoming competent, independent practitioners. This study showed that a complex mix of parameters, including a clinical teacher’s own clinical practice and their experience as a student, can influence the feedback they provide. As a result, feedback can often be inconsistent within the clinical teacher cohort, which can have a detrimental effect on students’ confidence and hinder their learning. Clinical teachers themselves recognise the need to improve the consistency of the feedback they provide. They want their teaching to be valued more highly, and they want opportunities to engage in discussion and case-based activities with colleagues to develop their pedagogical skills and calibrate their judgements of students’ performance. Dental school leaders have a responsibility to develop and implement professional development initiatives endorsed by clinical teachers to improve the quality of clinical dental education and, ultimately, oral health care.

Acknowledgements

The authors would like to extend their appreciation to the clinical teachers who agreed to participate in this qualitative study.

References

Baker, S & Edwards, R 2012, ‘How many qualitative interviews is enough? Expert voices and early career reflections on sampling and cases in qualitative research’, National Centre for Research Methods Review Paper, pp. 1–43, viewed 23 March 2020, http://eprints.ncrm.ac.uk/2273.

Bearman, M, Tai, J, Kent, F, Edouard, V, Nestel, D & Molloy, E 2018, ‘What should we teach the teachers? Identifying the learning priorities of clinical supervisors’, Advances in Health Sciences Education, vol. 23, pp. 29–41.

Bloxham, S, den-Outer, B, Hudson, J & Price, M 2016, ‘Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria’, Assessment and Evaluation in Higher Education, vol. 41, no. 3, pp. 466–481.

Boud, D 2000, ‘Sustainable assessment: rethinking assessment for the learning society’, Studies in Continuing Education, vol. 22, no. 2, pp. 151–167.

Braun, V & Clarke, V 2006, ‘Using thematic analysis in psychology’, Qualitative Research in Psychology, vol. 3, no. 2, pp. 77–101.

Chan, Z & Ho, S 2019, ‘Good and bad practice in rubrics: the perspectives of students and educators’, Assessment and Evaluation in Higher Education, vol. 44, no. 4, pp. 533–545.

Dunst, CJ, Hamby, DW, Howse, RB, Wilkie, H & Annas, K 2020, ‘Research synthesis of meta-analyses of preservice teacher preparation practices in higher education’, Higher Education Studies, vol. 10, no. 1, pp. 29–47.

Ende, J 1983, ‘Feedback in clinical medical education’, Journal of the American Medical Association, vol. 250, no. 6, pp. 777–781.

Farrell, L, Bourgeois-Law, G, Ajjawi, R & Regehr, G 2017, ‘An autoethnographic exploration of the use of goal oriented feedback to enhance brief clinical teaching encounters’, Advances in Health Sciences Education, vol. 22, pp. 91–104.

Fugill, M 2005, ‘Teaching and learning in dental student clinical practice’, European Journal of Dental Education, vol. 9, no. 3, pp. 131–136.

Guest, G, Bunce, A & Johnson, L 2006, ‘How many interviews are enough? An experiment with data saturation and variability’, Field Methods, vol. 18, no. 1, pp. 59–82.

Haden, NK, Andrieu, SC, Chadwick, DG, Chmar, JE, Cole, JR, George, MC, Glickman, GN, Glover, JF, Goldberg, JS, Hendricson, WD, Meyerowitz, C, Neumann, L, Pyle, M, Tedesco, LA, Valachovic, RW, Weaver, RG, Winder, RL, Young, SK & Kalkwarf, KL 2006, ‘The dental education environment’, Journal of Dental Education, vol. 70, no. 12, pp. 1265–1269.

Hattie, J & Timperley, H 2007, ‘The power of feedback’, Review of Educational Research, vol. 77, pp. 81–112.

Hendricson, WD, Anderson, E, Andrieu, SC, Chadwick, DG, Cole, JR, George, MC, Glickman, GN, Glover, JF, Goldberg, JS, Haden, NK, Kalkwarf, KL, Meyerowitz, C, Neumann, LM, Pyle, M, Tedesco, LA, Valachovic, RW, Weaver, RG, Winder, RL & Young, SK 2007, ‘Does faculty development enhance teaching effectiveness?’, Journal of Dental Education, vol. 71, no. 12, pp. 1513–1533.

Henzi, D, Davis, E, Jasinevicius, R & Hendricson, W 2006, ‘North American dental students’ perspectives about their clinical education’, Journal of Dental Education, vol. 70, no. 4, pp. 361–377.

Henzi, D, Davis, E, Jasinevicius, R & Hendricson, W 2007, ‘In the student’s own words: what are the strengths and weaknesses of the dental school curriculum?’, Journal of Dental Education, vol. 71, no. 5, pp. 632–645.

Hounsell, D 2003, ‘Student feedback, learning and development’, in M Slowey & D Watson (eds.), Higher education and the lifecourse, SRHE and Open University Press.

Jahangiri, L, McAndrew, M, Muzaffar, A & Mucciolo, T 2013, ‘Characteristics of effective clinical teachers identified by dental students: a qualitative study’, European Journal of Dental Education, vol. 17, pp. 10–18.

Kairuz, T, Lawrence, B & Bond, J 2015, ‘Comparing student and tutor perceptions regarding feedback’, Pharmacy Education, vol. 15, no. 1, pp. 290–296.

Manogue, M, Brown, G & Foster, H 2001, ‘Clinical assessment of dental students: values and practices of teachers in restorative dentistry’, Medical Education, vol. 35, pp. 364–370.

Masella, R & Thompson, T 2004, ‘Dental education and evidence-based educational best practices: bridging the great divide’, Journal of Dental Education, vol. 68, no. 12, pp. 1266–1271.

McCracken, G 1988, The long interview (Vol. 13), Sage, London, UK.

McDonald, H, Browne, J, Perruzza, J, Svarc, R, Davis, C, Adams, K & Palermo, C 2018, ‘Transformative effects of Aboriginal health placements for medical, nursing, and allied health students: A systematic review’, Nursing and Health Sciences, vol. 20, pp. 154–164.

Miles, M & Huberman, A 1994, Qualitative data analysis: an expanded sourcebook, 2nd edn., Sage, London, UK.

Nicol, D & Macfarlane-Dick, D 2006, ‘Formative assessment and self-regulated learning: a model and seven principles of good feedback practice’, Studies in Higher Education, vol. 31, no. 2, pp. 199–218.

Park, R, Susarla, S, Howell, T & Karimbux, N 2009, ‘Differences in clinical grading associated with instructor status’, European Journal of Dental Education, vol. 13, pp. 31–38.

Price, M, Rust, C, O’Donovan, B & Handley, K 2012, Assessment literacy—the foundation for improving student learning, Oxford Brookes University Press, Oxford, UK.

Ramsden, P 2003, Learning to teach in higher education, 2nd edn., RoutledgeFalmer, London, UK.

Saldaña, J 2013, The coding manual for qualitative researchers, Sage, London, UK.

Shute, V 2008, ‘Focus on formative feedback’, Review of Educational Research, vol. 78, no. 1, pp. 153–189.

Strohschein, J, Hagler, P & May, L 2002, ‘Assessing the need for change in clinical education practices’, Physical Therapy, vol. 82, no. 2, pp. 160–172.

Taylor, C, Grey, N & Satterthwaite, J 2013, ‘Assessing the clinical skills of dental students: a review of the literature’, Journal of Education and Learning, vol. 2, no. 1, pp. 20–31.

Wiley, K & Gardner, A 2010, ‘Improving the standard and consistency of multi-tutor grading in large classes’, in N Parker & K Waite (eds.), Proceedings of the Australian Conference on Assessment in Higher Education, University of Technology Sydney, Sydney, Australia.

Wilson, J, Sweet, J & Pugsley, L 2015, ‘Developmental guidelines for good chair-side teaching—a consensus report from two conferences’, European Journal of Dental Education, vol. 19, pp. 185–191.

Youngson, C, Fox, K, Boyle, E, Blundell, K & Baker, R 2008, ‘Improving the quality of clinical education teaching in a restorative clinic using student feedback’, European Journal of Dental Education, vol. 12, pp. 75–79, doi: 10.1111/j.1600-0579.2007.00486.