Vol. 7 | No. 1 | 2024
Research & Evaluation article (double blind peer-review)
Copyright is held by the authors with the first publication rights granted to the journal. Conditions of sharing are defined by the Creative Commons License Attribution-ShareAlike-NonCommercial 4.0 International
Citation: Janssen, A, Donnelly, C, Murphy, AD, Trinh, B, Moujaber, T, Shah, K, Harnett, P & Shaw, T 2024, ‘Feasibility study of personalised online learning program for junior doctors aligned with authentic workplace practice’, Health Education in Practice: Journal of Research for Professional Learning, vol. 7, no. 1. https://doi.org/10.33966/hepj.7.1.18088
Introduction: Online learning is an accessible method that enables medical practitioners to undertake training to develop new, and reinforce existing, knowledge and skills. Early career medical practitioners may find engaging in online learning particularly beneficial, as they have a stronger motivation to refine knowledge and skills than their more senior peers. One under-explored mechanism to strengthen the delivery of online learning for medical practitioners is the use of clinical data to tailor learning so it is closely aligned with the individual health professional’s clinical practice.
Methodology: This study aimed to evaluate the feasibility of personalising an online learning program for early career doctors working in oncology using electronic medical record (EMR) data. An online program was developed by clinical domain experts that could be triggered by pathology orders and/or results, so that learning was delivered close to when the test was ordered in clinical practice. The program content was designed to cover three categories: (1) test ordering, (2) interpreting test results, and (3) patient management. Early career medical practitioners undergoing oncology training were recruited to participate in the study. The program was evaluated using metrics captured by the online learning platform and a post-program survey.
Results: All early career medical practitioners eligible to participate in the study consented to participate (n=5). It was feasible to personalise the online program using pathology ordering data. Further, analysis of survey responses indicated that personalising an online learning program using EMR data was acceptable to early career doctors and facilitated engagement with the course.
Conclusion: Personalising an online learning program for early career medical practitioners in cancer care using electronic health-record data is both feasible and acceptable.
Keywords: electronic health records, digital health, oncology, health professions education, health informatics
1 The University of Sydney, Faculty of Medicine and Health
2 Crown Princess Mary Cancer Centre, Western Sydney Local Health District
Corresponding author: Anna Janssen, Level 2, Charles Perkins Centre, The University of Sydney, NSW 2006, Australia, [email protected]
INTRODUCTION

Lifelong learning is foundational for medical professionals, enabling them to stay up to date with the latest evidence in order to deliver high-quality care and reinforce their existing knowledge (Institute of Medicine 2009). Engaging in both mandated and voluntary learning activities can take up considerable time for medical practitioners. A recent study of early career doctors observed and categorised the time spent on each workplace task, finding that almost two hours a day was spent on educational activities (Chaiyachati et al. 2019). The prominent role of continuing education for medical professionals who have completed their training has also been explored in the literature (Lloyd-Williams et al. 2006).
Learning activities can take a range of forms, including online learning activities, which are becoming more widely available to medical professionals (Ruiz et al. 2006). The popularity of online learning in medical education is unsurprising: it makes training available to participants when and where they wish to engage with it, reduces barriers to accessing information, and potentially enables more innovative teaching approaches than traditional face-to-face methods (Curran et al. 2017). Online learning also enables the use of elements such as spacing and repetition, which have been shown to be effective for promoting sustained knowledge retention in learners (Kerfoot 2010; Kerfoot et al. 2007; Phillips et al. 2019).
Although much of the learning medical professionals engage in is undertaken in the workplace (Sehlbach et al. 2020), there is a notable gap in research on aligning learning activities with what happens in the clinical environment. The literature has shown that adult learners value training and education that is authentic and aligned with their experiences in the real world (Lombardi 2007). There is a unique opportunity in the health sector to explore the use of electronic health data sources as a tool for personalising education to clinical practice experiences. Over the last decade the digitisation of healthcare has led to the collection of ever-greater quantities of electronic health data (Ambinder 2005), mirroring the growth of data collection in other sectors, such as manufacturing, marketing, accounting and finance (Wang et al. 2015). One major source of electronic health data is electronic medical records (EMRs): longitudinal electronic records of patient health information, including patient demographics, progress notes, medications and laboratory results, generated by one or more encounters in any health care delivery setting (Menachemi & Collum 2011). Data from EMRs have been used for secondary applications, such as research using population-wide data on cancer patient experiences and therapy outcomes (Berger et al. 2016), and quality improvement activities such as toxicity monitoring and symptom management of chemotherapy treatments (Brockstein et al. 2011), but there are few examples of their use for personalising the medical education of healthcare providers.
A number of models and frameworks can provide guidance on, and show the value of, strengthening education for the health professions using electronic health data. Of particular note is the Master Adaptive Learner (MAL) framework, which provides a model for understanding the complex process medical practitioners use to engage in effective lifelong learning (Cutrer et al. 2017). The MAL framework emphasises the need for adaptation in developing new clinical skills and can be used to guide skills acquisition by medical practitioners. It consists of four stages: (1) planning, (2) learning, (3) assessing, and (4) adjusting. The framework has been applied to understand the strategies medical professionals use to plan their learning and to identify barriers that can inhibit learning (Regan et al. 2019).
Using EMR data to develop personalised learning for health professionals may support the adaptive processes described in the MAL framework. Furthermore, models of online learning such as the Knowledge, Process, Practice (KPP) model can help explain the educational design considerations of online programs that lead to improvements in clinical practice (Shaw et al. 2015). The KPP model has three components and is built around principles such as matching new knowledge to the experiences of professionals, using real-world experiences to contextualise knowledge, and enabling reflective and collaborative learning.
The study described in this manuscript aims to evaluate the feasibility of personalising an online learning program for early career doctors working in oncology using EMR data generated from routine clinical practice. It further aims to explore the acceptability of the online learning program to medical practitioners.
STUDY DESIGN
A pilot study was undertaken, as this design was suitable for evaluating both the feasibility and acceptability of the online program. Data collected included open responses from an online survey and semi-structured interview data, as well as reports extracted from the EMR and metrics collected by the online learning platform.
Participants, Study Setting and Intervention Design
The study was undertaken within the oncology department at two public metropolitan hospitals in Sydney, Australia. The intervention ran for five weeks between November and December 2019.
Potential participants were early career doctors who had graduated from their medical degrees within the last two years and were working in the oncology ward at the two study sites. At the study sites there were five potential participants undertaking their oncology term during the study period, and all provided written informed consent to participate in the study.
The program focused on a learning curriculum that consisted of three categories: (1) test ordering, (2) interpreting test results, and (3) patient management. A working group, consisting of clinical domain experts, educational designers and researchers, was established to develop questions for the online learning program. Guided by the curriculum, the working group identified questions that were both meaningful to clinical practice and appropriate to the doctors’ level of training, while also ensuring it was feasible to personalise the questions with EMR data. A total of twelve questions were used in the pilot program. The program was delivered using an online microlearning platform (Qstream, Burlington, Massachusetts) that sends questions to participants via an email containing a hyperlink to the website platform, or via a smartphone application, depending on the participant’s choice. The microlearning program consists of multiple-choice questions, each comprising a clinical scenario, multiple response options, and expert feedback built around a take-home message. By default, the microlearning platform delivers a small bundle of two to three questions to learners at a time, so it takes only a few minutes to respond to the bundle. Once learners have responded to their assigned questions, they are provided with detailed feedback on why each response was correct or incorrect, reinforcing the take-home message. A question is repeated after several days if the participant answers it incorrectly.
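To make these delivery mechanics concrete, the following is a minimal sketch of the bundle-and-repeat behaviour described above. The bundle size matches the two-to-three questions the platform delivers by default, but the exact repeat interval and all names in the sketch are illustrative assumptions, not Qstream’s actual internals:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative values: bundles of two to three questions, with an
# incorrectly answered question repeated "after several days".
BUNDLE_SIZE = 3
REPEAT_DELAY = timedelta(days=4)  # assumed repeat interval

@dataclass
class ScheduledQuestion:
    question_id: int
    due: date

def deliver_bundle(schedule: list[ScheduledQuestion], today: date) -> list[ScheduledQuestion]:
    """Return the small bundle of questions due for delivery today."""
    due_now = [q for q in schedule if q.due <= today]
    return due_now[:BUNDLE_SIZE]

def record_response(schedule: list[ScheduledQuestion],
                    question: ScheduledQuestion,
                    correct: bool, today: date) -> None:
    """Retire a correctly answered question; reschedule an incorrect one."""
    schedule.remove(question)
    if not correct:
        schedule.append(ScheduledQuestion(question.question_id,
                                          today + REPEAT_DELAY))
```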
In this program, the delivery of microlearning questions was personalised for each participant using data extracted from the EMR. EMR data related to pathology test ordering was chosen to personalise the online learning program because of the structured nature of pathology data. When designing the program, the working group reviewed EMR pathology reports to determine appropriate triggers for each question. This involved reviewing the data to determine abnormal pathology test value thresholds, both to ensure the clinical relevance of questions and to minimise question over-triggering for individual participants. These thresholds did not always concur with the laboratory thresholds for abnormal test result values.
During the intervention period, a report was extracted twice a week from the EMR. The report was prepared to protect patient privacy, with all identifiable data anonymised by the pathology laboratory before it was provided to the research team. This anonymised report contained data for each participant, identifying whether they had ordered one of the relevant pathology tests in the EMR since the last report extraction (or within the prior three days for the first report). If a relevant test had been ordered, the data was manually reviewed to determine whether the test result met the eligibility threshold to trigger a question in the online program. Participants received at most one question per report, up to twice a week, for the five weeks of the intervention.
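As an illustration of how such a report could drive question assignment, the sketch below encodes two of the Table 1 thresholds as trigger predicates and skips questions a participant has already completed (mirroring the behaviour noted in the Results). The row fields (`participant`, `test`, `value`) are assumptions; the study performed this review manually and its extract schema is not described at this level of detail:

```python
# Trigger thresholds for two Table 1 questions (values in mmol/L, as in
# Table 1). The report row schema here is an assumption for illustration.
TRIGGERS = {
    "Q1_hyperkalaemia": lambda row: row["test"] == "potassium" and row["value"] > 6.0,
    "Q3_hyponatraemia": lambda row: row["test"] == "sodium" and row["value"] <= 130.0,
}

def questions_to_assign(report_rows: list[dict], completed: set[tuple]) -> set[tuple]:
    """Map anonymised pathology rows to (participant, question) assignments,
    skipping questions the participant has already completed."""
    assigned = set()
    for row in report_rows:
        for question_id, meets_threshold in TRIGGERS.items():
            candidate = (row["participant"], question_id)
            if candidate not in completed and meets_threshold(row):
                assigned.add(candidate)
    return assigned

# Example: one sodium result below the 130 mmol/L threshold
rows = [{"participant": "P2", "test": "sodium", "value": 128.0}]
print(questions_to_assign(rows, completed=set()))  # {('P2', 'Q3_hyponatraemia')}
```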
Data Collection and Analysis
Quantitative data consisted of metrics captured by the online learning platform on participant progress through the microlearning course, together with EMR data. The EMR data was analysed to determine how frequently certain pathology tests occurred and how these data aligned with the questions participants received during the online program. In addition, metrics collected automatically by the online learning platform were analysed to determine participant engagement with the online program. This analysis included the number of assigned questions each participant completed, the accuracy of their responses, and the time that elapsed between being allocated a question and answering it.
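A minimal sketch of how these engagement metrics could be computed from the platform’s exported records follows. The record fields (`allocated`, `first_attempt`, `correct_first_try`) are assumed names for illustration, not the platform’s actual export schema:

```python
from datetime import datetime

def engagement_summary(records: list[dict]) -> dict:
    """Summarise completion, first-attempt accuracy and response latency."""
    attempted = [r for r in records if r["first_attempt"] is not None]
    summary = {
        "questions_allocated": len(records),
        "questions_attempted": len(attempted),
        "correct_on_first_attempt": sum(1 for r in attempted if r["correct_first_try"]),
    }
    if attempted:
        hours = [(r["first_attempt"] - r["allocated"]).total_seconds() / 3600
                 for r in attempted]
        summary["mean_response_hours"] = sum(hours) / len(hours)
    return summary

# Example record based on a Table 2 row (P2, question 23); times of day
# and the correctness flag are illustrative assumptions.
records = [{"allocated": datetime(2019, 12, 10, 9, 0),
            "first_attempt": datetime(2019, 12, 11, 10, 56),
            "correct_first_try": True}]
print(engagement_summary(records))
```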
Qualitative data consisted of structured and free-text responses collected via an anonymous and voluntary online survey at the end of the intervention. The survey consisted of five structured Likert-style questions and five open-response questions. Content from this survey was analysed to determine participant engagement with the content of the intervention, the online learning platform and the personalisation component.
RESULTS

A total of five early career doctors consented to participate across the two study sites, representing all the doctors who were eligible to participate during the intervention period. There were three male participants and two female participants.
Table 1: Overview of Qstream clinical questions and the EMR data trigger.

| Question Number | Question Topic | EMR Data Trigger | Triggered by EMR Data During Intervention |
|---|---|---|---|
| 1 | Management of critical hyperkalaemia | Potassium > 6.0 mmol/L | |
| 2 | Management of hypernatraemia | Sodium > 145 mmol/L | |
| 3 | Managing hyponatraemia | Sodium ≤ 130 mmol/L | Yes |
| 4 | Causes of hyperbilirubinaemia | Bilirubin > 30 µmol/L | Yes |
| 5 | Immunotherapy-related adverse events | Alanine aminotransferase (ALT) or aspartate aminotransferase (AST) > 100 IU/L | Yes |
| 6 | Recognition of hypercalcaemia | Corrected calcium > 2.75 mmol/L | |
| 7 | Clinical consequences of hypomagnesaemia | Magnesium < 0.60 mmol/L | Yes |
| 8 | Management of anaemia | Haemoglobin < 70 g/L | Yes |
| 9 | Febrile neutropenia | Absolute neutrophils < 1.0 × 10⁹/L | Yes |
| 10 | Consenting for blood transfusions | Blood group ordered | |
| 11 | DVT prophylaxis in the setting of thrombocytopenia | Platelets < 50 × 10⁹/L | Yes |
| 12 | Management of thrombocytopenia | Platelets < 50 × 10⁹/L | Yes |
Data from the EMR was successfully extracted twice every week for the duration of the intervention period.
Of the twelve questions that could have been sent to participants during the intervention period, the EMR data triggered nine for at least one participant. Refer to Table 1 for an overview of the questions and the topics each covered. Participants triggered an average of six questions, with a range of three to nine questions triggered over the intervention period. On average, participants triggered two questions each time the EMR extraction report was run, with a range of zero to twelve questions triggered per report. The number of questions triggered by each twice-weekly report does not account for questions a participant had already completed in the online program; a completed question was not re-assigned even when it appeared in the report.
Over the intervention period, participants received an average of five questions, with a minimum of two and a maximum of seven questions received. A question could be reallocated to a participant if they did not attempt it the first time it was triggered. The average number of unique questions received by participants over the intervention period was four, with a range of two to five. Refer to Figure 1 for an overview of the questions participants were enrolled in during the intervention and their responses.
Figure 1: Qstream questions that participants were enrolled in, including the date of enrolment, and participant responses. Once enrolled in a question, participants could answer correctly on their first attempt, answer incorrectly on their first attempt, or not respond. For questions that participants responded to, the question was repeated, and participants could again answer correctly or incorrectly, or not respond to the repeat. If a participant did not respond to a question, they could be enrolled in it again later in the pilot if the EMR report indicated they had triggered it again.
Analysis of metrics from the learning platform indicated that participants attempted an average of three questions over the intervention period, with a minimum of zero questions attempted and a maximum of four attempted. On average participants answered two questions correctly on the first attempt, with a range of one to three.
Analysis of the time that elapsed between when a participant was allocated a question and when they made their first attempt to answer it indicated that the majority of questions were answered by participants between 24 and 48 hours after allocation. This was followed by questions answered by participants on the same day as the allocation, with the remainder of questions being answered by participants more than 72 hours after allocation. Refer to Table 2 for a breakdown of the time between when a participant was enrolled in a question, and their first attempt at responding to it.
Table 2: Questions participants were enrolled in, with the date of allocation, date of first attempt and response time.

| Participant | Question ID | Allocation | First Attempt | Response Time |
|---|---|---|---|---|
| P1 | 12 | 19/12/2019 | - | - |
| P1 | 27 | 17/12/2019 | - | - |
| P2 | 14 | 28/11/2019 | - | - |
| P2 | 14 | 11/12/2019 | 13/12/2019 | 1d 22h 4m |
| P2 | 20 | 21/11/2019 | 24/11/2019 | 3d 3h 59m |
| P2 | 23 | 10/12/2019 | 11/12/2019 | 1d 1h 56m |
| P2 | 25 | 26/11/2019 | - | - |
| P2 | 25 | 3/12/2019 | 5/12/2019 | 2d 5h 44m |
| P3 | 20 | 21/11/2019 | 22/11/2019 | 19h 51m |
| P3 | 26 | 19/12/2019 | - | - |
| P3 | 37 | 26/11/2019 | - | - |
| P3 | 37 | 3/12/2019 | 4/12/2019 | 19h 35m |
| P4 | 12 | 28/11/2019 | 29/11/2019 | 1d 1h 25m |
| P4 | 20 | 21/11/2019 | - | - |
| P4 | 23 | 17/12/2019 | - | - |
| P4 | 36 | 19/12/2019 | - | - |
| P4 | 40 | 26/11/2019 | 29/11/2019 | 3d 1h 22m |
| P5 | 14 | 26/11/2019 | - | - |
| P5 | 23 | 10/12/2019 | 11/12/2019 | 16h 16m |
| P5 | 25 | 21/11/2019 | - | - |
| P5 | 25 | 17/12/2019 | 18/12/2019 | 19h 54m |
| P5 | 37 | 28/11/2019 | - | - |
| P5 | 37 | 11/12/2019 | 12/12/2019 | 1d 0h 52m |
| P5 | 40 | 3/12/2019 | 4/12/2019 | 1d 5h 39m |
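The latency categories reported above could be derived from the allocation and first-attempt timestamps in Table 2 with a simple bucketing function. The sketch below approximates ‘same day’ as within 24 hours of allocation, which is an assumption about how the categories were defined rather than the study’s stated method:

```python
from datetime import timedelta

def latency_bucket(delay: timedelta) -> str:
    """Bucket the delay between question allocation and first attempt."""
    if delay < timedelta(hours=24):
        return "same day"            # approximated as within 24 hours
    if delay < timedelta(hours=48):
        return "24-48 hours"
    if delay < timedelta(hours=72):
        return "48-72 hours"
    return "more than 72 hours"

# Example: P2's response to question 20 (3d 3h 59m) falls in the last bucket
print(latency_bucket(timedelta(days=3, hours=3, minutes=59)))  # more than 72 hours
```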
A total of four participants (80%) responded to the post-intervention online survey (a copy of the survey is available in the Appendix). All respondents indicated that they found the online program engaging and would recommend it to a colleague. Regarding the content of the online program, all respondents agreed or strongly agreed with the statement that ‘the program contained realistic content that was relevant to their clinical practice’. An equal number of respondents completed the program using the smartphone application (n=2) and via email (n=2), and all indicated they completed the questions when they had time during the workday. All respondents agreed or strongly agreed that the duration of the microlearning course suited their needs, and half of respondents agreed that they would have liked to receive more questions.
Survey respondents were also asked to provide feedback about the acceptability of personalising the online program using EMR data. All respondents agreed or strongly agreed with the statement that ‘the questions in the online program felt linked to their clinical practice’, and with the statement that ‘the program felt engaging because it used clinical data relevant to their organisation’. All respondents indicated that the program was a helpful means of feeding back data on their patient presentations. Three respondents agreed or strongly agreed with the statement that the data-personalised program encouraged them to engage in reflective practice activities, such as reviewing their own patient records in the EMR.
DISCUSSION

Findings from this study indicate that it is feasible to personalise an online learning program covering pathology test ordering, test result interpretation and oncology patient management using data extracted from an EMR. Furthermore, the findings indicate that it is acceptable for early career doctors to undertake training personalised in this manner. This aligns with and builds on previous research indicating that health professionals across a range of specialties are interested in the use of EMR data to personalise lifelong learning activities (Shaw et al. 2019). Regarding non-attempts at questions, findings from this study suggest there was a greater number of non-attempts at the start of the program than at the end. The reason for this is unclear, but it may reflect participants being busy orienting to their new positions early in the term and having less capacity to engage in the program.
Furthermore, this study indicates that personalising an online learning program using EMR data is acceptable to early career doctors, and facilitates engagement with the microlearning course. Interventions like the one described in this manuscript may be a viable option for health professionals, and may meet ongoing professional development requirements of regulatory bodies, which increasingly focus on the use of routinely collected health data in continuing medical education (The Royal Australasian College of Physicians 2020). However, it remains unclear whether this personalisation is more engaging than other means of delivering online learning. Existing research on online learning has shown that these programs can be effective for engaging learners, particularly programs that incorporate elements such as secure internet connectivity, collaboration among the learning community, utilising feedback, critical question-asking and matching learning styles (Czerkawski & Lyman III 2016). A systematic review of online learning platforms demonstrated that programs that used a ‘space and repeat’ approach to delivering learning content improved health professionals’ knowledge, and were suitable for disseminating best-practice guidelines. In addition, the health professionals retained knowledge at a greater level than when traditional education methods were used (Phillips et al. 2019).
Incorporating electronic health data into learning activities may be a valuable tool for improving the quality of data capture, because it provides a secondary use of the data that is potentially valuable to health professionals. Evidence indicates that medical professionals dedicate considerable time to administrative tasks, such as entering patient data into EMRs, particularly when the systems have been newly implemented and health professionals have yet to familiarise themselves with optimal navigation of the platforms (Baumann et al. 2018). However, the extent to which they have access to these data for secondary applications varies across provider and organisation types (Orchard et al. 2009).
The use of EMR data to personalise online learning enables medical professional training to be tailored to clinical encounters in the workplace. Personalisation of learning may act as an enabler of adaptive learning approaches, which have been identified as learning strategies that are characteristic of master learners (Cutrer et al. 2017). Coupled with this, the incorporation of EMR data into learning may act as an enabler of practice reflection due to the immediacy of prior clinical practice to a learning opportunity. Approaches to using EMR data that may support workplace-based learning include using platforms that allow reflection on current performance compared to historic performance, or that enable reflections aligned with defined standards (Sebok-Syer et al. 2019).
A limitation of this study is that the sample size was small and the study duration was relatively short due to the length of the oncology term. Although the sample included all who were eligible to participate during the study period, the ability to draw broad conclusions from these data is limited. An additional limitation was the EMR data set used to trigger the questions. Pathology ordering data was used, but some participants were primarily caring for patients receiving end-of-life care, for whom testing may be minimised for patient comfort. Further research should consider reproducing this study with a larger number of medical professionals to obtain more nuanced data on which aspects of the program are acceptable. There would also be an opportunity for future research to evaluate whether personalisation of a program leads to a change in clinical outcomes, such as a reduction in inappropriate test ordering in the EMR. Future research could also explore different EMR data for triggering questions, such as prescribing data, which may align even more closely with clinical practice.
CONCLUSION

Personalising an online learning program for early career doctors using EMR data is feasible, provided there is access to structured data that can meaningfully indicate a specific clinical encounter for each participant. Additionally, the personalised online program was acceptable to early career doctors. The findings of this study warrant further exploration of timely and relevant learning linked to the care delivered by individual medical professionals in other areas where EMR data is collected routinely. There are three considerations when designing a personalised microlearning course of this nature: (1) the strength of the core content in the online program, (2) the design and technology used in the online learning platform and its suitability for medical professionals, and (3) the availability and structure of the EMR data.
Future studies could explore whether information could be drawn from the EMR on the appropriateness of treatment options chosen by doctors and whether this information could be fed back as part of the experience, to further enhance and reinforce learning.
Conflict of Interest
This work received funding from the Sydney West Translational Cancer Research Centre which was funded by Cancer Institute NSW grant ID 15/TRC/1-01. Funding provided salary support for authors CD and KS. Author AJ is undertaking a postdoctoral research fellowship that is funded through the Digital Health CRC (Cooperative Research Centre). The DHCRC is established and supported under the Australian Government’s Cooperative Research Centres Program.
Authors’ contributions
All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by AJ and CD. The first draft of the manuscript was written by AJ and all authors commented on subsequent versions of the manuscript. All authors read and approved the final manuscript.
Ethics approval
Permission to conduct this study was granted by the Human Research Ethics Committee of the Western Sydney Local Health District [5803].
REFERENCES

Ambinder, EP 2005, ‘Electronic health records’, Journal of Oncology Practice, vol. 1, no. 2, p. 57.

Baumann, LA, Baker, J & Elshaug, AG 2018, ‘The impact of electronic health record systems on clinical documentation times: a systematic review’, Health Policy, vol. 122, no. 8, pp. 827-36. https://doi.org/10.1016/j.healthpol.2018.05.014

Berger, ML, Curtis, MD, Smith, G, Harnett, J & Abernethy, AP 2016, ‘Opportunities and challenges in leveraging electronic health record data in oncology’, Future Oncology, vol. 12, no. 10, pp. 1261-74. https://doi.org/10.2217/fon-2015-0043

Brockstein, B, Hensing, T, Carro, GW, Obel, J, Khandekar, J, Kaminer, L, Van De Wege, C & de Wilton Marsh, R 2011, ‘Effect of an electronic health record on the culture of an outpatient medical oncology practice in a four-hospital integrated health care system: 5-year experience’, Journal of Oncology Practice, vol. 7, no. 4, pp. e20-4. https://doi.org/10.1200/JOP.2011.000260

Chaiyachati, KH, Shea, JA, Asch, DA, Liu, M, Bellini, LM, Dine, CJ, Sternberg, AL, Gitelman, Y, Yeager, AM & Asch, JM 2019, ‘Assessment of inpatient time allocation among first-year internal medicine residents using time-motion observations’, JAMA Internal Medicine, vol. 179, no. 6, pp. 760-7.

Curran, V, Matthews, L, Fleet, L, Simmons, K, Gustafson, DL & Wetsch, L 2017, ‘A review of digital, social, and mobile technologies in health professional education’, The Journal of Continuing Education in the Health Professions, vol. 37, no. 3, pp. 195-206. https://doi.org/10.1097/CEH.0000000000000168

Cutrer, WB, Miller, B, Pusic, MV, Mejicano, G, Mangrulkar, RS, Gruppen, LD, Hawkins, RE, Skochelak, SE & Moore Jr, DE 2017, ‘Fostering the development of Master Adaptive Learners: a conceptual model to guide skill acquisition in medical education’, Academic Medicine, vol. 92, no. 1, pp. 70-5.

Czerkawski, BC & Lyman III, EW 2016, ‘An instructional design framework for fostering student engagement in online learning environments’, TechTrends, vol. 60, no. 6, pp. 532-9. https://doi.org/10.1007/s11528-016-0110-z

Institute of Medicine (US) Committee on Planning a Continuing Health Care Professional Education Institute 2009, ‘Continuing professional development: building and sustaining a quality workforce’, in Redesigning continuing education in the health professions, National Academies Press, Washington DC.

Kerfoot, BP 2010, ‘Adaptive spaced education improves learning efficiency: a randomized controlled trial’, The Journal of Urology, vol. 183, no. 2, pp. 678-81.

Kerfoot, BP, DeWolf, WC, Masser, BA, Church, PA & Federman, DD 2007, ‘Spaced education improves the retention of clinical knowledge by medical students: a randomised controlled trial’, Medical Education, vol. 41, no. 1, pp. 23-31.

Lloyd-Williams, M, Kite, S, Hicks, F, Todd, J, Ward, J & Barnett, M 2006, ‘Continuing professional development (CPD) in palliative medicine: a survey’, Medical Teacher, vol. 28, no. 2, pp. 171-4.

Lombardi, MM 2007, ‘Authentic learning for the 21st century: an overview’, Educause Learning Initiative, vol. 1, no. 2007, pp. 1-12.

Menachemi, N & Collum, TH 2011, ‘Benefits and drawbacks of electronic health record systems’, Risk Management and Healthcare Policy, vol. 4, p. 47.

Orchard, MC, Dobrow, MJ, Paszat, L, Jiang, H & Brown, P 2009, ‘Access to electronic health records by care setting and provider type: perceptions of cancer care providers in Ontario, Canada’, BMC Medical Informatics and Decision Making, vol. 9, no. 1, p. 38. https://doi.org/10.1186/1472-6947-9-38

Phillips, JL, Heneka, N, Bhattarai, P, Fraser, C & Shaw, T 2019, ‘Effectiveness of the spaced education pedagogy for clinicians’ continuing professional development: a systematic review’, Medical Education, vol. 53, no. 9, pp. 886-902.

Regan, L, Hopson, LR, Gisondi, MA & Branzetti, J 2019, ‘Learning to learn: a qualitative study to uncover strategies used by Master Adaptive Learners in the planning of learning’, Medical Teacher, vol. 41, no. 11, pp. 1252-62.

Ruiz, JG, Mintzer, MJ & Leipzig, RM 2006, ‘The impact of e-learning in medical education’, Academic Medicine, vol. 81, no. 3, pp. 207-12.

Sebok-Syer, SS, Goldszmidt, M, Watling, CJ, Chahine, S, Venance, SL & Lingard, L 2019, ‘Using electronic health record data to assess residents’ clinical performance in the workplace: the good, the bad, and the unthinkable’, Academic Medicine, vol. 94, no. 6, pp. 853-60.

Sehlbach, C, Teunissen, PW, Driessen, EW, Mitchell, S, Rohde, GG, Smeenk, FW & Govaerts, MJ 2020, ‘Learning in the workplace: use of informal feedback cues in doctor-patient communication’, Medical Education, vol. 54, no. 9, pp. 811-20.

Shaw, T, Barnet, S, Mcgregor, D & Avery, J 2015, ‘Using the knowledge, process, practice (KPP) model for driving the design and development of online postgraduate medical education’, Medical Teacher, vol. 37, no. 1, pp. 53-8.

Shaw, T, Janssen, A, Crampton, R, O'Leary, F, Hoyle, P, Jones, A, Shetty, A, Gunja, N, Ritchie, AG & Spallek, H 2019, ‘Attitudes of health professionals to using routinely collected clinical data for performance feedback and personalised professional development’, Medical Journal of Australia, vol. 210, pp. S17-21.

The Royal Australasian College of Physicians 2020, 2020 MyCPD Framework, RACP, NSW. https://www.racp.edu.au/docs/default-source/fellows/cpd/2020-mycpd-framework.pdf?sfvrsn=ee5ae31a_4

Wang, X, White, L & Chen, X 2015, ‘Big data research for the knowledge economy: past, present, and future’, Industrial Management & Data Systems, vol. 115, no. 9. https://doi.org/10.1108/IMDS-09-2015-0388