Evaluation is a vital part of any learning activity and is essential for optimizing and improving educational programmes. It should be considered and prioritized before any learning activity is implemented. However, comprehensive programme evaluation is rarely conducted, and there are numerous barriers to high-quality evaluation. This review provides a framework for conducting outcome evaluation of simulation-based education programmes in low- and middle-income countries (LMICs). The basis of evaluation, including the core ideas of theory, purpose, and structure, is outlined, followed by an examination of the levels and healthcare applications of the Kirkpatrick model of evaluation. Methods of evaluating simulation-based education in LMICs are then discussed through the lens of a successful surgical simulation programme in Myanmar, a lower-middle-income country. The programme involved the evaluation of 11 courses over 4 years and demonstrated evaluation at the highest level of the Kirkpatrick model. Reviewing this programme provides a bridge between evaluation theory and practical implementation. A range of evaluation methods is outlined, including surveys, interviews, and clinical outcome measurement. The importance of a mixed-methods approach, enabling triangulation of quantitative and qualitative findings, is highlighted, as are methods of analysing data, including statistical and thematic analysis. Finally, the issues and challenges of conducting evaluation are considered, together with strategies to overcome these barriers. Ultimately, this review informs readers about evaluation theory and methods, grounded in a practical application, to enable other educators in low-resource settings to evaluate their own activities.
Keywords: global health; global surgery; programme evaluation; simulation-based education; surgical education.
© 2024 The Authors. ANZ Journal of Surgery published by John Wiley & Sons Australia, Ltd on behalf of Royal Australasian College of Surgeons.