The authors examined the operational meaning of the 48 items that state the accreditation standards for teaching, learning, and evaluation in medical school, and determined the extent to which these standards were applied by schools and by on-site evaluators for 59 programs surveyed by the Liaison Committee on Medical Education (LCME) in 1994-1996. In this study, "application" meant that evidence was offered, not necessarily that it proved compliance with the standard. The data sources employed were the medical education databases and self-studies prepared by schools undergoing accreditation surveys, and the reports prepared by ad hoc teams of surveyors. The frequency with which evidence of compliance was offered by the schools and cited by evaluators was determined for each of the 48 accreditation requirements. In addition, the authors compared the patterns of surveyors' concerns about noncompliance at schools surveyed during 1984-1986 and at those visited during 1994-1996. In 1994-1996, schools addressed 42 of the 48 accreditation requirements in 90% or more of instances. The areas of particularly low attention dealt with the definition and communication of educational objectives (47% of schools provided evidence); faculty authority and control of academic programs in clinical affiliates (12%); and the faculty's commitment to being effective teachers and their understanding of pedagogy, curricular design, and methods of evaluation (8%). Survey teams, in contrast, accounted in their reports for only 26 (55%) of the standards during the same time period.
Among the standards least frequently addressed were the definition and communication of educational objectives by schools (accounted for in 59% of the reports); assessment of students' problem-solving ability (51%); comparability of educational experiences and student evaluation across dispersed teaching sites (49%); faculty understanding of pedagogy, curriculum construction, and the evaluation of students (8%); faculty authority and control of academic programs in clinical affiliates (7%); and knowledge of the administration and faculty about methods for measuring student performance (2%). Over the past decade, surveyors' most frequently cited concerns about schools' noncompliance with accreditation standards dealt with student counseling and health services, institutional financial and space/facilities resources, faculty issues, and vacant decanal and department chair positions. Next in order were concerns about various aspects of the educational program leading to the MD degree. Among the high-profile concerns about the educational program that increased significantly over the decade were those about curriculum design, management, and evaluation; primary/ambulatory care experiences; and student advancement policies and due-process issues. Schools paid close attention to most of the 48 standards, in large part because they were prompted by the formatting of the medical education database and self-study guidelines. In those instances of lesser attention, the fault lies as much or more with ambiguities in the construction and meaning of the standards as with institutional laxity. The surveyors' inattention to accreditation standards is more troubling. In some cases it can be attributed to uncertainties about the meaning of the requirements and the quantity of evidence that needs to be audited; or surveyors may be comfortable reaching a "substantial compliance" threshold without adducing all the evidence.
The authors argue that many of the standards given scant attention on surveys are important to educational program development and quality control. The LCME will need to consider whether more prominent definition and highlighting should be given to neglected standards, or whether some of the requirements are at the margin as quality indicators. A planned survey of communities of interest (educators, practitioners, students, graduates, and residency program directors, among others) may help confirm