Modeling will continue to be used to address important issues in clinical practice and health policy that have not been adequately studied in high-quality clinical trials. The apparent ad hoc nature of models belies the methodologic rigor applied to create the best models in cancer prevention and care. Models have progressed from simple decision trees to extremely complex microsimulation analyses, yet all are built using a logical process based on objective evaluation of the path between intervention and outcome. The best modelers take great care to justify both the structure and content of the model and then test their assumptions using a comprehensive process of sensitivity analysis and model validation (a minimal illustrative sketch appears at the end of this section). Like clinical trials, models sometimes produce results that are later found to be invalid as other data become available.

When weighing the value of models in health care decision making, it is reasonable to consider the alternatives. In the absence of data, clinical policy decisions are often based on the recommendations of expert opinion panels or on poorly defined notions of the standard of care or medical necessity. Because such decision making rarely entails the rigorous process of data collection, synthesis, and testing that is the core of well-conducted modeling, it is usually not possible for external audiences to examine the assumptions and data used to derive the decisions.

One of the modeler's most challenging tasks is to make the structure and content of the model transparent to the intended audience. The purpose of this article is to clarify the process of modeling so that readers of models are more knowledgeable about their uses, strengths, and limitations.
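To make the idea of a simple decision tree and a one-way sensitivity analysis concrete, the sketch below compares two hypothetical strategies ("screen" vs. "no screen") by expected utility and then varies disease prevalence to see where the preferred strategy changes. It is only an illustration of the general technique described above; every probability and utility value is invented for demonstration and is not drawn from the article or from any study.

```python
# Illustrative sketch only: a two-strategy decision tree with hypothetical
# probabilities and utilities, followed by a one-way sensitivity analysis
# over disease prevalence. All numbers are invented for demonstration.

def expected_utilities(p_disease: float,
                       test_sensitivity: float = 0.90,
                       u_detected: float = 0.85,       # disease found early
                       u_missed: float = 0.60,         # disease found late
                       u_well_screened: float = 0.94,  # small screening burden
                       u_well_unscreened: float = 0.95) -> dict:
    """Expected utility of 'screen' vs. 'no screen' for one cohort."""
    screen = (p_disease * (test_sensitivity * u_detected
                           + (1 - test_sensitivity) * u_missed)
              + (1 - p_disease) * u_well_screened)
    no_screen = p_disease * u_missed + (1 - p_disease) * u_well_unscreened
    return {"screen": screen, "no screen": no_screen}

# One-way sensitivity analysis: vary prevalence and report which strategy
# yields the higher expected utility at each value.
if __name__ == "__main__":
    for p in (0.01, 0.05, 0.10, 0.20):
        eu = expected_utilities(p)
        preferred = max(eu, key=eu.get)
        print(f"prevalence={p:.2f}  screen={eu['screen']:.3f}  "
              f"no screen={eu['no screen']:.3f}  preferred: {preferred}")
```

With these hypothetical inputs, the preferred strategy switches from "no screen" to "screen" as prevalence rises past a threshold, which is the kind of finding a one-way sensitivity analysis is designed to expose; real models extend the same logic to many parameters varied jointly.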