Extensive virological testing is central to SARS-CoV-2 containment, but many settings face severe limitations on testing capacity. Group testing offers a way to increase throughput by testing pools of combined samples; however, most proposed designs have not yet addressed key concerns over sensitivity loss and implementation feasibility. Here, we combine a mathematical model of epidemic spread with empirically derived viral kinetics for SARS-CoV-2 infections to identify pooling designs that are robust to changes in prevalence and to weigh losses in sensitivity against the time course of individual infections. Using this framework, we show that prevalence can be accurately estimated across four orders of magnitude using only a few dozen pooled tests, without the need to identify which individuals are infected. We then exhaustively evaluate the ability of different pooling designs to maximize the number of detected infections under various resource constraints, finding that simple pooling designs can identify up to 20 times as many positive individuals as individual testing with the same budget. We illustrate how pooling affects sensitivity and overall detection capacity over the course of an epidemic and on each day post infection, finding that the loss in sensitivity is mainly attributable to individuals sampled late in infection, when detection has minimal benefit for public health containment. Crucially, we confirm that our theoretical results can be accurately translated into practice using pooled human nasopharyngeal specimens. Our results show that accounting for variation in sampled viral loads provides a more nuanced picture of how pooling affects the sensitivity to detect epidemiologically relevant infections. Simple, practical group testing designs can therefore vastly increase surveillance capabilities in resource-limited settings.
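To make the pooling intuition concrete, the sketch below illustrates two standard group-testing calculations rather than the study's own model: estimating prevalence from the fraction of positive pools via the binomial maximum-likelihood formula p̂ = 1 − (1 − q)^(1/b), and the expected tests per person under two-stage Dorfman pooling, 1/b + (1 − (1 − p)^b). The pool size, prevalence, number of pools, and the perfect-sensitivity assumption are illustrative choices, not values or methods from the paper.

```python
# Illustrative group-testing sketch (not the authors' pipeline).
# Assumes a perfectly sensitive assay and independent infections.

import numpy as np

rng = np.random.default_rng(0)


def estimate_prevalence(pool_results: np.ndarray, pool_size: int) -> float:
    """MLE of prevalence from binary pool outcomes: p_hat = 1 - (1 - q)**(1/b)."""
    q = pool_results.mean()  # fraction of pools that tested positive
    return 1.0 - (1.0 - q) ** (1.0 / pool_size)


def dorfman_tests_per_person(prevalence: float, pool_size: int) -> float:
    """Expected tests per person for two-stage Dorfman pooling:
    one pooled test shared by b people, plus individual retests of positive pools."""
    return 1.0 / pool_size + (1.0 - (1.0 - prevalence) ** pool_size)


if __name__ == "__main__":
    true_prevalence, pool_size, n_pools = 0.01, 24, 48  # "a few dozen" pooled tests

    # Simulate pooled testing: a pool is positive if it contains >= 1 infected sample.
    positives_per_pool = rng.binomial(pool_size, true_prevalence, size=n_pools)
    pool_results = (positives_per_pool > 0).astype(float)

    print(f"estimated prevalence: {estimate_prevalence(pool_results, pool_size):.4f}")
    print(f"expected tests per person (Dorfman, b={pool_size}): "
          f"{dorfman_tests_per_person(true_prevalence, pool_size):.3f}")
```

At low prevalence the expected tests per person fall well below 1, which is the mechanism behind the large throughput gains described above; the study's full analysis additionally accounts for viral-load variation and imperfect sensitivity, which this sketch omits.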