Action assessment, the process of evaluating how well an action is performed, is an important task in human action analysis. Action assessment based on visual cues has developed considerably; however, existing methods do not adaptively learn different architectures for different types of actions and are therefore limited in achieving high-performance assessment for each type of action. In fact, every type of action has its own evaluation criteria, and human experts train for years to correctly evaluate even a single type of action. It is therefore difficult for a single assessment architecture to achieve high performance across all types of actions, yet manually designing an assessment architecture for each specific type of action is laborious and impractical. This work addresses the problem by adaptively designing different assessment architectures for different types of actions; the proposed approach is accordingly termed adaptive action assessment. To exploit the joint interactions specific to each type of action, our adaptive action assessment learns a set of graph-based joint relations for each action type through trainable joint relation graphs built on the human skeleton structure, and the learned joint relation graphs make the assessment process visually interpretable. In addition, we introduce a normalized mean squared error (N-MSE) loss and a Pearson loss, both of which perform automatic score normalization, to train the adaptive assessment. Experiments on four action assessment benchmarks demonstrate the effectiveness and feasibility of the proposed method. We also demonstrate the visual interpretability of our model by visualizing the details of the assessment process.
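As a rough illustration of the training objective, the two losses could take the following standard forms; this is a sketch under the assumption that the ground-truth scores of each action type are normalized by their mean $\mu$ and standard deviation $\sigma$, with $\hat{s}_i$ and $s_i$ denoting the predicted and ground-truth scores of sample $i$ (the exact normalization used by the method may differ):
\begin{align}
  \mathcal{L}_{\text{N-MSE}} &= \frac{1}{N}\sum_{i=1}^{N}
      \left( \frac{\hat{s}_i - s_i}{\sigma} \right)^{2}, \\
  \mathcal{L}_{\text{Pearson}} &= 1 -
      \frac{\sum_{i=1}^{N} \bigl(\hat{s}_i - \bar{\hat{s}}\bigr)\bigl(s_i - \bar{s}\bigr)}
           {\sqrt{\sum_{i=1}^{N} \bigl(\hat{s}_i - \bar{\hat{s}}\bigr)^{2}}
            \,\sqrt{\sum_{i=1}^{N} \bigl(s_i - \bar{s}\bigr)^{2}}},
\end{align}
where $\bar{\hat{s}}$ and $\bar{s}$ are the means of the predicted and ground-truth scores. Dividing the squared error by the per-action-type deviation and maximizing the Pearson correlation both make the objective comparable across action types with different score scales, which is what the automatic score normalization is intended to achieve.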