Monitoring and quantifying movement behavior is crucial for improving the health of individuals with cerebral palsy (CP). We modeled and trained an image-based convolutional neural network (CNN) to recognize movement classes relevant to individuals with CP. This study evaluates the CNN's performance and determines the feasibility of 24-h recordings. Seven sensors provided accelerometer and gyroscope data from 14 typically developed adults during videotaped physical activity. The performance of the CNN was assessed against test data and human video annotation. For feasibility testing, one typically developed adult and one adult with CP wore the sensors for 24 h. The CNN performed exceptionally well against test data, with a mean accuracy of 99.7% and overall true positive (TP) and true negative (TN) rates of 1.00. Against human annotators, performance remained high, with a mean accuracy of 83.4%, a TP rate of 0.84, and a TN rate of 0.83. The 24-h recordings were completed without data loss or adverse events; participants wore the sensors for the full wear time, and the data output was credible. We conclude that monitoring real-world movement behavior in individuals with CP is possible with multiple wearable sensors and a CNN. This capability is of great value for identifying functional decline and informing new interventions, leading to improved outcomes.
Keywords: cerebral palsy; deep learning; monitoring; movement behavior; wearable sensors.
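The abstract does not specify how the raw accelerometer and gyroscope streams were converted into inputs for the image-based CNN. A common approach, sketched below under assumed parameters (128-sample windows, 50% overlap, 7 sensors with 3 accelerometer and 3 gyroscope axes each, i.e. 42 channels), is to segment the multi-channel signal into fixed-length windows and stack each window as a 2D channels-by-time array; the window length, step, and channel layout here are illustrative, not the authors' actual pipeline.

```python
import numpy as np

def windows_to_images(signals, window_len=128, step=64):
    """Segment a multi-channel IMU stream into fixed-length windows stacked
    as 2D (channels x time) arrays, a common input format for image-based
    CNNs on inertial data. `signals` has shape (n_samples, n_channels);
    window_len and step are hypothetical choices, not from the paper."""
    n_samples, n_channels = signals.shape
    starts = range(0, n_samples - window_len + 1, step)
    # Transpose each window so rows are channels and columns are time steps.
    images = np.stack([signals[s:s + window_len].T for s in starts])
    return images  # shape: (n_windows, n_channels, window_len)

# Example: 7 sensors x (3 accel + 3 gyro axes) = 42 channels, 10 s at 100 Hz
rng = np.random.default_rng(0)
stream = rng.standard_normal((1000, 42))
imgs = windows_to_images(stream)
print(imgs.shape)  # (14, 42, 128)
```

Each resulting array can then be fed to a 2D CNN exactly as a grayscale image would be; the 50% overlap between windows is a typical choice that trades a larger training set against correlated samples.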