Improving Neck Injury Assessment with Data Science and Machine Learning
By Magnús Gíslason, Reykjavík University and NeckCare; Thorsteinn Geirsson, NeckCare; Eythor Kristjansson, NeckCare
Affecting nearly two-thirds of the general population at least once in their lives, neck pain is a growing healthcare concern. Common causes include whiplash, a blow to the head, and strenuous working conditions. For example, professionals who spend many hours hunched over their workspace—such as surgeons and dentists—frequently develop neck pain. Those who wear heavy protective helmets, including athletes, jet pilots, and firefighters, may also be at risk.
Many of the techniques clinicians currently use to assess neck injuries have significant drawbacks: they rely on subjective range-of-motion observations, making it difficult to gauge the extent of an injury or track progress during therapy. Others require labor-intensive manual procedures, such as tracing targets with a laser pointer attached to the patient’s head, and likewise produce subjective results.
Our team has developed hardware and software that helps simplify and automate the clinical assessment of neck injuries with objective metrics. The technology was originally researched at Reykjavík University and is currently being further developed into commercial products at our startup, NeckCare. As an early-stage venture, we joined the MathWorks Accelerator Program, which gave us access to MATLAB® at a discounted price along with support from MathWorks engineers to help validate our technology.
This technology relies on headgear with an embedded inertial measurement unit (IMU), along with data analysis and machine learning algorithms developed in MATLAB. The algorithms process signals from the IMU (Figure 1) and produce objective, quantifiable 3D metrics on neck movement. By comparing the IMU sensor data from healthy subjects with data from patients suffering from, for instance, whiplash or concussions, the algorithms can also accurately classify asymptomatic cases and identify those suffering from common causes of neck injury.
Introducing the Butterfly Test
Using our IMU headgear along with MATLAB, we can perform a wide variety of assessments, covering all three of the principal dimensions of human kinematics: range of motion, proprioception (the ability to sense the movement and orientation of parts of the body), and neuromuscular control. Of these, neuromuscular assessments are often the most valuable in diagnoses and among the most difficult to perform quantitatively with existing techniques.
To evaluate an individual’s neuromuscular control, we invented and patent-protected a specialized procedure called the butterfly test. During this test, a subject is seated in front of a computer monitor wearing our IMU headgear. The subject is instructed to visually track the movement of a dot as it moves on the monitor, following three different trajectories—ranging from easy to difficult (Figure 2).
During the test, the IMU continuously measures changes to the head’s orientation as the subject follows the moving dot (Figure 3). Specifically, it records roll, pitch, and yaw angles 60 times per second, as well as the head’s angular velocity and acceleration across these dimensions. This recorded data is what we process in MATLAB using statistical and machine learning techniques.
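The derived quantities above can be estimated directly from the sampled angles. The following sketch (in Python; the authors' processing is done in MATLAB) shows one plausible way to compute angular velocity and acceleration from 60 Hz roll/pitch/yaw samples using finite differences; the function name and sampling setup are illustrative, not the actual NeckCare implementation.

```python
import numpy as np

FS = 60.0  # sampling rate in Hz, as described in the article

def angular_rates(angles_deg):
    """Estimate angular velocity (deg/s) and acceleration (deg/s^2)
    from sampled roll/pitch/yaw angles via finite differences.

    angles_deg: (n_samples, 3) array of roll, pitch, yaw in degrees.
    """
    dt = 1.0 / FS
    velocity = np.gradient(angles_deg, dt, axis=0)      # first derivative
    acceleration = np.gradient(velocity, dt, axis=0)    # second derivative
    return velocity, acceleration

# Example: a head turning in yaw at a constant 10 deg/s for one second
t = np.arange(0, 1, 1 / FS)
angles = np.column_stack([np.zeros_like(t), np.zeros_like(t), 10 * t])
vel, acc = angular_rates(angles)
```

For constant-rate motion, the estimated yaw velocity is a constant 10 deg/s and the acceleration is zero; real IMU data would typically be low-pass filtered before differentiation to suppress noise amplification.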
Statistical Analysis and Visualization
The software we have developed in MATLAB is designed to objectively measure a subject’s ability to control their head and neck as they follow the moving dot in the butterfly test. As a first step, the software projects the rotational angles captured by the IMU onto the 2D plane coinciding with the surface of the monitor’s screen. Using this projection, the software can then compare the dot’s path with the path traced by the test subject. By plotting overlays of these paths, it is easy to see differences between the performance of an asymptomatic test subject and that of a test subject with a neck injury (Figure 4).
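The projection step can be sketched as a simple pinhole-style mapping. In this illustrative Python version (the actual software is MATLAB, and its calibration details are not described in the article), a yaw rotation maps to a horizontal screen offset and a pitch rotation to a vertical one; the viewing distance is a hypothetical parameter.

```python
import numpy as np

def project_to_screen(yaw_deg, pitch_deg, distance_cm=60.0):
    """Project head yaw/pitch angles onto the 2D monitor plane.

    With the subject seated `distance_cm` from the screen (a
    hypothetical, per-setup calibration value), the gaze direction
    intersects the screen plane at:
        x = distance * tan(yaw),  y = distance * tan(pitch)
    """
    x = distance_cm * np.tan(np.radians(np.asarray(yaw_deg)))
    y = distance_cm * np.tan(np.radians(np.asarray(pitch_deg)))
    return np.column_stack([x, y])

# A 45-degree yaw at 60 cm lands 60 cm to the side; pure pitch moves vertically.
pts = project_to_screen([45.0, 0.0], [0.0, 45.0], distance_cm=60.0)
```

Once the head path is in the same 2D coordinates as the dot, the two trajectories can be overlaid and compared sample by sample.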
In addition to generating visualizations, the software computes several statistical metrics to quantify differences between asymptomatic and symptomatic subjects. A key metric is amplitude accuracy: the average distance between the target dot and the subject-controlled cursor over the duration of the test. The software also computes time-on-target, the percentage of time the cursor is on or in close vicinity of the target; this is broken down into undershoots and overshoots, the proportions of time spent behind or ahead of the target, respectively. Finally, the software computes smoothness of movement, which quantifies jerkiness by integrating the squared magnitude of the third derivative (jerk) of the path traced by the subject, normalized against the same quantity for the target’s path.
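The metrics described above can be sketched as follows (Python here; the production code is MATLAB). The on-target tolerance radius is a hypothetical parameter, and the jerk-based smoothness measure follows the normalized integrated-squared-jerk definition given in the text.

```python
import numpy as np

FS = 60.0  # 60 Hz sampling, per the article

def butterfly_metrics(cursor, target, on_target_radius=1.0):
    """Compute amplitude accuracy, time-on-target, and smoothness.

    cursor, target: (n, 2) arrays of screen coordinates sampled at FS.
    on_target_radius: hypothetical tolerance defining "on or near" the target.
    """
    err = np.linalg.norm(cursor - target, axis=1)

    # Amplitude accuracy: mean cursor-to-target distance over the test.
    amplitude_accuracy = err.mean()

    # Time-on-target: fraction of samples within the tolerance radius.
    time_on_target = np.mean(err <= on_target_radius)

    # Smoothness: integrated squared jerk (third derivative) of the
    # cursor path, normalized by the same quantity for the target path.
    dt = 1.0 / FS
    def squared_jerk_integral(path):
        jerk = np.diff(path, n=3, axis=0) / dt**3
        return np.sum(jerk**2) * dt
    smoothness = squared_jerk_integral(cursor) / squared_jerk_integral(target)

    return amplitude_accuracy, time_on_target, smoothness

# Sanity check: a cursor that tracks the target perfectly scores
# zero error, 100% time-on-target, and smoothness exactly 1.
t = np.linspace(0, 1, 60)
target = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
amp, tot, smooth = butterfly_metrics(target.copy(), target)
```

A symptomatic subject would typically show a larger amplitude error, a lower time-on-target, and a smoothness ratio well above 1.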
Analyses conducted with the software consistently show statistically significant differences between asymptomatic and whiplash subjects for virtually all computed metrics, often with p-values less than 0.001 (Figure 5).
Machine Learning Classification
We’ve recently been exploring the use of machine learning to classify test subjects into asymptomatic, whiplash, or concussion categories based on their test results. Using the Classification Learner app in Statistics and Machine Learning Toolbox™, we trained a variety of machine learning models with a data set consisting of 15 variables from butterfly tests, 30 variables from range of motion tests, and 28 variables from head/neck relocation tests. After training models with a limited data set, we found that a naive Bayes model worked best, classifying subjects at or close to 100% accuracy (Figure 6).
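For readers unfamiliar with the model family, a Gaussian naive Bayes classifier models each feature as an independent normal distribution per class and picks the class with the highest posterior probability. The minimal Python implementation below (the authors used the Classification Learner app in MATLAB, not this code) illustrates the idea on synthetic, well-separated data; the class name and demo data are invented for illustration.

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian naive Bayes: one independent normal
    distribution per feature per class, plus class priors."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # Log-likelihood of each sample under each class's Gaussians,
        # summed over features (the "naive" independence assumption).
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(axis=2)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

# Synthetic demo: two well-separated classes in three features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 3)), rng.normal(5.0, 1.0, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
accuracy = (GaussianNaiveBayes().fit(X, y).predict(X) == y).mean()
```

Naive Bayes is a sensible fit for small data sets like the one described: with few parameters per class, it resists overfitting better than more flexible models.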
We also used the feature ranking capabilities of the Classification Learner app to identify those features that were most important to the classification (Figure 7). With this capability, we determined that classifications based on only the top seven features—as ranked using analysis of variance, or ANOVA—provided the same accuracy as classifications based on all the features (Figure 8). We are now expanding the training data to include a much larger number of subjects, and we are also developing models that will further classify subjects by the severity of their impairment.
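ANOVA-based feature ranking scores each feature by its one-way F-statistic: the ratio of between-class to within-class variance. A sketch of that ranking in plain NumPy is shown below (the authors used the Classification Learner app's built-in ranking; the function names and synthetic data here are illustrative).

```python
import numpy as np

def anova_f_scores(X, y):
    """One-way ANOVA F-statistic per feature: between-class variance
    over within-class variance. Higher F means the feature separates
    the classes better."""
    classes = np.unique(y)
    grand_mean = X.mean(axis=0)
    ssb = np.zeros(X.shape[1])  # between-class sum of squares
    ssw = np.zeros(X.shape[1])  # within-class sum of squares
    for c in classes:
        Xc = X[y == c]
        ssb += len(Xc) * (Xc.mean(axis=0) - grand_mean) ** 2
        ssw += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    df_between = len(classes) - 1
    df_within = len(X) - len(classes)
    return (ssb / df_between) / (ssw / df_within)

def top_features(X, y, k=7):
    """Indices of the k highest-F features, most discriminative first."""
    return np.argsort(anova_f_scores(X, y))[::-1][:k]

# Demo: only feature 0 actually differs between the two classes.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 5))
y = np.repeat([0, 1], 40)
X[y == 1, 0] += 5.0
ranked = top_features(X, y, k=3)
```

Keeping only the top-ranked features, as the authors did, shortens the test battery and reduces the risk of overfitting on a small sample.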
We are actively working on the clinical adoption of our technology, so that physicians can better treat patients with neck injuries. Our headgear is currently registered as a Class I medical device with the U.S. Food and Drug Administration. Moreover, we continue to develop MATLAB algorithms to support a growing number of software applications. One such application is in telehealth and other home health solutions, where patients could use our technology at home to perform therapeutic exercises. Another is assessing whether an athlete is fit enough to compete after a head injury. The technology may also provide a way to verify insurance and disability claims made by whiplash patients. Finally, we are planning to expand the use of our technology beyond assessments of the cervical spine to other parts of the human body.