LEARNING EFFECT & EVALUATION ERROR
At Tools4Patient, we work on predicting the placebo response, especially for endpoints assessing the patients’ pain. As you know, these endpoints account for most of the efficacy endpoints in Osteoarthritis Randomized Clinical Trials. However, the assessment of pain is by nature subjective, and there is a high risk that patients evaluate it inconsistently. Reducing this inconsistency could therefore yield higher-quality data and a better understanding of the placebo and treatment responses. This abstract then asks a simple question: could a learning effect, induced by a daily self-assessment of pain, reduce this inconsistency? To study this inconsistency in pain evaluation, we modeled the measured pain scores, for example the APS here, as the sum of
- a signal, representing the “ideal” measure that the patient would have reported if they were fully consistent
- an evaluation error, disrupting this consistent, ideal APS value
Modeled this way, we can evaluate the inconsistency by estimating the variance of this evaluation error. This variance estimates how far subjects are from the ideal, consistent APS value when they rate their pain, and it can be computed using the formula shown here. We applied these considerations to the data of a single OA study involving 64 patients who received placebo for 3 months.
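As an illustrative sketch of how such a variance could be estimated under this signal-plus-error model (my own simplified code, not necessarily the formula used in the study): if the underlying “ideal” signal varies slowly from day to day, the variance of the first differences of the daily scores is approximately twice the evaluation-error variance. All names and the simulated data below are hypothetical.

```python
import numpy as np

def evaluation_error_variance(daily_scores):
    """Estimate the evaluation-error variance from daily pain scores.

    Assumption: the underlying 'ideal' signal changes slowly relative to
    the day-to-day evaluation error, so for consecutive days
    Var(x[t+1] - x[t]) ~= 2 * Var(error).
    """
    diffs = np.diff(np.asarray(daily_scores, dtype=float))
    return np.var(diffs, ddof=1) / 2.0

# Illustration on simulated data: slowly varying signal + independent error
rng = np.random.default_rng(0)
t = np.arange(90)                        # ~3 months of daily ratings
signal = 5 + 0.5 * np.sin(t / 30)        # slowly varying "ideal" pain
error = rng.normal(0, 1.0, size=t.size)  # evaluation error, true variance 1
scores = signal + error
est = evaluation_error_variance(scores)  # close to the true value of 1
```

Comparing this estimate between the first and last weeks of a study is one simple way to quantify a reduction of the evaluation error over time.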
Three pain measures were recorded daily:
- the Average Pain Score (APS)
- the Worst Pain Score (WPS)
- the Lowest Pain Score (LPS)
The Brief Pain Inventory (BPI) was also recorded monthly during visits, and we analyzed the individual BPI items corresponding to APS, WPS and LPS, as well as, of course, the BPI-Severity total score. We analyzed the error in this study directly, by measuring the variance of the evaluation error, and indirectly, by observing the auto-correlation of the pain scores from one day to the next, which let us track how the consistency of the measured pain evolved. We also assessed the consistency between different pain measures by computing the correlation between them. It is important to note that all these correlations were adjusted so that an increase in correlation would reflect only an increase in consistency.

Regarding the daily APS, we observed a substantial reduction of the evaluation error: its variance dropped by more than 50% between the start and the end of the study. As a consequence, as you can see in this diagram, the auto-correlation, which measures the subjects’ consistency during the study, increased significantly. Indeed, comparing the auto-correlation at the start of the study with that at the end shows an increase of more than 20%, indicating better consistency in pain assessment at the end. Similar reductions of the error and increases in consistency were also observed for the other endpoints.
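The two indirect consistency measures just described can be sketched as follows. This is my own illustrative code, not the study’s analysis, and it does not reproduce the adjustment applied to the correlations in the study.

```python
import numpy as np

def lag1_autocorr(x):
    """Day-to-day consistency: Pearson correlation between a daily
    series and itself shifted by one day."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

def cross_measure_corr(measure_a, measure_b):
    """Consistency between two pain measures: Pearson correlation
    across subjects (e.g. APS vs BPI-Severity)."""
    return np.corrcoef(measure_a, measure_b)[0, 1]

def percent_increase(start, end):
    """Relative change used to report, e.g., a >20% auto-correlation gain."""
    return (end - start) / start * 100.0
```

Under the signal-plus-error model, a smaller evaluation-error variance mechanically raises both correlations, which is why their increase over the study can be read as a gain in consistency.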
Another consequence of the reduced evaluation error: the consistency between APS and BPI-Severity, measured by the correlation between these endpoints, also increased significantly, by more than 35%, as you can see on this diagram. Again, similar increases in consistency were observed when comparing other endpoints.

To conclude: these results suggest that the learning effect induced by daily self-recording of the subjects’ pain helps increase their consistency. A daily pain evaluation, especially during a run-in pre-baseline period in Clinical Trials, would hence reduce the evaluation error. This would in turn yield a better estimation of the placebo and treatment responses without excluding any subjects at Baseline. This analysis and these results hence give new insights on how to improve pain assessment in RCTs.