Predicting Treatment Response Using Machine Learning

Predicting treatment response using machine learning means applying computational algorithms to patient data to identify patterns that forecast an individual’s likelihood of responding favorably to a specific treatment.

By leveraging machine learning techniques, clinicians can make more informed decisions about personalized treatment plans, potentially leading to improved patient outcomes, reduced treatment failures, and optimized resource allocation in mental health care settings.

Jankowsky, K., Krakau, L., Schroeders, U., Zwerenz, R., & Beutel, M. E. (2024). Predicting treatment response using machine learning: A registered report. British Journal of Clinical Psychology, 63, 137–155. https://doi.org/10.1111/bjc.12452

Key Points

  • The study used machine learning algorithms to predict treatment response in a naturalistic inpatient sample with mixed diagnoses, using baseline indicators such as demographics, physical health, mental health, and treatment-related variables.
  • Machine learning algorithms enhanced predictive performance compared to linear regression models, primarily by reducing overfitting through regularization rather than incorporating nonlinear or interaction effects.
  • Treatment response was best predicted by baseline symptom severity, followed by treatment-related variables and mental health indicators. Demographics and physical health indicators were less predictive.
  • The research highlights the importance of a multidimensional assessment of functioning and identifies potential prognostic markers for treatment response in inpatient settings.

Rationale

Predicting treatment response is of utmost importance in mental health care, as it can help minimize treatment failures, avoid suboptimal treatment attempts, and optimize resource allocation.

When clinicians can accurately predict a patient’s likelihood of responding to a particular treatment, they can make more informed decisions about the most appropriate interventions, leading to better patient outcomes and more efficient use of healthcare resources.

However, much of the existing research on treatment response prediction has focused on outpatient settings or clinical trial data.

While these studies provide valuable insights, their findings may not directly apply to naturalistic inpatient samples due to differences in patient characteristics, treatment settings, and the severity of mental health conditions.

As Webb et al. (2020) point out, clinical trial data often have limited ecological validity for real-world inpatient populations. These trials typically have strict inclusion and exclusion criteria that result in more homogeneous samples compared to the diverse patient populations seen in inpatient settings.

Furthermore, research has shown that treatment responses can vary significantly among inpatients. Hartmann et al. (2018) identified distinct patterns of symptom change in inpatients with major depression, highlighting the heterogeneity of treatment responses in this population.

Similarly, Zeeck et al. (2020) found that self-criticism and personality functioning predicted different patterns of symptom change in inpatients with major depressive disorder. These findings underscore the need for accurate predictors of treatment response specific to inpatient populations, as the factors influencing treatment outcomes may differ from those in outpatient settings.

Given the limitations of existing research and the documented heterogeneity of treatment responses in inpatient settings, there is a clear need for studies that specifically investigate predictors of treatment response in naturalistic inpatient samples.

By focusing on this population and leveraging advanced analytical techniques such as machine learning algorithms, researchers can gain more accurate insights into the factors influencing treatment outcomes in real-world inpatient settings.

This knowledge can then be used to develop more targeted and effective interventions, ultimately improving the quality of care for inpatients with mental health conditions.

The present study aims to address this gap in the literature by using machine learning algorithms to predict treatment response in a naturalistic inpatient sample with mixed diagnoses.

By doing so, the researchers seek to identify key predictors of treatment response specific to this population, providing valuable insights that can inform clinical decision-making and enhance the effectiveness of inpatient mental health care.

Method

The study employed a nested cross-validation approach with machine learning algorithms, including elastic net regressions and gradient boosting machines, to predict treatment response using baseline indicators from a naturalistic inpatient sample.

The study used routine outcome monitoring data from a clinic and polyclinic in Germany. The primary outcome was the post-treatment sum score of the Patient Health Questionnaire Anxiety and Depression Scale (PHQ-ADS).

Sample

The sample consisted of 723 patients from a clinic and polyclinic in Rhineland-Palatinate, Germany, collected between 2018 and 2021.

Measures

The study used various measures, including the PHQ-ADS, Sheehan Disability Scale, Symptom Checklist (SCL-K9), Cambridge Depersonalization Scale 2, The Personality Inventory for DSM-5—Brief Form (PID-5), and others.

Statistical measures

The study employed nested cross-validation with elastic net regressions and gradient boosting machines. Model performance was evaluated using explained variance (R²), root mean squared error (RMSE), and mean absolute error (MAE).
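To make the analytic approach concrete, the sketch below shows what nested cross-validation with an elastic net regression looks like in scikit-learn, scored with the same three metrics (R², RMSE, MAE). This is not the authors’ code: the data are synthetic stand-ins for the baseline predictors and post-treatment PHQ-ADS scores, and the hyperparameter grid is a placeholder assumption.

```python
# Illustrative sketch of nested cross-validation with elastic net regression.
# NOT the study's actual pipeline; data and hyperparameter grid are synthetic.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV, KFold, cross_validate

# Synthetic stand-in: baseline predictors (X) and post-treatment scores (y)
X, y = make_regression(n_samples=300, n_features=20, noise=10.0, random_state=0)

# Inner loop: tune the regularization strength (alpha) and L1/L2 mix (l1_ratio)
inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
param_grid = {"alpha": [0.1, 1.0, 10.0], "l1_ratio": [0.2, 0.5, 0.8]}
model = GridSearchCV(ElasticNet(max_iter=10_000), param_grid, cv=inner_cv)

# Outer loop: estimate out-of-sample performance of the tuned model
outer_cv = KFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_validate(
    model, X, y, cv=outer_cv,
    scoring=["r2", "neg_root_mean_squared_error", "neg_mean_absolute_error"],
)

print(f"R^2:  {scores['test_r2'].mean():.2f}")
print(f"RMSE: {-scores['test_neg_root_mean_squared_error'].mean():.2f}")
print(f"MAE:  {-scores['test_neg_mean_absolute_error'].mean():.2f}")
```

The point of the nested design is that hyperparameters are tuned only on inner-loop folds, so the outer-loop scores are an honest estimate of how the tuned model would perform on new patients.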

Results

  • Machine learning algorithms, particularly elastic net regressions and gradient boosting machines, outperformed linear regression models in predicting treatment response, primarily by reducing overfitting through regularization.
  • The best-performing model (elastic net regression with all available predictor variables) explained 44% of the variance in post-treatment PHQ-ADS scores, a 12-percentage-point improvement over using baseline symptom severity alone.
  • Baseline symptom severity, as measured by the PHQ-ADS, was the strongest predictor of treatment response.
  • Treatment-related variables, such as patients’ perceptions of treatment helpfulness, treatment length, and the number of previous treatments, were the second most predictive group of variables.
  • Mental health indicators, including depersonalization/derealization symptoms, repetitive negative thinking, and aspects of personality functioning, were also important predictors of treatment response.
  • Demographics and physical health indicators had minimal predictive value compared to the other variable groups.

Insight

The study highlights the importance of a multidimensional assessment of functioning, encompassing factors such as occupational functioning, chronicity, and comorbid symptoms, in predicting treatment response.

Specific symptoms and characteristics, such as depersonalization/derealization, worry, irritability, and impaired self-other functioning, emerged as potential prognostic markers for treatment response in inpatient settings.

The findings suggest that focusing on collecting high-quality data with reliable indicators may be more beneficial for improving predictive models than using increasingly complex modeling approaches.

The study demonstrates the value of combining open science practices, such as preregistration and providing synthetic datasets, with machine learning algorithms for predictive modeling in psychotherapy research.

Strengths

  • Used a large naturalistic inpatient sample with mixed diagnoses.
  • Employed rigorous open science practices.
  • Provided a synthetic dataset for transparency and reproducibility.

Limitations

  • The study relied on a specific set of available predictor variables, which may not capture all relevant factors influencing treatment response.
  • The study also did not assess long-term treatment outcomes.

Clinical Implications

The findings suggest that prediction models using baseline indicators could be implemented in routine assessments to identify patients at risk of treatment nonresponse.

Mental health indicators, such as depersonalization/derealization symptoms, repetitive negative thinking, and personality functioning, provided incremental predictive value for treatment response beyond baseline symptom severity.

These factors should be prioritized in baseline assessments to improve the accuracy of treatment response predictions.

In contrast, demographics and physical health indicators were less informative, suggesting that focusing on a comprehensive assessment of mental health-related variables may be more beneficial for predicting treatment outcomes in inpatient settings.

References

Primary reference

Jankowsky, K., Krakau, L., Schroeders, U., Zwerenz, R., & Beutel, M. E. (2024). Predicting treatment response using machine learning: A registered report. British Journal of Clinical Psychology, 63, 137–155. https://doi.org/10.1111/bjc.12452

Other references

Hartmann, A., von Wietersheim, J., Weiss, H., & Zeeck, A. (2018). Patterns of symptom change in major depression: Classification and clustering of long term courses. Psychiatry Research, 267, 480–489. https://doi.org/10.1016/j.psychres.2018.03.086

Webb, C. A., Cohen, Z. D., Beard, C., Forgeard, M., Peckham, A. D., & Björgvinsson, T. (2020). Personalized prognostic prediction of treatment outcome for depressed patients in a naturalistic psychiatric hospital setting: A comparison of machine learning approaches. Journal of Consulting and Clinical Psychology, 88(1), 25–38. https://doi.org/10.1037/ccp0000451

Zeeck, A., von Wietersheim, J., Weiss, H., Hermann, S., Endorf, K., Lau, I., & Hartmann, A. (2020). Self-criticism and personality functioning predict patterns of symptom change in major depressive disorder. Frontiers in Psychiatry, 11, 147. https://doi.org/10.3389/fpsyt.2020.00147

Keep Learning

  1. How can the findings from this study be applied in clinical practice to improve treatment outcomes for inpatients with mixed diagnoses?
  2. What are the potential ethical considerations when using machine learning algorithms to predict treatment response in mental health settings?
  3. How might the predictive models developed in this study be further refined or expanded to incorporate additional relevant factors influencing treatment response?

Olivia Guy-Evans, MSc

BSc (Hons) Psychology, MSc Psychology of Education

Associate Editor for Simply Psychology

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.
