Table 4 IDHTN machine learning results

From: Research on the development of an intelligent prediction model for blood pressure variability during hemodialysis

 

Metric      XGBoost  SVM    KNN    DT     RF     LR     NB     AdaBoost  LightGBM  CatBoost
ROC-AUC     0.89     0.87   0.84   0.72   0.89   0.79   0.74   0.81      0.88      0.89
PR-AUC      0.78     0.76   0.72   0.66   0.78   0.60   0.55   0.62      0.77      0.79
Accuracy    0.85     0.82   0.81   0.76   0.83   0.76   0.71   0.77      0.83      0.84
Precision   0.77     0.76   0.72   0.59   0.75   0.63   0.50   0.64      0.76      0.77
Recall      0.64     0.54   0.59   0.61   0.62   0.42   0.59   0.46      0.59      0.62
F1-score    0.70     0.63   0.65   0.60   0.68   0.50   0.54   0.54      0.67      0.69

  1. XGBoost: Extreme Gradient Boosting; SVM: Support Vector Machine; KNN: K-Nearest Neighbors; DT: Decision Tree; RF: Random Forest; LR: Logistic Regression; NB: Naive Bayes; AdaBoost: Adaptive Boosting; LightGBM: Light Gradient Boosting Machine; CatBoost: Categorical Boosting; ROC-AUC: Receiver Operating Characteristic - Area Under the Curve; PR-AUC: Precision-Recall - Area Under the Curve
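For readers who want to compute metrics of the form reported in Table 4, the sketch below shows one common way to obtain ROC-AUC, PR-AUC, accuracy, precision, recall, and F1-score for a single classifier using scikit-learn and xgboost. This is not the authors' pipeline: the data here are synthetic stand-ins (the IDHTN dataset is not reproduced), the model settings, class balance, and 0.5 decision threshold are assumptions, and average_precision_score is used as one common estimator of PR-AUC.

```python
# Minimal sketch (assumed setup, not the study's code) of how the Table 4
# metrics could be computed for one model with scikit-learn and xgboost.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             accuracy_score, precision_score,
                             recall_score, f1_score)
from xgboost import XGBClassifier  # any scikit-learn-style classifier would work

# Synthetic, imbalanced stand-in data; the real IDHTN features/labels are not shown here.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.75, 0.25], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.3, random_state=42)

model = XGBClassifier(eval_metric="logloss", random_state=42).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]   # predicted probability of the positive class
pred = (proba >= 0.5).astype(int)           # assumed default 0.5 decision threshold

print("ROC-AUC  :", roc_auc_score(y_test, proba))
print("PR-AUC   :", average_precision_score(y_test, proba))  # average precision as PR-AUC
print("Accuracy :", accuracy_score(y_test, pred))
print("Precision:", precision_score(y_test, pred))
print("Recall   :", recall_score(y_test, pred))
print("F1-score :", f1_score(y_test, pred))
```

Threshold-free metrics (ROC-AUC, PR-AUC) are computed from the predicted probabilities, while accuracy, precision, recall, and F1-score depend on the chosen classification threshold.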