predict_model with only integers, slices (:) #3568

@lgentil

Description

pycaret version checks

Issue Description

Hi,

I am hitting this error with predict_model(): only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices

I tried the fix suggested in issue #1047 (Error in prediction), but it didn't work.

Python 3.10.11
PyCaret 3.0.2

I appreciate any help. Thank you.
Luiz

Reproducible Example

!pip install pycaret
import pycaret
from pycaret.classification import *

# `train` and `test` are pandas DataFrames loaded beforehand (not shown here).
s = setup(data=train, test_data=test, target='col68', session_id=123,
          numeric_imputation='knn', remove_outliers=True, index=False)

best = compare_models()
model_future = create_model(best)
model_future_tune = tune_model(model_future)

predicted_future_Test = predict_model(model_future_tune)
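One suspected trigger for this error (an assumption, not a confirmed diagnosis) is a binary target stored as floats or strings rather than plain integers, because PyCaret ends up using the predicted labels as array indices. A minimal workaround sketch, with a hypothetical frame standing in for the real `train` data:

```python
import pandas as pd

# Hypothetical stand-in for `train`; the report's real data is not shown.
train = pd.DataFrame({"col68": [0.0, 1.0, 1.0, 0.0]})

# A target stored as float64 yields labels like 0.0/1.0, not valid indices.
print(train["col68"].dtype)  # float64

# Workaround sketch: cast the target to plain integers before calling setup().
train["col68"] = train["col68"].astype(int)
print(train["col68"].tolist())  # [0, 1, 1, 0]
```

If the labels are strings, `sklearn.preprocessing.LabelEncoder` would serve the same purpose.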

Expected Behavior

Predictions of 0 or 1 for the target column col68.

Actual Results

Description	Value
0	Session id	123
1	Target	col68
2	Target type	Binary
3	Original data shape	(3095, 68)
4	Transformed data shape	(2971, 70)
5	Transformed train set shape	(2352, 70)
6	Transformed test set shape	(619, 70)
7	Numeric features	66
8	Date features	1
9	Rows with missing values	1.0%
10	Preprocess	True
11	Imputation type	simple
12	Numeric imputation	knn
13	Categorical imputation	mode
14	Remove outliers	True
15	Outliers threshold	0.050000
16	Fold Generator	StratifiedKFold
17	Fold Number	10
18	CPU Jobs	-1
19	Use GPU	False
20	Log Experiment	False
21	Experiment Name	clf-default-name
22	USI	36af
 	Model	Accuracy	AUC	Recall	Prec.	F1	Kappa	MCC	TT (Sec)
dummy	Dummy Classifier	0.5998	0.5000	1.0000	0.5998	0.7498	0.0000	0.0000	0.2320
svm	SVM - Linear Kernel	0.5796	0.0000	0.6201	0.4966	0.5295	0.1579	0.1521	0.5220
lr	Logistic Regression	0.5792	0.5311	0.6295	0.6114	0.5430	0.1518	0.1441	1.2290
nb	Naive Bayes	0.5443	0.8071	0.4128	0.3901	0.3559	0.1587	0.1787	0.2320
ridge	Ridge Classifier	0.5115	0.0000	0.4443	0.5130	0.4719	0.0573	0.0488	0.3040
lda	Linear Discriminant Analysis	0.5084	0.5659	0.5281	0.6203	0.5344	0.0129	0.0252	0.3550
gbc	Gradient Boosting Classifier	0.4220	0.3957	0.4109	0.5258	0.4245	-0.1456	-0.1463	4.7640
lightgbm	Light Gradient Boosting Machine	0.3957	0.3504	0.4178	0.5148	0.4305	-0.2146	-0.2073	1.3270
xgboost	Extreme Gradient Boosting	0.3756	0.2996	0.4198	0.4872	0.4275	-0.2646	-0.2665	3.0240
ada	Ada Boost Classifier	0.3659	0.3235	0.3907	0.4882	0.3964	-0.2707	-0.2706	1.8010
rf	Random Forest Classifier	0.3359	0.2510	0.3436	0.4657	0.3718	-0.3240	-0.3201	1.9560
et	Extra Trees Classifier	0.3179	0.2721	0.3424	0.4048	0.3442	-0.3600	-0.3735	0.8920
dt	Decision Tree Classifier	0.3060	0.3183	0.2571	0.3943	0.3015	-0.3416	-0.3643	0.4310
qda	Quadratic Discriminant Analysis	0.3006	0.2307	0.3868	0.3543	0.3556	-0.4171	-0.4520	0.2590
knn	K Neighbors Classifier	0.2589	0.2036	0.2273	0.2810	0.2251	-0.4348	-0.4830	0.4010
 	Accuracy	AUC	Recall	Prec.	F1	Kappa	MCC
Fold	 	 	 	 	 	 	 
0	0.6008	0.5000	1.0000	0.6008	0.7506	0.0000	0.0000
1	0.6008	0.5000	1.0000	0.6008	0.7506	0.0000	0.0000
2	0.6008	0.5000	1.0000	0.6008	0.7506	0.0000	0.0000
3	0.6008	0.5000	1.0000	0.6008	0.7506	0.0000	0.0000
4	0.6008	0.5000	1.0000	0.6008	0.7506	0.0000	0.0000
5	0.5968	0.5000	1.0000	0.5968	0.7475	0.0000	0.0000
6	0.5992	0.5000	1.0000	0.5992	0.7494	0.0000	0.0000
7	0.5992	0.5000	1.0000	0.5992	0.7494	0.0000	0.0000
8	0.5992	0.5000	1.0000	0.5992	0.7494	0.0000	0.0000
9	0.5992	0.5000	1.0000	0.5992	0.7494	0.0000	0.0000
Mean	0.5998	0.5000	1.0000	0.5998	0.7498	0.0000	0.0000
Std	0.0013	0.0000	0.0000	0.0013	0.0010	0.0000	0.0000
 	Accuracy	AUC	Recall	Prec.	F1	Kappa	MCC
Fold	 	 	 	 	 	 	 
0	0.6008	0.5000	1.0000	0.6008	0.7506	0.0000	0.0000
1	0.6008	0.5000	1.0000	0.6008	0.7506	0.0000	0.0000
2	0.6008	0.5000	1.0000	0.6008	0.7506	0.0000	0.0000
3	0.6008	0.5000	1.0000	0.6008	0.7506	0.0000	0.0000
4	0.6008	0.5000	1.0000	0.6008	0.7506	0.0000	0.0000
5	0.5968	0.5000	1.0000	0.5968	0.7475	0.0000	0.0000
6	0.5992	0.5000	1.0000	0.5992	0.7494	0.0000	0.0000
7	0.5992	0.5000	1.0000	0.5992	0.7494	0.0000	0.0000
8	0.5992	0.5000	1.0000	0.5992	0.7494	0.0000	0.0000
9	0.5992	0.5000	1.0000	0.5992	0.7494	0.0000	0.0000
Mean	0.5998	0.5000	1.0000	0.5998	0.7498	0.0000	0.0000
Std	0.0013	0.0000	0.0000	0.0013	0.0010	0.0000	0.0000
Fitting 10 folds for each of 4 candidates, totalling 40 fits
Original model was better than the tuned model, hence it will be returned. NOTE: The display metrics are for the tuned model (not the original one).
 	Model	Accuracy	AUC	Recall	Prec.	F1	Kappa	MCC
0	Dummy Classifier	0.5506	0.5000	1.0000	0.5506	0.7101	0.0000	0.0000
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-116-7530072cf3b8> in <cell line: 9>()
      7 model_future_tune = tune_model(model_future)
      8 
----> 9 predicted_future_all = predict_model(model_future_tune, data = price_target_df.drop(price_target_df.tail(n).index))
     10 predicted_future_Test = predict_model(model_future_tune, data = test)

3 frames
/usr/local/lib/python3.10/dist-packages/pycaret/internal/pycaret_experiment/supervised_experiment.py in <listcomp>(.0)
   5019 
   5020                 score = pd.DataFrame(
-> 5021                     data=[s[pred[i]] for i, s in enumerate(score)],
   5022                     index=X_test_.index,
   5023                     columns=[SCORE_COLUMN],

IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
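The highlighted line indexes each row's probability array `s` with the predicted label `pred[i]`. If those labels are not plain integers (for example floats, as hypothesized here with synthetic data), NumPy raises exactly this IndexError:

```python
import numpy as np

# Synthetic per-row class probabilities, mirroring `score` in the traceback.
score = np.array([[0.4, 0.6], [0.7, 0.3]])

# Integer labels index cleanly: each lookup picks the predicted class's score.
pred = np.array([1, 0])
print([float(s[pred[i]]) for i, s in enumerate(score)])  # [0.6, 0.7]

# Float labels (e.g. from a 0.0/1.0 target) reproduce the reported error.
pred_float = np.array([1.0, 0.0])
try:
    [s[pred_float[i]] for i, s in enumerate(score)]
except IndexError as err:
    print(err)  # only integers, slices (`:`), ellipsis (`...`), ...
```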

Installed Versions

PyCaret required dependencies:
pip: 23.1.2
setuptools: 67.7.2
pycaret: 3.0.2
IPython: 7.34.0
ipywidgets: 7.7.1
tqdm: 4.65.0
numpy: 1.22.4
pandas: 1.3.5
jinja2: 3.1.2
scipy: 1.10.1
joblib: 1.2.0
sklearn: 1.2.2
pyod: 1.0.9
imblearn: 0.10.1
category_encoders: 2.6.1
lightgbm: 3.3.5
numba: 0.56.4
requests: 2.27.1
matplotlib: 3.7.1
scikitplot: 0.3.7
yellowbrick: 1.5
plotly: 5.13.1
kaleido: 0.2.1
statsmodels: 0.13.5
sktime: 0.17.0
tbats: 1.1.3
pmdarima: 2.0.3
psutil: 5.9.5

PyCaret optional dependencies:
shap: Not installed
interpret: Not installed
umap: Not installed
pandas_profiling: Not installed
explainerdashboard: Not installed
autoviz: Not installed
fairlearn: Not installed
xgboost: 1.7.5
catboost: Not installed
kmodes: Not installed
mlxtend: 0.14.0
statsforecast: Not installed
tune_sklearn: Not installed
ray: Not installed
hyperopt: 0.2.7
optuna: Not installed
skopt: Not installed
mlflow: Not installed
gradio: Not installed
fastapi: Not installed
uvicorn: Not installed
m2cgen: Not installed
evidently: Not installed
fugue: Not installed
streamlit: Not installed
prophet: 1.1.2
