Closed
Labels
bug (Something isn't working)
Description
pycaret version checks
- [x] I have checked that this issue has not already been reported here.
- [x] I have confirmed this bug exists on the latest version of pycaret.
- [x] I have confirmed this bug exists on the master branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@master).
Issue Description
The results table produced by compare_models() reports AUC = 0 for the compared models.
Reproducible Example
```python
import pandas as pd
from pycaret.classification import *

# query and mydb (the SQL query and database connection) are defined earlier
data = pd.read_sql(query, mydb)
s = setup(data, target='weather', log_experiment=True, experiment_name=my_experiment_name)
best_model = s.compare_models()
```
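For context, the 0.0 fallback occurs whenever a fitted estimator exposes only decision_function rather than predict_proba, because multiclass ROC AUC needs per-class probabilities. A minimal sklearn-only sketch (the synthetic data and the choice of LinearSVC are illustrative assumptions, not taken from this report):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.svm import LinearSVC

# Synthetic 3-class problem, roughly analogous to a multiclass 'weather' target
X, y = make_classification(
    n_samples=120, n_classes=3, n_informative=5, random_state=0
)

# LinearSVC has no predict_proba; it only provides decision_function
clf = LinearSVC().fit(X, y)
scores = clf.decision_function(X)  # shape (120, 3); rows need not sum to 1

print(hasattr(clf, "predict_proba"))  # False

try:
    roc_auc_score(y, scores, multi_class="ovr", average="weighted")
    raised = False
except ValueError as err:
    # This is the same error the scorer swallows, returning 0.0 instead
    raised = True
    print("AUC cannot be computed from decision scores:", err)
```

This matches the warning's scorer signature, which tries `('decision_function', 'predict_proba')` in order: when the first succeeds but yields non-probability scores, the multiclass AUC computation fails and the error score 0.0 is recorded.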
Expected Behavior
The AUC column of the compare_models() table should contain valid scores for every model; instead the table is incomplete, with AUC reported as 0.
Actual Results
```
2024-03-02 14:11:43,296:WARNING:c:\Users\Laurent\miniconda3\envs\pycaret3\lib\site-packages\pycaret\internal\metrics.py:196: FitFailedWarning: Metric 'make_scorer(roc_auc_score, response_method=('decision_function', 'predict_proba'), average=weighted, multi_class=ovr)' failed and error score 0.0 has been returned instead. If this is a custom metric, this usually means that the error is in the metric code. Full exception below:
Traceback (most recent call last):
  File "c:\Users\Laurent\miniconda3\envs\pycaret3\lib\site-packages\pycaret\internal\metrics.py", line 188, in _score
    return super()._score(
  File "c:\Users\Laurent\miniconda3\envs\pycaret3\lib\site-packages\sklearn\metrics\_scorer.py", line 350, in _score
    return self._sign * self._score_func(y_true, y_pred, **scoring_kwargs)
  File "c:\Users\Laurent\miniconda3\envs\pycaret3\lib\site-packages\pycaret\internal\metrics.py", line 136, in __call__
    return self.score_func(y_true, y_pred, **kwargs)
  File "c:\Users\Laurent\miniconda3\envs\pycaret3\lib\site-packages\sklearn\utils\_param_validation.py", line 213, in wrapper
    return func(*args, **kwargs)
  File "c:\Users\Laurent\miniconda3\envs\pycaret3\lib\site-packages\sklearn\metrics\_ranking.py", line 634, in roc_auc_score
    return _multiclass_roc_auc_score(
  File "c:\Users\Laurent\miniconda3\envs\pycaret3\lib\site-packages\sklearn\metrics\_ranking.py", line 707, in _multiclass_roc_auc_score
    raise ValueError(
ValueError: Target scores need to be probabilities for multiclass roc_auc, i.e. they should sum up to 1.0 over classes
```
Installed Versions
pycaret: 3.3.0
IPython: 8.22.1
ipywidgets: 8.1.2
tqdm: 4.66.2
numpy: 1.26.4
pandas: 2.1.4
jinja2: 3.1.3
scipy: 1.11.4
joblib: 1.3.2
sklearn: 1.4.1.post1