
Support for Custom Metrics in "check_metric" #880

@ngupta23

Description


Is your feature request related to a problem? Please describe.
check_metric does not support custom metrics yet. I defined a custom metric and used it during training, but when I try to evaluate it against predictions on unseen data, it raises an error:

import numpy as np
import pandas as pd

def single_instance_metric(row):
    """Per-row cost: 10 for a false positive, 500 for a false negative."""
    if row['y_test'] == 0 and row['y_pred'] == 1:  # False Positive
        return 10
    elif row['y_test'] == 1 and row['y_pred'] == 0:  # False Negative
        return 500
    else:  # Correct prediction
        return 0

def fp10_fn500_func(y_test, y_pred):
    df = pd.DataFrame({'y_test': y_test, 'y_pred': y_pred})
    df['metric'] = df.apply(single_instance_metric, axis=1)
    return np.mean(df['metric'].values)
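As an aside, the same asymmetric cost (10 per false positive, 500 per false negative) can be computed without the row-wise `apply`, which is much faster on large prediction frames. A minimal vectorized sketch using only NumPy:

```python
import numpy as np

def fp10_fn500_vectorized(y_test, y_pred):
    # Cost 10 where true label is 0 but 1 was predicted (false positive),
    # cost 500 where true label is 1 but 0 was predicted (false negative),
    # cost 0 for correct predictions; return the mean cost per instance.
    y_test = np.asarray(y_test)
    y_pred = np.asarray(y_pred)
    cost = np.where((y_test == 0) & (y_pred == 1), 10,
           np.where((y_test == 1) & (y_pred == 0), 500, 0))
    return float(cost.mean())
```

This produces the same value as `fp10_fn500_func` above, so either form could back the registered metric.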

add_metric(
    id='fp10_fn500',
    name='fp10_fn500',
    score_func=fp10_fn500_func,
    target='pred',
    greater_is_better=False)

get_metrics()

(screenshot: get_metrics() output listing the fp10_fn500 custom metric)

best_model = compare_models(sort='F1', exclude=['gbc'])

(screenshot: compare_models leaderboard)

final_model_baseline = finalize_model(best_model)
unseen_predictions = predict_model(final_model_baseline, data=test)

from pycaret.utils import check_metric
check_metric(unseen_predictions['y'], unseen_predictions['Label'], metric='fp10_fn500')

(screenshot: the error raised by check_metric for the custom metric id)

Describe the solution you'd like
I would like to be able to evaluate a model on unseen data using the custom metric.

Describe alternatives you've considered
This can be done manually by calling the score function directly, but it would be nicer to support it in PyCaret internally. I can work on this and submit a PR if needed.

fp10_fn500_func(y_test=unseen_predictions['y'], y_pred=unseen_predictions['Label'])

(screenshot: metric value computed by calling the score function manually)
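To illustrate the requested behavior, here is a minimal sketch of how a `check_metric`-style function could dispatch to custom metrics by id. `CUSTOM_METRICS`, `register_metric`, and `check_custom_metric` are hypothetical names standing in for whatever internal registry `add_metric` populates; PyCaret's actual container is not shown in this issue.

```python
import numpy as np

# Hypothetical registry; in PyCaret this role would be played by
# whatever store add_metric writes to (assumption, not the real API).
CUSTOM_METRICS = {}

def register_metric(metric_id, score_func):
    """Mimic add_metric: remember the score function under its id."""
    CUSTOM_METRICS[metric_id] = score_func

def check_custom_metric(actual, prediction, metric):
    """Mimic check_metric, but resolve custom metric ids via the registry."""
    score_func = CUSTOM_METRICS.get(metric)
    if score_func is None:
        raise ValueError(f"'{metric}' is not a registered metric id")
    return score_func(np.asarray(actual), np.asarray(prediction))
```

With a lookup like this, `check_metric(unseen_predictions['y'], unseen_predictions['Label'], metric='fp10_fn500')` could succeed for any metric previously registered through `add_metric`, instead of raising for unknown ids.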
