
Conversation

ngupta23 (Collaborator)

Related Issue or bug

Time Series tune_model does not work with non-default optimize argument

tune_model(model, optimize = 'MAE')
~\pycaret-timeseries\pycaret\pycaret\time_series.py in fit(self, y, X, **fit_params)
   2199             raise ValueError(
   2200                 f"Refit Metric: '{refit_metric}' is not available. ",
-> 2201                 f"Available Values are: {list(scorers.keys())}",
   2202             )
   2203 
ValueError: ("Refit Metric: 'smape' is not available. ", "Available Values are: ['MAE']")

Describe the changes you've made

Fixed the above-mentioned issue. The refit metric was not being passed to the grid search instantiation earlier.
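To illustrate the failure mode, here is a minimal sketch using sklearn's GridSearchCV in place of pycaret's internal time series tuner (the estimator, data, and parameter grid are made up): when scoring is a dict of scorers, refit must name one of the dict's keys, which is what breaks if the user's optimize metric is not passed through to the search.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import make_scorer, mean_absolute_error
from sklearn.model_selection import GridSearchCV

# Scorers built from the user's chosen optimize metric (here 'MAE').
scorers = {"MAE": make_scorer(mean_absolute_error, greater_is_better=False)}

grid = GridSearchCV(
    estimator=Ridge(),
    param_grid={"alpha": [0.1, 1.0, 10.0]},
    scoring=scorers,
    refit="MAE",  # must be a key of `scorers`; e.g. refit="smape" would fail here
    cv=3,
)

X, y = np.random.rand(30, 3), np.random.rand(30)
grid.fit(X, y)  # refit on the full data using the 'MAE' scorer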

Type of change

  • Bug fix (non-breaking change which fixes an issue)

How Has This Been Tested?

A unit test has been added covering tune_model with a non-default optimize argument.

Checklist:

  • My code follows the style guidelines of this project.
  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.
  • Any dependent changes have been merged and published in downstream modules.

@ngupta23 merged commit 279f8da into time_series on Jun 15, 2021
@ngupta23 deleted the fix_tune_optimize branch on June 15, 2021 at 14:15
fh = 12
fold = 2

exp.setup(data=load_data, fold=fold, fh=fh, fold_strategy="sliding")
Contributor

It may be a good idea to later define a fixture for setup with fold_strategy="sliding".

Collaborator Author

I think we will have to re-architect the unit tests at some point after the alpha release. For example, we could also have a fixture for created models so they could be reused in all tests instead of being created again in each test.
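As a rough sketch of that fixture idea (pytest; load_data and make_experiment are placeholders for whatever the test suite already uses to load data and construct the experiment object, not the project's actual test code):

import pytest


@pytest.fixture
def sliding_exp(load_data):
    """Hypothetical fixture: experiment set up once with fold_strategy='sliding'."""
    exp = make_experiment()  # hypothetical helper returning a fresh time series experiment
    exp.setup(data=load_data, fold=2, fh=12, fold_strategy="sliding")
    return exp


def test_tune_model_non_default_optimize(sliding_exp):
    model_obj = sliding_exp.create_model("naive")
    tuned_model_obj = sliding_exp.tune_model(model_obj, optimize="MAE")
    assert tuned_model_obj is not None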

exp.setup(data=load_data, fold=fold, fh=fh, fold_strategy="sliding")

model_obj = exp.create_model("naive")
tuned_model_obj = exp.tune_model(model_obj, optimize="MAE")
Contributor

We could parametrize this to accept MAE, SMAPE, and all the other metrics available in the time series module.

Collaborator Author

Yes, good idea. Let's add it as an enhancement for when we refactor the tests.
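A sketch of that parametrization, reusing the sliding_exp fixture sketched above (the metric list is illustrative and would need to match the metrics actually registered in the time series module):

import pytest


@pytest.mark.parametrize("metric", ["MAE", "RMSE", "MAPE", "SMAPE"])  # illustrative list
def test_tune_model_optimize_metrics(sliding_exp, metric):
    """Run tune_model once per available metric instead of hard-coding one."""
    model_obj = sliding_exp.create_model("naive")
    tuned_model_obj = sliding_exp.tune_model(model_obj, optimize=metric)
    assert tuned_model_obj is not None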


exp.setup(data=load_data, fold=fold, fh=fh, fold_strategy="sliding")

model_obj = exp.create_model("naive")
Contributor

Is there a reason for testing the naive model? I imagine this makes the test run faster since you only want to test non-default parameters.

Collaborator Author

Yes, it is to make it faster. I just want to test that the flow works with non-default parameters (so one model is enough). There is already a flow that tests all models through tune_model with default arguments, so I think this is a good compromise.

@ngupta23 mentioned this pull request on Jun 17, 2021