
Multiple models & optimizers & phases support. Vanilla GAN example on MNIST. #365


Closed

Conversation

@asmekal (Contributor) commented Sep 12, 2019

Finally! GANs supported!
For now as a standalone example, which may be generalized in the future to any multi-model & multi-optimizer & multi-phase experiments

@Scitator Scitator self-requested a review September 12, 2019 17:49
@Scitator Scitator added enhancement New feature or request WIP Pull request is under construction labels Sep 13, 2019
@asmekal asmekal changed the title Standalone example of GANs (only minor changes in core catalyst functionality) Multiple models & optimizers & phases support. Vanilla GAN example on MNIST. Sep 14, 2019
        self.lr = None
        self.momentum = None
        single_optimizer = isinstance(optimizer, Optimizer)
        self.lr = None if single_optimizer else defaultdict(lambda: None)
Contributor:

don't use defaultdict, try https://github.com/catalyst-team/safitty from the catalyst ecosystem

Contributor Author:

So far I believe defaultdicts are much clearer here, with much less code.
I agree that plain dicts may prevent some errors (but then without safitty, again)

Maybe you can give me a masterclass?

        self.lr = None
        self.momentum = None
        if isinstance(optimizer, dict):
            self.lr = dict()
            self.momentum = dict()
            for key in optimizer:
                safitty.set(self.lr, key, value=None)
                safitty.set(self.momentum, key, value=None)
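A minimal standalone comparison of the two approaches under discussion (plain Python, no catalyst or safitty dependency; the optimizer keys here are hypothetical):

```python
from collections import defaultdict

# Hypothetical multi-optimizer setup: keys are model names.
optimizer = {"generator": object(), "discriminator": object()}

# Approach 1: defaultdict - any key reads as None until it is set,
# so a typo in a key silently yields None instead of an error.
lr_default = defaultdict(lambda: None)
print(lr_default["generator"])  # None, even before assignment

# Approach 2: explicit dict pre-filled per optimizer key -
# unknown keys now raise KeyError instead of silently returning None.
lr_explicit = {key: None for key in optimizer}
print(lr_explicit["generator"])  # None
# lr_explicit["typo_key"]  # would raise KeyError
```

The trade-off is exactly the one in the thread: the defaultdict is shorter, while the pre-filled dict turns key typos into loud errors.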

from catalyst.dl import Runner


class GANRunner(Runner):
Contributor:

maybe it's better to move GANRunner from examples into catalyst/dl/runner ?

Contributor Author:

OK

Actually, right now, to implement some other GAN based on the current vanilla GANRunner, someone would have to override it almost completely. I suppose we will have to think about a better, more generic, easily extendable implementation that would make a more useful parent class.

But I'm going to leave this important task to other PRs =)
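The generalization being discussed boils down to phase alternation: each batch, the runner must decide which model/optimizer pair is active. A minimal sketch of that logic (the `Phase`/`PhaseManager` names are hypothetical, not the catalyst API):

```python
class Phase:
    """One training phase, e.g. a discriminator step, active for `steps` batches."""
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps


class PhaseManager:
    """Cycles through phases, spending `steps` consecutive batches in each."""
    def __init__(self, phases):
        self.phases = phases
        self._idx = 0
        self._done_in_phase = 0

    def step(self):
        # Return the currently active phase name, then advance the counter.
        phase = self.phases[self._idx]
        self._done_in_phase += 1
        if self._done_in_phase >= phase.steps:
            self._done_in_phase = 0
            self._idx = (self._idx + 1) % len(self.phases)
        return phase.name


# Classic GAN schedule: 2 discriminator batches per generator batch.
manager = PhaseManager([Phase("discriminator_train", 2),
                        Phase("generator_train", 1)])
print([manager.step() for _ in range(6)])
# -> ['discriminator_train', 'discriminator_train', 'generator_train',
#     'discriminator_train', 'discriminator_train', 'generator_train']
```

A generic GANRunner could then dispatch on the returned phase name inside its batch handler instead of hard-coding the vanilla-GAN schedule.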

if wrapper_params:
    wrapper_params["base_callback"] = callback
    return ConfigExperiment._get_callback(**wrapper_params)
return callback
Member:

what do you think about another approach, like here https://github.com/catalyst-team/catalyst/blob/master/catalyst/contrib/optimizers/lookahead.py#L84 ?
so in the config it would look like:

callback_name:
    callback: WrapperCallback
    param_1: ...
    param_2: ...
    base_callback_params:
        param_3: ...
        param_4: ...

Contributor Author:

I think my solution needs one line in config.yml, compared to two lines here
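For comparison, the nested-config approach suggested above can be sketched as a small recursive factory. The callback classes and registry here are hypothetical stand-ins, not the actual catalyst implementation, and this sketch assumes the nested params also carry a `callback` key:

```python
# Hypothetical callbacks, for illustration only.
class BaseCallback:
    def __init__(self, param_3=None, param_4=None):
        self.param_3, self.param_4 = param_3, param_4


class WrapperCallback:
    def __init__(self, base_callback, param_1=None, param_2=None):
        self.base_callback = base_callback
        self.param_1, self.param_2 = param_1, param_2


REGISTRY = {"BaseCallback": BaseCallback, "WrapperCallback": WrapperCallback}


def get_callback(params):
    """Recursively build a callback; nested base_callback_params wrap inward."""
    params = dict(params)  # don't mutate the caller's config
    cls = REGISTRY[params.pop("callback")]
    base_params = params.pop("base_callback_params", None)
    if base_params is not None:
        params["base_callback"] = get_callback(base_params)
    return cls(**params)


cb = get_callback({
    "callback": "WrapperCallback",
    "param_1": 1,
    "base_callback_params": {"callback": "BaseCallback", "param_3": 3},
})
print(type(cb.base_callback).__name__)  # BaseCallback
```

Either way the wrapping is driven purely by config; the approaches differ only in whether the nesting lives in the YAML or in a dedicated `wrapper_params` key.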

@@ -166,11 +169,17 @@ def _run_epoch(self, loaders):
            self._run_loader(loader)
            self._run_event("loader_end")

    def _run_prestage(self, stage: str):
Member:

looks like a dirty hack :)

Contributor Author:

I created this method to memorize something in the runner before a stage begins, but after initialization is done. It's obviously a little bit hacky, but I think it's still useful, i.e. the hack is not dirty =)
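The hook described above is essentially the template-method pattern. A minimal sketch (these class and method names are illustrative, not catalyst's actual runner API):

```python
class Runner:
    """Sketch of a runner exposing a pre-stage hook that fires after
    stage initialization but before the stage loop runs."""
    def __init__(self):
        self.log = []

    def _init_stage(self, stage):
        self.log.append(f"init:{stage}")

    def _run_prestage(self, stage):
        # Subclasses override this to memorize state once per stage,
        # after everything is initialized but before the first epoch.
        pass

    def _run_stage(self, stage):
        self._init_stage(stage)
        self._run_prestage(stage)
        self.log.append(f"run:{stage}")


class GANRunner(Runner):
    def _run_prestage(self, stage):
        self.log.append(f"prestage:{stage}")


runner = GANRunner()
runner._run_stage("train")
print(runner.log)  # ['init:train', 'prestage:train', 'run:train']
```

The base class guarantees the ordering; subclasses only fill in the hook, which is why the pattern is "hacky but not dirty".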


    @staticmethod
    def _get_tensorboard_logger(state: RunnerState) -> SummaryWriter:
        for logger_name, logger in state.loggers.items():
Member:

you can simply use state.loggers["tensorboard"]

Contributor Author:

and what happens when you rename that logger? =(
what do you think about the new solution? it seems to have become even less readable
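The trade-off in this exchange can be shown with a type-based lookup that survives a key rename (dummy classes here, not catalyst's actual loggers):

```python
# Dummy stand-ins for illustration only.
class TensorboardLogger:
    pass


class ConsoleLogger:
    pass


loggers = {"tb_renamed": TensorboardLogger(), "console": ConsoleLogger()}

# Name-based lookup breaks as soon as the key changes:
# loggers["tensorboard"]  # KeyError after a rename

# Type-based lookup keeps working regardless of the key:
def get_logger_by_type(loggers, logger_type):
    """Return the first logger of the given type, or None if absent."""
    for logger in loggers.values():
        if isinstance(logger, logger_type):
            return logger
    return None


print(type(get_logger_by_type(loggers, TensorboardLogger)).__name__)
# TensorboardLogger
```

The cost is exactly the readability complaint above: a loop and an isinstance check instead of a one-line dict access.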

@Scitator (Member) commented Oct 4, 2019

somehow moved to #407
:)

@Scitator Scitator closed this Oct 4, 2019