
Increase Shogun coverage in benchmarks #4046

@karlnapf

Description

This task is to look into mlpack's benchmarking framework and add more Shogun methods to it. We eventually want all of Shogun's algorithms to be covered there.
This is quite simple and mostly requires a few new benchmark scripts to be added; see the example sketch below.
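To give a sense of what such a script boils down to, here is a minimal, hypothetical sketch of the timed core of a Shogun benchmark, using LARS as the example from the steps below. It assumes Shogun's modular Python API (import names may differ across versions, e.g. `modshogun` in older releases); the exact interface expected by the benchmark runner should be checked against the existing scripts in the framework.

```python
# Hypothetical sketch of the timed core of a Shogun LARS benchmark script.
# Assumes Shogun's modular Python API; verify import names for your version.
import time
import numpy as np
from shogun import RealFeatures, RegressionLabels, LeastAngleRegression

def run_lars_benchmark(X, y):
    """Train Shogun LARS on (X, y) and return the wall-clock training time.

    X: numpy array of shape (n_features, n_samples); Shogun expects one
       example per column.
    y: numpy 1D array of length n_samples.
    """
    features = RealFeatures(X)
    labels = RegressionLabels(y)

    lars = LeastAngleRegression()
    lars.set_labels(labels)

    start = time.time()
    lars.train(features)
    return time.time() - start

if __name__ == "__main__":
    # Synthetic data for illustration only; real benchmark scripts load the
    # datasets shipped with the benchmarking framework.
    rng = np.random.RandomState(0)
    X = rng.rand(10, 500)   # 10 features, 500 examples
    y = rng.rand(500)
    print("LARS training time: %.4fs" % run_lars_benchmark(X, y))
```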

Some information can be found here:
https://github.com/shogun-toolbox/shogun/wiki/GSoC_2017_project_fundamental_usual_suspects

Another nice thing to have would be a sorted list of where Shogun performs best and worst compared to other frameworks.

Steps:

  1. Find out which algorithms are covered by the framework, both for Shogun and in general (a small helper sketch follows this list)
  2. Go to http://www.mlpack.org/benchmark.html and check the latest reports
  3. Find a reported algorithm that is implemented in Shogun but not yet covered by the benchmarks (LARS is one example)
  4. Add it
  5. Go back to step 1
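For steps 1 and 3, a small helper like the following can make the coverage gap easy to see. It is only a sketch: it assumes a local checkout of the benchmarking framework with a `methods/<library>/*.py` layout, which should be verified against the actual repository.

```python
# Hypothetical helper for step 1: list which method scripts already exist in
# a local checkout of the benchmarking framework, so gaps in Shogun coverage
# are easy to spot. The methods/<library>/*.py layout is an assumption.
import os

def covered_methods(benchmarks_root, library="shogun"):
    """Return the benchmark script names present for one library."""
    method_dir = os.path.join(benchmarks_root, "methods", library)
    return sorted(
        f[:-3] for f in os.listdir(method_dir)
        if f.endswith(".py") and not f.startswith("__")
    )

if __name__ == "__main__":
    shogun_scripts = covered_methods("benchmarks")
    mlpack_scripts = covered_methods("benchmarks", library="mlpack")
    # Methods benchmarked for mlpack but not yet for Shogun are candidates.
    print(sorted(set(mlpack_scripts) - set(shogun_scripts)))
```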
