Model Registry provides a single pane of glass for ML model developers to index and manage models, versions, and ML artifacts metadata. It fills a gap between model experimentation and production activities. It provides a central interface for all stakeholders in the MLOps lifecycle to collaborate on ML models.

Model Registry


Model registry provides a central repository for model developers to store and manage models, versions, and artifacts metadata.

Red Hat's Pledge

  • Red Hat drives the project's development through Open Source principles, ensuring transparency, sustainability, and community ownership.
  • Red Hat values the Kubeflow community and commits to providing a minimum of 12 months' notice before ending project maintenance after the initial release.

Alpha

This Kubeflow component has alpha status with limited support. See the Kubeflow versioning policies. The Kubeflow team is interested in your feedback about the usability of the feature.

Documentation links:

  1. Introduction
  2. Installation
  3. Concepts
  4. Python client
  5. Tutorials
  6. FAQs
  7. Development
  8. UI

Pre-requisites:

OpenAPI Proxy Server

The model registry proxy server implementation follows a contract-first approach, where the contract is defined by the model-registry.yaml OpenAPI specification.

You can also display the latest OpenAPI contract for model-registry in a Swagger-like editor directly from this repository.
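
As a sketch, one way to do that is to point the public Swagger editor at the raw spec file; the repository path used below (api/openapi/model-registry.yaml on the main branch) is an assumption, so adjust it if the file lives elsewhere:

# Load the contract into the public Swagger editor (the raw-file path is an assumed example)
xdg-open "https://editor.swagger.io/?url=https://raw.githubusercontent.com/kubeflow/model-registry/main/api/openapi/model-registry.yaml"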

Starting the OpenAPI Proxy Server

Run the following command to start the OpenAPI proxy server from source:

make run/proxy

The proxy service implements the OpenAPI contract defined in model-registry.yaml to provide a Model Registry-specific REST API.
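
Once the proxy is running, a quick way to exercise that REST API is with curl. This is a minimal sketch: it assumes the default local port 8080 and the v1alpha3 API prefix, and the request fields should be checked against model-registry.yaml:

# List registered models (assumes the proxy listens on localhost:8080 with the v1alpha3 API prefix)
curl -s http://localhost:8080/api/model_registry/v1alpha3/registered_models

# Create a registered model (field names are illustrative; verify them against the OpenAPI contract)
curl -s -X POST http://localhost:8080/api/model_registry/v1alpha3/registered_models \
  -H 'Content-Type: application/json' \
  -d '{"name": "my-model", "description": "example model"}'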

Model registry logical model

For high-level documentation of the Model Registry logical model, please check this guide.

Model Registry Core

The model registry core is the layer that implements the core/business logic by interacting with the underlying internal datastore service. It exposes a model registry domain-specific API and is responsible for proxying all requests, appropriately transformed, to the internal datastore service.

Model registry library

For more background on the Model Registry Go core library and instructions on using it, please check the getting started guide.

Development

Database Schema Changes

When making changes to the database schema, you need to regenerate the GORM structs. This is done using the gen/gorm target:

make gen/gorm

This target will:

  1. Start a temporary database
  2. Run migrations
  3. Generate GORM structs based on the schema
  4. Clean up the temporary database

NOTE: The target requires Docker to be running.

Building

Run the following command to build the server binary:

make build

The generated binary accepts spf13-style command-line arguments. For more information on using the server, run the command:

./model-registry --help
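
For example, a minimal invocation of the proxy subcommand (the -n flag is the same one used in the Docker example below; check --help for the full set of options) might look like:

# Start the proxy listening on all interfaces instead of the default localhost
./model-registry proxy -n 0.0.0.0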

Run the following command to clean up the server binary, generated models, and other build artifacts:

make clean

Testing

Run the following command to trigger all tests:

make test

or, to see the statement coverage:

make test-cover

Docker Image

Building the docker image

The following command builds a docker image for the server with the tag model-registry:

docker build -t model-registry .

Note that the first build will be longer as it downloads the build tool dependencies. Subsequent builds will re-use the cached tools layer.

Running the proxy server

The following command starts the proxy server:

docker run -d -p <hostname>:<port>:8080 --user <uid>:<gid> --name server model-registry proxy -n 0.0.0.0

Here, <uid> and <gid> are the user and group IDs the container should run as, while <hostname> and <port> are the local IP and port used to expose the container's default 8080 listening port. The server listens on localhost by default, so the -n 0.0.0.0 option is needed to make the server port reachable from outside the container.
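
For example, a minimal local run with placeholder values filled in (adjust the user/group IDs and bind address for your environment) might look like:

# Expose the proxy on localhost:8080, running as a non-root user (example values)
docker run -d -p 127.0.0.1:8080:8080 --user 1000:1000 --name server model-registry proxy -n 0.0.0.0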

Running model registry

NOTE: Docker Compose or Podman Compose must be installed in your environment.

There are two docker-compose files that make the startup easier:

  • docker-compose.yaml - Uses pre-built images from registry
  • docker-compose-local.yaml - Builds model registry from source

Both files support MySQL and PostgreSQL databases using profiles.

Using Makefile targets (recommended)

The easiest way to run the services is using the provided Makefile targets:

# Start with MySQL (using pre-built images)
make compose/up

# Start with PostgreSQL (using pre-built images)  
make compose/up/postgres

# Start with MySQL (builds from source)
make compose/local/up

# Start with PostgreSQL (builds from source)
make compose/local/up/postgres

# Stop services
make compose/down  # or compose/local/down

# Clean up all volumes and networks
make compose/clean

Manual docker-compose usage

Alternatively, you can run the compose files directly:

# Using pre-built images with MySQL
docker-compose --profile mysql up

# Using pre-built images with PostgreSQL  
DB_TYPE=postgres docker-compose --profile postgres up

# Building from source with PostgreSQL
DB_TYPE=postgres docker-compose -f docker-compose-local.yaml --profile postgres up

The Makefile automatically detects whether to use docker-compose, podman-compose, or docker compose based on what's available on your system.

Testing architecture

The following diagram illustrates the testing strategy for the several components of the Model Registry project:

The Go layers are tested with unit tests written in Go, as well as integration tests leveraging Testcontainers. This verifies that the "Core layer" of logical data mapping, developed and implemented in Go, matches the technical expectations.

The Python client is also tested with unit tests and integration tests written in Python.

End-to-end testing is developed with KinD and Pytest; this higher-level layer of testing is used to demonstrate user stories from a high-level perspective.

Related Components

Model Catalog Service

Kubernetes Components

  • Controller - Kubernetes controller for model registry CRDs
  • CSI Driver - Container Storage Interface for model artifacts

Client Components

Job Components

Development & Deployment

FAQ

How do I delete metadata resources using the Model Registry API?

Model Registry uses a common ARCHIVED state for all resource types. To "delete" a resource, update its state to ARCHIVED.
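
As a hedged sketch of what that looks like through the REST API (the endpoint path, resource ID, and state field below are assumptions to verify against model-registry.yaml):

# Archive (soft-delete) a registered model by updating its state; the ID and port are illustrative
curl -s -X PATCH http://localhost:8080/api/model_registry/v1alpha3/registered_models/1 \
  -H 'Content-Type: application/json' \
  -d '{"state": "ARCHIVED"}'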

Tips

Pull image rate limiting

Occasionally you may encounter an 'ImagePullBackOff' error when deploying the Model Registry manifests. See the example below for the model-registry-db container.

Failed to pull image “mysql:8.3.0”: rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:f9097d95a4ba5451fff79f4110ea6d750ac17ca08840f1190a73320b84ca4c62 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

This error is triggered by docker.io rate limits; in this example it concerns the image mysql:8.3.0 (the expanded reference is docker.io/library/mysql:8.3.0). To mitigate it you could authenticate using image pull secrets; or, for local development, replace the image with an alternative mirrored image, for instance in the manifests/kustomize/overlays/db/model-registry-db-deployment.yaml file:

spec.template.spec.containers.image: public.ecr.aws/docker/library/mysql:8.3.0
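
Alternatively, if the database is already deployed, a hedged sketch of applying the same image change to the running deployment (the container name "mysql" is an assumption; the deployment name comes from the manifest above) is:

# Point the model-registry-db deployment at the mirrored MySQL image (container name is assumed)
kubectl set image deployment/model-registry-db mysql=public.ecr.aws/docker/library/mysql:8.3.0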
