Seldon Core API

| Branch | Status |
| --- | --- |
| master | Build Status |
| release-0.2 | Build Status |
| release-0.1 | Build Status |

Seldon Core is an open source platform for deploying machine learning models on Kubernetes.

Goals

Machine learning deployment has many challenges, and Seldon Core intends to help with them. Its high-level goals are:

  • Allow organisations to run and manage machine learning models built using any machine learning toolkit. Any model that can be run inside a Docker container can run in Seldon Core.
  • Provide a production-ready machine learning deployment system on top of Kubernetes that integrates well with other Cloud Native tools.
  • Provide tools for rich metrics, optimization and proper compliance of machine learning models in production.
    • Optimize your models using multi-armed bandit solvers
    • Run Outlier Detection models
    • Get alerts on Concept Drift
    • Provide black-box model explanations of running models
  • Automatically expose REST and gRPC endpoints to allow business applications to easily call your machine learning models (see the example request after this list).
  • Handle full lifecycle management of the deployed model:
    • Updating the runtime graph with no downtime
    • Scaling
    • Monitoring
    • Security
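
As a minimal illustration of the REST endpoint goal above, the snippet below posts a prediction request with Python's `requests`. The host, port, deployment name and payload values are assumptions to adapt to the ingress and SeldonDeployment in your own cluster.

```python
import requests

# Assumed ingress address and SeldonDeployment name -- replace with the
# values exposed by your own cluster (e.g. the Ambassador ingress).
ENDPOINT = "http://localhost:8003/seldon/mymodel/api/v0.1/predictions"

# A SeldonMessage-style payload: named features carried as an ndarray.
payload = {"data": {"names": ["f0", "f1"], "ndarray": [[1.0, 2.0]]}}

response = requests.post(ENDPOINT, json=payload)
response.raise_for_status()
print(response.json())  # the prediction comes back as a JSON document
```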

We have a roadmap on which we welcome feedback.

Prerequisites

A Kubernetes cluster. Kubernetes can be deployed into many environments, both in the cloud and on-premise.

Quick Start

Read the overview of using seldon-core.

Example Components

Seldon-core allows various types of components to be built and plugged into the runtime prediction graph. These include generic components such as models, routers, transformers and combiners. Some example components are available as part of the project.

Seldon allows you to build up runtime inference graphs that provide powerful optimization and metrics for your running models. Example components that help you provide a compliant production machine learning system are also available.
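
To make the component types concrete, here is a sketch of a router, the component type behind the multi-armed bandit solvers mentioned above. The class layout and the `route`/`send_feedback` method names mirror the convention used by the Python wrappers, but treat this as an illustration rather than the definitive interface; check the wrapper docs for your version.

```python
import random

class EpsilonGreedyRouter(object):
    """Illustrative router: chooses which child model serves each request."""

    def __init__(self, n_children=2, epsilon=0.1):
        self.n_children = n_children
        self.epsilon = epsilon
        self.best_child = 0  # index of the child currently believed best

    def route(self, features, feature_names):
        # Explore a random child with probability epsilon, otherwise exploit.
        if random.random() < self.epsilon:
            return random.randrange(self.n_children)
        return self.best_child

    def send_feedback(self, features, feature_names, reward, truth=None):
        # A real bandit router would update per-child reward estimates here.
        pass
```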

Integrations

Install

Follow the install guide for details on ways to install Seldon onto your Kubernetes cluster.

Deployment Guide

API

  1. Wrap your runtime prediction model (a sketch follows this list).
  2. Define your runtime inference graph in a SeldonDeployment custom resource.
  3. Deploy the graph.
  4. Serve predictions.
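
As a minimal sketch of step 1, the Python S2I wrappers expect a class exposing a `predict` method. The file name, class name, model artefact and scikit-learn-style classifier below are assumptions; check the wrapping docs for the exact contract of your wrapper version.

```python
# MyModel.py -- wrapped model for the seldon-core-s2i-python images (sketch).
import joblib  # assumption: the trained model was serialised with joblib

class MyModel(object):
    def __init__(self):
        # Load the trained artefact once, when the container starts.
        self.model = joblib.load("model.joblib")

    def predict(self, X, feature_names):
        # X is the feature matrix built from the incoming request;
        # return an array-like of predictions or class probabilities.
        return self.model.predict_proba(X)
```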

Advanced Tutorials

Reference

Articles/Blogs/Videos

External content:

Internal Tutorials:

  • Seldon Tech Overview
    • Overview of Seldon Core technology.
  • Seldon Core with Kubeflow Demo
    • End-to-end demo of heterogeneous models trained within Kubeflow and deployed with Seldon. Shows continuous deployment from a single model to an A/B test to a multi-armed bandit.

Release Highlights

Testing

Configuration

Community

Developer

Latest Seldon Images

| Description | Image URL | Stable Version | Development |
| --- | --- | --- | --- |
| Seldon Operator | seldonio/cluster-manager | 0.2.6 | 0.2.7-SNAPSHOT |
| Seldon Service Orchestrator | seldonio/engine | 0.2.6 | 0.2.7-SNAPSHOT |
| Seldon API Gateway | seldonio/apife | 0.2.6 | 0.2.7-SNAPSHOT |
| Seldon Python 3 (3.6) Wrapper for S2I | seldonio/seldon-core-s2i-python3 | 0.5 | 0.6-SNAPSHOT |
| Seldon Python 3.6 Wrapper for S2I | seldonio/seldon-core-s2i-python36 | 0.5 | 0.6-SNAPSHOT |
| Seldon Python 2 Wrapper for S2I | seldonio/seldon-core-s2i-python2 | 0.5 | 0.6-SNAPSHOT |
| Seldon Python ONNX Wrapper for S2I | seldonio/seldon-core-s2i-python3-ngraph-onnx | 0.3 | |
| Seldon Java Build Wrapper for S2I | seldonio/seldon-core-s2i-java-build | 0.1 | |
| Seldon Java Runtime Wrapper for S2I | seldonio/seldon-core-s2i-java-runtime | 0.1 | |
| Seldon R Wrapper for S2I | seldonio/seldon-core-s2i-r | 0.2 | |
| Seldon NodeJS Wrapper for S2I | seldonio/seldon-core-s2i-nodejs | 0.1 | 0.2-SNAPSHOT |
| Seldon Tensorflow Serving proxy | seldonio/tfserving-proxy | 0.1 | |
| Seldon NVIDIA inference server proxy | seldonio/nvidia-inference-server-proxy | 0.1 | |
| Seldon AWS SageMaker proxy | seldonio/sagemaker-proxy | 0.1 | |

Java Packages

| Description | Package | Version |
| --- | --- | --- |
| Seldon Core Wrapper | seldon-core-wrapper | 0.1.5 |
| Seldon Core JPMML | seldon-core-jpmml | 0.0.1 |

Usage Reporting

Anonymous usage reporting tools that help the development of Seldon Core.
