
scikit-rec Documentation

Welcome to the scikit-rec documentation! This library provides a scikit-learn style framework for building, training, and evaluating production-ready recommendation systems.

What is scikit-rec?

scikit-rec is a comprehensive Python package that standardizes the development of recommendation models. It provides:

  • ๐Ÿ—๏ธ Modular 3-layer architecture (Recommender โ†’ Scorer โ†’ Estimator)
  • ๐Ÿ“Š Multiple evaluation strategies (Simple, IPS, DR, SNIPS, and more)
  • ๐ŸŽฏ Rich metrics library (NDCG, MAP, Precision, ROC-AUC, Expected Reward)
  • ๐Ÿš€ Production-ready with real-time and batch inference support
  • ๐Ÿ”ง Hyperparameter optimization at both estimator and recommender levels
  • ๐ŸŽ›๏ธ Config-driven orchestration for Kubeflow pipelines and MLOps

Key Features

3-Layer Architecture

The library uses a clean, modular architecture that separates concerns:

Recommender (Business Logic)
    ↓
Scorer (Item Scoring Strategy)
    ↓
Estimator (ML Model)

This design allows you to mix and match components to build custom recommendation systems.
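
The layering can be pictured with toy stand-in classes. These class names and method signatures are illustrative only, not scikit-rec's actual interfaces; they just show how each layer wraps the one below it:

```python
class MeanEstimator:
    """Estimator layer (toy): an 'ML model' that predicts an item's mean reward."""
    def fit(self, item_rewards):
        self.means = {item: sum(r) / len(r) for item, r in item_rewards.items()}
    def predict(self, item):
        return self.means.get(item, 0.0)

class MeanScorer:
    """Scorer layer (toy): turns estimator predictions into item scores."""
    def __init__(self, estimator):
        self.estimator = estimator
    def score(self, items):
        return {item: self.estimator.predict(item) for item in items}

class TopKRecommender:
    """Recommender layer (toy): business logic — rank scored items, keep top-k."""
    def __init__(self, scorer):
        self.scorer = scorer
    def recommend(self, items, top_k):
        scores = self.scorer.score(items)
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

est = MeanEstimator()
est.fit({"a": [1, 0, 1], "b": [0, 0], "c": [1, 1]})
recs = TopKRecommender(MeanScorer(est)).recommend(["a", "b", "c"], top_k=2)
print(recs)  # → ['c', 'a']
```

Because each layer only depends on the one directly below it, swapping the estimator (or the scoring strategy) leaves the rest of the pipeline untouched.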

Learn more: Architecture Overview · Capability matrix

Multiple Recommender Types

  • RankingRecommender: Standard ranking; also powers Two-Tower, NeuralFactorization, and NCF embedding models
  • SequentialRecommender: Order-aware history (SASRec transformer)
  • HierarchicalSequentialRecommender: Session-structured history with cross-session memory (HRNN)
  • ContextualBanditsRecommender: Exploration & A/B testing
  • UpliftRecommender: Causal impact estimation (T/S/X-Learner)
  • GcslRecommender: Multi-objective optimization (GCSL)

Learn more: Recommender Types Comparison

Comprehensive Evaluation

Built-in support for state-of-the-art evaluation techniques:

  • SimpleEvaluator: Standard on-policy evaluation
  • ReplayMatchEvaluator: Replay-based evaluation
  • IPSEvaluator: Inverse Propensity Scoring (off-policy)
  • DREvaluator: Doubly Robust estimation
  • SNIPSEvaluator: Self-Normalized IPS
  • PolicyWeightedEvaluator: Policy-weighted evaluation

Learn more: Evaluation Guide
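
To see what the off-policy estimators compute, here is a minimal sketch of the IPS and SNIPS math on logged data. This illustrates the underlying formulas only; the actual `IPSEvaluator`/`SNIPSEvaluator` APIs are documented in the Evaluation Guide:

```python
def ips_estimate(rewards, target_probs, logging_probs):
    """Inverse Propensity Scoring: reweight each logged reward by the ratio
    of target-policy to logging-policy action probabilities, average over n."""
    weights = [t / b for t, b in zip(target_probs, logging_probs)]
    return sum(w * r for w, r in zip(weights, rewards)) / len(rewards)

def snips_estimate(rewards, target_probs, logging_probs):
    """Self-Normalized IPS: normalize by the sum of weights instead of n,
    trading a small bias for much lower variance."""
    weights = [t / b for t, b in zip(target_probs, logging_probs)]
    return sum(w * r for w, r in zip(weights, rewards)) / sum(weights)
```

For logged rewards `[1, 0, 1]` with target-policy probabilities `[0.5, 0.2, 0.4]` and logging-policy probabilities `[0.25, 0.5, 0.8]`, IPS estimates ≈ 0.833 and SNIPS ≈ 0.862.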

Quick Example

from skrec.estimator.classification.xgb_classifier import XGBClassifierEstimator
from skrec.recommender.ranking.ranking_recommender import RankingRecommender
from skrec.scorer.universal import UniversalScorer
from skrec.examples.datasets import (
    sample_binary_reward_interactions,
    sample_binary_reward_users,
    sample_binary_reward_items
)

# Build the pipeline: Estimator → Scorer → Recommender
estimator = XGBClassifierEstimator({"learning_rate": 0.1, "n_estimators": 100})
scorer = UniversalScorer(estimator)
recommender = RankingRecommender(scorer)

# Train
recommender.train(
    interactions_ds=sample_binary_reward_interactions,
    users_ds=sample_binary_reward_users,
    items_ds=sample_binary_reward_items
)

# Recommend
recommendations = recommender.recommend(
    interactions=sample_binary_reward_interactions,
    users=sample_binary_reward_users,
    top_k=5
)

Support

Next Steps