scikit-rec Documentation¶
Welcome to the scikit-rec documentation! This library provides a scikit-learn style framework for building, training, and evaluating production-ready recommendation systems.
What is scikit-rec?¶
scikit-rec is a comprehensive Python package that standardizes the development of recommendation models. It provides:
- Modular 3-layer architecture (Recommender → Scorer → Estimator)
- Multiple evaluation strategies (Simple, IPS, DR, SNIPS, and more)
- Rich metrics library (NDCG, MAP, Precision, ROC-AUC, Expected Reward)
- Production-ready with real-time and batch inference support
- Hyperparameter optimization at both estimator and recommender levels
- Config-driven orchestration for Kubeflow pipelines and MLOps
Quick Links¶
- 5-minute tutorial to build your first recommender
- Complete walkthrough of the library's components
- Detailed guides for each recommender type
- HPO, production deployment, and hybrid strategies
Key Features¶
3-Layer Architecture¶
The library uses a clean, modular architecture that separates concerns: a Recommender orchestrates ranking and recommendation, a Scorer produces per-item scores, and an Estimator fits the underlying predictive model. This design allows you to mix and match components to build custom recommendation systems.
Learn more: Architecture Overview · Capability matrix
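To make the layering concrete, here is a minimal plain-Python sketch of the pattern. The class and method names below are hypothetical stand-ins for illustration only, not the real skrec classes (see the Quick Example for the actual API):

```python
# Illustrative sketch of the 3-layer pattern (Recommender -> Scorer -> Estimator).
# All names here are hypothetical, not the skrec API.

class MeanEstimator:
    """Layer 3: fits a model -- here, just the mean reward per item."""
    def fit(self, rewards_by_item):
        self.means = {item: sum(rs) / len(rs) for item, rs in rewards_by_item.items()}

class Scorer:
    """Layer 2: turns the estimator's predictions into per-item scores."""
    def __init__(self, estimator):
        self.estimator = estimator
    def score(self, item):
        return self.estimator.means.get(item, 0.0)

class Recommender:
    """Layer 1: ranks candidate items by score and returns the top-k."""
    def __init__(self, scorer):
        self.scorer = scorer
    def recommend(self, items, top_k):
        return sorted(items, key=self.scorer.score, reverse=True)[:top_k]

est = MeanEstimator()
est.fit({"a": [1, 0, 1], "b": [0, 0], "c": [1, 1]})
rec = Recommender(Scorer(est))
print(rec.recommend(["a", "b", "c"], top_k=2))  # ['c', 'a']
```

Because each layer only depends on the one below it through a small interface, swapping the estimator (e.g. for a gradient-boosted classifier) leaves the scorer and recommender unchanged.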
Multiple Recommender Types¶
| Recommender | Best For |
|---|---|
| RankingRecommender | Standard ranking; also powers Two-Tower, NeuralFactorization, NCF embedding models |
| SequentialRecommender | Order-aware history (SASRec transformer) |
| HierarchicalSequentialRecommender | Session-structured history with cross-session memory (HRNN) |
| ContextualBanditsRecommender | Exploration & A/B testing |
| UpliftRecommender | Causal impact estimation (T/S/X-Learner) |
| GcslRecommender | Multi-objective optimization (GCSL) |
Learn more: Recommender Types Comparison
Comprehensive Evaluation¶
Built-in support for state-of-the-art evaluation techniques:
- SimpleEvaluator: Standard on-policy evaluation
- ReplayMatchEvaluator: Replay-based evaluation
- IPSEvaluator: Inverse Propensity Scoring (off-policy)
- DREvaluator: Doubly Robust estimation
- SNIPSEvaluator: Self-Normalized IPS
- PolicyWeightedEvaluator: Policy-weighted evaluation
Learn more: Evaluation Guide
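The off-policy estimators above share a common idea: reweight logged rewards by how likely the evaluated policy was to take the logged action. As a standalone sketch of IPS and SNIPS (plain Python, hypothetical helper names, not the skrec evaluator API):

```python
# Off-policy value estimation via Inverse Propensity Scoring (IPS) and its
# self-normalized variant (SNIPS). Standalone illustration; function names
# are hypothetical and do not match the skrec evaluator classes.

def ips_estimate(rewards, logging_probs, target_probs):
    """Mean of importance-weighted rewards: (pi(a|x) / mu(a|x)) * r."""
    weights = [t / l for t, l in zip(target_probs, logging_probs)]
    return sum(w * r for w, r in zip(weights, rewards)) / len(rewards)

def snips_estimate(rewards, logging_probs, target_probs):
    """Self-normalized IPS: divide by the sum of weights instead of n,
    trading a small bias for much lower variance."""
    weights = [t / l for t, l in zip(target_probs, logging_probs)]
    return sum(w * r for w, r in zip(weights, rewards)) / sum(weights)

rewards       = [1.0, 0.0, 1.0, 1.0]
logging_probs = [0.5, 0.5, 0.25, 0.25]  # mu(a|x): behavior policy that logged the data
target_probs  = [0.8, 0.2, 0.5, 0.5]    # pi(a|x): policy being evaluated

print(ips_estimate(rewards, logging_probs, target_probs))    # 1.4
print(snips_estimate(rewards, logging_probs, target_probs))  # ~0.933
```

Doubly Robust (DR) estimation extends this by combining the importance weights with a learned reward model, staying unbiased if either component is correct.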
Quick Example¶
from skrec.estimator.classification.xgb_classifier import XGBClassifierEstimator
from skrec.recommender.ranking.ranking_recommender import RankingRecommender
from skrec.scorer.universal import UniversalScorer
from skrec.examples.datasets import (
sample_binary_reward_interactions,
sample_binary_reward_users,
sample_binary_reward_items
)
# Build the pipeline: Estimator → Scorer → Recommender
estimator = XGBClassifierEstimator({"learning_rate": 0.1, "n_estimators": 100})
scorer = UniversalScorer(estimator)
recommender = RankingRecommender(scorer)
# Train
recommender.train(
interactions_ds=sample_binary_reward_interactions,
users_ds=sample_binary_reward_users,
items_ds=sample_binary_reward_items
)
# Recommend (interactions_df and users_df are your own feature DataFrames)
recommendations = recommender.recommend(
    interactions=interactions_df,
    users=users_df,
    top_k=5
)
Support¶
- Issues: Report bugs in GitHub Issues