An Overview of Contextual Bandits
Feb. 2, 2024, 2:47 p.m. | Ugur Yildirim
Towards Data Science (Medium), towardsdatascience.com
A dynamic approach to treatment personalization
Outline
- Introduction
- When To Use Contextual Bandits
  - 2.1. Contextual Bandit vs Multi-Armed Bandit vs A/B Testing
  - 2.2. Contextual Bandit vs Multiple MABs
  - 2.3. Contextual Bandit vs Multi-Step Reinforcement Learning
  - 2.4. Contextual Bandit vs Uplift Modeling
- Exploration and Exploitation in Contextual Bandits
  - 3.1. ε-greedy
  - 3.2. Upper Confidence Bound (UCB)
  - 3.3. Thompson Sampling
- Contextual Bandit Algorithm Steps
- Offline Policy Evaluation in Contextual Bandits
  - 5.1. OPE Using Causal Inference Methods
  - 5.2. OPE Using Sampling Methods
- Contextual …
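To give a flavor of the exploration strategies the outline names, here is a minimal sketch of an ε-greedy contextual bandit with discrete contexts. This is an illustrative example, not code from the article: the class name, the running-mean update, and the simulated reward probabilities are all assumptions chosen for the sketch.

```python
import random

class EpsilonGreedyBandit:
    """Illustrative epsilon-greedy contextual bandit (not from the article).

    Each (context, arm) pair keeps a running mean of observed rewards;
    with probability epsilon the agent explores a random arm, otherwise
    it exploits the arm with the highest estimated mean for the context.
    """

    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.n_arms = n_arms
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.stats = {}  # (context, arm) -> [pull_count, mean_reward]

    def select_arm(self, context):
        # Explore uniformly with probability epsilon.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.n_arms)
        # Otherwise exploit the best current estimate for this context.
        means = [self.stats.get((context, a), [0, 0.0])[1]
                 for a in range(self.n_arms)]
        return max(range(self.n_arms), key=lambda a: means[a])

    def update(self, context, arm, reward):
        # Incremental running-mean update for the chosen (context, arm).
        count, mean = self.stats.get((context, arm), [0, 0.0])
        count += 1
        mean += (reward - mean) / count
        self.stats[(context, arm)] = [count, mean]
```

In a toy simulation where arm 0 pays off more often in context "A" and arm 1 in context "B", the per-context means separate after a few thousand rounds, which is exactly the context-dependent personalization that distinguishes a contextual bandit from a single multi-armed bandit.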