Multi-Armed Bandits
Theory and Applications to Online Learning in Networks
Multi-armed bandit problems pertain to optimal sequential decision making and learning in unknown environments. Since the first bandit problem posed by Thompson in 1933 for the application of clinical trials, bandit problems have enjoyed lasting attention from multiple research communities and have found a wide range of applications across diverse domains. This book covers classic results and recent developments on both Bayesian and frequentist bandit problems. We start in Chapter 1 with a brief overview of the history of bandit problems, contrasting the two schools of approaches, Bayesian and frequentist, and highlighting foundational results an…
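For orientation, a minimal sketch (not from the book) of the two schools on a simple Bernoulli bandit: Thompson sampling keeps a Beta posterior per arm (Bayesian), while UCB1 plays the arm with the largest upper confidence bound (frequentist). The arm means, horizon, and function names below are illustrative assumptions, not material from the text.

# Minimal sketch, assuming a Bernoulli bandit with illustrative arm means.
import math
import random

ARM_MEANS = [0.3, 0.5, 0.7]   # unknown to the learner; used only to simulate rewards
HORIZON = 10_000

def pull(arm):
    """Simulate a Bernoulli reward from the chosen arm."""
    return 1 if random.random() < ARM_MEANS[arm] else 0

def thompson_sampling():
    """Bayesian: sample from a Beta posterior per arm, play the largest sample."""
    successes = [0] * len(ARM_MEANS)
    failures = [0] * len(ARM_MEANS)
    total = 0
    for _ in range(HORIZON):
        samples = [random.betavariate(successes[a] + 1, failures[a] + 1)
                   for a in range(len(ARM_MEANS))]
        arm = samples.index(max(samples))
        r = pull(arm)
        successes[arm] += r
        failures[arm] += 1 - r
        total += r
    return total

def ucb1():
    """Frequentist: play the arm with the largest upper confidence bound."""
    counts = [0] * len(ARM_MEANS)
    sums = [0.0] * len(ARM_MEANS)
    total = 0
    for t in range(1, HORIZON + 1):
        if t <= len(ARM_MEANS):          # play each arm once to initialize
            arm = t - 1
        else:
            ucb = [sums[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a])
                   for a in range(len(ARM_MEANS))]
            arm = ucb.index(max(ucb))
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total

if __name__ == "__main__":
    print("Thompson sampling total reward:", thompson_sampling())
    print("UCB1 total reward:             ", ucb1())

Both policies favor the best arm (mean 0.7) as observations accumulate; the book develops the regret guarantees behind such strategies in both frameworks.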
CHF 100.00
Product details
Additional authors: Srikant, R. (Ed.)
- ISBN: 978-1-62705-638-0
- EAN: 9781627056380
- Product number: 33213425
- Publisher: Morgan & Claypool Publishers
- Language: English
- Year of publication: 2019
- Pages: 166
- Dimensions: H 23.5 cm x W 19.1 cm x D 0.9 cm
- Format: Paperback
- Weight: 323 g