
EbookNice.com

Most ebook files are in PDF format, so you can easily read them with software such as Foxit Reader or directly in the Google Chrome browser.
Some ebook files are released by publishers in other formats such as .azw, .mobi, .epub, and .fb2. You may need specific software, such as Calibre, to read these formats on mobile or PC.

Please read the tutorial at this link: https://ebooknice.com/page/post?id=faq


We offer FREE conversion to the popular formats you request; however, this may take some time. Please email us right after payment, and we will provide the converted file as quickly as possible.


If you encounter an unusual file format or a broken link, please do not open a dispute. Email us first, and we will assist you within a maximum of 6 hours.

EbookNice Team

(Ebook) Approximate Dynamic Programming: Solving the Curses of Dimensionality, Second Edition by Warren B. Powell (auth.), Walter A. Shewhart, Samuel S. Wilks (eds.). ISBN: 9780470604458, 9781118029176, 047060445X, 1118029178

  • SKU: EBN-4299418
$32 (list price $40, 20% off)

Status: Available

Rating: 4.3 (40 reviews)
Instant download of (Ebook) Approximate Dynamic Programming: Solving the Curses of Dimensionality, Second Edition after payment.
Authors: Warren B. Powell (auth.), Walter A. Shewhart, Samuel S. Wilks (eds.)
Pages: 647
Year: 2011
Publisher: Wiley
Language: English
File Size: 7.71 MB
Format: PDF
ISBNs: 9780470604458, 9781118029176, 047060445X, 1118029178
Categories: Ebooks

Product description


Praise for the First Edition

"Finally, a book devoted to dynamic programming and written using the language of operations research (OR)! This beautiful book fills a gap in the libraries of OR specialists and practitioners."
Computing Reviews

This new edition focuses on modeling and computation for complex classes of approximate dynamic programming problems.

Understanding approximate dynamic programming (ADP) is vital in order to develop practical and high-quality solutions to complex industrial problems, particularly when those problems involve making decisions in the presence of uncertainty. Approximate Dynamic Programming, Second Edition uniquely integrates four distinct disciplines—Markov decision processes, mathematical programming, simulation, and statistics—to demonstrate how to successfully approach, model, and solve a wide range of real-life problems using ADP.
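The "curses of dimensionality" in the title refer to the exponential growth of the state, outcome, and action spaces that makes exact dynamic programming intractable. As a back-of-the-envelope illustration (not from the book; the function name is ours), counting states in a discretized problem makes the point:

```python
# Illustrative only: if a problem tracks d resources and each can take one of
# k discrete levels, an exact dynamic program must enumerate k**d states.
def state_space_size(k: int, d: int) -> int:
    """Number of states for d dimensions with k levels each."""
    return k ** d

# Even modest problems become intractable for exact methods:
print(state_space_size(10, 3))    # 10**3 = 1000 states: easy
print(state_space_size(10, 10))   # 10**10 states: hopeless to enumerate
```

ADP sidesteps this enumeration by approximating the value function rather than computing it exactly for every state.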

The book continues to bridge the gap between computer science, simulation, and operations research, and now adopts the notation and vocabulary of reinforcement learning as well as stochastic search and simulation optimization. The author outlines the essential algorithms that serve as a starting point in designing practical solutions for real problems. The three curses of dimensionality that impact complex problems are introduced, and detailed coverage of implementation challenges is provided. The Second Edition also features:

  • A new chapter describing four fundamental classes of policies for working with diverse stochastic optimization problems: myopic policies, look-ahead policies, policy function approximations, and policies based on value function approximations

  • A new chapter on policy search that brings together stochastic search and simulation optimization concepts and introduces a new class of optimal learning strategies

  • Updated coverage of the exploration-exploitation problem in ADP, now including a recently developed method for performing active learning in the presence of a physical state, using the concept of the knowledge gradient

  • A new sequence of chapters describing statistical methods for approximating value functions, estimating the value of a fixed policy, and value function approximation while searching for optimal policies
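To make these ideas concrete, here is a minimal sketch (not the book's code; the toy problem and all parameter values are our own illustrative choices) of a policy based on a value function approximation, trained with epsilon-greedy exploration and harmonic stepsizes on a small chain problem:

```python
import random

# Hedged sketch, not the book's algorithm verbatim: approximate value
# iteration with epsilon-greedy exploration on a toy chain problem.
# State 4 is terminal; reaching it earns a reward of 1.
random.seed(0)

N_STATES = 5
ACTIONS = (-1, +1)       # move left or right along the chain
GAMMA = 0.9              # discount factor
EPSILON = 0.1            # exploration probability

V = [0.0] * N_STATES     # value function approximation (here: a lookup table)
counts = [0] * N_STATES  # visit counts, used for harmonic stepsizes

def step(state, action):
    """Known transition model: move along the chain, reward 1 at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)              # explore
        else:                                       # exploit the current VFA
            a = max(ACTIONS,
                    key=lambda act: step(s, act)[1] + GAMMA * V[step(s, act)[0]])
        s_next, r = step(s, a)
        counts[s] += 1
        alpha = 1.0 / counts[s]                     # harmonic stepsize
        V[s] = (1 - alpha) * V[s] + alpha * (r + GAMMA * V[s_next])
        s = s_next
```

After training, the estimated values increase toward the goal state, so the greedy policy walks right; in the book's taxonomy this is a policy based on a value function approximation, with the epsilon term addressing the exploration-exploitation trade-off.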

The presented coverage of ADP emphasizes models and algorithms, focusing on applications and computation while also discussing theoretical results on proofs of convergence and rates of convergence. A related website features an ongoing discussion of the evolving fields of approximate dynamic programming and reinforcement learning, along with additional readings, software, and datasets.

Requiring only a basic understanding of statistics and probability, Approximate Dynamic Programming, Second Edition is an excellent book for industrial engineering and operations research courses at the upper-undergraduate and graduate levels. It also serves as a valuable reference for researchers and professionals who utilize dynamic programming, stochastic programming, and control theory to solve problems in their everyday work.

Contents:
Chapter 1 The Challenges of Dynamic Programming (pages 1–23)
Chapter 2 Some Illustrative Models (pages 25–56)
Chapter 3 Introduction to Markov Decision Processes (pages 57–109)
Chapter 4 Introduction to Approximate Dynamic Programming (pages 111–165)
Chapter 5 Modeling Dynamic Programs (pages 167–219)
Chapter 6 Policies (pages 221–248)
Chapter 7 Policy Search (pages 249–288)
Chapter 8 Approximating Value Functions (pages 289–336)
Chapter 9 Learning Value Function Approximations (pages 337–381)
Chapter 10 Optimizing While Learning (pages 383–418)
Chapter 11 Adaptive Estimation and Stepsizes (pages 419–456)
Chapter 12 Exploration Versus Exploitation (pages 457–496)
Chapter 13 Value Function Approximations for Resource Allocation Problems (pages 497–539)
Chapter 14 Dynamic Resource Allocation Problems (pages 541–592)
Chapter 15 Implementation Challenges (pages 593–606)


* Free conversion into popular formats such as PDF, DOCX, DOC, AZW, EPUB, and MOBI after payment.
