
EbookNice.com

Most ebook files are in PDF format, so you can read them easily with software such as Foxit Reader or directly in the Google Chrome browser.
Some ebooks are released by publishers in other formats such as .azw, .mobi, .epub, or .fb2. To read these formats on mobile or PC, you may need to install dedicated software such as Calibre.

Please read the tutorial at this link: https://ebooknice.com/page/post?id=faq


We offer FREE conversion to popular formats on request; however, this may take some time. Please email us right after payment, and we will provide the converted file as quickly as possible.


If you encounter an unusual file format or a broken link, please do not open a dispute. Email us first, and we will assist you within a maximum of 6 hours.

EbookNice Team

(Ebook) Strengthening Deep Neural Networks: Making AI Less Susceptible to Adversarial Trickery by Katy Warr ISBN 9781492044956, 1492044954

  • SKU: EBN-10521840
$32 (regular price $40, -20%)

Status: Available
Rating: 0.0 (0 reviews)
Instant download of the ebook Strengthening Deep Neural Networks: Making AI Less Susceptible to Adversarial Trickery after payment.
Authors: Katy Warr
Pages: 246
Year: 2019
Edition: 1
Publisher: O'Reilly Media
Language: English
File Size: 32.55 MB
Format: PDF
ISBNs: 9781492044956, 1492044954
Categories: Ebooks

Product description


As deep neural networks (DNNs) become increasingly common in real-world applications, the potential to deliberately "fool" them with data that wouldn’t trick a human presents a new attack vector. This practical book examines real-world scenarios where DNNs—the algorithms intrinsic to much of AI—are used daily to process image, audio, and video data.
Author Katy Warr considers attack motivations, the risks posed by this adversarial input, and methods for increasing AI robustness to these attacks. If you’re a data scientist developing DNN algorithms, a security architect interested in how to make AI systems more resilient to attack, or someone fascinated by the differences between artificial and biological perception, this book is for you.
• Delve into DNNs and discover how they could be tricked by adversarial input
• Investigate methods used to generate adversarial input capable of fooling DNNs (a minimal sketch follows this list)
• Explore real-world scenarios and model the adversarial threat
• Evaluate neural network robustness; learn methods to increase resilience of AI systems to adversarial data
• Examine some ways in which AI might become better at mimicking human perception in years to come
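To illustrate the kind of attack-generation method the book investigates, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a classic technique for crafting adversarial input. This is a generic PyTorch example under assumed inputs (a trained classifier model, an input batch x scaled to [0, 1], and true labels y), not code taken from the book.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Fast Gradient Sign Method: nudge each input value by at most
        # epsilon in the direction that increases the classification loss.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)    # loss against the true labels
        loss.backward()                        # populates x.grad
        x_adv = x + epsilon * x.grad.sign()    # bounded perturbation
        return x_adv.clamp(0.0, 1.0).detach()  # keep values in valid range

A perturbation this small is typically invisible to a human observer yet can flip a trained classifier's prediction, which is exactly the gap between artificial and biological perception the book explores.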
*Free conversion into popular formats such as PDF, DOCX, DOC, AZW, EPUB, and MOBI after payment.
