
EbookNice.com

Most ebook files are in PDF format, so you can read them with common software such as Foxit Reader, or directly in the Google Chrome browser.
Some ebooks are released by publishers in other formats such as .azw, .mobi, .epub, or .fb2. You may need to install specific software, such as Calibre, to read these formats on mobile or PC.
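
For readers comfortable with the command line, Calibre also ships a conversion tool, ebook-convert, which handles most of these conversions locally. Below is a minimal sketch (Python, assuming Calibre is installed and ebook-convert is on your PATH; the file names are placeholders):

import subprocess
from pathlib import Path

def convert_ebook(source: str, target: str) -> None:
    """Convert an ebook between formats via Calibre's ebook-convert CLI.

    Calibre infers the input and output formats from the file
    extensions, e.g. .azw3 -> .epub or .epub -> .mobi.
    """
    if not Path(source).exists():
        raise FileNotFoundError(source)
    subprocess.run(["ebook-convert", source, target], check=True)

# Example: convert a Kindle file to EPUB for use in other readers.
convert_ebook("book.azw3", "book.epub")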

Please read the tutorial at this link: https://ebooknice.com/page/post?id=faq


We offer FREE conversion to the popular formats you request; however, this may take some time. Please email us right after payment, and we will provide the converted file as quickly as possible.


For exceptional file formats or broken links (if any), please do not open a dispute. Email us first, and we will assist within a maximum of 6 hours.

EbookNice Team

Are Large Language Models Sensitive to the Motives Behind Communication? by Addison J. Wu & Ryan Liu & Kerem Oktar & Theodore R. Sumers & Thomas L. Griffiths (instant download)

  • SKU: EBN-239948798
$32 (list price $40, -20%)

Status: Available

Rating: 4.9 (18 reviews)
Instant download of the eBook Are Large Language Models Sensitive to the Motives Behind Communication? after payment.
Authors: Addison J. Wu & Ryan Liu & Kerem Oktar & Theodore R. Sumers & Thomas L. Griffiths
Pages: updating ...
Year: 2025
Publisher: x
Language: English
File Size: 2.56 MB
Format: PDF
Categories: Ebooks

Product description


arXiv:2510.19687v1 [cs.CL] 22 Oct 2025

1 Department of Computer Science, Princeton University
2 Department of Psychology, Princeton University
3 Anthropic

Abstract

Human communication is motivated: people speak, write, and create content with a particular communicative intent in mind. As a result, information that large language models (LLMs) and AI agents process is inherently framed by humans' intentions and incentives. People are adept at navigating such nuanced information: we routinely identify benevolent or self-serving motives in order to decide what statements to trust. For LLMs to be effective in the real world, they too must critically evaluate content by factoring in the motivations of the source—for instance, weighing the credibility of claims made in a sales pitch. In this paper, we undertake a comprehensive study of whether LLMs have this capacity for motivational vigilance. We first employ controlled experiments from cognitive science to verify that LLMs' behavior is consistent with rational models of learning from motivated testimony, and find they successfully discount information from biased sources in a human-like manner. We then extend our evaluation to sponsored online adverts, a more naturalistic reflection of LLM agents' information ecosystems. In these settings, we find that LLMs' inferences do not track the rational models' predictions nearly as closely—partly due to additional information that distracts them from vigilance-relevant considerations. However, a simple steering intervention that boosts the salience of intentions and incentives substantially increases the correspondence between LLMs and the rational model. These results suggest that LLMs possess a basic sensitivity to the motivations of others, but generalizing to novel real-world settings will require further improvements to these models.

1 Introduction

Much of the information available online—and hence a large fraction of the data large language models (LLMs) are tasked with processing—is the product
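
As a rough illustration of the kind of steering intervention the abstract describes, the sketch below prepends a reminder about the source's incentives before asking a model to judge a claim. This is a hypothetical sketch, not the authors' actual code: query_llm, the advert text, and the prompt wording are all assumptions.

# Hypothetical sketch of a salience-boosting steering intervention.
def query_llm(prompt: str) -> str:
    """Placeholder: replace with a call to your LLM provider's API."""
    return "5"  # dummy response so the sketch runs end-to-end

ADVERT = (
    "SleepWell mattresses are rated #1 for back pain relief. "
    "Order today and save 30%!"
)

BASELINE_PROMPT = (
    f"Here is a product claim:\n{ADVERT}\n"
    "On a scale of 1-10, how much should a buyer trust this claim?"
)

# Steering variant: make the speaker's incentives explicit before asking.
STEERED_PROMPT = (
    "The following text is a sponsored advert written by the seller, "
    "who profits if you believe it.\n"
    f"Claim:\n{ADVERT}\n"
    "On a scale of 1-10, how much should a buyer trust this claim?"
)

baseline_rating = query_llm(BASELINE_PROMPT)
steered_rating = query_llm(STEERED_PROMPT)
print("baseline:", baseline_rating, "| steered:", steered_rating)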
*Free conversion into popular formats such as PDF, DOCX, DOC, AZW, EPUB, and MOBI after payment.
