
EbookNice.com

Most ebook files are in PDF format, so you can easily read them with software such as Foxit Reader or directly in the Google Chrome browser.
Some ebook files are released by publishers in other formats such as .azw, .mobi, .epub, and .fb2. You may need to install specific software, such as Calibre, to read these formats on mobile or PC.

Please read the tutorial at this link: https://ebooknice.com/page/post?id=faq


We offer free conversion to the popular formats you request; however, this may take some time. Please email us right after payment, and we will provide the converted file as quickly as possible.


If you encounter an unusual file format or a broken link, please do not open a dispute. Email us first, and we will assist within a maximum of 6 hours.

EbookNice Team

How large language models encode theory-of-mind: a study on sparse parameter patterns by Yuheng Wu & Wentao Guo & Zirui Liu & Heng Ji & Zhaozhuo Xu & Denghui Zhang instant download

  • SKU: EBN-239164658
$32 (originally $40, 20% off)

Status: Available

Rating: 4.4 (15 reviews)
Instant download of the eBook "How large language models encode theory-of-mind: a study on sparse parameter patterns" after payment.
Authors: Yuheng Wu & Wentao Guo & Zirui Liu & Heng Ji & Zhaozhuo Xu & Denghui Zhang
Pages: updating ...
Year: 2025
Publisher: x
Language: English
File Size: 1.23 MB
Format: PDF
Categories: Ebooks

Product description

How large language models encode theory-of-mind: a study on sparse parameter patterns by Yuheng Wu & Wentao Guo & Zirui Liu & Heng Ji & Zhaozhuo Xu & Denghui Zhang instant download

npj Artificial Intelligence, doi:10.1038/s44387-025-00031-9

This paper investigates the emergence of Theory-of-Mind (ToM) capabilities in large language models (LLMs) from a mechanistic perspective, focusing on the role of extremely sparse parameter patterns. We introduce a novel method to identify ToM-sensitive parameters and reveal that perturbing as little as 0.001% of these parameters significantly degrades ToM performance while also impairing contextual localization and language understanding. To understand this effect, we analyze their interactions with core architectural components of LLMs. Our findings demonstrate that these sensitive parameters are closely linked to the positional encoding module, particularly in models using Rotary Position Embedding (RoPE), where perturbations disrupt dominant frequency activations critical for contextual processing. Furthermore, we show that perturbing ToM-sensitive parameters affects LLMs' attention mechanism by modulating the angle between queries and keys under positional encoding. These insights provide a deeper understanding of how LLMs acquire social reasoning abilities, bridging AI interpretability with cognitive science.
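To make the abstract's claim about RoPE and query-key angles more concrete, the sketch below is a toy illustration (not the authors' method or code): it applies a standard RoPE rotation to query and key vectors, zeroes out a tiny random fraction of a made-up query projection matrix as a stand-in for "ToM-sensitive" parameters, and measures how the angle between a rotated query and key shifts. All names (rope_rotate, query_key_angle, W_q, W_k) and the random setup are hypothetical.

```python
import numpy as np

def rope_rotate(x, positions, base=10000.0):
    """Apply standard Rotary Position Embedding (RoPE) to vectors x.

    x: array of shape (seq_len, dim), dim even. Each dimension pair
    (2i, 2i+1) is rotated by angle position * base**(-2i/dim).
    """
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) * 2.0 / dim)   # per-pair rotation frequencies
    angles = np.outer(positions, freqs)              # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    rotated = np.empty_like(x)
    rotated[:, 0::2] = x1 * cos - x2 * sin
    rotated[:, 1::2] = x1 * sin + x2 * cos
    return rotated

def query_key_angle(q, k):
    """Angle (radians) between a query vector and a key vector."""
    cos_sim = q @ k / (np.linalg.norm(q) * np.linalg.norm(k))
    return np.arccos(np.clip(cos_sim, -1.0, 1.0))

# Toy experiment: perturb ~0.1% of a hypothetical query projection
# and observe the change in the RoPE-rotated query-key angle.
rng = np.random.default_rng(0)
dim = 64
W_q = rng.normal(size=(dim, dim))                    # hypothetical query projection
W_k = rng.normal(size=(dim, dim))                    # hypothetical key projection
h = rng.normal(size=(2, dim))                        # hidden states at two positions
positions = np.array([0, 5])

q = rope_rotate(h @ W_q, positions)
k = rope_rotate(h @ W_k, positions)
angle_before = query_key_angle(q[0], k[1])

# Zero a small random subset of W_q entries (stand-in for "sensitive" parameters).
mask = rng.random(W_q.shape) < 0.001
q_perturbed = rope_rotate(h @ np.where(mask, 0.0, W_q), positions)
angle_after = query_key_angle(q_perturbed[0], k[1])

print(f"query-key angle before: {angle_before:.4f} rad, after: {angle_after:.4f} rad")
```

Because attention scores are dot products of these rotated queries and keys, any shift in their relative angle directly changes the attention pattern, which is the mechanism the abstract refers to.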

*Free conversion into popular formats such as PDF, DOCX, DOC, AZW, EPUB, and MOBI after payment.
