AlphaPPIMI: a comprehensive deep learning framework for predicting PPI-modulator interactions
Dayan Liu, Tao Song, Shuang Wang, Xue Li, Peifu Han, Jianmin Wang & Shudong Wang
Journal of Cheminformatics

Abstract
Protein-protein interactions (PPIs) regulate essential biological processes through complex interfaces, and their dysfunction is associated with various diseases. Consequently, the identification of PPIs and their interface-targeting modulators has emerged as a critical therapeutic approach. However, discovering modulators that target PPIs and PPI interfaces remains challenging, as traditional structure-similarity-based methods fail to effectively characterize PPI targets, particularly those for which no active compounds are known. Here, we present AlphaPPIMI, a comprehensive deep learning framework that combines large-scale pretrained language models with domain adaptation for predicting PPI-modulator interactions, specifically targeting PPI interfaces. To enable robust model development and evaluation, we constructed comprehensive benchmark datasets of PPI-modulator interactions (PPIMI). Our framework integrates molecular features from Uni-Mol2, protein representations derived from state-of-the-art language models (ESM2 and ProTrans), and PPI structural characteristics encoded by PFeature. Through a specialized cross-attention architecture and conditional domain adversarial networks (CDAN), AlphaPPIMI effectively learns potential associations between PPI targets and modulators while ensuring robust cross-domain generalization. Extensive evaluations indicate that AlphaPPIMI achieves consistently improved performance over existing methods in PPIMI prediction, offering a promising approach for prioritizing candidate PPI modulators, particularly those targeting protein-protein interfaces.

Scientific contribution
This work presents AlphaPPIMI, a novel deep learning framework for accurately predicting modulators targeting protein-protein interactions (PPIs) and their interfaces. Its core contributions include a specialized cross-attention module for the synergistic fusion of multimodal pretrained representations, and the novel use of conditional domain adversarial networks (CDAN) to ensure robust cross-domain generalization.
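To make the described architecture concrete, the sketch below shows one plausible way to wire together the components the abstract names: a cross-attention block in which a pooled modulator embedding (e.g. from Uni-Mol2) attends over per-residue PPI-target features (e.g. projected ESM2/ProTrans/PFeature vectors), followed by an interaction classifier and a CDAN-style domain discriminator with gradient reversal. This is a minimal illustrative sketch, not the authors' implementation; all layer sizes, module names, and the exact conditioning scheme are assumptions.

```python
# Minimal sketch (not the authors' code): cross-attention fusion of a modulator
# embedding with PPI-target features, plus a CDAN-style domain discriminator.
# Dimensions and module names are illustrative assumptions.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used in domain-adversarial training."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back into the feature extractor.
        return -ctx.lamb * grad_output, None


class CrossAttentionFusion(nn.Module):
    """Lets the modulator embedding attend over per-residue PPI-target features."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, mol_emb, ppi_tokens):
        # mol_emb:    (B, 1, D) pooled molecular features (e.g. Uni-Mol2)
        # ppi_tokens: (B, L, D) projected protein/PPI features (ESM2/ProTrans/PFeature)
        fused, _ = self.attn(query=mol_emb, key=ppi_tokens, value=ppi_tokens)
        return self.norm(fused + mol_emb).squeeze(1)  # (B, D)


class PPIMIPredictor(nn.Module):
    def __init__(self, dim=256, n_domains=2):
        super().__init__()
        self.fusion = CrossAttentionFusion(dim)
        self.classifier = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 2))
        # CDAN conditions the domain discriminator on the outer product of
        # the fused feature and the classifier's predicted probabilities.
        self.domain_disc = nn.Sequential(nn.Linear(dim * 2, 128), nn.ReLU(),
                                         nn.Linear(128, n_domains))

    def forward(self, mol_emb, ppi_tokens, lamb=1.0):
        feat = self.fusion(mol_emb, ppi_tokens)            # (B, D)
        logits = self.classifier(feat)                     # PPIMI prediction
        probs = torch.softmax(logits, dim=-1)              # (B, 2)
        cond = torch.bmm(probs.unsqueeze(2), feat.unsqueeze(1)).flatten(1)  # (B, 2*D)
        domain_logits = self.domain_disc(GradReverse.apply(cond, lamb))
        return logits, domain_logits


# Example usage with random tensors standing in for precomputed embeddings:
# logits, dom = PPIMIPredictor()(torch.randn(8, 1, 256), torch.randn(8, 40, 256))
```

In this sketch the interaction classifier is trained on labeled source-domain pairs, while the reversed gradient from the conditional domain discriminator pushes the fused representation toward domain-invariance, which is the role the abstract attributes to the CDAN component.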