
Title:
Dynamic graph representation learning with disentangled information bottleneck.
Authors:
Wang J; School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China; Ministry of Education Key Laboratory of Intelligent Networks and Network Security, Xi'an Jiaotong University, Xi'an, 710049, China. Electronic address: wang1946456505@stu.xjtu.edu.cn.
Bai Y; School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China; Ministry of Education Key Laboratory of Intelligent Networks and Network Security, Xi'an Jiaotong University, Xi'an, 710049, China. Electronic address: byx0916@stu.xju.edu.cn.
Zhu C; School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China; Ministry of Education Key Laboratory of Intelligent Networks and Network Security, Xi'an Jiaotong University, Xi'an, 710049, China; State Grid Shaanxi Electric Power Co., Ltd., Training Center, 710068, China. Electronic address: zhuchunqiang@stu.xju.edu.cn.
Qian H; Ant Financial Services Group, Hangzhou, Zhejiang, 310000, China. Electronic address: qianhao.qh@antgroup.com.
Liu Z; Ant Financial Services Group, Hangzhou, Zhejiang, 310000, China. Electronic address: zigiliu@antgroup.com.
Luo M; School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China; Ministry of Education Key Laboratory of Intelligent Networks and Network Security, Xi'an Jiaotong University, Xi'an, 710049, China. Electronic address: minnluo@xjtu.edu.cn.
Source:
Neural networks : the official journal of the International Neural Network Society [Neural Netw] 2026 Feb; Vol. 194, pp. 108056. Date of Electronic Publication: 2025 Sep 02.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: Pergamon Press
Country of Publication: United States
NLM ID: 8805018
Publication Model: Print-Electronic
Cited Medium: Internet
ISSN: 1879-2782 (Electronic)
Linking ISSN: 08936080
NLM ISO Abbreviation: Neural Netw
Subsets: MEDLINE
Imprint Name(s):
Original Publication: New York : Pergamon Press, [c1988-
Contributed Indexing:
Keywords: Disentangled representation learning; Dynamic graph; Graph representation learning; Information bottleneck
Entry Date(s):
Date Created: 20250915 Date Completed: 20251216 Latest Revision: 20251216
Update Code:
20251216
DOI:
10.1016/j.neunet.2025.108056
PMID:
40953546
Database:
MEDLINE

Abstract:

Dynamic graph representation learning has recently garnered enormous research attention. Despite the notable successes of existing methods, they usually characterize dynamic graphs as a perceptual whole and learn dynamic graph representations within an entangled feature space, overlooking the different temporal dependencies inherent in the data. Specifically, the evolution of dynamic graphs is usually governed by a dichotomy of properties: time-invariant properties and time-varying properties. Existing holistic methods fail to distinguish these temporal properties and may suffer from suboptimal performance on downstream tasks. To tackle this problem, we propose to learn macro-disentangled dynamic graph representations based on Information Bottleneck theory, leading to a novel dynamic graph representation learning method, Disentangled Dynamic Graph Information Bottleneck (DDGIB). DDGIB explicitly embeds dynamic graphs into a time-invariant representation space and a time-varying representation space: the former encapsulates properties that remain stable across the temporal span of a dynamic graph, whereas the latter encapsulates time-fluctuating properties. This macro disentanglement of temporal dependencies improves the representations' performance on downstream tasks. Furthermore, we theoretically prove the sufficiency and macro disentanglement of DDGIB: sufficiency demonstrates that DDGIB achieves representations sufficient for any possible downstream task, while macro disentanglement certifies that DDGIB embeds each temporal property into its corresponding representation space. Extensive experimental results on various datasets and downstream tasks demonstrate the superiority of our method.
(Copyright © 2025. Published by Elsevier Ltd.)
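To make the abstract's idea concrete, the sketch below illustrates (in generic terms, not the paper's actual formulation) how a variational Information Bottleneck objective with two separate latent spaces might look: a task-fit term plus compression penalties on a time-invariant posterior and on per-snapshot time-varying posteriors. All names here (`kl_std_normal`, `ib_disentangled_loss`, the weight `beta`, the Gaussian-posterior assumption) are illustrative assumptions; the true DDGIB objective and its disentanglement constraints are defined in the paper itself.

```python
import numpy as np

def kl_std_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    This is the standard variational IB compression term (an assumption here,
    not necessarily the form used by DDGIB)."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def ib_disentangled_loss(task_nll, inv_post, var_posts, beta=1e-3):
    """Toy IB-style objective with macro-disentangled latents:
    - task_nll: negative log-likelihood of the downstream target (sufficiency term)
    - inv_post: (mu, logvar) of the single time-invariant posterior
    - var_posts: list of (mu, logvar), one time-varying posterior per snapshot
    - beta: compression weight (hypothetical hyperparameter)"""
    mu_i, lv_i = inv_post
    compression = kl_std_normal(mu_i, lv_i)          # compress shared latent
    for mu_t, lv_t in var_posts:                     # compress each snapshot latent
        compression += kl_std_normal(mu_t, lv_t)
    return task_nll + beta * compression

# Example: two snapshots, 4-dim latents drawn from a toy encoder output
rng = np.random.default_rng(0)
inv = (rng.normal(size=4), rng.normal(size=4))
var = [(rng.normal(size=4), rng.normal(size=4)) for _ in range(2)]
loss = ib_disentangled_loss(task_nll=0.7, inv_post=inv, var_posts=var)
```

Conceptually, driving the KL terms toward zero removes task-irrelevant information from each latent space, while the task term keeps the pair of representations sufficient; the split into `inv_post` and `var_posts` mirrors the time-invariant/time-varying dichotomy described above.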

Declaration of competing interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.