📄 Summary
Enhanced Graph Transformer with Serialized Graph Tokens
This research introduces a novel serialized graph token paradigm to encapsulate global signals in graph learning more effectively. Existing methods face an information bottleneck when generating graph-level representations: the prevalent single-token paradigm fails to exploit the inherent strength of self-attention in encoding token sequences, and ultimately degenerates into a weighted sum of node signals. To address this, the authors propose a graph serialization method that aggregates node signals into a sequence of graph tokens, with positional encoding incorporated automatically. Stacked self-attention layers are then applied to encode this token sequence, capturing global information more effectively.
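The idea above can be sketched in code. The following is a minimal PyTorch illustration, not the paper's actual implementation: the serialization rule (here, splitting nodes into contiguous groups and mean-pooling each group into one token), the learned positional embeddings, and all names such as `SerializedGraphEncoder` are assumptions for illustration. It shows the general shape of the method: node signals become a short token sequence, positions are attached, and stacked self-attention layers encode the sequence into a graph-level representation.

```python
import torch
import torch.nn as nn


class SerializedGraphEncoder(nn.Module):
    """Hypothetical sketch of the serialized graph token paradigm.

    Node signals are pooled into a fixed-length token sequence, learned
    positional encodings are added, and stacked self-attention layers
    encode the sequence. The pooling rule and class name are illustrative
    assumptions, not the paper's exact serialization method.
    """

    def __init__(self, dim: int, num_tokens: int = 8,
                 num_layers: int = 2, num_heads: int = 4):
        super().__init__()
        self.num_tokens = num_tokens
        # One learned positional encoding per serialized token slot.
        self.pos_embed = nn.Parameter(torch.zeros(num_tokens, dim))
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, dim). Split the nodes into num_tokens
        # contiguous groups and mean-pool each group into one graph token.
        chunks = torch.tensor_split(node_feats, self.num_tokens, dim=0)
        tokens = torch.stack([c.mean(dim=0) for c in chunks])  # (num_tokens, dim)
        tokens = tokens + self.pos_embed  # serialization order carries position
        # Stacked self-attention over the token sequence (batch of 1).
        encoded = self.encoder(tokens.unsqueeze(0))  # (1, num_tokens, dim)
        # Readout: average the encoded tokens into one graph-level vector.
        return encoded.mean(dim=1).squeeze(0)  # (dim,)
```

A usage example: for a graph with 32 nodes and 16-dimensional node features, `SerializedGraphEncoder(dim=16)(torch.randn(32, 16))` returns a single 16-dimensional graph representation. Because the tokens interact through multiple self-attention layers rather than collapsing into a single weighted sum, the encoder can mix node-group signals nonlinearly across layers.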
Powered by Cloudflare Workers + Payload CMS + Claude 3.5
Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, etc.