Transformers and Graphs: A New Approach
Transformers have proven effective in graph learning, particularly for node-level tasks. However, existing methods often hit an information bottleneck when producing graph-level representations: the prevalent single-token paradigm collapses the whole graph into one readout token, so self-attention never encodes an actual token sequence and degenerates into a weighted sum of node signals.
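To see why a single-token readout amounts to a weighted sum, consider the minimal sketch below. It assumes a standard attention readout with one learnable query over the node features; the function name and shapes are illustrative and not taken from the original paper.

```python
import torch
import torch.nn.functional as F

def single_token_readout(node_feats: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
    """One graph token attending over node features.

    node_feats: (num_nodes, d) node signals
    query:      (d,) learnable graph-token query

    The output is softmax(q K^T / sqrt(d)) V: a convex combination
    (weighted sum) of the node signals, with no token-to-token interaction.
    """
    d = node_feats.size(-1)
    scores = node_feats @ query / d ** 0.5      # (num_nodes,) attention logits
    weights = F.softmax(scores, dim=-1)         # attention weights over nodes
    return weights @ node_feats                 # weighted sum of node signals

# Hypothetical usage: 5 nodes with 16-dim features
nodes = torch.randn(5, 16)
q = torch.randn(16)
graph_repr = single_token_readout(nodes, q)     # shape: (16,)
```

Because the graph is summarized by this single vector, the expressive pairwise interactions that self-attention is designed for are never exercised at the graph level.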
Token Serialization for Improved Representation
To address this issue, a serialized token paradigm has been developed to encapsulate global signals more effectively. The graph is first serialized, and node signals are aggregated into a sequence of serialized tokens whose ordering naturally supplies positional encoding. Self-attention layers are then applied to encode this token sequence and capture the dependencies among its tokens.
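As a rough illustration of this pipeline, the sketch below groups nodes into serialized tokens according to their position in a serialization, adds order-based positional embeddings, and encodes the resulting token sequence with standard Transformer layers. The segment-based mean pooling, the class name `SerializedGraphEncoder`, and the hyperparameters are assumptions made for illustration; the paper's actual serialization and aggregation scheme may differ.

```python
import torch
import torch.nn as nn

class SerializedGraphEncoder(nn.Module):
    """Sketch: aggregate node signals into k serialized tokens, add
    order-based positional encodings, then apply self-attention layers."""

    def __init__(self, dim: int, num_tokens: int, num_layers: int = 2, num_heads: int = 4):
        super().__init__()
        self.num_tokens = num_tokens
        # Positional information follows the serialization order of the tokens.
        self.pos_emb = nn.Embedding(num_tokens, dim)
        layer = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, node_feats: torch.Tensor, order: torch.Tensor) -> torch.Tensor:
        """
        node_feats: (num_nodes, dim) node signals
        order:      (num_nodes,) integer positions from a graph serialization
                    (e.g. a traversal), used here to assign nodes to tokens.
        """
        # Split the serialization into num_tokens contiguous segments and
        # mean-pool each segment into one serialized token (illustrative choice).
        seg = (order * self.num_tokens // order.numel()).clamp(max=self.num_tokens - 1)
        tokens = torch.stack([
            node_feats[seg == t].mean(dim=0) if (seg == t).any()
            else node_feats.new_zeros(node_feats.size(-1))
            for t in range(self.num_tokens)
        ])                                          # (num_tokens, dim)

        # The serialization order doubles as positional encoding.
        tokens = tokens + self.pos_emb.weight

        # Self-attention captures dependencies among the serialized tokens.
        encoded = self.encoder(tokens.unsqueeze(0)).squeeze(0)
        return encoded.mean(dim=0)                  # graph-level representation

# Hypothetical usage: 10 nodes with 16-dim features, 4 serialized tokens
enc = SerializedGraphEncoder(dim=16, num_tokens=4)
graph_repr = enc(torch.randn(10, 16), torch.arange(10))   # shape: (16,)
```

Unlike a single readout token, the self-attention layers here operate over multiple tokens, so pairwise token interactions contribute to the final graph representation.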
By modeling complex interactions among multiple graph tokens, this approach yields more expressive graph-level representations. Experimental results show that the method achieves state-of-the-art performance on several graph-level benchmarks, and ablation studies further confirm its effectiveness.