
TildAlice

Posted on • Originally published at tildalice.io

GraphSAGE vs GAT: Reddit/PPI Inductive Learning 95% F1

The 95% F1 Barrier on Reddit That Broke GCN

Graph Convolutional Networks hit a wall. Not because they weren't accurate (Kipf and Welling's GCN, ICLR 2017, dominated transductive benchmarks) but because they couldn't handle nodes that didn't exist at training time. Every time Reddit added a new user or PPI recorded a new protein interaction, the whole model needed retraining.

GraphSAGE (Hamilton et al., NeurIPS 2017) cracked this by learning aggregation functions instead of fixed embeddings. GAT (Veličković et al., ICLR 2018) took a different route: attention-weighted neighbors. Both papers claimed inductive superiority—but which actually delivers when you're staring at 232,965 Reddit posts or 56,944 PPI proteins?
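To make the contrast concrete, here is a minimal pure-Python sketch of the GraphSAGE mean-aggregator update. This is my own toy illustration, not the paper's reference code: each node concatenates its own features with the mean of its neighbors' features, then applies a learned linear map and a ReLU. Because the same aggregation function applies to any neighborhood, it transfers to nodes never seen during training.

```python
def sage_mean_update(h, neighbors, W):
    """One GraphSAGE layer with the mean aggregator (toy sketch).

    h:         dict node -> feature vector (list of floats)
    neighbors: dict node -> list of neighbor node ids (assumed non-empty)
    W:         weight matrix applied to [h_v || mean_{u in N(v)} h_u],
               one row per output dimension
    """
    out = {}
    for v, hv in h.items():
        nbrs = neighbors[v]
        dim = len(hv)
        # Mean of neighbor features; inductive because it never indexes
        # into a fixed embedding table
        mean = [sum(h[u][i] for u in nbrs) / len(nbrs) for i in range(dim)]
        z = hv + mean  # concatenation [h_v || h_N(v)]
        # Linear transform followed by ReLU
        out[v] = [max(0.0, sum(w[i] * z[i] for i in range(len(z)))) for w in W]
    return out
```

GAT replaces the uniform mean with learned attention coefficients, so each neighbor's contribution is weighted by a softmax over LeakyReLU-scored pairs rather than 1/|N(v)|.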

I ran both implementations on identical hardware, and the results weren't what I expected.

Photo by Google DeepMind on Pexels

GraphSAGE: Sampling Neighbors to Scale
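The key to GraphSAGE's scalability on graphs the size of Reddit is that it never aggregates a node's full neighborhood: each layer samples a fixed number of neighbors per node, bounding the cost of a minibatch. A hedged sketch of that sampling step follows; the function name and the fanout values are my own illustration, not code from the article or the paper.

```python
import random

def sample_neighbors(adj, batch, fanouts, seed=0):
    """Uniform fixed-size neighbor sampling, GraphSAGE-style (sketch).

    adj:     dict node -> list of neighbor node ids (assumed non-empty)
    batch:   nodes we want embeddings for
    fanouts: neighbors to sample per hop, e.g. [25, 10] for two layers
    Returns one frontier of sampled nodes per hop, starting with the batch.
    """
    rng = random.Random(seed)
    layers = [list(batch)]
    frontier = set(batch)
    for k in fanouts:
        nxt = set()
        for v in frontier:
            nbrs = adj[v]
            # Sample with replacement when a node has fewer than k neighbors,
            # without replacement otherwise, so every node yields k samples
            picks = rng.choices(nbrs, k=k) if len(nbrs) < k else rng.sample(nbrs, k)
            nxt.update(picks)
        layers.append(sorted(nxt))
        frontier = nxt
    return layers
```

Because the fanout is fixed, the work per batch is bounded regardless of how skewed the degree distribution is, which matters on Reddit where a few posts have enormous neighborhoods.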


Continue reading the full article on TildAlice
