The 95% F1 Barrier on Reddit That Broke GCN
Graph Convolutional Networks hit a wall. Not for lack of accuracy: Kipf and Welling's GCN (ICLR 2017) dominated transductive benchmarks. The problem was that it couldn't produce embeddings for nodes that didn't exist during training. Every time Reddit added a new user or PPI gained a new protein interaction, the entire model needed retraining.
GraphSAGE (Hamilton et al., NeurIPS 2017) cracked this by learning aggregation functions instead of fixed embeddings. GAT (Veličković et al., ICLR 2018) took a different route: attention-weighted neighbors. Both papers claimed inductive superiority—but which actually delivers when you're staring at 232,965 Reddit posts or 56,944 PPI proteins?
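The difference between the two aggregation schemes fits in a few lines. Below is a minimal NumPy sketch (not either paper's implementation): a single node updates its representation from three neighbors, once with a GraphSAGE-style unweighted mean and once with GAT-style attention weights. The weight matrices `W_self`, `W_neigh` and attention vector `a` are random stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes with 3-dim features; node 0's neighbors are 1, 2, 3.
features = rng.normal(size=(4, 3))
neighbors = [1, 2, 3]

# Random stand-ins for learned parameters.
W_self = rng.normal(size=(3, 3))
W_neigh = rng.normal(size=(3, 3))

# GraphSAGE-style update: combine the node's own features with the
# unweighted mean of its neighbors' features.
neigh_mean = features[neighbors].mean(axis=0)
h_sage = np.tanh(features[0] @ W_self + neigh_mean @ W_neigh)

# GAT-style update: score each neighbor against the center node,
# LeakyReLU the scores, softmax into attention weights, then take a
# weighted (rather than uniform) sum of neighbor features.
a = rng.normal(size=(6,))
scores = np.array(
    [a @ np.concatenate([features[0], features[j]]) for j in neighbors]
)
scores = np.where(scores > 0, scores, 0.2 * scores)  # LeakyReLU
alpha = np.exp(scores) / np.exp(scores).sum()        # softmax over neighbors
h_gat = np.tanh((alpha[:, None] * features[neighbors]).sum(axis=0) @ W_neigh)

print(h_sage.shape, h_gat.shape)  # both have shape (3,)
```

Both updates are functions of features rather than lookup tables, which is what makes them inductive: a node unseen at training time still gets an embedding as long as it has features and neighbors.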
I ran both implementations on identical hardware, and the results weren't what I expected.
GraphSAGE: Sampling Neighbors to Scale
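GraphSAGE keeps per-batch cost bounded by aggregating over a fixed-size sample of each node's neighborhood instead of all of it. A minimal sketch of that sampling step, assuming an adjacency-list `dict` (the helper name and toy graph here are illustrative, not from the paper's codebase):

```python
import random

rng = random.Random(0)

def sample_neighbors(adj, node, k):
    """Uniformly sample a fixed-size set of k neighbors, falling back to
    sampling with replacement when the node has fewer than k neighbors.
    `adj` maps each node id to a list of its neighbor ids."""
    neigh = adj[node]
    if len(neigh) >= k:
        return rng.sample(neigh, k)          # k distinct neighbors
    return [rng.choice(neigh) for _ in range(k)]  # pad by resampling

# Hypothetical toy adjacency list: node 0 has 5 neighbors, node 1 has 1.
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0]}
print(sample_neighbors(adj, 0, 3))  # 3 distinct neighbors of node 0
print(sample_neighbors(adj, 1, 3))  # [0, 0, 0]: padded with replacement
```

Because every node contributes exactly `k` neighbors per layer, the compute per minibatch is fixed regardless of degree, which is what lets the method touch datasets the size of Reddit without loading the full graph into each step.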
