<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ekka</title>
    <description>The latest articles on DEV Community by ekka (@ekagraranjan).</description>
    <link>https://dev.to/ekagraranjan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F390877%2Fa7b916a4-f57c-4f21-93b7-984c5b75ac2d.png</url>
      <title>DEV Community: ekka</title>
      <link>https://dev.to/ekagraranjan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ekagraranjan"/>
    <language>en</language>
    <item>
      <title>ASAP: Pooling layer for Graph Neural Nets</title>
      <dc:creator>ekka</dc:creator>
      <pubDate>Wed, 20 May 2020 15:57:49 +0000</pubDate>
      <link>https://dev.to/ekagraranjan/asap-3bdb</link>
      <guid>https://dev.to/ekagraranjan/asap-3bdb</guid>
      <description>&lt;h2&gt;
  
  
  My Final Project
&lt;/h2&gt;

&lt;p&gt;I am interested in Deep Learning and have been trying to keep pace with this rapidly evolving field. During my last internship, I worked on designing new pooling layers for Graph Neural Networks. The work applies to tasks involving graphs, from predicting the toxicity of a molecule or drug to learning attributes of users on a social network.&lt;/p&gt;

&lt;h2&gt;
  
  
  Link to Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vWogaON8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-28d89282e0daa1e2496205e2f218a44c755b0dd6536bbadf5ed5a44a7ca54716.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/malllabiisc"&gt;
        malllabiisc
      &lt;/a&gt; / &lt;a href="https://github.com/malllabiisc/ASAP"&gt;
        ASAP
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      AAAI 2020 - ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h2&gt;
ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://aaai.org/Conferences/AAAI-20/" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/77f1d476eb79703bf8306e720b265d2d31dea349/687474703a2f2f696d672e736869656c64732e696f2f62616467652f414141492d323032302d3462343463652e737667" alt="Conference"&gt;&lt;/a&gt; &lt;a href="https://arxiv.org/abs/1911.07979" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/072ae641dc8a0ff7d6a6ea7235fa6237bd5bc899/687474703a2f2f696d672e736869656c64732e696f2f62616467652f70617065722d61727869762e313931312e30373937392d4233314231422e737667" alt="Paper"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Source code for &lt;a href="https://aaai.org/Conferences/AAAI-20/" rel="nofollow"&gt;AAAI 2020&lt;/a&gt; paper: &lt;a href="https://arxiv.org/abs/1911.07979" rel="nofollow"&gt;&lt;strong&gt;ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representation&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://raw.githubusercontent.com/malllabiisc/ASAP/master/Readme.md/./ASAP-overview.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ymTs3WHq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/malllabiisc/ASAP/master/Readme.md/./ASAP-overview.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Overview of ASAP:&lt;/strong&gt; &lt;em&gt;ASAP initially considers all possible local clusters with a fixed receptive field for a given input graph. It then computes the cluster membership of the nodes using an attention mechanism. These clusters are then scored using a GNN. Further, a fraction of the top scoring clusters are selected as nodes in the pooled graph and new edge weights are computed between neighboring clusters. Please refer to Section 4 of the paper for details.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
File Descriptions&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;main.py&lt;/code&gt; - contains the driver code for the whole project&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;asap_pool.py&lt;/code&gt; - source code for ASAP pooling operator proposed in the paper&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;le_conv.py&lt;/code&gt; - source code for LEConv GNN used in the paper&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;asap_pool_model.py&lt;/code&gt; - a network which uses ASAP pooling as pooling operator&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
Dependencies&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Python…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/malllabiisc/ASAP"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;
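&lt;p&gt;The pooling procedure summarized in the README overview above can be sketched in plain Python. This is an illustrative toy under assumed inputs (precomputed cluster memberships and fitness scores), not the repository's implementation:&lt;/p&gt;

```python
# Schematic sketch of ASAP-style pooling: score clusters, keep the top
# fraction as pooled nodes, and rewire edges between surviving clusters.
# Illustrative only -- see the linked repository for the real layer.

def asap_pool_sketch(clusters, scores, edges, ratio=0.5):
    """Keep the top `ratio` fraction of clusters as pooled nodes.

    clusters: dict mapping cluster id -> set of member node ids
    scores:   dict mapping cluster id -> fitness score (e.g. from a GNN)
    edges:    iterable of (u, v) node pairs in the input graph
    """
    # 1. Rank clusters by score and keep the top fraction as pooled nodes.
    k = max(1, int(len(clusters) * ratio))
    kept = sorted(scores, key=scores.get, reverse=True)[:k]

    # 2. Connect two pooled nodes if any original edge joins their members.
    pooled_edges = set()
    for u, v in edges:
        for a in kept:
            for b in kept:
                if a != b and u in clusters[a] and v in clusters[b]:
                    pooled_edges.add(tuple(sorted((a, b))))
    return kept, pooled_edges
```

&lt;p&gt;In the actual ASAP layer, cluster membership comes from an attention mechanism and scores from a GNN (LEConv); here both are taken as given so that only the select-and-rewire step is shown.&lt;/p&gt;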


&lt;h2&gt;
  
  
  How I built it (what's the stack? did I run into issues or discover something new along the way?)
&lt;/h2&gt;

&lt;p&gt;The Machine Learning domain is fortunate in that the tools needed for both research and deployment are open source. Major libraries like &lt;code&gt;PyTorch&lt;/code&gt;, &lt;code&gt;TensorFlow&lt;/code&gt;, and &lt;code&gt;sklearn&lt;/code&gt; are all actively maintained by the open-source community.&lt;/p&gt;

&lt;p&gt;My research stack is primarily built on the &lt;code&gt;PyTorch&lt;/code&gt; library. To work in the graph domain, I used &lt;code&gt;PyTorch Geometric&lt;/code&gt;, a library built on top of PyTorch; its maintainer has done tremendous work building it and keeping it up to date. So whenever I ran into a problem or implemented a new feature for my research, I made sure to package it properly and merge it into &lt;code&gt;PyTorch Geometric&lt;/code&gt; so that others could use it too.&lt;/p&gt;

&lt;p&gt;For managing sessions on the GPU server, I used the &lt;code&gt;tmux&lt;/code&gt; client, so that even if my personal machine turned off, the session stayed alive on the server and I could reconnect to it later.&lt;/p&gt;
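&lt;p&gt;A typical tmux workflow of this kind looks as follows (the session name &lt;code&gt;train&lt;/code&gt; is illustrative):&lt;/p&gt;

```shell
# On the server: start a named session and launch training inside it
tmux new-session -s train
python main.py          # keeps running even if the laptop disconnects

# Detach with Ctrl-b d; later, from a fresh SSH connection:
tmux ls                 # list running sessions
tmux attach -t train    # resume the live session
```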

&lt;p&gt;To collaborate with my mentor, I used &lt;code&gt;GitHub&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Thoughts / Feelings / Stories
&lt;/h2&gt;

&lt;p&gt;It took 3.5 months of consistent work and frequent iteration to submit our work, which was accepted as a long paper at AAAI 2020 (a top-tier ML conference). This would not have been possible without all the open-source tools available to me for research. I am grateful to the open-source community for sharing their work, and realizing the impact of open-source tools on my workflow motivates me to spend my free time contributing to open-source projects.&lt;/p&gt;

</description>
      <category>octograd2020</category>
      <category>devgrad2020</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
