<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: yamasita taisuke</title>
    <description>The latest articles on DEV Community by yamasita taisuke (@yaasita).</description>
    <link>https://dev.to/yaasita</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F42721%2F729d527c-7b0b-4fe7-8d87-d9ec1d740a22.jpg</url>
      <title>DEV Community: yamasita taisuke</title>
      <link>https://dev.to/yaasita</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yaasita"/>
    <language>en</language>
    <item>
      <title>Find duplicate files</title>
      <dc:creator>yamasita taisuke</dc:creator>
      <pubDate>Tue, 01 Jun 2021 18:36:49 +0000</pubDate>
      <link>https://dev.to/yaasita/find-duplicate-files-2nhc</link>
      <guid>https://dev.to/yaasita/find-duplicate-files-2nhc</guid>
      <description>&lt;p&gt;Duplicate files in this case are files with different names but the same contents.&lt;br&gt;
How do we find them? &lt;/p&gt;
&lt;h1&gt;
  
  
  TL;DR
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;In most cases, fdupes is good enough&lt;/li&gt;
&lt;li&gt;There is a way to speed up the process (at the cost of exactness)&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  How to find duplicate files
&lt;/h1&gt;

&lt;p&gt;First, consider a set of files like the following&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ls -l
total 20
-rw-r--r-- 1 root root    1 May  6 00:53 a.dat
-rw-r--r-- 1 root root    2 May  6 00:53 b.dat
-rw-r--r-- 1 root root 1024 May  6 01:00 c.dat
-rw-r--r-- 1 root root 1024 May  6 01:00 d.dat
-rw-r--r-- 1 root root 1024 May  6 01:00 e.dat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a small set of files like this, you can simply hash every file as follows, but let's try to be smarter.&lt;br&gt;
Any hash function would do (MD5, SHA-1, and so on); like fdupes, we use MD5 because it is lightweight.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ md5sum * | perl -nalE 'push(@{$u{$F[0]}},$F[1])}{for(keys %u){say"@{$u{$_}}"if@{$u{$_}}&amp;gt;1}'
c.dat e.dat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;a.dat and b.dat are 1 byte and 2 bytes respectively, so we know they are unique without any hashing.&lt;br&gt;
Next, instead of immediately hashing all of c.dat, d.dat, and e.dat, let's hash just their first 10 bytes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ head -c 10 c.dat | md5sum
f6513d74de14a6c81ef2e7c1c1de4ab1  -
$ head -c 10 d.dat | md5sum
660f54f5a7658cbf1462b2a91fbe7487  -
$ head -c 10 e.dat | md5sum
f6513d74de14a6c81ef2e7c1c1de4ab1  -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells us that d.dat is unique!&lt;br&gt;
So it can be faster to hash and compare only the first X bytes before hashing entire files.&lt;/p&gt;

&lt;p&gt;For example, if c.dat and d.dat are 10 GB files, hashing both in full means reading 20 GB.&lt;br&gt;
If they differ somewhere in the first 10 bytes, only 20 bytes need to be read, which is vastly faster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/adrianlopezroche/fdupes/blob/master/fdupes.c#L69"&gt;fdupes reads the first 4096 bytes&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/yaasita/df7f1aad5fbb49e6dd9914d4bebda96b"&gt;If there is a difference within 4096 bytes, the process will proceed at high speed.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rIb5hg6V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pz5sboy4tpsuafmpx6jq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rIb5hg6V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pz5sboy4tpsuafmpx6jq.png" alt="fist4096byte"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, if two files are the same size and their first 100 KB or so match, there is a high probability that they are identical.&lt;br&gt;
I didn't need that level of accuracy; I wanted results quickly, so I wrote a tool myself.&lt;/p&gt;

&lt;p&gt;fdupes does not seem to have any option to stop comparing after the first X bytes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/adrianlopezroche/fdupes/issues/79"&gt;https://github.com/adrianlopezroche/fdupes/issues/79&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://zenn.dev/yaasita/articles/463b7b1a7cabe9"&gt;jdupes&lt;/a&gt; has an option (-TT option) to determine duplicate files by the first 4096 bytes, which fdupes cannot do.&lt;br&gt;
However, in my environment, there were several files that were the same up to 4096 bytes.&lt;/p&gt;
&lt;h1&gt;
  
  
  Implementation
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://github.com/yaasita/chofuku"&gt;https://github.com/yaasita/chofuku&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The tool uses SQLite's in-memory mode.&lt;br&gt;
First, it records each file's name and size.&lt;br&gt;
Hard links are not handled this time.&lt;br&gt;
To support them, you could record each file's inode number and treat files sharing an inode as identical without any hash calculation. &lt;br&gt;
Symbolic links are skipped.&lt;br&gt;
&lt;/p&gt;
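&lt;p&gt;Judging from the listings below, the files table presumably looks something like this (the column types are inferred, not taken from chofuku's source):&lt;/p&gt;

```sql
-- Inferred schema: one row per regular file.
-- The hash columns start out empty and are filled in lazily,
-- only for files that are still duplicate candidates.
CREATE TABLE files (
    name          TEXT,     -- file path
    size          INTEGER,  -- file size in bytes
    head100k_hash TEXT,     -- MD5 of the first 100 KB
    full_hash     TEXT      -- MD5 of the whole file
);
```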

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM files;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;name&lt;/th&gt;
&lt;th&gt;size&lt;/th&gt;
&lt;th&gt;head100k_hash&lt;/th&gt;
&lt;th&gt;full_hash&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;a.dat&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;b.dat&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;c.dat&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;d.dat&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;e.dat&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;f.dat&lt;/td&gt;
&lt;td&gt;1150976&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;g.dat&lt;/td&gt;
&lt;td&gt;1150976&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;h.dat&lt;/td&gt;
&lt;td&gt;1150976&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Duplicates are found with the following SQL (the same query is reused at every stage)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT size, head100k_hash, full_hash, json_group_array(name) FROM files GROUP BY size, head100k_hash, full_hash HAVING count(*) &amp;gt; 1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;size&lt;/th&gt;
&lt;th&gt;head100k_hash&lt;/th&gt;
&lt;th&gt;full_hash&lt;/th&gt;
&lt;th&gt;json_group_array(name)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;["c.dat","d.dat","e.dat"]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1150976&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;["f.dat","g.dat","h.dat"]&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This yields the groups of files that have the same size.&lt;br&gt;
If the -size-only option is specified, the process stops here.&lt;/p&gt;

&lt;p&gt;The next step is to hash the first 100 KB of each file whose size matched another's.&lt;br&gt;
Zero-byte files are not hashed.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;name&lt;/th&gt;
&lt;th&gt;size&lt;/th&gt;
&lt;th&gt;head100k_hash&lt;/th&gt;
&lt;th&gt;full_hash&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;a.dat&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;b.dat&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;c.dat&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;2f2bf74e24d26a2a159c4f130eec39ac&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;d.dat&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;fc65c6cb47f6eed0aa6a34448a8bfcec&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;e.dat&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;2f2bf74e24d26a2a159c4f130eec39ac&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;f.dat&lt;/td&gt;
&lt;td&gt;1150976&lt;/td&gt;
&lt;td&gt;595cd4e40357324cec2737e067d582b1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;g.dat&lt;/td&gt;
&lt;td&gt;1150976&lt;/td&gt;
&lt;td&gt;595cd4e40357324cec2737e067d582b1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;h.dat&lt;/td&gt;
&lt;td&gt;1150976&lt;/td&gt;
&lt;td&gt;595cd4e40357324cec2737e067d582b1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Counting duplicates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT size, head100k_hash, full_hash, json_group_array(name) FROM files GROUP BY size, head100k_hash, full_hash HAVING count(*) &amp;gt; 1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;size&lt;/th&gt;
&lt;th&gt;head100k_hash&lt;/th&gt;
&lt;th&gt;full_hash&lt;/th&gt;
&lt;th&gt;json_group_array(name)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;2f2bf74e24d26a2a159c4f130eec39ac&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;["c.dat","e.dat"]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1150976&lt;/td&gt;
&lt;td&gt;595cd4e40357324cec2737e067d582b1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;["f.dat","g.dat","h.dat"]&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If the -100k-only option is specified, the process stops here.&lt;/p&gt;

&lt;p&gt;For the files that are still duplicate candidates, the hash of the entire file is calculated.&lt;/p&gt;

&lt;p&gt;Files smaller than 100 KB are skipped, since the head hash already covers their entire contents.&lt;br&gt;
The hashing step could be parallelized, but you must make sure not to exceed the open-file limit (the value of ulimit -n).&lt;br&gt;
Here it is done sequentially.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;name&lt;/th&gt;
&lt;th&gt;size&lt;/th&gt;
&lt;th&gt;head100k_hash&lt;/th&gt;
&lt;th&gt;full_hash&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;a.dat&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;b.dat&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;c.dat&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;2f2bf74e24d26a2a159c4f130eec39ac&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;d.dat&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;fc65c6cb47f6eed0aa6a34448a8bfcec&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;e.dat&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;2f2bf74e24d26a2a159c4f130eec39ac&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;f.dat&lt;/td&gt;
&lt;td&gt;1150976&lt;/td&gt;
&lt;td&gt;595cd4e40357324cec2737e067d582b1&lt;/td&gt;
&lt;td&gt;ca2e51ae14747a1f1f0dcb81e982c287&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;g.dat&lt;/td&gt;
&lt;td&gt;1150976&lt;/td&gt;
&lt;td&gt;595cd4e40357324cec2737e067d582b1&lt;/td&gt;
&lt;td&gt;067d1eed705e0f7756ceb37a10462665&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;h.dat&lt;/td&gt;
&lt;td&gt;1150976&lt;/td&gt;
&lt;td&gt;595cd4e40357324cec2737e067d582b1&lt;/td&gt;
&lt;td&gt;ca2e51ae14747a1f1f0dcb81e982c287&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Count and output duplicates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT size, head100k_hash, full_hash, json_group_array(name) FROM files GROUP BY size, head100k_hash, full_hash HAVING count(*) &amp;gt; 1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;size&lt;/th&gt;
&lt;th&gt;head100k_hash&lt;/th&gt;
&lt;th&gt;full_hash&lt;/th&gt;
&lt;th&gt;json_group_array(name)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;2f2bf74e24d26a2a159c4f130eec39ac&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;["c.dat","e.dat"]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1150976&lt;/td&gt;
&lt;td&gt;595cd4e40357324cec2737e067d582b1&lt;/td&gt;
&lt;td&gt;ca2e51ae14747a1f1f0dcb81e982c287&lt;/td&gt;
&lt;td&gt;["f.dat","h.dat"]&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you are going to hash every file in full anyway, fdupes is much faster.&lt;/p&gt;

&lt;p&gt;So unless you specify the "-100k-only" or "-size-only" option, there is no point in using this program.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Even with the hash calculation truncated at 100 KB, the results matched those of fdupes.&lt;/p&gt;

&lt;p&gt;However, that holds only for my environment.&lt;br&gt;
There may be file formats out there whose first 100 KB is a common header.&lt;/p&gt;

&lt;p&gt;It is best to adapt the method to the situation.&lt;br&gt;
Most of the time, fdupes is good enough.&lt;/p&gt;

&lt;p&gt;You can also use a file system with built-in deduplication:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Workload%20Tuning.html?highlight=deduplication#deduplication"&gt;zfs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://btrfs.wiki.kernel.org/index.php/User_notes_on_dedupe"&gt;btrfs&lt;/a&gt;
(both use a lot of memory for deduplication)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://ipfs.io/"&gt;ipfs&lt;/a&gt; can look for uniqueness across the network.&lt;/p&gt;

</description>
      <category>fdupes</category>
      <category>go</category>
    </item>
    <item>
      <title>I have updated edit-slack.vim</title>
      <dc:creator>yamasita taisuke</dc:creator>
      <pubDate>Sat, 03 Apr 2021 14:50:08 +0000</pubDate>
      <link>https://dev.to/yaasita/i-have-updated-edit-slack-vim-4p2j</link>
      <guid>https://dev.to/yaasita/i-have-updated-edit-slack-vim-4p2j</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/yaasita/edit-slack.vim" rel="noopener noreferrer"&gt;https://github.com/yaasita/edit-slack.vim&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have updated my slack plugin because it was not working.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is this
&lt;/h1&gt;

&lt;p&gt;This is a plugin to post to slack from vim.&lt;br&gt;
You can send a message using only vim.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6pwy9o53s05y31l0vnf.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6pwy9o53s05y31l0vnf.gif" alt="demo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Posts travel from vim to slack via a binary written in Go&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5uyawdb4yrjmwdkkdbk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5uyawdb4yrjmwdkkdbk.png" alt="How it works"&gt;&lt;/a&gt;&lt;br&gt;
(c) &lt;a href="https://github.com/tenntenn/gopher-stickers" rel="noopener noreferrer"&gt;gopher stickers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a buffer whose name starts with "slack://" is displayed, the following autocmd invokes the command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;autocmd BufReadCmd slack://* call edit_slack#Open(expand("&amp;lt;amatch&amp;gt;"))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To post, the vimscript takes the part written below "=== Message ==="&lt;br&gt;
and vim passes it to the command's standard input.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0tw083dhhcske8sxx76.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0tw083dhhcske8sxx76.png" alt="post data"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following autocmd runs when you write the buffer&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;autocmd BufWriteCmd slack://* call edit_slack#Write(expand("&amp;lt;amatch&amp;gt;"))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The function then calls system()&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;call system(command, postdata)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
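&lt;p&gt;On the Go side, the binary receives the post data on standard input. A minimal sketch of that plumbing (the marker handling here is my own guess at the shape; the real edit-slack binary also talks to the Slack API):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

// extractMessage returns the text below the "=== Message ===" marker,
// mirroring what the vimscript sends on standard input. If no marker
// is present, the whole input is treated as the message.
func extractMessage(buffer string) string {
	const marker = "=== Message ==="
	if i := strings.Index(buffer, marker); i >= 0 {
		return strings.TrimSpace(buffer[i+len(marker):])
	}
	return strings.TrimSpace(buffer)
}

func main() {
	data, err := io.ReadAll(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	msg := extractMessage(string(data))
	// The real binary would now post msg via the Slack API.
	fmt.Println(msg)
}
```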



&lt;p&gt;Update details are as follows.&lt;/p&gt;

&lt;h1&gt;
  
  
  Upgrade golang
&lt;/h1&gt;

&lt;p&gt;Updated the Go version to 1.15.&lt;br&gt;
I also switched to Go modules and the &lt;a href="https://github.com/slack-go/slack" rel="noopener noreferrer"&gt;slack-go&lt;/a&gt; library.&lt;/p&gt;
&lt;h1&gt;
  
  
  Support for slack's new API
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://api.slack.com/legacy/custom-integrations/legacy-tokens" rel="noopener noreferrer"&gt;Support new token&lt;/a&gt;&lt;br&gt;
YouTube：&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/z9PD7-UXSbA"&gt;
&lt;/iframe&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Changed to use the &lt;a href="https://api.slack.com/methods#conversations" rel="noopener noreferrer"&gt;Conversations API&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Any channel can be opened with the following URI rules&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# list
slack://ch

# channel
slack://ch/&amp;lt;channel name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Threaded support
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Threaded uri
slack://ch/&amp;lt;channel name&amp;gt;/&amp;lt;timestamp&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Added the command &lt;code&gt;:EditSlackOpenReplies&lt;/code&gt; to open threads.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxsuozrvr9e8g29zgdtn.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxsuozrvr9e8g29zgdtn.gif" alt="open thread"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You need each message's timestamp to open its thread. I used to show it in the message list, but that got too cluttered and hard to read, so I hid it with the &lt;a href="http://vimdoc.sourceforge.net/htmldoc/syntax.html#conceal" rel="noopener noreferrer"&gt;conceal feature&lt;/a&gt;.&lt;br&gt;
With syntax on, the timestamps are hidden.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgozqa2m3hh33pkbzssdq.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgozqa2m3hh33pkbzssdq.gif" alt="syntax on"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you change &lt;code&gt;[]&lt;/code&gt; to &lt;code&gt;[x]&lt;/code&gt;, you can also post the reply to the channel.&lt;br&gt;
However, a slack bug causes the post's name and icon to be those of the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpap0zm6kq2xn5j743hk.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpap0zm6kq2xn5j743hk.gif" alt="broadcast"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Joining and leaving the channel
&lt;/h1&gt;

&lt;p&gt;&lt;code&gt;:EditSlackJoin&lt;/code&gt; / &lt;code&gt;:EditSlackLeave&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez00iu6m2b7gz0h9djll.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez00iu6m2b7gz0h9djll.gif" alt="join and leave"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Uploading and downloading files
&lt;/h1&gt;

&lt;p&gt;Added support for downloading and uploading files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;:EditSlackDownloadFile /path/to/savefile
:EditSlackUploadFile /path/to/uploadfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvanxx4w2qxrd1ctvgd0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvanxx4w2qxrd1ctvgd0.gif" alt="upload1"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xef307de96wm5gdmtkp.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xef307de96wm5gdmtkp.gif" alt="download1"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Search command
&lt;/h1&gt;

&lt;p&gt;Added a search function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;:EditSlackSearch search_word
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2c7tvc40zubvb5wyqw0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2c7tvc40zubvb5wyqw0.gif" alt="search"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Other functions
&lt;/h1&gt;

&lt;p&gt;Reaction features may be supported in the future.&lt;/p&gt;

</description>
      <category>vim</category>
      <category>slack</category>
      <category>go</category>
    </item>
  </channel>
</rss>
