<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Susumu OTA</title>
    <description>The latest articles on DEV Community by Susumu OTA (@susumuota).</description>
    <link>https://dev.to/susumuota</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2895546%2F45ba88c7-5b5c-4f1c-b0f9-fec89842b53b.png</url>
      <title>DEV Community: Susumu OTA</title>
      <link>https://dev.to/susumuota</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/susumuota"/>
    <language>en</language>
    <item>
      <title>Faster and More Reliable Hugging Face Downloads Using aria2 and GNU Parallel</title>
      <dc:creator>Susumu OTA</dc:creator>
      <pubDate>Sun, 23 Mar 2025 04:16:34 +0000</pubDate>
      <link>https://dev.to/susumuota/faster-and-more-reliable-hugging-face-downloads-using-aria2-and-gnu-parallel-4f2b</link>
      <guid>https://dev.to/susumuota/faster-and-more-reliable-hugging-face-downloads-using-aria2-and-gnu-parallel-4f2b</guid>
      <description>&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Faster and more reliable Hugging Face downloads with &lt;code&gt;aria2&lt;/code&gt; and &lt;code&gt;GNU Parallel&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;aria2&lt;/code&gt; to download Hugging Face models and datasets in parallel. If errors occur during the download, you can resume the download from where it left off.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;GNU Parallel&lt;/code&gt; to quickly verify the hashes of the downloaded files in parallel using multiple CPU cores.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Downloading machine learning models and datasets from Hugging Face can be time-consuming and unreliable, especially when dealing with large files or unstable internet connections. This guide shows how to speed up and improve the reliability of your Hugging Face downloads using two powerful command-line tools: &lt;code&gt;aria2&lt;/code&gt; and &lt;code&gt;GNU Parallel&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we get started, make sure you have the following tools installed on your system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://git-lfs.com/" rel="noopener noreferrer"&gt;Git Large File Storage (git-lfs)&lt;/a&gt;: An open source Git extension for versioning large files.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aria2.github.io/" rel="noopener noreferrer"&gt;aria2&lt;/a&gt;: A lightweight multi-protocol &amp;amp; multi-source command-line download utility.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.gnu.org/software/parallel/" rel="noopener noreferrer"&gt;GNU Parallel&lt;/a&gt;: A shell tool for executing jobs in parallel using one or more computers.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.gnu.org/software/coreutils/manual/html_node/sha2-utilities.html" rel="noopener noreferrer"&gt;sha256sum&lt;/a&gt;: A command to compute checksums of files using the SHA-256 algorithm. &lt;em&gt;Note: This command is available on typical Linux distributions. macOS's users can install it using Homebrew.&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ubuntu, macOS, or Conda users can install these tools with the following commands:&lt;/p&gt;

&lt;h3&gt;
  
  
  Ubuntu
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;git-lfs aria2 parallel &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  macOS
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;git-lfs aria2 parallel
brew &lt;span class="nb"&gt;install &lt;/span&gt;coreutils  &lt;span class="c"&gt;# for sha256sum command&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conda Environment
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;source&lt;/span&gt; ~/miniconda3/bin/activate
conda create &lt;span class="nt"&gt;-n&lt;/span&gt; hf_dl &lt;span class="nt"&gt;-y&lt;/span&gt;
conda activate hf_dl

conda &lt;span class="nb"&gt;install &lt;/span&gt;conda-forge::git-lfs conda-forge::aria2 conda-forge::parallel &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
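&lt;p&gt;Before moving on, it is worth confirming that every tool is actually on your &lt;code&gt;PATH&lt;/code&gt;. A minimal sketch of such a check (the names below are the standard executable names; &lt;code&gt;git-lfs&lt;/code&gt; is also reachable as the &lt;code&gt;git lfs&lt;/code&gt; subcommand):&lt;/p&gt;

```shell
# Print "found" or "NOT FOUND" for each required executable.
for tool in git git-lfs aria2c parallel sha256sum; do
  if command -v "$tool" > /dev/null; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND"
  fi
done
```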



&lt;h2&gt;
  
  
  Downloading Hugging Face Models
&lt;/h2&gt;

&lt;p&gt;In this section, we will see how to download Hugging Face models (e.g. &lt;code&gt;Qwen/Qwen2.5-72B-Instruct&lt;/code&gt;) using &lt;code&gt;aria2&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;First, let's clone a Hugging Face repository using &lt;code&gt;git&lt;/code&gt;. To avoid downloading the large files, we set the &lt;code&gt;GIT_LFS_SKIP_SMUDGE&lt;/code&gt; environment variable to &lt;code&gt;1&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;GIT_LFS_SKIP_SMUDGE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 git clone https://huggingface.co/Qwen/Qwen2.5-72B-Instruct
&lt;span class="nb"&gt;cd &lt;/span&gt;Qwen2.5-72B-Instruct
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;git lfs ls-files&lt;/code&gt; command lists the files tracked by &lt;code&gt;git-lfs&lt;/code&gt;. With the &lt;code&gt;-l&lt;/code&gt; option it will show the OID (SHA256 hash) and the filename. We will use this information to download the files using &lt;code&gt;aria2&lt;/code&gt; and to verify the SHA256 hashes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git lfs ls-files &lt;span class="nt"&gt;-l&lt;/span&gt;
&lt;span class="c"&gt;# 18d5d2b73010054d1c9fc4a1ba777d575e871b10f1155f3ae22481b7752bc425 - model-00001-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;# 802a3abf41ccdeb01931c5e40eb177ea114a1c47f68cb251d75c2de0fe196677 - model-00002-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;# c3a2ab093723d4981dcc6b20c7f48c444ccd9d8572b59f0bf7caa632715b7d36 - model-00003-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;# 5f35d5475cc4730ca9a38f958f74b5322d28acbd4aec30560987ed12e2748d8f - model-00004-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;# b7f066aef57e0fe29b516ef743fec7a90518151bd5a9df19263dfdee214dfe4d - model-00005-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;# ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the &lt;code&gt;-n&lt;/code&gt; option, &lt;code&gt;git lfs ls-files&lt;/code&gt; will only show the filenames.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git lfs ls-files &lt;span class="nt"&gt;-n&lt;/span&gt;
&lt;span class="c"&gt;# model-00001-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;# model-00002-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;# model-00003-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;# model-00004-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;# model-00005-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;# ...&lt;/span&gt;

git lfs ls-files &lt;span class="nt"&gt;-n&lt;/span&gt; | &lt;span class="nb"&gt;wc&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt;  &lt;span class="c"&gt;# 37&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we create a list of files (&lt;code&gt;files.txt&lt;/code&gt;) for &lt;code&gt;aria2&lt;/code&gt; to download. We use &lt;code&gt;xargs&lt;/code&gt; to generate a download URL and an output filename for each file. Note that &lt;code&gt;&amp;gt;&amp;gt;&lt;/code&gt; appends, so delete &lt;code&gt;files.txt&lt;/code&gt; first if you rerun this command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git lfs ls-files &lt;span class="nt"&gt;-n&lt;/span&gt; | xargs &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-I&lt;/span&gt; &lt;span class="o"&gt;{}&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/resolve/main/{}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;    out={}"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; files.txt

&lt;span class="nb"&gt;head &lt;/span&gt;files.txt
&lt;span class="c"&gt;# https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/resolve/main/model-00001-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;#     out=model-00001-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;# https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/resolve/main/model-00002-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;#     out=model-00002-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;# https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/resolve/main/model-00003-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;#     out=model-00003-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;# https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/resolve/main/model-00004-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;#     out=model-00004-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;# https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/resolve/main/model-00005-of-00037.safetensors&lt;/span&gt;
&lt;span class="c"&gt;#     out=model-00005-of-00037.safetensors&lt;/span&gt;

&lt;span class="nb"&gt;wc&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; files.txt  &lt;span class="c"&gt;# 74 files.txt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before downloading, we need to remove the LFS pointer files that &lt;code&gt;git&lt;/code&gt; left in the working directory (since we skipped the smudge step, each tracked file is just a small pointer stub). Otherwise, &lt;code&gt;aria2&lt;/code&gt; will append a numeric suffix such as &lt;code&gt;.1&lt;/code&gt; to the downloaded filenames instead of overwriting them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git lfs ls-files &lt;span class="nt"&gt;-n&lt;/span&gt; | xargs &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'\n'&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the model or dataset requires authentication, you will need to log in to Hugging Face using the &lt;code&gt;huggingface-cli login&lt;/code&gt; command. This command stores the authentication token in &lt;code&gt;~/.cache/huggingface/token&lt;/code&gt;, which we can pass to &lt;code&gt;aria2&lt;/code&gt; as an Authorization header.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;huggingface-cli login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
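&lt;p&gt;If you are unsure whether a token is already stored, you can check for the token file before starting the download. The path below is the long-standing default used by &lt;code&gt;huggingface-cli&lt;/code&gt;; newer versions of &lt;code&gt;huggingface_hub&lt;/code&gt; may also honor the &lt;code&gt;HF_TOKEN&lt;/code&gt; environment variable, so treat this as a sketch rather than an exhaustive check:&lt;/p&gt;

```shell
# Check whether a Hugging Face token file already exists
# (default location written by "huggingface-cli login").
TOKEN_FILE="$HOME/.cache/huggingface/token"
if [ -f "$TOKEN_FILE" ]; then
  echo "token file present"
else
  echo "no token file (fine for public repos)"
fi
```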



&lt;p&gt;Finally, we download the files using &lt;code&gt;aria2&lt;/code&gt;. The &lt;code&gt;-j&lt;/code&gt; option specifies the number of simultaneous downloads. The appropriate value depends on your network speed and the server's capabilities; a good starting point is somewhere between 4 and 12. Be careful not to hit the server's rate limit. If the download is interrupted, simply rerun the same command: &lt;code&gt;aria2&lt;/code&gt; keeps a &lt;code&gt;.aria2&lt;/code&gt; control file next to each unfinished download and resumes from where it left off.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aria2c &lt;span class="nt"&gt;-j&lt;/span&gt; 8 &lt;span class="nt"&gt;-i&lt;/span&gt; files.txt &lt;span class="nt"&gt;--header&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Authorization: Bearer &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; ~/.cache/huggingface/token&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verifying the SHA256 Hashes
&lt;/h2&gt;

&lt;p&gt;After downloading the files, we need to verify the SHA256 hashes to ensure the integrity of the files. We use the &lt;code&gt;sha256sum&lt;/code&gt; command to calculate the SHA256 hash of each file and compare it with the expected hash.&lt;/p&gt;

&lt;p&gt;Unfortunately, &lt;code&gt;sha256sum&lt;/code&gt; takes a long time to compute the hash of a large file. We can speed up the process by using GNU Parallel (the &lt;code&gt;parallel&lt;/code&gt; command) to hash multiple files in parallel.&lt;/p&gt;

&lt;p&gt;First, we create a &lt;code&gt;.sha256&lt;/code&gt; file for each downloaded file, containing its expected hash in the format that &lt;code&gt;sha256sum -c&lt;/code&gt; expects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git lfs ls-files &lt;span class="nt"&gt;-l&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $1 "  " $3 &amp;gt; $3".sha256"}'&lt;/span&gt;

find &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"*.sha256"&lt;/span&gt; &lt;span class="nt"&gt;-print&lt;/span&gt; | &lt;span class="nb"&gt;wc&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt;  &lt;span class="c"&gt;# 37&lt;/span&gt;

&lt;span class="nb"&gt;cat &lt;/span&gt;model-00001-of-00037.safetensors.sha256
&lt;span class="c"&gt;# 18d5d2b73010054d1c9fc4a1ba777d575e871b10f1155f3ae22481b7752bc425  model-00001-of-00037.safetensors&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
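&lt;p&gt;If you want to sanity-check this workflow before hashing 37 large shards, you can reproduce it with a tiny throwaway file (&lt;code&gt;demo.txt&lt;/code&gt; is just an example name). Note that &lt;code&gt;sha256sum -c&lt;/code&gt; expects the hash and the filename separated by two spaces, which is why the &lt;code&gt;awk&lt;/code&gt; command above prints &lt;code&gt;"  "&lt;/code&gt; between them:&lt;/p&gt;

```shell
# Create a small file, record its hash in sha256sum's "hash  filename"
# format (two spaces), then verify it the same way as the model shards.
echo "hello" > demo.txt
sha256sum demo.txt > demo.txt.sha256
sha256sum -c demo.txt.sha256  # prints "demo.txt: OK"
```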



&lt;p&gt;Let's compute the SHA256 hash of the first file using the &lt;code&gt;sha256sum&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sha256sum &lt;/span&gt;model-00001-of-00037.safetensors
&lt;span class="c"&gt;# 18d5d2b73010054d1c9fc4a1ba777d575e871b10f1155f3ae22481b7752bc425  model-00001-of-00037.safetensors&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can speed up the process by using GNU Parallel (the &lt;code&gt;parallel&lt;/code&gt; command) to compute the hashes in parallel on multiple CPU cores. The &lt;code&gt;-j&lt;/code&gt; option specifies the number of parallel jobs; a reasonable choice is the number of CPU cores on your system. On Linux, the &lt;code&gt;nproc&lt;/code&gt; command reports that number.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;time &lt;/span&gt;find &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"*.sha256"&lt;/span&gt; &lt;span class="nt"&gt;-print&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; | parallel &lt;span class="nt"&gt;-j&lt;/span&gt; 8 &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="s2"&gt;"sha256sum -c {} 2&amp;gt;&amp;amp;1"&lt;/span&gt; | &lt;span class="nb"&gt;tee &lt;/span&gt;sha256sum.log
&lt;span class="c"&gt;# model-00001-of-00037.safetensors: OK&lt;/span&gt;
&lt;span class="c"&gt;# model-00007-of-00037.safetensors: OK&lt;/span&gt;
&lt;span class="c"&gt;# model-00003-of-00037.safetensors: OK&lt;/span&gt;
&lt;span class="c"&gt;# ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
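&lt;p&gt;To match the job count to your machine instead of hard-coding &lt;code&gt;-j 8&lt;/code&gt;, you can capture the core count first (on macOS, Homebrew's &lt;code&gt;coreutils&lt;/code&gt; may install the same command with a &lt;code&gt;g&lt;/code&gt; prefix, as &lt;code&gt;gnproc&lt;/code&gt;):&lt;/p&gt;

```shell
# Query the number of CPU cores so it can be reused as parallel's -j value.
cores=$(nproc)
echo "cores: $cores"
```

&lt;p&gt;You can then pass &lt;code&gt;-j "$cores"&lt;/code&gt; to &lt;code&gt;parallel&lt;/code&gt; in place of the fixed &lt;code&gt;-j 8&lt;/code&gt; above.&lt;/p&gt;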



&lt;p&gt;Let's check the contents of the file &lt;code&gt;sha256sum.log&lt;/code&gt;. It should contain the results of the SHA256 hash verification for each file. The &lt;code&gt;OK&lt;/code&gt; message indicates that the hash verification was successful.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;wc&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; sha256sum.log  &lt;span class="c"&gt;# 37 sha256sum.log&lt;/span&gt;

&lt;span class="nb"&gt;sort &lt;/span&gt;sha256sum.log | &lt;span class="nb"&gt;nl&lt;/span&gt;
&lt;span class="c"&gt;#      1  model-00001-of-00037.safetensors: OK&lt;/span&gt;
&lt;span class="c"&gt;#      2  model-00002-of-00037.safetensors: OK&lt;/span&gt;
&lt;span class="c"&gt;#      3  model-00003-of-00037.safetensors: OK&lt;/span&gt;
&lt;span class="c"&gt;# ...&lt;/span&gt;
&lt;span class="c"&gt;#     35  model-00035-of-00037.safetensors: OK&lt;/span&gt;
&lt;span class="c"&gt;#     36  model-00036-of-00037.safetensors: OK&lt;/span&gt;
&lt;span class="c"&gt;#     37  model-00037-of-00037.safetensors: OK&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OK! All the files have been successfully downloaded and verified!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;By using &lt;code&gt;aria2&lt;/code&gt; to download files in parallel and &lt;code&gt;GNU Parallel&lt;/code&gt; to compute the SHA256 hashes in parallel, you can speed up and improve the reliability of your Hugging Face downloads.&lt;/li&gt;
&lt;li&gt;These tools are particularly useful when dealing with large files and/or unstable internet connections.&lt;/li&gt;
&lt;li&gt;Remember to adjust the number of parallel downloads or jobs based on your network speed, the server's capabilities, and your CPU core count.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Citation
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight bibtex"&gt;&lt;code&gt;&lt;span class="nc"&gt;@software&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;tange_2024_14550073&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;author&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;{Tange, Ole}&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;title&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;{GNU Parallel 20241222 ('Bashar')}&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;month&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;Dec&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;year&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="m"&gt;2024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;note&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;{{GNU Parallel is a general parallelizer to run
                       multiple serial command line programs in parallel
                       without changing them.}}&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;publisher&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;{Zenodo}&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;doi&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;{10.5281/zenodo.14550073}&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;{https://doi.org/10.5281/zenodo.14550073}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>huggingface</category>
      <category>llm</category>
      <category>python</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
