<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nithin I Bhandari</title>
    <description>The latest articles on DEV Community by Nithin I Bhandari (@nithinibhandari1999).</description>
    <link>https://dev.to/nithinibhandari1999</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F538268%2F7c62d154-53c8-4972-9547-ebb4feb8ede5.png</url>
      <title>DEV Community: Nithin I Bhandari</title>
      <link>https://dev.to/nithinibhandari1999</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nithinibhandari1999"/>
    <language>en</language>
    <item>
      <title>How to run LLAMA 2 on your local computer</title>
      <dc:creator>Nithin I Bhandari</dc:creator>
      <pubDate>Fri, 21 Jul 2023 17:58:01 +0000</pubDate>
      <link>https://dev.to/nithinibhandari1999/how-to-run-llama-2-on-your-local-computer-42g1</link>
      <guid>https://dev.to/nithinibhandari1999/how-to-run-llama-2-on-your-local-computer-42g1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;LLAMA 2 is a large language model that can generate text, translate languages, and answer your questions in an informative way. In this blog post, I will show you how to run LLAMA 2 on your local computer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisite:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Install anaconda&lt;/li&gt;
&lt;li&gt;Install Python 3.11&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Steps
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1:
&lt;/h3&gt;

&lt;p&gt;1.1: Visit huggingface.co&lt;br&gt;
Model Link: &lt;a href="https://huggingface.co/meta-llama/Llama-2-7b-hf" rel="noopener noreferrer"&gt;https://huggingface.co/meta-llama/Llama-2-7b-hf&lt;/a&gt;&lt;br&gt;
1.2: Create an account on Hugging Face&lt;br&gt;
1.3: Request access to the Llama 2 model&lt;br&gt;
It may take a day to get access.&lt;br&gt;
1.4: Go to the link below and request Llama access&lt;br&gt;
Link: &lt;a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" rel="noopener noreferrer"&gt;https://ai.meta.com/resources/models-and-libraries/llama-downloads/&lt;/a&gt;&lt;br&gt;
1.5: As Llama 2 is a gated repository, log in via the Hugging Face CLI and generate a token.&lt;br&gt;
Link: &lt;a href="https://huggingface.co/settings/tokens" rel="noopener noreferrer"&gt;https://huggingface.co/settings/tokens&lt;/a&gt;&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip install huggingface_hub
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;huggingface-cli login
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;h3&gt;
  
  
  Step 2: Create a conda environment and activate it
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;conda create &lt;span class="nt"&gt;-n&lt;/span&gt; py_3_11_lamma2_run &lt;span class="nv"&gt;python&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3.11 &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;conda activate py_3_11_lamma2_run
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;h3&gt;
  
  
  Step 3: Install libraries
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;transformers torch accelerate
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;h3&gt;
  
  
  Step 4: Create a file "run.py" (touch run.py)
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import time
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

timeStart = time.time()

tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf"
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
)

print("Load model time: ", time.time() - timeStart)

while True:
    input_str = input('Enter: ')
    input_token_length = input('Enter length: ')

    if input_str == 'exit':
        break

    timeStart = time.time()

    inputs = tokenizer.encode(
        input_str,
        return_tensors="pt"
    )

    outputs = model.generate(
        inputs,
        max_new_tokens=int(input_token_length),
    )

    output_str = tokenizer.decode(outputs[0])

    print(output_str)

    print("Time taken: ", time.time() - timeStart)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;h3&gt;
  
  
  Step 5: Run the Python file
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python run.py
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;h3&gt;
  
  
  Performance:
&lt;/h3&gt;

&lt;p&gt;I am using a CPU with 20 GB of RAM (4 GB + 16 GB).&lt;br&gt;
It took 51 seconds to load the model and 227 seconds to generate a response of 250 tokens.&lt;br&gt;
If you use a GPU, it will take significantly less time.&lt;br&gt;
On Google Colab, I got a response in 16 seconds.&lt;/p&gt;
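
&lt;p&gt;The timings above come from the timeStart = time.time() pattern in run.py. If you want to time several steps, that pattern can be factored into a small reusable helper. This is an illustrative sketch of my own, not part of the original script:&lt;/p&gt;

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, results=None):
    """Measure the wall-clock time of a block, like run.py does
    with timeStart = time.time() ... time.time() - timeStart."""
    start = time.time()
    yield
    elapsed = time.time() - start
    if results is not None:
        results[label] = elapsed  # let callers inspect the timing
    print(label, elapsed)

# Usage sketch: time.sleep stands in for the expensive model calls.
timings = {}
with timed("Load model time: ", timings):
    time.sleep(0.05)  # placeholder for AutoModelForCausalLM.from_pretrained(...)
```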

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhuqlq55ozw15u5wjg2kx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhuqlq55ozw15u5wjg2kx.png" alt="Llama 2 load time on CPU"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Congratulations! You have successfully run Llama 2 on your local machine.
&lt;/h2&gt;

</description>
    </item>
    <item>
      <title>Local Shop Search - Search shop near you</title>
      <dc:creator>Nithin I Bhandari</dc:creator>
      <pubDate>Tue, 11 Jan 2022 18:43:44 +0000</pubDate>
      <link>https://dev.to/nithinibhandari1999/local-shop-search-search-shop-near-you-1135</link>
      <guid>https://dev.to/nithinibhandari1999/local-shop-search-search-shop-near-you-1135</guid>
<description>&lt;p&gt;Local Shop Search is used to search for shops near you on a map.&lt;br&gt;
A shopkeeper can add a shop and list their products.&lt;br&gt;
End users can search for products and visit the shop to purchase them.&lt;/p&gt;

&lt;p&gt;Live Website: &lt;a href="https://local-shop-search.netlify.app/search/" rel="noopener noreferrer"&gt;https://local-shop-search.netlify.app/search/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzt9ef1wmgwts7p6q1lqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzt9ef1wmgwts7p6q1lqe.png" alt="Website Demo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Overview of My Submission
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Tech Stack
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;MongoDB and MongoDB Autocomplete&lt;/li&gt;
&lt;li&gt;Node JS&lt;/li&gt;
&lt;li&gt;Express JS&lt;/li&gt;
&lt;li&gt;React JS&lt;/li&gt;
&lt;li&gt;Leaflet JS&lt;/li&gt;
&lt;li&gt;Netlify for the frontend and Netlify serverless functions for the backend&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  How search functionality works
&lt;/h4&gt;

&lt;p&gt;Here I am using MongoDB Atlas Search autocomplete with nGram tokenization, which offers autocomplete search, together with a location index to filter search queries by location.&lt;/p&gt;
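
&lt;p&gt;To make the aggregate stage concrete, here is a minimal sketch of the kind of Atlas Search autocomplete pipeline described here, written as a Python (pymongo-style) list of stages. The index name ("default") and field name ("productName") are assumptions for illustration; the project's actual index and schema are in the linked JSON files.&lt;/p&gt;

```python
def build_autocomplete_pipeline(query_text, limit=20):
    """Build an Atlas Search autocomplete aggregation pipeline.

    Assumes a search index named "default" with an autocomplete
    mapping on the "productName" field (illustrative names only).
    """
    return [
        {
            "$search": {
                "index": "default",
                "autocomplete": {
                    "query": query_text,
                    "path": "productName",
                },
            }
        },
        {"$limit": limit},
    ]

# Usage sketch (with pymongo this would be passed to collection.aggregate):
pipeline = build_autocomplete_pipeline("milk")
```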

&lt;h5&gt;
  
  
  Aggregate query example
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://github.com/NithinIBhandari1999/localshopsearch_client/blob/main/info/mongodbAggregateFunction.json" rel="noopener noreferrer"&gt;Code Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00gpmpdqup15k9587mbn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00gpmpdqup15k9587mbn.png" alt="MongoDB Autocomplete Query Example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  MongoDB Autocomplete index
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://github.com/NithinIBhandari1999/localshopsearch_client/blob/main/info/mongodbSearchIndex.json" rel="noopener noreferrer"&gt;Code Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1rph770xlkp03mr1zm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1rph770xlkp03mr1zm2.png" alt="MongoDB Autocomplete Index Example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Submission Category:
&lt;/h3&gt;

&lt;p&gt;E-Commerce Creation&lt;/p&gt;

&lt;h3&gt;
  
  
  Link to Code
&lt;/h3&gt;

&lt;p&gt;Live Link: &lt;a href="https://local-shop-search.netlify.app/search/" rel="noopener noreferrer"&gt;Visit Now&lt;/a&gt;&lt;br&gt;
Read Me: &lt;a href="https://github.com/NithinIBhandari1999/localshopsearch_client/blob/main/README.md" rel="noopener noreferrer"&gt;Visit Now&lt;/a&gt;&lt;br&gt;
Local Shop Search Front end: &lt;a href="https://github.com/NithinIBhandari1999/localshopsearch_client" rel="noopener noreferrer"&gt;Visit Now&lt;/a&gt;&lt;br&gt;
Local Shop Search Back end: &lt;a href="https://github.com/NithinIBhandari1999/localshopsearch_api" rel="noopener noreferrer"&gt;Visit Now&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional Resources / Info
&lt;/h3&gt;

&lt;h4&gt;
  
  
  How to deploy
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Clone the frontend and backend code.&lt;/li&gt;
&lt;li&gt;Create an ImageKit account.&lt;/li&gt;
&lt;li&gt;Visit Netlify, deploy the code, and add all environment variables.&lt;/li&gt;
&lt;/ol&gt;
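
&lt;p&gt;For the environment-variable step, a small startup check can fail fast when a variable is missing. This is a hypothetical helper; the variable names (MONGODB_URI, IMAGEKIT_PUBLIC_KEY, IMAGEKIT_PRIVATE_KEY) are illustrative assumptions, not the project's actual configuration keys.&lt;/p&gt;

```python
import os

# Illustrative variable names only -- check the project README for the
# real configuration keys.
REQUIRED_VARS = ["MONGODB_URI", "IMAGEKIT_PUBLIC_KEY", "IMAGEKIT_PRIVATE_KEY"]

def check_env(env=None):
    """Return the required config values, raising if any are missing."""
    env = os.environ if env is None else env
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return {name: env[name] for name in REQUIRED_VARS}
```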

</description>
      <category>atlashackathon</category>
      <category>mongodb</category>
      <category>react</category>
      <category>node</category>
    </item>
  </channel>
</rss>
