<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: riddhi</title>
    <description>The latest articles on DEV Community by riddhi (@riddhi_007f87a3b86e45bf61).</description>
    <link>https://dev.to/riddhi_007f87a3b86e45bf61</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1917787%2Ff2e1059d-f6fb-4b87-9f68-6a264bd89f4c.png</url>
      <title>DEV Community: riddhi</title>
      <link>https://dev.to/riddhi_007f87a3b86e45bf61</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/riddhi_007f87a3b86e45bf61"/>
    <language>en</language>
    <item>
      <title>How to Choose the Right Algorithm for Model Training</title>
      <dc:creator>riddhi</dc:creator>
      <pubDate>Tue, 01 Oct 2024 12:09:35 +0000</pubDate>
      <link>https://dev.to/riddhi_007f87a3b86e45bf61/how-to-choose-the-right-algorithm-for-model-training-1g1i</link>
      <guid>https://dev.to/riddhi_007f87a3b86e45bf61/how-to-choose-the-right-algorithm-for-model-training-1g1i</guid>
      <description>&lt;p&gt;How should one select the appropriate algorithm for training a model, and what factors or conditions should be considered during this selection process?&lt;/p&gt;

&lt;p&gt;For which types of datasets or situations should specific algorithms be used, and what factors influence the choice of algorithm in each case?&lt;/p&gt;

&lt;p&gt;Can the selection of algorithms be based solely on scatter plots and pair plots? If so, what types of patterns in these plots would indicate which algorithms to choose?&lt;/p&gt;

&lt;p&gt;Under which conditions does each algorithm perform best?&lt;/p&gt;
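One common way to approach the first question (a sketch, not the only answer): benchmark several candidate algorithms on the same dataset with k-fold cross-validation and compare their mean scores, rather than deciding from plots alone. The dataset and candidate list below are purely illustrative:

```python
# Benchmark a few candidate classifiers with 5-fold cross-validation.
# The dataset and the candidate set are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "knn": KNeighborsClassifier(),
}

# Mean cross-validated accuracy per candidate algorithm.
results = {}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    results[name] = scores.mean()

# Print candidates from best to worst mean score.
for name, score in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

Scatter plots and pair plots can suggest whether classes look linearly separable, but a comparison like this measures performance directly on your data.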

</description>
      <category>machinelearning</category>
      <category>algorithms</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Why did my Google Colab session crash while running the Llama model?</title>
      <dc:creator>riddhi</dc:creator>
      <pubDate>Mon, 12 Aug 2024 12:19:33 +0000</pubDate>
      <link>https://dev.to/riddhi_007f87a3b86e45bf61/why-did-my-google-colab-session-crash-while-running-the-llama-model-4bce</link>
      <guid>https://dev.to/riddhi_007f87a3b86e45bf61/why-did-my-google-colab-session-crash-while-running-the-llama-model-4bce</guid>
      <description>&lt;p&gt;I am trying to use the meta-llama/Llama-2-7b-hf model and run it locally on my premises but the session crashed during the process.&lt;/p&gt;

&lt;p&gt;I am trying to run the meta-llama/Llama-2-7b-hf model on Google Colab and have obtained an access token from Hugging Face. I am using their transformers library for the necessary tasks. Initially, I used the T4 GPU runtime, which provided 12.7 GB of system RAM, 15.0 GB of GPU RAM, and 78.2 GB of disk space. Despite these resources, my session crashed, and I encountered the following error:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0ltyp889pn1mzbd4qh4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0ltyp889pn1mzbd4qh4.png" alt="Image description" width="418" height="61"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Subsequently, I switched to the TPU v2 runtime, which offers 334.6 GB of system RAM and 225.3 GB of disk space, but the issue persisted.&lt;/p&gt;

&lt;p&gt;Here is my code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!pip install transformers
!pip install --upgrade transformers

from huggingface_hub import login
login(token='Access Token From Hugging Face')

import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer
from torch.utils.data import Dataset

# Load pre-trained Meta-Llama-3.1-8B model
model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
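One possible explanation (an assumption based on the resource numbers above, not a confirmed diagnosis): from_pretrained loads weights in float32 by default unless a torch_dtype is passed, and a back-of-the-envelope estimate for a 7B-parameter model already exceeds both the 12.7 GB of system RAM and the 15.0 GB of GPU RAM on the T4 runtime:

```python
# Rough memory estimate for the raw weights of a 7B-parameter model at
# different precisions. Actual usage is higher (activations, buffers);
# this only counts the weights themselves.
def weight_memory_gb(n_params, bytes_per_param):
    return n_params * bytes_per_param / 1024**3

n_params = 7_000_000_000  # meta-llama/Llama-2-7b-hf

fp32 = weight_memory_gb(n_params, 4)  # default dtype without torch_dtype
fp16 = weight_memory_gb(n_params, 2)  # torch_dtype=torch.float16
int8 = weight_memory_gb(n_params, 1)  # 8-bit quantization

print(f"float32: {fp32:.1f} GB")  # ~26 GB, well above the T4's 15 GB GPU RAM
print(f"float16: {fp16:.1f} GB")  # ~13 GB, borderline on a 15 GB T4
print(f"int8:    {int8:.1f} GB")
```

If this is indeed the cause, passing torch_dtype=torch.float16 and device_map="auto" to from_pretrained (or using 8-bit quantization) is a common mitigation; the full error message in the screenshot would confirm whether it is an out-of-memory crash.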



</description>
      <category>machinelearning</category>
      <category>llm</category>
      <category>python</category>
    </item>
  </channel>
</rss>
