<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mateus Valentim</title>
    <description>The latest articles on DEV Community by Mateus Valentim (@mattsu).</description>
    <link>https://dev.to/mattsu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2786491%2F13235405-58f2-4cea-a659-040332489563.png</url>
      <title>DEV Community: Mateus Valentim</title>
      <link>https://dev.to/mattsu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mattsu"/>
    <language>en</language>
    <item>
      <title>My React Base :)</title>
      <dc:creator>Mateus Valentim</dc:creator>
      <pubDate>Sat, 22 Feb 2025 18:49:52 +0000</pubDate>
      <link>https://dev.to/mattsu/my-react-base--c0c</link>
      <guid>https://dev.to/mattsu/my-react-base--c0c</guid>
      <description>&lt;p&gt;Hi, this is my ReactJS base. Usually, I have to build this from the ground up, but it's really tiring. Repeating everything every time is not cool. So, I created this ReactJS base for myself, and I want to share it with you 🫠.&lt;/p&gt;

&lt;h2&gt;🚀 How to Use&lt;/h2&gt;

&lt;p&gt;First, you need to clone the React base using this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/mattsu014/reactjs-base.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, open your project folder using this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd reactjs-base
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you need to install the &lt;code&gt;node_modules&lt;/code&gt; using this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And finally, install &lt;code&gt;styled-components&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install styled-components
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;📂 My React Organization&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtpgsflcd4fqor0jrtxu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtpgsflcd4fqor0jrtxu.png" alt="Folder Organization" width="476" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;📄 Pages&lt;/h3&gt;

&lt;p&gt;The Pages folder is self-explanatory. Here, I put all the code for each page of my website.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4m4rtoyyyb2lunznc0t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4m4rtoyyyb2lunznc0t.png" alt="React Pages" width="478" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;🛠️ Components&lt;/h3&gt;

&lt;p&gt;In the Components folder, I put all the "building blocks" for my code, such as Cards, Headers, and others.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z4pm4bmcfmy3cz98079.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z4pm4bmcfmy3cz98079.png" alt="React Components" width="470" height="90"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Okay, that's all! This is my simple ReactJS base. You can use your creativity to create more folders and organize your code better.&lt;/p&gt;

&lt;p&gt;Thanks for reading 🤗.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>react</category>
      <category>frontend</category>
    </item>
    <item>
      <title>VOSK the Offline Speech Recognition</title>
      <dc:creator>Mateus Valentim</dc:creator>
      <pubDate>Fri, 31 Jan 2025 01:12:24 +0000</pubDate>
      <link>https://dev.to/mattsu/vosk-offline-speech-recognition-3kbb</link>
      <guid>https://dev.to/mattsu/vosk-offline-speech-recognition-3kbb</guid>
      <description>&lt;p&gt;What is &lt;a href="https://alphacephei.com/vosk/" rel="noopener noreferrer"&gt;VOSK&lt;/a&gt;? VOSK is a powerful tool for real-time speech recognition that does not require an internet connection. Developed by &lt;a href="https://alphacephei.com/en/" rel="noopener noreferrer"&gt;Alpha Cephei&lt;/a&gt;, it supports multiple languages and is highly efficient. It can even run on low-performance devices, such as the Raspberry Pi.&lt;/p&gt;

&lt;h2&gt;Using VOSK&lt;/h2&gt;

&lt;p&gt;To use VOSK, you first need to download the model from this website: &lt;a href="https://alphacephei.com/vosk/models" rel="noopener noreferrer"&gt;https://alphacephei.com/vosk/models&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;VOSK offers models for many languages, including Portuguese, English, Japanese, and others:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft8gl8kzs7h5nqsdnrize.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft8gl8kzs7h5nqsdnrize.png" alt="VOSK Model Examples" width="800" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you download the ZIP file containing the VOSK model, you will need to unzip it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsl5ou8h07gs1itfsgy4j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsl5ou8h07gs1itfsgy4j.png" alt="VOSK zip" width="459" height="250"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Install Dependencies&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;First&lt;/strong&gt;, you need to install the VOSK package and PyAudio using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install vosk pyaudio
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;PyAudio is a library that lets you capture and play back audio using the PortAudio API. In this code, PyAudio is used to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the microphone.&lt;/li&gt;
&lt;li&gt;Capture audio in real time.&lt;/li&gt;
&lt;li&gt;Feed the audio data to VOSK for processing.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Let's Code!!!&lt;/h2&gt;

&lt;p&gt;Example code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pyaudio
import json
from vosk import Model, KaldiRecognizer

model = Model("vosk-model-en-us-0.22")
recognizer = KaldiRecognizer(model, 16000)

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=16000, input=True, frames_per_buffer=8192)
stream.start_stream()

print("listening...")

while True:
    data = stream.read(4096, exception_on_overflow=False)
    if recognizer.AcceptWaveform(data):
        result = json.loads(recognizer.Result())
        print("You:", result["text"])

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I created this simple example code to demonstrate how VOSK works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Library Imports&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pyaudio
import json
from vosk import Model, KaldiRecognizer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are simple library imports.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Speech recognition model loading&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model = Model("vosk-model-en-us-0.22")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This loads the VOSK model for English (US).&lt;br&gt;
&lt;strong&gt;3. Speech recognizer initialization&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;recognizer = KaldiRecognizer(model, 16000)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;KaldiRecognizer:&lt;/strong&gt; Initializes the speech recognition model with a 16,000 Hz (16 kHz) sampling rate, a common sampling rate for speech recognition.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Audio Settings&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=16000, input=True, frames_per_buffer=8192)
stream.start_stream()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;pyaudio.PyAudio():&lt;/strong&gt; Initializes the PyAudio object, which is used to configure and manage audio capture.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;p.open():&lt;/strong&gt; Opens an audio stream with the following settings:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;format=pyaudio.paInt16:&lt;/strong&gt; 16-bit audio format.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;channels=1:&lt;/strong&gt; Mono audio (1 channel).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;rate=16000:&lt;/strong&gt; Sampling rate of 16000 Hz.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;input=True:&lt;/strong&gt; Indicates that the stream will be used for audio input (capture).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;frames_per_buffer=8192:&lt;/strong&gt; Size of the audio buffer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;stream.start_stream():&lt;/strong&gt; Starts the audio capture.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
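One way to sanity-check these settings: the buffer size and the sampling rate together determine how much audio each buffer holds. A quick back-of-the-envelope calculation, using the same numbers as above:

```python
# How much audio does one buffer hold at these settings?
RATE = 16000              # rate=16000 (samples per second)
FRAMES_PER_BUFFER = 8192  # frames_per_buffer=8192

buffer_seconds = FRAMES_PER_BUFFER / RATE
print(f"Each buffer holds {buffer_seconds * 1000:.0f} ms of audio")
# prints: Each buffer holds 512 ms of audio
```

So each buffer holds roughly half a second of audio, which is why results tend to arrive phrase by phrase rather than word by word.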

&lt;p&gt;&lt;strong&gt;5. Speech Recognition Loop&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print("listening...")

while True:
    data = stream.read(4096, exception_on_overflow=False)
    if recognizer.AcceptWaveform(data):
        result = json.loads(recognizer.Result())
        print("You:", result["text"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;stream.read(4096, exception_on_overflow=False):&lt;/strong&gt;&lt;br&gt;
Reads 4096 frames of audio from the stream. The exception_on_overflow=False parameter prevents the program from raising an exception in case of overflow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;recognizer.AcceptWaveform(data):&lt;/strong&gt; Sends the audio data to the speech recognizer. If the recognizer detects a complete phrase, it returns True.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;json.loads(recognizer.Result()):&lt;/strong&gt; Converts the speech recognition result (which is in JSON format) into a Python dictionary.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
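To make that last step concrete, here is a minimal sketch of the parsing on its own. The exact JSON fields depend on the model and recognizer settings, but a final result always carries a "text" key with the recognized phrase:

```python
import json

# Illustrative only: recognizer.Result() returns a JSON string
# whose "text" field holds the recognized phrase.
raw_result = '{"text": "hello world"}'

result = json.loads(raw_result)  # JSON string -> Python dict
print("You:", result["text"])    # prints: You: hello world
```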

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;VOSK is a powerful and efficient tool for real-time speech recognition, supporting multiple languages and running seamlessly on low-performance devices like the Raspberry Pi. &lt;br&gt;
Its offline capability makes it ideal for applications where internet access is limited. &lt;br&gt;
With this guide, you’ve learned how to set up VOSK, configure audio capture, and implement a basic speech recognition system. &lt;br&gt;
Whether for voice-controlled apps, transcription tools, or language learning, VOSK offers a simple yet robust solution for integrating speech recognition into your projects.&lt;/p&gt;




&lt;p&gt;Thanks for reading 🙃&lt;/p&gt;

</description>
      <category>vosk</category>
      <category>python</category>
      <category>ai</category>
      <category>speechrecognition</category>
    </item>
    <item>
      <title>Run DeepSeek R1 Locally with Ollama and Python</title>
      <dc:creator>Mateus Valentim</dc:creator>
      <pubDate>Thu, 30 Jan 2025 04:36:49 +0000</pubDate>
      <link>https://dev.to/mattsu/run-deepseek-r1-locally-with-ollama-and-python-44b6</link>
      <guid>https://dev.to/mattsu/run-deepseek-r1-locally-with-ollama-and-python-44b6</guid>
      <description>&lt;p&gt;In recent years, artificial intelligence and machine learning have revolutionized how we approach complex problems across various domains, from natural language processing to computer vision. In this context, tools like &lt;a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf" rel="noopener noreferrer"&gt;DeepSeek R1&lt;/a&gt; have stood out for their ability to deliver efficient and high-performance solutions to challenging tasks that other models struggle with.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxfp8w3j9e0224o0d71n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxfp8w3j9e0224o0d71n.png" alt="DeepSeek R1 Comparison" width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What is DeepSeek R1?&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;DeepSeek R1&lt;/strong&gt; is an artificial intelligence platform designed to simplify the implementation of deep learning models in real-world applications. It offers a variety of features, such as real-time data processing, custom model training, and efficiency—all wrapped in open-source code.&lt;/p&gt;




&lt;h2&gt;Running DeepSeek Locally with Ollama 🦙&lt;/h2&gt;

&lt;p&gt;There are many ways to use DeepSeek, one of which is through an &lt;a href="https://api-docs.deepseek.com/" rel="noopener noreferrer"&gt;API Key&lt;/a&gt;. However, here we will focus on running it locally using &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; and Python.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First, what is Ollama?&lt;/strong&gt; Simply put, Ollama is a tool that makes it easy to use advanced artificial intelligence models, such as those that generate text or answer questions, without requiring complex technical knowledge. It allows anyone to run and interact with these models easily, whether on a personal computer or in the cloud, making AI technology more accessible for everyday tasks or creative projects.&lt;/p&gt;

&lt;p&gt;To install Ollama, you need to visit this website: &lt;a href="https://ollama.com/download" rel="noopener noreferrer"&gt;https://ollama.com/download&lt;/a&gt;, select your operating system, run the installer, and complete the setup. If everything is set up correctly, execute &lt;code&gt;ollama help&lt;/code&gt; in your terminal, and you should see this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Large language model runner

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  stop        Stop a running model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to &lt;strong&gt;download the model&lt;/strong&gt;. On the Ollama website's search page &lt;a href="https://ollama.com/search" rel="noopener noreferrer"&gt;https://ollama.com/search&lt;/a&gt;, you can find many models to use.&lt;/p&gt;

&lt;p&gt;In this post, we will use &lt;strong&gt;DeepSeek R1 models&lt;/strong&gt; &lt;a href="https://ollama.com/library/deepseek-r1" rel="noopener noreferrer"&gt;https://ollama.com/library/deepseek-r1&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DeepSeek-R1-Distill-Qwen-1.5B
ollama run deepseek-r1:1.5b

DeepSeek-R1-Distill-Qwen-7B
ollama run deepseek-r1:7b

DeepSeek-R1-Distill-Llama-8B
ollama run deepseek-r1:8b

DeepSeek-R1-Distill-Qwen-14B
ollama run deepseek-r1:14b

DeepSeek-R1-Distill-Qwen-32B
ollama run deepseek-r1:32b

DeepSeek-R1-Distill-Llama-70B
ollama run deepseek-r1:70b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each model differs in terms of its parameters. Essentially, the parameters of an AI model are like "internal settings" that it adjusts to learn from data and make predictions. The more parameters a model has, the more complex and capable it can be, but it also requires more processing power.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Computational power is very important when choosing a model. The higher the number of parameters, the more computational power your machine will need.&lt;/p&gt;
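As a rough rule of thumb (an approximation, not an official figure), the memory needed just for the weights is the parameter count times the bytes per parameter. Ollama models are typically quantized to around 4 bits per parameter:

```python
def approx_weights_gb(params_billions, bits_per_param=4):
    """Rough size of the model weights alone, ignoring runtime overhead."""
    bytes_per_param = bits_per_param / 8
    return params_billions * bytes_per_param  # billions of bytes ~= GB

# deepseek-r1:7b at ~4-bit quantization: about 3.5 GB of weights
print(f"{approx_weights_gb(7):.1f} GB")  # prints: 3.5 GB
```

In practice you also need headroom for activations and context, so treat this as a lower bound when picking a model for your machine.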

&lt;p&gt;After downloading the model, your chat will show this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; Send a message (/? for help)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you have a local AI on your computer.&lt;/p&gt;




&lt;h2&gt;Ollama and Python 🐍&lt;/h2&gt;

&lt;p&gt;To connect Python and Ollama with DeepSeek, first, you need to create your Python file and install the Ollama package using this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I created the following code as an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import ollama

model = "deepseek-r1:7b"

while True:
    prompt = input("You: ")
    answer = ollama.generate(model=model, prompt=prompt)
    print("DeepSeek:", answer["response"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Import the package using &lt;code&gt;import ollama&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Define your model using &lt;code&gt;model = "deepseek-r1:7b"&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Start a loop with &lt;code&gt;while True:&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Capture user input using &lt;code&gt;prompt = input("You: ")&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Generate a response from the model using  &lt;code&gt;answer = ollama.generate(model=model, prompt=prompt)&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Print the response using &lt;code&gt;print("DeepSeek:", answer["response"])&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
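One thing worth knowing: DeepSeek R1 models include their chain-of-thought reasoning in the response, wrapped in think tags. If you only want the final answer, a small helper (a sketch using a simple regex, not part of the Ollama API) can strip it out before printing:

```python
import re

def strip_thinking(response_text):
    """Drop the model's <think>...</think> reasoning block, keeping the answer."""
    return re.sub(r"<think>.*?</think>", "", response_text, flags=re.DOTALL).strip()

demo = "<think>The user said hello.</think>Hello! How can I help you?"
print("DeepSeek:", strip_thinking(demo))
# prints: DeepSeek: Hello! How can I help you?
```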

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;DeepSeek R1, combined with Ollama, offers a powerful yet accessible way to run advanced AI models locally on your machine. This setup allows you to experiment with AI technology without the need for extensive infrastructure or cloud resources.&lt;/p&gt;

&lt;p&gt;By following the steps outlined in this article, you can easily install Ollama, download DeepSeek R1 models, and integrate them with Python for seamless interaction. This approach not only simplifies the deployment process but also opens up opportunities for customization and experimentation in your AI projects.&lt;/p&gt;




&lt;p&gt;Thanks for reading 🙃&lt;/p&gt;

</description>
      <category>deepseek</category>
      <category>python</category>
      <category>ollama</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
