<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Miguel Ángel Cabrera Miñagorri</title>
    <description>The latest articles on DEV Community by Miguel Ángel Cabrera Miñagorri (@miguelaeh).</description>
    <link>https://dev.to/miguelaeh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F263488%2Fdaace1e3-a8bc-4707-95c0-13c6f0eec4b3.jpeg</url>
      <title>DEV Community: Miguel Ángel Cabrera Miñagorri</title>
      <link>https://dev.to/miguelaeh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/miguelaeh"/>
    <language>en</language>
    <item>
      <title>Stop prompting AI coding agents with screenshots!</title>
      <dc:creator>Miguel Ángel Cabrera Miñagorri</dc:creator>
      <pubDate>Wed, 12 Nov 2025 19:49:07 +0000</pubDate>
      <link>https://dev.to/miguelaeh/stop-prompting-ai-coding-agents-with-screenshots-2cgm</link>
      <guid>https://dev.to/miguelaeh/stop-prompting-ai-coding-agents-with-screenshots-2cgm</guid>
      <description>&lt;p&gt;Coding agents are awesome. You write your prompt and watch the magic happen. Many times, you can even skip the "watching" part and just do something else while the code updates.&lt;/p&gt;

&lt;p&gt;However, there is still some &lt;strong&gt;friction in how you prompt the agent&lt;/strong&gt;, especially when it comes to &lt;strong&gt;UI/UX bugs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I see people taking screenshots, drawing on them, then uploading those to the AI and writing a prompt along with it. It gets harder when you need to explain the user's &lt;strong&gt;behavior that led to an error&lt;/strong&gt;: you have to take multiple screenshots, write down what happened and what you expected, and sometimes the UI changes too fast to even capture it.&lt;br&gt;
And after all that work, often &lt;strong&gt;the agent still doesn't understand the issue&lt;/strong&gt;. It's just &lt;strong&gt;frustrating&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The good news is, it &lt;strong&gt;doesn't have to be like that&lt;/strong&gt;! For years, we have had a way to explain and show bugs easily: &lt;strong&gt;screen recordings&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It's so easy to &lt;strong&gt;just record your screen, explain what's wrong&lt;/strong&gt;, and send it to a colleague. Why isn't it the same for AI?&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://nitpicks.ai" rel="noopener noreferrer"&gt;Nitpicks&lt;/a&gt;, we are tackling that friction. Without leaving your product page, &lt;strong&gt;click a button, record your screen showing a bug, and see the fix flow to your GitHub repository&lt;/strong&gt; automatically on a pull request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://nitpicks.ai" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Give Nitpicks a try today&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>resources</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Writing AI prompts felt like writing essays</title>
      <dc:creator>Miguel Ángel Cabrera Miñagorri</dc:creator>
      <pubDate>Sun, 10 Aug 2025 15:58:19 +0000</pubDate>
      <link>https://dev.to/miguelaeh/writing-ai-prompts-felt-like-writing-essays-3h6n</link>
      <guid>https://dev.to/miguelaeh/writing-ai-prompts-felt-like-writing-essays-3h6n</guid>
      <description>&lt;p&gt;I’m a heavy user of AI coding tools, but for certain tasks — especially those involving visual changes — my prompts started getting absurdly long.&lt;/p&gt;

&lt;p&gt;At first, I was explaining everything in text. Then I began attaching screenshots. Eventually, I even saw people editing images with arrows to explain what they meant.&lt;/p&gt;

&lt;p&gt;One day, I thought: &lt;strong&gt;how could I just show the thing instead of describing it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So I started experimenting with video. At first, it was just for fun — a way to respond to those “Hey, here is a nitpick” PM videos. I didn’t expect much.&lt;/p&gt;

&lt;p&gt;But at some point, I figured out a way to make the AI understand these videos incredibly well, often better than it did with my carefully crafted text prompts with images.&lt;/p&gt;

&lt;p&gt;That’s when I built a small tool — &lt;a href="https://nitpicks.ai" rel="noopener noreferrer"&gt;Nitpicks&lt;/a&gt; — that lets you record your screen (via a Chrome extension) while explaining a bug, an improvement, or a new feature you want to implement, and automatically get a GitHub pull request with the code changes.&lt;/p&gt;

&lt;p&gt;It’s been especially helpful in product teams where not everyone codes — people can just show what they mean, and the fix appears in a few minutes.&lt;/p&gt;

&lt;p&gt;I’m still refining it, and I’d love to hear your thoughts. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://nitpicks.ai" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Use Nitpicks at no cost&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>ai</category>
      <category>vibecoding</category>
      <category>webdev</category>
      <category>react</category>
    </item>
    <item>
      <title>Strategies to monetize AI apps and agents</title>
      <dc:creator>Miguel Ángel Cabrera Miñagorri</dc:creator>
      <pubDate>Mon, 24 Mar 2025 13:27:02 +0000</pubDate>
      <link>https://dev.to/miguelaeh/strategies-to-monetize-ai-apps-and-agents-2bpd</link>
      <guid>https://dev.to/miguelaeh/strategies-to-monetize-ai-apps-and-agents-2bpd</guid>
      <description>&lt;p&gt;I am a developer who loves to build new stuff. I have built a huge amount of free and open-source apps and tools. For the past couple of years, I focused on building many AI tools and agents.&lt;/p&gt;

&lt;h2&gt;
  
  
  Traditional apps vs AI apps/agents monetization strategies
&lt;/h2&gt;

&lt;p&gt;There is a fundamental difference between traditional apps and the new AI apps and agents.&lt;/p&gt;

&lt;p&gt;While the infrastructure costs for traditional applications are almost negligible - which means you can offer free tiers or even a free application - when we add AI, even a small number of users may break the bank due to the inference costs.&lt;/p&gt;

&lt;p&gt;As developers, we are forced to add paywalls to the AI apps and agents so that we can, at least, cover the inference costs. There are two main approaches:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Credit system&lt;/strong&gt;: selling users certain amounts of credits that are consumed as they use our app. This is the best alternative if you are building AI agents because you never know how much computation the user will actually spend (see the sketch below for what this bookkeeping can look like).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Subscription&lt;/strong&gt;: users pay a fixed amount every month. This is a high-risk setup for the developer because a user's consumption may exceed what their subscription covers. With traditional applications, subscriptions could come in different tiers, so that a user paying for "tier 1" had access to certain features while those paying for "tier 2" got access to extra ones. The limitation with AI apps is different, because you need to limit a feature the user is already paying for, which is not well received by paying users.&lt;/p&gt;

&lt;p&gt;There is a third approach that is becoming popular, despite the security issues it implies for users, which is &lt;strong&gt;asking for an AI provider API key&lt;/strong&gt;. I highly discourage this method for two reasons: 1. Non-technical users have no idea what an API key is and will mismanage it. 2. It goes against most providers' ToS.&lt;/p&gt;
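
&lt;p&gt;To make the credit approach concrete, here is a minimal sketch, in plain Python, of the bookkeeping involved: converting token usage into credits and refusing requests when the balance runs out. The rate, storage, and names are made up for the example; a real app would persist balances in a database and handle payments through its billing provider.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative credit bookkeeping; the rate and storage are made up for this example.
CREDITS_PER_1K_TOKENS = 2          # example conversion rate from tokens to credits
balances = {"user_123": 500}       # in a real app this would live in your database

def charge_for_request(user_id, tokens_used):
    """Deduct credits for one inference call and return the remaining balance."""
    cost = max(1, round(tokens_used / 1000 * CREDITS_PER_1K_TOKENS))
    remaining = balances.get(user_id, 0) - cost
    if remaining &amp;lt; 0:
        raise RuntimeError("Not enough credits, ask the user to top up")
    balances[user_id] = remaining
    return remaining

charge_for_request("user_123", 3200)   # an LLM call that consumed 3200 tokens
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;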

&lt;h2&gt;
  
  
  Monetizing AI apps without paywalls
&lt;/h2&gt;

&lt;p&gt;The ideal solution from the developer's point of view is that each user automatically pays for their own inference, similar to how you plug appliances into the wall socket: you pay your own electricity bill and connect whatever you want.&lt;br&gt;
Developers also want to monetize their apps, so letting them charge per run or add a markup on that spending would allow them to monetize from the first usage.&lt;/p&gt;

&lt;p&gt;For users, this should happen automatically, without having to go through a paywall for each app, and they should have full control and visibility over how much they are spending and where their information is going.&lt;/p&gt;
&lt;h2&gt;
  
  
  The solution we built
&lt;/h2&gt;

&lt;p&gt;We built &lt;a href="https://www.brainlink.dev/developers" rel="noopener noreferrer"&gt;BrainLink&lt;/a&gt; with the above in mind, to help fellow developers get more users and monetize their AI apps from the first usage.&lt;/p&gt;

&lt;p&gt;Every user has a brain (an account) that can link to any application supporting BrainLink with a single click. The brain &lt;strong&gt;provides the application with identity, inference costs automatically covered by the user&lt;/strong&gt; (using any of the 180+ supported models), &lt;strong&gt;and monetization&lt;/strong&gt; (either adding a markup fee or a per-run price), among other features we are working on.&lt;/p&gt;

&lt;p&gt;A user can access all the applications using just this brain, which significantly reduces the friction associated with signup forms and paywalls on individual apps.&lt;/p&gt;

&lt;p&gt;I would love to get your feedback on BrainLink, so please feel free to comment or reach out!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.brainlink.dev/developers" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;BrainLink website&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tooling</category>
      <category>programming</category>
      <category>startup</category>
    </item>
    <item>
      <title>Stop asking users for their AI API keys</title>
      <dc:creator>Miguel Ángel Cabrera Miñagorri</dc:creator>
      <pubDate>Wed, 12 Feb 2025 16:56:00 +0000</pubDate>
      <link>https://dev.to/miguelaeh/stop-asking-users-for-their-ai-api-keys-2583</link>
      <guid>https://dev.to/miguelaeh/stop-asking-users-for-their-ai-api-keys-2583</guid>
      <description>&lt;p&gt;It is becoming a trend within the AI space to ask users to bring their own provider API key (OpenAI, Anthropic, DeepSeek, etc.), especially among indie developers.&lt;/p&gt;

&lt;p&gt;The main purpose is to avoid paying for the inference costs of the users, which are unpredictable and expensive.&lt;/p&gt;

&lt;p&gt;However, asking users for their keys not only creates a ton of friction in your application, it also increases security risks and goes against most providers' policies.&lt;/p&gt;

&lt;p&gt;To solve all these problems, I built &lt;a href="https://www.brainlink.dev" rel="noopener noreferrer"&gt;BrainLink&lt;/a&gt;, which provides users with a global account that they can connect to your application with a single click.&lt;br&gt;
After a &lt;strong&gt;user links their account with your application&lt;/strong&gt;, you can obtain an &lt;strong&gt;access token to perform inference on behalf of the user&lt;/strong&gt;, so that they pay exactly for what they consume within your application.&lt;/p&gt;
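
&lt;p&gt;As a rough illustration of that flow (this is not BrainLink's documented API; the endpoint URL, token handling, and model name below are placeholders I made up for the example), the server-side code could look something like this: once the user has linked their account, you hold an access token for them and attach it to an OpenAI-style chat request, so the inference is billed to the user rather than to you.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests

# Placeholder values: the real BrainLink endpoint, token exchange and model
# names may differ; check the official docs before integrating.
BRAINLINK_API_BASE = "https://example-brainlink-endpoint/v1"
user_access_token = "token-obtained-after-the-user-linked-their-account"

def ask_on_behalf_of_user(prompt):
    """Run inference that is paid by the linked user, not by the app developer."""
    response = requests.post(
        f"{BRAINLINK_API_BASE}/chat/completions",
        headers={"Authorization": f"Bearer {user_access_token}"},
        json={"model": "any-supported-model",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;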

&lt;p&gt;BrainLink also &lt;strong&gt;increases the flexibility of your code&lt;/strong&gt;, since you are no longer tied to the user's AI provider. You can use any model from any provider and even combine them for different features without needing multiple keys from your users.&lt;/p&gt;

&lt;p&gt;I would love to help you integrate BrainLink if you are interested. It takes &lt;strong&gt;just 5 minutes&lt;/strong&gt;. Feel free to write me at &lt;a href="mailto:miguel@brainlink.dev"&gt;miguel@brainlink.dev&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.brainlink.dev" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Integrate BrainLink in 5 minutes&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>react</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Running AI locally in your users' browsers</title>
      <dc:creator>Miguel Ángel Cabrera Miñagorri</dc:creator>
      <pubDate>Fri, 25 Oct 2024 11:00:02 +0000</pubDate>
      <link>https://dev.to/miguelaeh/running-ai-locally-in-your-users-browsers-2b4e</link>
      <guid>https://dev.to/miguelaeh/running-ai-locally-in-your-users-browsers-2b4e</guid>
      <description>&lt;p&gt;We all know how great AI is, however, there are still two major problems: &lt;strong&gt;data privacy and cost&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;All the applications using AI right now are connected to cloud APIs. These APIs log prompts and contexts and in some cases they use that data to train models. That means that any sensitive data you include on them is potentially exposed.&lt;/p&gt;

&lt;p&gt;Most web applications integrate AI features using the following schema:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66l5ymaie7sk4ehljsyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66l5ymaie7sk4ehljsyd.png" alt="Schema of AI integration in web applications" width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The problem here is that the application servers need to send the user data to the AI API, which is a third-party service, and we cannot really know what happens to the user data there.&lt;/p&gt;
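
&lt;p&gt;In code, that typical schema is just the application backend forwarding the user's text to a hosted inference API. Here is a minimal sketch using the OpenAI chat completions endpoint (assuming an API key in the environment); the point is that whatever the user typed leaves your infrastructure:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import requests

def summarize_in_the_cloud(user_text):
    """Typical integration: the user's data is sent to a third-party API."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": f"Summarize: {user_text}"}],
        },
        timeout=60,
    )
    response.raise_for_status()
    # At this point the prompt, and any sensitive data in it, has already left your servers.
    return response.json()["choices"][0]["message"]["content"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;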

&lt;p&gt;But why don't we just &lt;strong&gt;process AI on the user's device instead of in the cloud&lt;/strong&gt;? I have been testing it for a few weeks with amazing results. I found 3 main advantages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user data is never sent to a third-party. It always remains on the user device.&lt;/li&gt;
&lt;li&gt;It's free for the app developer: you don't need to pay for the users' inference, because it happens directly on their devices.&lt;/li&gt;
&lt;li&gt;The scalability is unlimited, as every single new user brings their own computation power.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's take a quick look at how the previous schema changes when we offload the AI computation to the users:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5wp7k0yyf3ydszxykrq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5wp7k0yyf3ydszxykrq.png" alt="Schema of running AI locally on the user's browser" width="630" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's a very simple concept. The user uses the web application as always, but when a task requires AI computation, instead of calling a third-party API we send it to the user, and their device performs that computation in the most secure way: locally.&lt;/p&gt;

&lt;p&gt;This is not just a dream, &lt;strong&gt;it's already fully functional&lt;/strong&gt;, and I created a platform called &lt;a href="https://www.offload.fyi" rel="noopener noreferrer"&gt;&lt;strong&gt;Offload&lt;/strong&gt;&lt;/a&gt; so that everyone can use this architecture easily, by changing just a few lines of code. The SDK handles everything behind the scenes, from downloading a model that fits the user's device, to helping you manage the prompts and evaluate prompt responses locally, sending the evaluation results back to you without exposing the user data. Everything works transparently with a single function invocation.&lt;/p&gt;

&lt;p&gt;I am looking for web developers that may benefit from this, even if it is just for hobby projects, so if you like this approach, &lt;strong&gt;ping me&lt;/strong&gt;! I would love to help you set it up in your application, and you will see that it is actually really simple to migrate within minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.offload.fyi" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Integrate Offload in your application&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>privacy</category>
    </item>
    <item>
      <title>Offload - A unified javascript SDK that enables in-browser AI</title>
      <dc:creator>Miguel Ángel Cabrera Miñagorri</dc:creator>
      <pubDate>Tue, 08 Oct 2024 10:29:20 +0000</pubDate>
      <link>https://dev.to/miguelaeh/offload-a-unified-javascript-sdk-that-enables-in-browser-ai-2aii</link>
      <guid>https://dev.to/miguelaeh/offload-a-unified-javascript-sdk-that-enables-in-browser-ai-2aii</guid>
      <description>&lt;p&gt;Today I want to share &lt;a href="https://www.offload.fyi" rel="noopener noreferrer"&gt;Offload&lt;/a&gt;, a JavaScript SDK to run AI directly in your users' browsers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcrp8hbcfjeo9pt5qano1.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcrp8hbcfjeo9pt5qano1.gif" alt="Offload Widget GIF" width="346" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Offload?
&lt;/h2&gt;

&lt;p&gt;It is an SDK you can use to add AI to your website but with one peculiarity: it allows your users to &lt;strong&gt;run AI tasks locally, keeping their data on their devices&lt;/strong&gt;, avoiding the need to send it to a third-party inference API.&lt;/p&gt;

&lt;p&gt;Additionally, it &lt;strong&gt;decreases your costs&lt;/strong&gt; and helps your application scale inexpensively. The more inference is offloaded to the users' devices, the fewer resources you need to allocate or spend on third-party APIs.&lt;/p&gt;

&lt;p&gt;If you are an application developer, integrating Offload will only improve your application, as it will continue to work as usual while offering your users the ability to process their data locally, without any effort on your part. &lt;/p&gt;

&lt;h2&gt;
  
  
  Offload features
&lt;/h2&gt;

&lt;p&gt;You can integrate Offload as a direct replacement for whatever SDK you are using right now, just by changing your inference function calls.&lt;/p&gt;

&lt;p&gt;Offload serves &lt;strong&gt;models of different sizes to your users automatically&lt;/strong&gt;, depending on the device and its resources. If the user's device does not have enough resources, Offload will not show that user the option to process the data locally and will fall back to whatever API you specify via the dashboard.&lt;/p&gt;

&lt;p&gt;In the dashboard, you can configure and manage the prompts, customize and test them for the different models, and get &lt;strong&gt;analytics&lt;/strong&gt; from the users, and more. Everything without exposing your users' data to any third party, as everything is processed on-device.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Offload supports generating text responses, enforcing structured data objects via JSON schemas, streaming the text response, and more.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If there's anything else we do not support that you'd like to see, please leave a comment!&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is Offload important?
&lt;/h2&gt;

&lt;p&gt;I believe local AI is the future. However, as AI continues to advance, I am increasingly concerned about how our data is processed.&lt;/p&gt;

&lt;p&gt;Every application that implements an AI feature today uses a remote API, where it sends the users' data. Most of these applications use public APIs such as OpenAI, Anthropic, and others. The flow is simple: the application collects the user data and sends it along with the prompt to the remote API, which replies with the generated text or image. &lt;/p&gt;

&lt;p&gt;The big problem with this approach is that when you give an application access to a document (or a photo, a video, or any other piece of data), it sends that document, including any sensitive information it contains, to a remote API. The remote API likely records the prompts, uses the data to train new models, or sells your data for other purposes.&lt;/p&gt;

&lt;p&gt;I think the data privacy problem is even worse now that we have LLMs. LLMs allow indexing huge amounts of unstructured information in new ways that weren't possible before, and this increases the danger of exposing any personal piece of information.&lt;/p&gt;

&lt;p&gt;For example, let's say you have a diary. It likely includes where you live, your schedules, who your friends are, where you work, maybe how much you earn, and much more. Even if not written directly, it can probably be inferred from the diary's content. Up until now, to infer that information, someone would need to read it entirely. However, with LLMs, one could gain enough data to impersonate you in seconds.&lt;/p&gt;

&lt;p&gt;By using an app to chat with your diary, you are potentially exposing your information, as it is sent to some API.&lt;br&gt;
On the other hand, if such an application uses Offload, you can use it securely since your data doesn't leave your device, and thus, it cannot be exposed.&lt;/p&gt;

&lt;p&gt;This is especially important in industries that work with highly sensitive data, such as healthcare, legal, document processing apps, personal assistants, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.offload.fyi" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Integrate Offload in your application today!&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>news</category>
      <category>machinelearning</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>Offloading AI inference to your users' devices</title>
      <dc:creator>Miguel Ángel Cabrera Miñagorri</dc:creator>
      <pubDate>Thu, 12 Sep 2024 17:32:18 +0000</pubDate>
      <link>https://dev.to/miguelaeh/offloading-ai-inference-to-your-users-devices-30nb</link>
      <guid>https://dev.to/miguelaeh/offloading-ai-inference-to-your-users-devices-30nb</guid>
      <description>&lt;p&gt;Integrating LLMs into existing web applications is becoming the norm. Also, there are more and more AI-native companies. These build autonomous agents that put the LLM at the center and give it tools to perform actions on different systems.&lt;/p&gt;

&lt;p&gt;In this post I will present a new project called &lt;a href="https://offload.fyi" rel="noopener noreferrer"&gt;Offload&lt;/a&gt;, which allows you to move all that processing to the user devices, increasing their data privacy and reducing the inference costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 2 problems
&lt;/h2&gt;

&lt;p&gt;There are two big concerns when integrating AI into an application: &lt;strong&gt;Cost&lt;/strong&gt; and &lt;strong&gt;user data privacy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Cost.&lt;/strong&gt; The typical way to connect an LLM is to use a third-party API, like OpenAI, Anthropic, or one of the many alternatives in the market. These APIs are very practical: with just an HTTP request you can easily integrate an LLM into your application. However, they are expensive at scale. Providers are putting big efforts into reducing the cost, but if you make many API calls per user per day, the bill becomes huge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. User data privacy.&lt;/strong&gt; Using third-party APIs for inference is not the best alternative if you work with sensitive user data. These APIs often use the data you send to continue training their models, which can expose your confidential data. Also, the data could become visible at some level when it reaches the third-party API provider (for example, in a logging system). This is not just a problem for companies, but also for consumers who may not want to send their data to those API providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Addressing them
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://offload.fyi" rel="noopener noreferrer"&gt;Offload&lt;/a&gt; addresses both problems at once. The application "invokes" the LLM via an SDK that behind the scenes runs the model directly on each user device instead of calling a third-party API. This saves money on the inference bill because you do not need to pay for API usage and maintain the user data within each user device, not needing to send it to any API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://offload.fyi" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;If this is of your interest and want to remain in the loop, check out the Offload website here&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>ai</category>
      <category>api</category>
      <category>opensource</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Automating your home with computer vision using any camera</title>
      <dc:creator>Miguel Ángel Cabrera Miñagorri</dc:creator>
      <pubDate>Tue, 14 May 2024 07:51:54 +0000</pubDate>
      <link>https://dev.to/miguelaeh/automating-your-home-with-computer-vision-using-any-camera-59f</link>
      <guid>https://dev.to/miguelaeh/automating-your-home-with-computer-vision-using-any-camera-59f</guid>
      <description>&lt;p&gt;We have more and more devices at home. They can perform several actions, but they need some intelligence to do that.&lt;/p&gt;

&lt;p&gt;Home Assistant allows us to automate tasks in our home by providing a hub where you can connect devices and give them instructions. It also supports some "if this, then do that" logic; however, we felt like a more powerful brain was still missing.&lt;/p&gt;

&lt;p&gt;During the last week, I created a small &lt;a href="https://agents.pipeless.ai"&gt;Pipeless Agents&lt;/a&gt; integration for Home Assistant, which allows you to automate your home using computer vision by connecting your existing cameras, with no need to replace them.&lt;/p&gt;

&lt;p&gt;Until now, we could perform basic actions thanks to motion sensors and basic people recognition, but with this approach, we can allow our home to make its own decisions on what to do.&lt;/p&gt;

&lt;p&gt;I created a tutorial so you can understand the basics. In it, you will learn how to connect Home Assistant with Pipeless Agents, set up a project, add your camera streams, and implement your custom logic and video filters.&lt;br&gt;
We will deploy a simple example that turns off our TV when the people watching it leave the scene. You can continue playing around with that basic implementation and create more complex workflows and applications.&lt;/p&gt;

&lt;p&gt;You don't need to know anything about computer vision, since your code receives a structured data stream; you just need very basic Python knowledge. The sketch below gives an idea of what such a handler can look like.&lt;/p&gt;
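
&lt;p&gt;This is only an illustration, not the exact code from the tutorial: the Home Assistant part uses its standard REST API (a long-lived access token and a POST to a service endpoint), while the shape of the detection payload coming from Pipeless Agents is assumed here and may differ from what the integration actually delivers.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests

HA_URL = "http://homeassistant.local:8123"    # your Home Assistant instance
HA_TOKEN = "your-long-lived-access-token"     # created from your Home Assistant profile

def handle_detections(payload):
    """Turn the TV off when no person is detected by the living room camera.
    The payload structure is assumed for this example."""
    labels = [d.get("label") for d in payload.get("detections", [])]
    if "person" not in labels:
        requests.post(
            f"{HA_URL}/api/services/media_player/turn_off",
            headers={"Authorization": f"Bearer {HA_TOKEN}"},
            json={"entity_id": "media_player.living_room_tv"},
            timeout=10,
        )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;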

&lt;p&gt;The following is the complete step-by-step tutorial: &lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/4xVDB5DIC7k"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>python</category>
      <category>machinelearning</category>
      <category>tutorial</category>
      <category>ai</category>
    </item>
    <item>
      <title>No-code Real-time Object Detection without training models</title>
      <dc:creator>Miguel Ángel Cabrera Miñagorri</dc:creator>
      <pubDate>Thu, 02 May 2024 10:33:39 +0000</pubDate>
      <link>https://dev.to/miguelaeh/no-code-real-time-object-detection-without-training-models-59b0</link>
      <guid>https://dev.to/miguelaeh/no-code-real-time-object-detection-without-training-models-59b0</guid>
      <description>&lt;p&gt;I am so happy to share this new feature of &lt;a href="https://agents.pipeless.ai"&gt;Pipeless Agents&lt;/a&gt; that allows you to export object detection models without training them. Just specify what you want to detect and your model will be ready in a few seconds!&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/-hTiUD_6f5U"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>computervision</category>
      <category>ai</category>
      <category>programming</category>
      <category>python</category>
    </item>
    <item>
      <title>Vision AI agents for any task</title>
      <dc:creator>Miguel Ángel Cabrera Miñagorri</dc:creator>
      <pubDate>Tue, 30 Apr 2024 09:52:57 +0000</pubDate>
      <link>https://dev.to/miguelaeh/vision-ai-agents-for-any-task-3f40</link>
      <guid>https://dev.to/miguelaeh/vision-ai-agents-for-any-task-3f40</guid>
      <description>&lt;p&gt;After spending some months working on the &lt;a href="https://github.com/pipeless-ai/pipeless"&gt;Pipeless open-source framework&lt;/a&gt;, today I bring something new and really cool: &lt;a href="https://agents.pipeless.ai"&gt;Pipeless Agents&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F273v7d3qbg22wh1p8z15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F273v7d3qbg22wh1p8z15.png" alt="Pipeless Agents Annoucement" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Imagine providing a video source and directly processing a data stream that represents what is happening on the video, just like when you work with normal data. Each payload represents an event on the video, an object, or whatever you are interested in.&lt;/p&gt;

&lt;p&gt;Sounds good, right? Well, it is now possible.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://agents.pipeless.ai"&gt;Pipeless Agents&lt;/a&gt; you can &lt;strong&gt;create any kind of automation based on real-time video inputs&lt;/strong&gt;. You do not need infrastructure, you do not need to label data or train models. You connect a git repository with your agent logic and the rest is handled for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, what does that agent logic look like?
&lt;/h2&gt;

&lt;p&gt;It is just a script that processes the data extracted from your video sources, like the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3xmgsj00om0dh4t9oph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3xmgsj00om0dh4t9oph.png" alt="Pipeless Agents agent code example" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, &lt;strong&gt;with just two lines of code&lt;/strong&gt; you can define the agent. The first imports the SDK and the second is a for loop that runs for every data structure extracted from the video. Inside the loop, you can do whatever you want: send emails or Slack notifications, call webhooks, stop a production line, analyze and store the data in a database, ... There are no restrictions; &lt;strong&gt;the only limit is your imagination&lt;/strong&gt;! &lt;/p&gt;
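
&lt;p&gt;In case the screenshot above is hard to read, the script looks roughly like the sketch below. The import path and iterator name are assumptions for illustration and may not match the real SDK exactly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch of an agent script; the SDK module and iterator names are assumed here.
from pipeless_agents_sdk.cloud import data_stream   # assumed import path

for payload in data_stream:
    # One payload per event/object extracted from the video by your filters.
    # Here you could send an email, post to Slack, call a webhook, store it in a DB, ...
    print(payload)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;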

&lt;h2&gt;
  
  
  But, how does the agent know the kind of data/events you want?
&lt;/h2&gt;

&lt;p&gt;We use some filters for that. Every filter focuses on exporting specific data or detecting a specific event. When you connect your video sources you also specify the list of filters that you want to apply to the video and every filter produces a well-defined data structure, which is what your agent receives.&lt;/p&gt;

&lt;p&gt;Right now, we are providing some pre-defined filters such as object detection, but we are working to allow you to define your custom filters. Let us know if there is some specific filter you would like to see!&lt;/p&gt;

&lt;h2&gt;
  
  
  How do you get started?
&lt;/h2&gt;

&lt;p&gt;Just go to &lt;a href="https://agents.pipeless.ai"&gt;https://agents.pipeless.ai&lt;/a&gt; and create your first agent!&lt;br&gt;
Also, don’t forget to send us your feedback; we love hearing your thoughts!&lt;/p&gt;

&lt;p&gt;Hope you enjoy it!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>python</category>
      <category>computervision</category>
    </item>
    <item>
      <title>Computer vision at the edge with Nvidia Jetson in 2 commands</title>
      <dc:creator>Miguel Ángel Cabrera Miñagorri</dc:creator>
      <pubDate>Wed, 24 Jan 2024 10:10:44 +0000</pubDate>
      <link>https://dev.to/miguelaeh/computer-vision-at-the-edge-with-nvidia-jetson-in-2-commands-1kg5</link>
      <guid>https://dev.to/miguelaeh/computer-vision-at-the-edge-with-nvidia-jetson-in-2-commands-1kg5</guid>
      <description>&lt;p&gt;A few days ago I explained the benefits of using the &lt;a href="https://www.pipeless.ai/blog/a-computer-vision-app-in-minutes/Creating%20a%20computer%20vision%20app%20in%20minutes%20with%20just%20two%20Python%20functions"&gt;Pipeless computer vision framework&lt;/a&gt; to develop and deploy your applications. Among other advantages, you get &lt;strong&gt;multi-stream processing&lt;/strong&gt; and dynamic configuration out-of-the-box. This means you can &lt;strong&gt;add, edit and remove streams on the fly, without restarting your program&lt;/strong&gt;, as well as specify how those streams should be processed at the time of adding the stream.&lt;br&gt;
In this post I will guide you through the list of commands that you need to deploy a Pipeless application to a Nvidia Jetson device. This example has been tested on a Nvidia Jetson Xavier, but it should work with other models too.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vea53d81kbjxzj48q2q.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vea53d81kbjxzj48q2q.jpeg" alt="Nvidia Jetson image - Pipeless computer vision framework" width="258" height="195"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Walkthrough
&lt;/h2&gt;

&lt;p&gt;First, install Pipeless on the Jetson device. Connect to the device via ssh and run the following command. Note it will show some env vars at the end that you need to export:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://raw.githubusercontent.com/pipeless-ai/pipeless/main/install.sh | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then, the only other piece we need is to add our Pipeless stages. In this case, we will use the &lt;a href="https://www.pipeless.ai/docs/docs/v1/examples/yolov8"&gt;YOLOv8 example&lt;/a&gt;. You can learn more about Pipeless stages &lt;a href="https://www.pipeless.ai/docs/docs/v1/getting-started"&gt;here&lt;/a&gt;, but in short, a stage is like a micro-pipeline. You can plug several stages one after the other dynamically when providing streams to Pipeless, so you can modify the processing behaviour per stream without changing your code and without restarting your application.&lt;/p&gt;

&lt;p&gt;Let’s install some dependencies:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install opencv-python numpy ultralytics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Create the new project folder and download the YOLOv8 stage functions:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeless init my-project --template empty # Using the empty template we avoid the interactive shell
cd my-project
wget -O - https://github.com/pipeless-ai/pipeless/archive/main.tar.gz | tar -xz --strip=2 "pipeless-main/examples/yolo"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can now start Pipeless:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeless start --stages-dir .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And provide a stream as follows:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeless add stream --input-uri "https://pipeless-public.s3.eu-west-3.amazonaws.com/cats.mp4" --output-uri "screen" --frame-path "yolo"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The above command assumes you have a display connected to the Jetson device to visualize the output stream. If you don’t have a display connected you can change the output URI to use a file or some multimedia server you may have.&lt;/p&gt;

&lt;p&gt;And that’s all! Impressive, right?&lt;/p&gt;

&lt;p&gt;You can find more examples in our &lt;a href="https://www.pipeless.ai/docs"&gt;documentation&lt;/a&gt; and learn how to create applications from scratch using Pipeless.&lt;/p&gt;

&lt;p&gt;If you like the ease of creating and deploying computer vision applications with Pipeless &lt;strong&gt;don’t forget to star our GitHub repository&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/pipeless-ai"&gt;
        pipeless-ai
      &lt;/a&gt; / &lt;a href="https://github.com/pipeless-ai/pipeless"&gt;
        pipeless
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      An open-source computer vision framework to build and deploy apps in minutes
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;
  &lt;a href="https://pipeless.ai" rel="nofollow"&gt;
    
      
      &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4TeV4140--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/pipeless-ai/pipeless/main/assets/pipeless-400x400-rounded.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4TeV4140--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/pipeless-ai/pipeless/main/assets/pipeless-400x400-rounded.png" height="128"&gt;&lt;/a&gt;
    
    &lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
Pipeless&lt;/h1&gt;
&lt;p&gt;
  &lt;a href="https://pipeless.ai" rel="nofollow"&gt;
    &lt;img src="https://camo.githubusercontent.com/019f3b4cebadc66d8155c29f2fe0d02a2c6272b46750962f7b77814a83fa1de8/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4d4144452532304259253230506970656c65737325323061692d3030303030302e7376673f7374796c653d666f722d7468652d6261646765266c6f676f3d506970656c657373266c6162656c436f6c6f723d303030"&gt;
  &lt;/a&gt;
  &lt;a href="https://github.com/pipeless-ai/pipeless/releases"&gt;
    &lt;img alt="" src="https://camo.githubusercontent.com/5f0a050bfea5e21732bd2338e5ff18ffcae03c4e117ea2a294cce0a5ba80b8e7/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f762f72656c656173652f706970656c6573732d61692f706970656c6573733f7374796c653d666f722d7468652d6261646765266c6162656c3d6c6174657374266c6162656c436f6c6f723d303030303030"&gt;
  &lt;/a&gt;
  &lt;a href="https://github.com/miguelaeh/pipeless/blob/main/license.md"&gt;
    &lt;img alt="" src="https://camo.githubusercontent.com/492fc004d2bd9fa9ff63c13531ecd7e16031bdfb484ba09328ddcfd6e41aa68d/68747470733a2f2f696d672e736869656c64732e696f2f707970692f6c2f706970656c6573732d61693f7374796c653d666f722d7468652d6261646765266c6162656c436f6c6f723d303030303030"&gt;
  &lt;/a&gt;
  &lt;a href="https://github.com/miguelaeh/pipeless/discussions"&gt;
    &lt;img alt="" src="https://camo.githubusercontent.com/97a877eb30084b0ec4a445be52402617487b497379b00626deb211d7cf15b827/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4a6f696e25323074686525323064697363757373696f6e732d626c61636b2e7376673f7374796c653d666f722d7468652d6261646765266c6f676f3d266c6162656c436f6c6f723d303030303030266c6f676f57696474683d3230"&gt;
  &lt;/a&gt;
  &lt;a href="https://discord.gg/K2qxQ8uedG" rel="nofollow"&gt;
    &lt;img alt="" src="https://camo.githubusercontent.com/7776c21fcd6d57a186f7955409dc09d7668225f49c8455d94ed44df05b31ba5c/68747470733a2f2f696d672e736869656c64732e696f2f646973636f72642f313135363932333632383833313634393837333f7374796c653d666f722d7468652d6261646765266c6f676f3d646973636f7264266c6f676f436f6c6f723d464646464646266c6162656c3d436861742532306f6e253230646973636f7264266c6162656c436f6c6f723d626c61636b"&gt;
  &lt;/a&gt;
&lt;/p&gt;
&lt;div&gt;
   &lt;p&gt;&lt;b&gt;Easily create, deploy and run computer vision applications.&lt;/b&gt;&lt;/p&gt;
   &lt;br&gt;
   &lt;br&gt;
   &lt;div&gt;
      &lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/pipeless-ai/pipeless/main/assets/examples.gif"&gt;&lt;img width="382" alt="Loading video..." src="https://res.cloudinary.com/practicaldev/image/fetch/s--t_hvTAgk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://raw.githubusercontent.com/pipeless-ai/pipeless/main/assets/examples.gif"&gt;&lt;/a&gt;
   &lt;/div&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Pipeless is an open-source framework that takes care of everything you need to develop and deploy computer vision applications in just minutes.&lt;/strong&gt; That includes code parallelization, multimedia pipelines, memory management, model inference, multi-stream management, and more. Pipeless allows you to &lt;strong&gt;ship applications that work in real-time in minutes instead of weeks/months&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Pipeless is inspired by modern serverless technologies. You provide some functions and Pipeless takes care of executing them for new video frames and everything involved.&lt;/p&gt;

&lt;p&gt;With Pipeless you create self-contained boxes that we call "stages". Each stage is a micro pipeline that performs a specific task. Then, you can combine stages dynamically per stream, allowing you to process each stream with a different pipeline without changing your code and without restarting the program. To create a stage you simply provide a pre-process function, a model and a post-process function.&lt;/p&gt;

&lt;p&gt;…&lt;/p&gt;
&lt;/div&gt;


&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/pipeless-ai/pipeless"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


</description>
      <category>computervision</category>
      <category>tutorial</category>
      <category>machinelearning</category>
      <category>ai</category>
    </item>
    <item>
      <title>Creating a computer vision app in minutes with just two Python functions</title>
      <dc:creator>Miguel Ángel Cabrera Miñagorri</dc:creator>
      <pubDate>Tue, 02 Jan 2024 12:10:23 +0000</pubDate>
      <link>https://dev.to/miguelaeh/creating-a-computer-vision-app-in-minutes-with-just-two-python-functions-jmk</link>
      <guid>https://dev.to/miguelaeh/creating-a-computer-vision-app-in-minutes-with-just-two-python-functions-jmk</guid>
      <description>&lt;p&gt;This article starts with an overview of what a typical computer vision application requires. Then, it introduces Pipeless, an open-source framework that offers a serverless development experience for embedded computer vision. Finally, you will find a detailed step-by-step guide on the creation and execution of a simple object detection app with just a couple of Python functions and a model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QkdFHadM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/29e8o3ahm5m1fpmgxfyx.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QkdFHadM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/29e8o3ahm5m1fpmgxfyx.jpeg" alt="Computer vision with Pipeless framework" width="656" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction - Inside a Computer Vision Application
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;The art of identifying visual events via a camera interface and reacting to them&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is what I would answer if someone asked me to describe what computer vision is in one sentence. But it is probably not what you want to hear. So let's dive into how computer vision applications are typically structured and what is required in each subsystem.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Really fast frame processing&lt;/strong&gt;: Note that to process a stream of 60 FPS in real-time, you only have about 16 ms (1000 ms / 60 frames) to process each frame. This is achieved, in part, via multi-threading and multi-processing. In many cases, you want to start processing a frame even before the previous one has finished.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;An AI model&lt;/strong&gt; to run inference on each frame and perform object detection, segmentation, pose estimation, etc: Luckily, there are more and more open-source models that perform pretty well, so we don't have to create our own from scratch, you usually just fine-tune the parameters of a model to match your use case (we will not deep dive into this today).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;An inference runtime&lt;/strong&gt;: The inference runtime takes care of loading the model and running it efficiently on the different available devices (GPUs or CPUs).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A GPU&lt;/strong&gt;: To run the inference using the model fast enough, we require a GPU. This happens because GPUs can handle orders of magnitude more parallel operations than a CPU, and a model at the lowest level is just a huge bunch of mathematical operations. You will need to deal with the memory where the frames are located. They can be in GPU memory or in CPU memory (RAM), and copying frames between the two is a very heavy operation, due to the frame sizes, that will make your processing slow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multimedia pipelines&lt;/strong&gt;: These are the pieces that allow you to take streams from sources, split them into frames, provide them as input to the models, and, sometimes, make modifications and rebuild the stream to forward it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stream management&lt;/strong&gt;: You may want to make the application resistant to interruptions in the stream, re-connections, adding and removing streams dynamically, processing several of them at the same time, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All those systems need to be created or incorporated into your project and thus, it is code that you need to maintain. The problem is that you end up maintaining a huge amount of code that is not specific to your application, but subsystems around the actual case-specific code.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pipeless Framework
&lt;/h2&gt;

&lt;p&gt;To avoid having to build all the above from scratch, you can use &lt;a href="https://www.pipeless.ai"&gt;Pipeless&lt;/a&gt;. It is &lt;strong&gt;an open-source framework for computer vision&lt;/strong&gt; that allows you to provide a few functions specific to your case and it &lt;strong&gt;takes care of everything&lt;/strong&gt; else.&lt;/p&gt;

&lt;p&gt;Pipeless splits the application's logic into "stages," where &lt;strong&gt;a stage is like a micro app for a single model&lt;/strong&gt;. A stage can include pre-processing, running inference with the pre-processed input, and post-processing the model output to take any action. Then, you can chain as many stages as you want to compose the full application even with several models.&lt;/p&gt;

&lt;p&gt;To provide the logic of each stage, you simply add a code function that is very specific to your application, and Pipeless takes care of calling it when required. This is why you can think about Pipeless as a framework that provides a &lt;strong&gt;serverless-like development experience for embedded computer vision&lt;/strong&gt;. You &lt;strong&gt;provide a few functions and you don't have to worry about all the surrounding systems&lt;/strong&gt; that are required.&lt;/p&gt;

&lt;p&gt;Another great feature of Pipeless is that you can &lt;strong&gt;add, remove, and update streams dynamically via a CLI or a REST API to fully automate your workflows&lt;/strong&gt;. You can even specify restart policies that indicate when the processing of a stream should be restarted, whether it should be restarted after an error, etc.&lt;/p&gt;

&lt;p&gt;Finally, to deploy Pipeless you just need to install it and run it along with your code functions on any device, whether that is a cloud VM, a container, or directly an edge device like an Nvidia Jetson, a Raspberry Pi, or any other.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating an Object Detection Application
&lt;/h2&gt;

&lt;p&gt;Let's deep dive into how to create a simple application for object detection using Pipeless.&lt;/p&gt;

&lt;p&gt;The first thing we have to do is to install it. Thanks to the installation script, it is very simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://raw.githubusercontent.com/pipeless-ai/pipeless/main/install.sh | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now, we have to create a project. A Pipeless project is a directory that contains stages. Every stage is under a sub-directory, and inside each sub-directory, we create the files containing hooks (our specific code functions). The name that we provide to each stage folder is the stage name that we have to indicate to Pipeless later when we want to run that stage for a stream.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeless init my-project --template empty
cd my-project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here, the empty template tells the CLI to just create the directory. If you do not provide any template, the CLI will prompt you with several questions to create the stage interactively.&lt;/p&gt;

&lt;p&gt;As mentioned above, we now need to add a stage to our project. Let's download an example stage from GitHub with the following command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget -O - https://github.com/pipeless-ai/pipeless/archive/main.tar.gz |
          tar -xz --strip=2 "pipeless-main/examples/onnx-yolo"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;That will create a stage directory, onnx-yolo, that contains our application functions.&lt;/p&gt;

&lt;p&gt;Let's check the content of each of the stage files; i.e., our application hooks.&lt;/p&gt;

&lt;p&gt;We have the &lt;code&gt;pre-process.py&lt;/code&gt; file, which defines a function (&lt;code&gt;hook&lt;/code&gt;) that takes a frame and a context. The function performs some operations to prepare the input data from the received RGB frame so that it matches the format the model expects. That data is assigned to &lt;code&gt;frame_data['inference_input']&lt;/code&gt;, which is what Pipeless will pass to the model.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;hook&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;frame_data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;original&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;view&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;yolo_input_shape&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;640&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;640&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# h,w,c
&lt;/span&gt;    &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cvtColor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;COLOR_BGR2RGB&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;resize_rgb_frame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;yolo_input_shape&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;normalize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NORM_MINMAX&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;transpose&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;axes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="c1"&gt;# Convert to c,h,w
&lt;/span&gt;    &lt;span class="n"&gt;inference_inputs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;float32&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;frame_data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;inference_input&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;inference_inputs&lt;/span&gt;

&lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;some&lt;/span&gt; &lt;span class="n"&gt;other&lt;/span&gt; &lt;span class="n"&gt;auxiliar&lt;/span&gt; &lt;span class="n"&gt;functions&lt;/span&gt; &lt;span class="n"&gt;that&lt;/span&gt; &lt;span class="n"&gt;we&lt;/span&gt; &lt;span class="n"&gt;call&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;hook&lt;/span&gt; &lt;span class="n"&gt;function&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
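&lt;p&gt;The &lt;code&gt;resize_rgb_frame&lt;/code&gt; helper is one of the auxiliary functions elided above. As a rough idea of what it does, a minimal sketch (an illustrative assumption, not necessarily the exact implementation from the example) could simply resize the RGB frame to the YOLO input resolution with OpenCV:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import cv2

# Hypothetical sketch of the elided helper: resize an RGB frame
# to the model input resolution, given as (height, width, channels).
def resize_rgb_frame(frame, target_shape):
    target_h, target_w, _ = target_shape
    # cv2.resize expects the target size as (width, height)
    return cv2.resize(frame, (target_w, target_h), interpolation=cv2.INTER_LINEAR)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The actual example may use letterbox-style resizing to preserve the aspect ratio; the sketch above just illustrates the interface the hook expects.&lt;/p&gt;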


&lt;p&gt;We also have the &lt;code&gt;process.json&lt;/code&gt; file, which tells Pipeless which inference runtime to use (in this case, the ONNX Runtime), where to find the model it should load, and some optional parameters for that runtime, such as the &lt;code&gt;execution_provider&lt;/code&gt; to use, i.e., CPU, CUDA, TensorRT, etc.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;runtime&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;onnx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model_uri&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://pipeless-public.s3.eu-west-3.amazonaws.com/yolov8n.onnx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;inference_params&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;execution_provider&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tensorrt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
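&lt;p&gt;If you don't have a GPU available, you can change the &lt;code&gt;execution_provider&lt;/code&gt;. As an illustrative sketch, and assuming the other providers follow the same lowercase naming as &lt;code&gt;tensorrt&lt;/code&gt; above (check the Pipeless documentation for the exact values your version accepts), a CPU-only configuration would look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
    "runtime": "onnx",
    "model_uri": "https://pipeless-public.s3.eu-west-3.amazonaws.com/yolov8n.onnx",
    "inference_params": {
        "execution_provider": "cpu"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;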


&lt;p&gt;Finally, the &lt;code&gt;post-process.py&lt;/code&gt; file defines a function similar to the one in &lt;code&gt;pre-process.py&lt;/code&gt;. This time, it takes the inference output that Pipeless stored at &lt;code&gt;frame_data["inference_output"]&lt;/code&gt;, parses it into bounding boxes, draws those boxes over the frame, and finally assigns the modified frame to &lt;code&gt;frame_data['modified']&lt;/code&gt;. With that, Pipeless forwards the stream we provided, but with the modified frames, which now include the bounding boxes.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;hook&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;frame_data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;original&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;model_output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;frame_data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;inference_output&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;yolo_input_shape&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;640&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;640&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# h,w,c
&lt;/span&gt;    &lt;span class="n"&gt;boxes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scores&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;class_ids&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
           &lt;span class="nf"&gt;parse_yolo_output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_output&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;yolo_input_shape&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;class_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;yolo_classes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;class_ids&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;boxes&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt;
        &lt;span class="nf"&gt;draw_bbox&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;boxes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;class_labels&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;scores&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="n"&gt;frame_data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;modified&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt;

&lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;some&lt;/span&gt; &lt;span class="n"&gt;other&lt;/span&gt; &lt;span class="n"&gt;auxiliar&lt;/span&gt; &lt;span class="n"&gt;functions&lt;/span&gt; &lt;span class="n"&gt;that&lt;/span&gt; &lt;span class="n"&gt;we&lt;/span&gt; &lt;span class="n"&gt;call&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;hook&lt;/span&gt; &lt;span class="n"&gt;function&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
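&lt;p&gt;The &lt;code&gt;parse_yolo_output&lt;/code&gt; and &lt;code&gt;draw_bbox&lt;/code&gt; helpers are part of the auxiliary functions elided above. To give an idea of the drawing step, a minimal &lt;code&gt;draw_bbox&lt;/code&gt; sketch based on plain OpenCV could look like the following (an illustrative assumption, not the exact code from the example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import cv2

# Hypothetical sketch: draw a single bounding box with its label and score.
# `box` is assumed to be in (x1, y1, x2, y2) pixel coordinates of the original frame.
def draw_bbox(frame, box, label, score, color=(0, 255, 0)):
    x1, y1, x2, y2 = [int(v) for v in box]
    cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)
    text = f"{label}: {score:.2f}"
    cv2.putText(frame, text, (x1, max(y1 - 5, 15)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Parsing the raw YOLO output into &lt;code&gt;boxes&lt;/code&gt;, &lt;code&gt;scores&lt;/code&gt; and &lt;code&gt;class_ids&lt;/code&gt; typically also involves a confidence threshold and non-maximum suppression, which is what the elided &lt;code&gt;parse_yolo_output&lt;/code&gt; helper takes care of.&lt;/p&gt;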


&lt;p&gt;The final step is to start Pipeless and provide a stream. To start it, simply run the following command from the &lt;code&gt;my-project&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeless start --stages-dir .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once it is running, let's provide a stream from the webcam (&lt;code&gt;v4l2&lt;/code&gt;) and show the output directly on the screen. Note that we have to provide the list of stages the stream should execute, in order; in our case, it is just the &lt;code&gt;onnx-yolo&lt;/code&gt; stage:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeless add stream --input-uri "v4l2" --output-uri "screen" --frame-path "onnx-yolo"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
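&lt;p&gt;Streams are added dynamically with the same command, so you can process other sources in the same way. For example, to run the stage over a video file instead of the webcam (assuming your Pipeless version accepts &lt;code&gt;file://&lt;/code&gt; input URIs; the path below is just a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeless add stream --input-uri "file:///home/user/my-video.mp4" --output-uri "screen" --frame-path "onnx-yolo"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;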


&lt;p&gt;And that's all!&lt;/p&gt;
&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We have described why creating a computer vision application is a complex task, mostly because of the many subsystems that have to be implemented around it. With a framework like Pipeless, getting up and running takes just a few minutes, and you can focus on writing the code for your specific use case. Furthermore, Pipeless stages are highly reusable and easy to maintain, so you can iterate very fast.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want to get involved with Pipeless and contribute to its development, you can do so through its &lt;a href="https://github.com/pipeless-ai/pipeless"&gt;GitHub repository&lt;/a&gt;. Don't forget to add your star!&lt;/p&gt;
&lt;/blockquote&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/pipeless-ai"&gt;
        pipeless-ai
      &lt;/a&gt; / &lt;a href="https://github.com/pipeless-ai/pipeless"&gt;
        pipeless
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      An open-source computer vision framework to build and deploy apps in minutes without worrying about multimedia pipelines
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;
  &lt;a href="https://pipeless.ai" rel="nofollow"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4TeV4140--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/pipeless-ai/pipeless/main/assets/pipeless-400x400-rounded.png" height="128"&gt;&lt;/a&gt;
&lt;/p&gt;
&lt;h1&gt;
Pipeless&lt;/h1&gt;
&lt;p&gt;
  &lt;a href="https://pipeless.ai" rel="nofollow"&gt;
    &lt;img src="https://camo.githubusercontent.com/019f3b4cebadc66d8155c29f2fe0d02a2c6272b46750962f7b77814a83fa1de8/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4d4144452532304259253230506970656c65737325323061692d3030303030302e7376673f7374796c653d666f722d7468652d6261646765266c6f676f3d506970656c657373266c6162656c436f6c6f723d303030"&gt;
  &lt;/a&gt;
  &lt;a href="https://github.com/pipeless-ai/pipeless/releases"&gt;
    &lt;img alt="" src="https://camo.githubusercontent.com/5f0a050bfea5e21732bd2338e5ff18ffcae03c4e117ea2a294cce0a5ba80b8e7/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f762f72656c656173652f706970656c6573732d61692f706970656c6573733f7374796c653d666f722d7468652d6261646765266c6162656c3d6c6174657374266c6162656c436f6c6f723d303030303030"&gt;
  &lt;/a&gt;
  &lt;a href="https://github.com/miguelaeh/pipeless/blob/main/license.md"&gt;
    &lt;img alt="" src="https://camo.githubusercontent.com/492fc004d2bd9fa9ff63c13531ecd7e16031bdfb484ba09328ddcfd6e41aa68d/68747470733a2f2f696d672e736869656c64732e696f2f707970692f6c2f706970656c6573732d61693f7374796c653d666f722d7468652d6261646765266c6162656c436f6c6f723d303030303030"&gt;
  &lt;/a&gt;
  &lt;a href="https://github.com/miguelaeh/pipeless/discussions"&gt;
    &lt;img alt="" src="https://camo.githubusercontent.com/97a877eb30084b0ec4a445be52402617487b497379b00626deb211d7cf15b827/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4a6f696e25323074686525323064697363757373696f6e732d626c61636b2e7376673f7374796c653d666f722d7468652d6261646765266c6f676f3d266c6162656c436f6c6f723d303030303030266c6f676f57696474683d3230"&gt;
  &lt;/a&gt;
  &lt;a href="https://discord.gg/K2qxQ8uedG" rel="nofollow"&gt;
    &lt;img alt="" src="https://camo.githubusercontent.com/7776c21fcd6d57a186f7955409dc09d7668225f49c8455d94ed44df05b31ba5c/68747470733a2f2f696d672e736869656c64732e696f2f646973636f72642f313135363932333632383833313634393837333f7374796c653d666f722d7468652d6261646765266c6f676f3d646973636f7264266c6f676f436f6c6f723d464646464646266c6162656c3d436861742532306f6e253230646973636f7264266c6162656c436f6c6f723d626c61636b"&gt;
  &lt;/a&gt;
&lt;/p&gt;
&lt;div&gt;
   &lt;p&gt;&lt;b&gt;Easily create, deploy and run computer vision applications.&lt;/b&gt;&lt;/p&gt;
   &lt;br&gt;
&lt;p&gt;&lt;a href="https://pipeless.ai" rel="nofollow"&gt;Check the live demo in the website&lt;/a&gt;
&lt;br&gt;&lt;/p&gt;
   &lt;div&gt;
      &lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/pipeless-ai/pipeless/main/assets/examples.gif"&gt;&lt;img width="382" alt="Loading video..." src="https://res.cloudinary.com/practicaldev/image/fetch/s--t_hvTAgk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://raw.githubusercontent.com/pipeless-ai/pipeless/main/assets/examples.gif"&gt;&lt;/a&gt;
   &lt;/div&gt;

&lt;/div&gt;

&lt;p&gt;Pipeless is an open-source &lt;b&gt;computer vision framework&lt;/b&gt; to create and deploy applications without the complexity of building and maintaining multimedia pipelines. It ships everything you need to create and deploy efficient computer vision applications that work in real-time in just minutes.&lt;/p&gt;

&lt;p&gt;Pipeless is inspired by modern serverless technologies. It provides the development experience of serverless frameworks applied to computer vision. You provide some functions that are executed for new video frames and Pipeless takes care of everything else.&lt;/p&gt;

&lt;p&gt;You can easily use industry-standard models, such as YOLO, or load your custom model in one of the supported inference runtimes. Pipeless ships some of the most popular inference runtimes, such as the ONNX Runtime, allowing you to run inference with high performance on CPU or GPU out-of-the-box.&lt;/p&gt;

&lt;p&gt;You can deploy your Pipeless application to edge…&lt;/p&gt;
&lt;/div&gt;


&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/pipeless-ai/pipeless"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


</description>
      <category>python</category>
      <category>ai</category>
      <category>computervision</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
