<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Pedro Kauati</title>
    <description>The latest articles on DEV Community by Pedro Kauati (@pkauati).</description>
    <link>https://dev.to/pkauati</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3529527%2F06451c90-1f3f-45f7-ac39-914dc7becfe3.jpg</url>
      <title>DEV Community: Pedro Kauati</title>
      <link>https://dev.to/pkauati</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pkauati"/>
    <language>en</language>
    <item>
      <title>Managing Asynchronous Work with Celery and Redis</title>
      <dc:creator>Pedro Kauati</dc:creator>
      <pubDate>Fri, 10 Oct 2025 10:41:39 +0000</pubDate>
      <link>https://dev.to/pkauati/managing-asynchronous-work-with-celery-and-redis-99b</link>
      <guid>https://dev.to/pkauati/managing-asynchronous-work-with-celery-and-redis-99b</guid>
      <description>&lt;p&gt;This is the sequel of my &lt;a href="https://dev.to/pkauati/from-idea-to-deployment-with-fastapi-and-postgres-19k1"&gt;previous post&lt;/a&gt;, where I implemented a bug tracking web service with FastAPI. This time I'm gonna talk about the latest addition to the project: An async background job system and what it's being used for. &lt;/p&gt;

&lt;p&gt;If you just want to see the completed project, feel free to give it a try, it's &lt;a href="https://flyswatter-api.onrender.com/docs#/" rel="noopener noreferrer"&gt;live&lt;/a&gt;! The source code can be found &lt;a href="https://github.com/felipevk/flyswatter" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why Async?🧠
&lt;/h1&gt;

&lt;p&gt;So after implementing the basic functionality of the bug tracker, I wanted to get familiar with another important topic in back-end development. What caught my attention was the idea of introducing concurrency. Put simply, concurrency is the ability of a system to switch focus between different tasks. An operating system running on a single-core CPU can do this, which is how you're able to run multiple programs even when they're not running in parallel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwf29pxzvzmo4cs8wnua3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwf29pxzvzmo4cs8wnua3.png" alt="Synchronous vs Asynchronous vs Parallel" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And why is this useful for a web service? Trivial tasks can simply run one after another without any hiccups. But add tasks that need more processing or that depend on external services, alongside several more client requests, and your API will start to slow down, even for the trivial operations.&lt;/p&gt;

&lt;p&gt;So concurrency allows our service to run non-trivial tasks in the background while it keeps serving everything else.&lt;/p&gt;
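&lt;p&gt;As a minimal sketch of that idea (plain Python asyncio, not the actual project code), two I/O-bound jobs can share a single thread and finish in roughly the time of one:&lt;/p&gt;

```python
import asyncio
import time

async def slow_job(name: str, seconds: float) -> str:
    # Simulates I/O-bound work; the event loop switches to other tasks while waiting
    await asyncio.sleep(seconds)
    return f"{name} done"

async def main() -> list:
    # Both jobs run concurrently on a single thread
    return await asyncio.gather(slow_job("report", 0.2), slow_job("email", 0.2))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results, f"{elapsed:.2f}s")  # roughly 0.2s total, not 0.4s
```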

&lt;p&gt;And what kind of task would that be? In my case, I decided to add a feature where project owners can request a PDF report of their current projects. &lt;/p&gt;

&lt;h1&gt;
  
  
  Development💻
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Stack update
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Message Queue Manager: Celery🥕&lt;/li&gt;
&lt;li&gt;MQ Data Store: Redis🟥&lt;/li&gt;
&lt;li&gt;PDF Generator Library: FPDF📝&lt;/li&gt;
&lt;li&gt;Blob Storage to host PDFs: MinIO️🗃️&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Gotchas🌀
&lt;/h3&gt;

&lt;p&gt;Celery was the beating heart of this operation. It took me a few tries to understand the system, especially since it involved running some code outside the FastAPI environment. &lt;/p&gt;

&lt;p&gt;For starters, I got too used to FastAPI's hot reload, something the worker container would not benefit from. So to ensure I was working with the latest worker code, I had to start running the containers with the &lt;code&gt;--build&lt;/code&gt; flag.&lt;/p&gt;
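&lt;p&gt;Concretely, that looks something like this (a sketch; the exact compose setup and service names depend on your project):&lt;/p&gt;

```shell
# Rebuild the images so the worker container picks up the latest code
docker compose up --build
```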

&lt;p&gt;I also struggled a bit with properly connecting the task functions to the Celery app. What worked for me was calling &lt;code&gt;autodiscover_tasks&lt;/code&gt; on the Celery app object. This method takes a list of packages and registers tasks from files named tasks.py inside them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from celery import Celery
app = Celery(
    "celery_app", broker={REDIS_URL}, backend={REDIS_URL}
)
app.autodiscover_tasks(["app.worker"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#tasks.py
from .celery_app import app
@app.task(bind=True,retry_backoff=True,...)
def generate_report(self, job_id: str, user_id: str):
    ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another challenging point was testing. I would say you currently have three ways to test a Celery worker:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the pytest-celery library for asynchronous tests&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;celery.contrib.pytest&lt;/code&gt;, also for asynchronous tests&lt;/li&gt;
&lt;li&gt;Run synchronous tests by setting the flag &lt;code&gt;task_always_eager&lt;/code&gt; to &lt;code&gt;True&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I found it a bit confusing to follow the documentation because I was often unsure which solution I was reading about. In the end I opted for the synchronous tests, so the actual work done by the tasks could be asserted. This &lt;a href="https://tomwojcik.com/posts/2021-03-02/testing-celery-without-eager-tasks" rel="noopener noreferrer"&gt;post&lt;/a&gt; goes into more detail on the trade-offs involved in each method.&lt;/p&gt;
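&lt;p&gt;For reference, the eager setup is just a couple of config flags. A sketch assuming the Celery app object from earlier (the import path here is hypothetical):&lt;/p&gt;

```python
# conftest.py (sketch): run Celery tasks inline so their results can be asserted
from app.worker.celery_app import app  # hypothetical import path

app.conf.update(
    task_always_eager=True,       # .delay()/.apply_async() run synchronously
    task_eager_propagates=True,   # exceptions raised in tasks surface in the test
)
```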

&lt;p&gt;When working with MinIO for data storage, I had to create two client connections: a private one for communication between the worker and storage, and a public one so the PDF files could be retrieved through a public-facing URL. Without that, the files could only be accessed from inside the Docker network. An extra host had to be added to the MinIO container to make this work in dev.&lt;/p&gt;
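&lt;p&gt;The public/private split boils down to which host appears in the URLs handed to clients. A minimal standard-library sketch of that idea (the host names are made up):&lt;/p&gt;

```python
from urllib.parse import urlsplit, urlunsplit

INTERNAL_HOST = "minio:9000"     # reachable only inside the Docker network
PUBLIC_HOST = "localhost:9000"   # reachable from the host machine in dev

def to_public_url(url: str) -> str:
    # Rewrite the internal endpoint to the public-facing one, keeping path/query
    parts = urlsplit(url)
    if parts.netloc == INTERNAL_HOST:
        parts = parts._replace(netloc=PUBLIC_HOST)
    return urlunsplit(parts)

print(to_public_url("http://minio:9000/reports/job-1.pdf"))
```

Note that for presigned URLs the signature typically covers the host, which is why generating them from a second, public-facing client (as described above) is cleaner than rewriting after the fact.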

&lt;h1&gt;
  
  
  Async Job System🕒
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fudwtouwulu9ytb1hobmw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fudwtouwulu9ytb1hobmw.jpg" alt="Job system workflow" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
Here's the workflow for this system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When a request for a non-trivial task comes in, the API creates a new job, saves its state in the database and dispatches a new task to Celery with the proper info. The job state is saved as &lt;code&gt;QUEUED&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Celery stores the task in the Redis container.&lt;/li&gt;
&lt;li&gt;If a worker is available, it picks up the task from the queue and changes the job state to &lt;code&gt;RUNNING&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The worker performs the task, which in this case is to generate a PDF and upload it to MinIO. The PDF URL is saved in the database as an artifact.&lt;/li&gt;
&lt;li&gt;If successful, the job is marked as &lt;code&gt;SUCCEEDED&lt;/code&gt; and now points to the artifact.&lt;/li&gt;
&lt;li&gt;If it fails due to a recoverable error, such as a network issue, the task retries after a few seconds, with exponential backoff adding time to each attempt.&lt;/li&gt;
&lt;li&gt;If it fails due to an unrecoverable error, the job is marked as &lt;code&gt;FAILED&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Clients can check the status of jobs via the &lt;code&gt;/jobs/result&lt;/code&gt; endpoint. If a job is complete, its result can be accessed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this system, more workers can easily be added, and future features can offload resource-intensive work to the task queue without much extra setup. &lt;/p&gt;
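&lt;p&gt;The job lifecycle above can be sketched as a tiny state machine (the state names mirror the workflow; the backoff formula is illustrative, though Celery's &lt;code&gt;retry_backoff&lt;/code&gt; behaves similarly):&lt;/p&gt;

```python
from enum import Enum

class JobState(Enum):
    QUEUED = "queued"
    RUNNING = "running"
    SUCCEEDED = "succeeded"
    FAILED = "failed"

# Legal transitions for the job lifecycle described above
TRANSITIONS = {
    JobState.QUEUED: {JobState.RUNNING},
    JobState.RUNNING: {JobState.SUCCEEDED, JobState.FAILED, JobState.QUEUED},  # back to QUEUED = retry
    JobState.SUCCEEDED: set(),
    JobState.FAILED: set(),
}

def advance(current: JobState, target: JobState) -> JobState:
    # Reject transitions the workflow does not allow (e.g. out of a terminal state)
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} to {target.name}")
    return target

def retry_delay(retries: int, base: float = 1.0, cap: float = 60.0) -> float:
    # Exponential backoff: 1s, 2s, 4s, ... capped at 60s
    return min(base * (2 ** retries), cap)

state = advance(JobState.QUEUED, JobState.RUNNING)
state = advance(state, JobState.SUCCEEDED)
print(state.name, [retry_delay(r) for r in range(3)])  # SUCCEEDED [1.0, 2.0, 4.0]
```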

&lt;h1&gt;
  
  
  Conclusion💡
&lt;/h1&gt;

&lt;p&gt;Coming from Game Development, I was used to concurrency in another context, involving coroutines and game objects, and applying this concept to a backend application deepened my understanding of it.&lt;/p&gt;

&lt;p&gt;As demand grows, web services require smart solutions to remain responsive and reliable. That only made me more curious to see how other systems and architectures shape performance and development efficiency.&lt;/p&gt;

&lt;p&gt;As for Flyswatter, I’ll keep polishing the codebase and improving test coverage, but when it comes to new features, I’m ready to move on to another project and dive deeper into system design. This one taught me a lot, and I can’t wait to see what comes next!&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>backend</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>From Idea to Deployment with FastAPI and Postgres</title>
      <dc:creator>Pedro Kauati</dc:creator>
      <pubDate>Fri, 26 Sep 2025 09:52:41 +0000</pubDate>
      <link>https://dev.to/pkauati/from-idea-to-deployment-with-fastapi-and-postgres-19k1</link>
      <guid>https://dev.to/pkauati/from-idea-to-deployment-with-fastapi-and-postgres-19k1</guid>
      <description>&lt;p&gt;Hello! My name is Pedro Kauati👋&lt;/p&gt;

&lt;p&gt;In this post I'm going to describe my experience creating a webapp using FastAPI⚡&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: I created a back-end API that you can see running &lt;a href="https://flyswatter-api.onrender.com/docs#/" rel="noopener noreferrer"&gt;here&lt;/a&gt; and check out the code &lt;a href="https://github.com/felipevk/flyswatter" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Motivation💭
&lt;/h1&gt;

&lt;p&gt;My background is in Game Development. I have worked at several companies in Vancouver, mainly using C# and C++. I have dipped my toes in Web Development before, but I was never in charge of seeing something through from start to finish. So I decided to create my own web app (at least the back-end part). Since I'm familiar with Python from other projects, I decided to go with the FastAPI framework.&lt;/p&gt;

&lt;h1&gt;
  
  
  Idea💡
&lt;/h1&gt;

&lt;p&gt;So I wanted to do something... doable. But still useful. Something that could have a real application, even if it only lives as a hobby project. &lt;/p&gt;

&lt;p&gt;An Issue Tracker seemed like a good fit. And with that... Flyswatter was born (well, not yet)! A bug tracker where users could report issues from different projects, comment on them and assign issues to themselves or others. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ws5g4xeodgjx4uoc5dw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ws5g4xeodgjx4uoc5dw.png" alt="Flyswatter" width="256" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Development🛠️
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;FastAPI⚡&lt;/li&gt;
&lt;li&gt;PostgreSQL🐘&lt;/li&gt;
&lt;li&gt;SQLAlchemy✨&lt;/li&gt;
&lt;li&gt;Alembic🧪&lt;/li&gt;
&lt;li&gt;Docker + docker-compose🐋&lt;/li&gt;
&lt;li&gt;OAuth2 + JWT tokens for authentication🔐&lt;/li&gt;
&lt;li&gt;pytest running on Github Actions🤖&lt;/li&gt;
&lt;li&gt;Sentry for error monitoring📊&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Gotchas🔍
&lt;/h3&gt;

&lt;p&gt;During development I ran into some points of confusion, as someone getting into a new development environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The difference between running scripts from Docker containers versus the Docker host. Sometimes it makes no difference, but other times it does. Something I still need to wrap my head around&lt;/li&gt;
&lt;li&gt;The correct way of managing the database while using Alembic. For example, you should ideally let Alembic generate the tables instead of creating them in code. It also creates a table holding the current DB revision, which you shouldn't manually tamper with unless you know what you're doing. Things like that can create a drift between the database and what Alembic expects, risking migrations that don't work at all.&lt;/li&gt;
&lt;li&gt;Some common features of Python that I'm still not very well versed in, like the different ways to load env vars and how long they last in memory, unpacking a dictionary into a set of kwargs, and Python's import system.&lt;/li&gt;
&lt;li&gt;Watching out for which parts of the database aren't covered by Alembic's autogeneration. One thing I've noticed is that it can add enum types on upgrade but doesn't remove them on downgrade.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Base.metadata.drop_all()&lt;/code&gt; nukes all tables except &lt;code&gt;alembic_version&lt;/code&gt; and other remaining data. It doesn't play well with Alembic's inner workings, which may produce false positives on migrations or fail them entirely. I lost a few hours tracking down an issue there.&lt;/li&gt;
&lt;li&gt;In order to roll back database changes (useful for testing), you have to:

&lt;ol&gt;
&lt;li&gt;Start a DB connection that lasts as long as all the changes you want to roll back&lt;/li&gt;
&lt;li&gt;Start a nested subtransaction&lt;/li&gt;
&lt;li&gt;Listen to &lt;code&gt;after_transaction_end&lt;/code&gt;, which triggers after &lt;code&gt;session.commit()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;When a transaction ends, start a new subtransaction&lt;/li&gt;
&lt;li&gt;To roll back, call the rollback method on the original connection. This reverts all previous nested transactions&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;
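&lt;p&gt;The savepoint trick from those last steps can be seen in miniature with plain sqlite3, a standard-library stand-in for the SQLAlchemy session machinery:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
conn.execute("CREATE TABLE issues (title TEXT)")

conn.execute("BEGIN")                # outer transaction, held open for the whole test
conn.execute("SAVEPOINT test_sp")    # nested subtransaction
conn.execute("INSERT INTO issues VALUES ('bug 1')")
conn.execute("RELEASE test_sp")      # what a session.commit() would release
conn.execute("SAVEPOINT test_sp")    # immediately open a new one, as in step 4
conn.execute("INSERT INTO issues VALUES ('bug 2')")

conn.execute("ROLLBACK")             # roll back the outer transaction: everything goes
count = conn.execute("SELECT COUNT(*) FROM issues").fetchone()[0]
print(count)  # 0 - both inserts were reverted
```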

&lt;h1&gt;
  
  
  🐜Flyswatter🐜
&lt;/h1&gt;

&lt;p&gt;After about 3 weeks of work, the main features of Flyswatter are complete!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnh2nfphyg3cqbsuzmcn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnh2nfphyg3cqbsuzmcn.png" alt="Swagger UI visualization of Flyswatter" width="800" height="772"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can check out the &lt;a href="https://github.com/felipevk/flyswatter" rel="noopener noreferrer"&gt;Github repo&lt;/a&gt; and the &lt;a href="https://flyswatter-api.onrender.com/docs#/" rel="noopener noreferrer"&gt;deployment on Render&lt;/a&gt;. I didn't put any work into the front-end, but fortunately FastAPI also makes use of Swagger UI, which generates a docs page with access to all the endpoints.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion📖
&lt;/h1&gt;

&lt;p&gt;This was a very interesting project that taught me a lot about web frameworks and Python itself. And I'm not done with it yet: the next step is to add background processing with Redis. I'm currently planning to generate a PDF bug report, but I may go with something else in the end.&lt;/p&gt;

&lt;p&gt;Thank you for reading until this point and feel free to share your thoughts about the project!✌️&lt;/p&gt;

</description>
      <category>python</category>
      <category>backend</category>
      <category>postgres</category>
      <category>api</category>
    </item>
  </channel>
</rss>
