Introduction: The Calm Before the Storm
I write these lines as the hum of my laptop fades for the first time in hours. There is a particular silence in the office when the compiler finishes its work and the unit tests turn green; it is a mix of relief, residual adrenaline, and a quiet anxiety. We are just days away from January 19th. That date, which a month ago seemed like a distant point on the calendar, now looms over me like a massive wave about to break. For many, it will be just another Monday, the start of another work week. For me, and for the project that has consumed my nights and weekends, it is D-Day. It is the moment when pardoX ceases to be mine and begins to be yours.
Before diving into the technical details of what we have achieved in these last frantic weeks, I feel a moral and professional obligation to pause for a second and look back. I want to deeply thank everyone who has followed this series of logs up to this moment. Your emails, your comments on LinkedIn, and above all, those shared horror stories about data processes that take hours to execute, have been the fuel that has kept this engine running when fatigue threatened to shut it down. I am not building this in a vacuum; I am building it upon the collective frustration of thousands of engineers who know that our tools should be better.
If there is one thing I have learned the hard way in this final sprint toward version 0.1, it is that there is a gigantic abyss between writing a brilliant script and building a stable product. A month ago, I was celebrating execution times and speed records. I felt invincible watching us process 640 million rows in seconds. But pure speed, while intoxicating, is only half the equation. The “easy” part, if I may be so bold, is making code run fast in a controlled environment, under ideal conditions, and with the wind at your back. The brutally hard part, the one that separates weekend projects from real engineering software, is robustness.
I have spent the last few weeks not looking for ways to shave milliseconds off the stopwatch, but ensuring the engine doesn’t explode when someone decides to use it in a way I hadn’t anticipated. I have had to fight against my own developer ego—the one that wants to keep optimizing loops—to put on the architect’s hat and accept that usability is just as critical as performance. It is useless to be the fastest engine in the world if you need a PhD in nuclear physics to turn it on. The transition from a “speed experiment” to a “data ecosystem” has been painful, full of massive refactoring and tough decisions, but absolutely necessary.
The promise I make to you today, days before the release, is different from the one I made a month ago. I no longer promise you just brute speed. I promise you flow. I have understood that my mission is not just to read a CSV quickly; my mission is to eliminate the friction that exists between the engineer and their data. To achieve that, I have had to make radical decisions, such as abandoning the comfort of my usual development environment and migrating to where the iron truly breathes: Linux. I have had to break the chains of conventional drivers to speak directly with databases. What you are about to read is not just a changelog; it is the chronicle of how I have tried to build the tool I desperately needed myself: an engine that doesn’t just run, but flows, breathes, and works with the precision of a Swiss watch amidst the chaos of our daily data. Welcome to the final report before launch.
Chapter 1. The Leap into the Void: Abandoning the Windows Cage
For over a decade, my development environment has essentially been a comfort zone carefully built upon Windows. It is an operating system I know, with its shortcuts, its quirks, and that friendly graphical interface that makes you feel in control. When I started writing the first lines of code for pardoX, I did so sitting in that comfort. And during the initial stages, when the datasets were “small” (10 or 20 million rows), everything seemed to work fine. But as the project’s ambition grew and data volumes began to brush against hundreds of millions, I started to notice something unsettling. It wasn’t a bug in the code, nor a visible memory leak. It was a physical sensation.
Imagine you have a sports car with a perfectly tuned V12 engine. You floor the accelerator, hear the roar of combustion, feel the vibration of the power, but the car moves sluggishly. You look out the window and realize you are not on an asphalt track; you are driving through a swamp of molasses. That was exactly my experience with Windows in recent weeks. I felt that the Rust engine wanted to run, wanted to devour data, but the “floor” it was running on was sticky.
The fundamental problem, and this is something hard to admit for those of us who have grown up in the Microsoft ecosystem, is that Windows is not designed for the extreme low-level performance that pardoX requires. Windows is an incredibly “polite” operating system; it prioritizes user experience, the graphical interface, and desktop multitasking. But when you try to manage hundreds of simultaneous execution threads and squeeze asynchronous I/O to the physical limit of the NVMe disk, that “politeness” becomes an insurmountable obstacle. The Windows kernel acted like an obsessive micro-manager, constantly intervening in my thread scheduling, deciding when to pause them and when to resume them, adding an invisible but cumulative latency that was suffocating my architecture.
The decision was not easy, but it required pragmatism. I couldn’t afford to format my main workstation and halt daily operations, so I did what any performance-obsessed engineer would do: I doubled down. I decided to acquire dedicated hardware exclusively for this mission. I bought an HP EliteBook, an “all-terrain” machine equipped with 16GB of RAM and a Ryzen 5 processor. This hardware choice was not a random whim; it was a tactical maneuver. By opting for the Ryzen ecosystem, I gained access to the Vega graphics architecture. This was crucial because pardoX has an experimental GPU acceleration module that I had been wanting to unleash for months, and I needed an environment where I could test that hardware integration without intermediate virtualization layers.
With this new machine in my hands, pristine and ready for combat, I didn’t install Windows. I installed Ubuntu 24.04 LTS. The change was revelatory almost immediately. In Linux, and specifically with this AMD hardware combination, resource management is brutally honest. When you ask the Linux kernel to allocate resources, it doesn’t ask “are you sure?” nor does it try to negotiate with you. It simply gives you control. The difference in asynchronous I/O management was abysmal, and seeing the engine natively detect the Vega GPU was one of those small moments of silent victory.
That feeling of a “sticky floor” vanished instantly. Suddenly, traction was total. Response times became deterministic. The “Windows Cage” had opened. I understood then that the environment matters just as much as the code. If we want to build software that competes with giants like Spark or DuckDB, we cannot do it from the comfort of a conventional desktop environment. We have to go down to the basement, get our hands dirty with the terminal, and work close to the metal, where there is no safety net, but there are also no speed limits.
Chapter 2. The Evidence in the Terminal: 182 Seconds
They say data doesn’t lie, but sometimes, it takes too long to tell the truth. When I migrated to Linux and had the new machine ready, I knew the moment for the acid test had arrived. I didn’t want synthetic tests or “toy” use cases. I wanted to face the “monster” again: the Consolidated North Sales dataset. We are talking about 320 independent CSV files, totaling 640 million rows. To put this in perspective, this is a volume of information that would crash Excel before you could even see the loading bar, and would typically require a Spark cluster running and billing dollars per hour in the cloud. I was going to attempt it locally, on a laptop, running on battery power alone.
On this occasion, I decided to leave DuckDB out of the equation. My respect for its SQL engine remains intact, but for this specific test, I was looking to measure pure flow and transformation speed in Rust, a “metal against metal” duel. The opponent to beat was Polars, the current king of speed in the Python ecosystem and the tool that, honestly, has been both my inspiration and my nightmare throughout this development. Polars is incredibly efficient, and beating it is not a trivial task; it’s like trying to win a race against an Olympic athlete wearing shoes you cobbled together in your garage.
I prepped the environment, took a deep breath, and launched the command for pardoX.
The cursor blinked, the progress bars filled up, and suddenly, the success message appeared in neon green. My eyes went straight to the total time: 182.04 seconds.
Three minutes and two seconds. That is what it took for my engine to ingest, process, and rewrite 640 million records into an optimized binary format. We were moving data at a speed of 3.5 million rows per second. The feeling was electric. But victory isn’t real if you don’t have something to compare it to. Immediately after, I executed the exact same pipeline with Polars.
Polars’ result was excellent, as always: 203.85 seconds. But the math was clear. PardoX had crossed the finish line 21.8 seconds sooner. In a 100-meter dash, winning by a fraction of a second is a feat; in massive data processing, winning by nearly 22 seconds is a statement of intent. It means our “Zero-Copy” architecture and obsessive thread management were paying off.
However, what struck me the most wasn’t the speed—which is what grabs headlines—but the stability. This is where the switch to Linux shone brightly. If you look at the telemetry in the screenshot, you will see that RAM consumption at the end of the process was barely 1.13 GB. Processing over half a billion rows while consuming barely a gigabyte of memory on a laptop is the ultimate proof that efficiency doesn’t require expensive hardware; it requires better engineering.
In Windows, during previous tests, I saw erratic spikes in CPU and memory usage, as if the system was struggling to breathe. Here, in the native Linux environment, consumption was a flat, predictable line. The operating system didn’t get in the way; it became a silent ally, allowing pardoX to use the GPU and processor cores with surgical freedom. This test proved that we hadn’t just built something fast; we built something sustainable. PardoX didn’t win this round through brute force; it won through technical elegance. And that, for an engineer, is the sweetest victory of all.
Chapter 3. Beyond Reading: PardoX as an Interactive Tool
During the first few months of development, I must confess that I treated pardoX like a glorified pipe. My obsession was throughput: how many bytes per second can I push from disk to memory? It was a purely logistical view of data. The engine was an incredibly efficient black box: CSV files went in one end, and Parquet or .prdx files came out the other. Fast, yes. But blind.
The problem with black boxes is that they require blind faith. As a Data Engineer, I hate blind faith. I need to see. I need to verify. When you’re working with 600 million rows, you can’t wait for a 10-minute process to finish only to realize that the date column came in European format and your entire analysis broke. That frustration of having to open a giant file with external tools just to see the headers or verify a data type was what triggered the project’s next evolutionary step.
I realized that if pardoX wanted to be taken seriously, it had to stop being a simple loading script and become a first-class citizen within the Data Scientist’s natural habitat: the Jupyter Notebook.
The transition from “loader” to “explorer” was a design challenge rather than a brute force one. Implementing head(), tail(), or dtypes sounds trivial; any Python student does it in their first week with Pandas. But doing it on a 50GB file, without loading the entire file into RAM and keeping latency in the milliseconds, is another story. I had to teach the Rust engine to be curious, to “peek” into the file without committing to reading it all.
Seeing pardoX running inside a Jupyter cell, responding instantly to my inspection commands, was a moment of profound validation. It was no longer an opaque, external tool; I could now dialogue with the data.
In the screenshot, you can see how I invoke a head() on the massive dataset. The response is immediate. There is no waiting, no fans spinning to the max. The engine jumps to the exact point in the file, decodes only the necessary bytes, and presents me with a clean, formatted preview. The same goes for dtypes. Instead of guessing, I can now ask the engine: “How are you interpreting this column?” And the engine responds with the precision of native Rust types mapped to Python.
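Since pardoX’s internals aren’t public yet, here is a minimal stdlib sketch of the “peek without committing” idea the paragraph describes: decode only the header and the first few rows of a CSV, never the whole file. The `head` helper and the toy data are mine, for illustration only.

```python
import csv
import itertools
from io import StringIO

def head(file_obj, n=5):
    """Read only the first n rows of a CSV: open, decode n lines, stop.

    The file is never loaded fully into memory -- the same principle
    pardoX applies at the Rust level (jump in, decode only the bytes
    needed for the preview)."""
    reader = csv.reader(file_obj)
    header = next(reader)
    rows = list(itertools.islice(reader, n))
    return header, rows

# Toy demonstration with an in-memory "file".
data = StringIO("id,price,qty\n1,9.99,3\n2,4.50,1\n3,12.00,7\n")
header, rows = head(data, n=2)
print(header)  # ['id', 'price', 'qty']
print(rows)    # [['1', '9.99', '3'], ['2', '4.50', '1']]
```

The same lazy pattern extends naturally to `tail()` (seek near the end, scan backward for line breaks) and `dtypes` (sample a slice and infer types), which is exactly why these operations can stay in the millisecond range on huge files.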
This interactivity fundamentally changes the workflow. Now I can iterate. I can load a pointer to the file, verify the structure, inspect the last few rows to ensure there is no garbage at the end of the file (with tail()), and do all this before committing my machine’s resources to a heavy transformation.
PardoX has ceased to be a black box. It now has windows, it has a dashboard, and most importantly, it allows me to “touch” the data. This native inspection capability, without third-party dependencies, is what separates an automation script from a true exploratory analysis tool. I no longer have to leave my Python flow to understand what on earth is inside that monstrous CSV. The power is right there, at the reach of a Shift + Enter.
Chapter 4. The "Killer Feature": PostgreSQL Without Intermediaries
If there is a sacred ritual in the life of any Python developer working with data, it is this: pip install psycopg2 or pip install sqlalchemy. We do it almost by muscle memory. It is the toll we pay to enter the world of databases. And don’t get me wrong, these libraries are masterpieces of community engineering; they have sustained the modern web and thousands of enterprise applications for years. But in the world of Big Data and massive ingestion, these tools hide a “silent tax” that we have meekly accepted for too long.
The problem isn’t that they work poorly; the problem is how they work. When you use a standard Python library to read a million rows from PostgreSQL, an inefficient and costly dance happens under the hood. The database sends raw bytes across the network. The Python library receives those bytes and must, row by row, datum by datum, convert them into a Python Object. An integer in the database (4 bytes) becomes a PyObject (28 bytes or more). A date becomes a datetime object. This “translation” or marshaling not only consumes valuable CPU cycles; it devours RAM with alarming voracity. Have you ever wondered why loading a 1GB dump requires 4GB of RAM in your script? It’s the cost of abstraction. It’s the price of having middlemen.
During the development of pardoX, I became obsessed with eliminating this friction. I asked myself: Why do I need to convert data to Python objects if my ultimate goal is to process them in the Rust engine? Why pay the toll of translation if I can speak the database’s native language?
The answer was one of the most ambitious and complex features I have implemented to date: Native Rust Connectivity.
Instead of relying on external drivers, I decided to implement the PostgreSQL communication protocol directly into pardoX’s Rust core. This means that when pardoX connects to your database, there are no “adapters.” There are no compatibility layers. The engine opens a direct TCP socket against port 5432 and starts speaking in Postgres’s binary protocol (the Wire Protocol).
What you see in the image is technical purity. Data flows from the database server’s disk, travels across the network, and lands directly in memory managed by pardoX. Not a single Python object is created in the transit process. It is a direct pipeline, not a hose full of patches and adapters.
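To make the idea concrete: the very first packet a PostgreSQL client sends after opening that TCP socket is the StartupMessage defined by the v3 wire protocol. Below is a hedged, stdlib-only Python sketch of how that packet is laid out—this is my illustration of the documented protocol format, not pardoX’s actual Rust implementation:

```python
import struct

def startup_message(user: str, database: str) -> bytes:
    """Build a PostgreSQL v3 StartupMessage: Int32 length (including
    itself), Int32 protocol version 196608 (i.e. 3.0), then
    NUL-terminated key/value pairs, closed by a final NUL byte."""
    params = b""
    for key, value in (("user", user), ("database", database)):
        params += key.encode() + b"\x00" + value.encode() + b"\x00"
    body = struct.pack("!i", 196608) + params + b"\x00"
    return struct.pack("!i", len(body) + 4) + body

msg = startup_message("alice", "sales")
# After connecting a socket to port 5432, this is the first packet the
# client writes; the server replies with an authentication request.
```

Once you speak this framing natively, every subsequent row arrives as length-prefixed binary fields that can be copied straight into columnar buffers—no PyObject ever needs to exist in between.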
The impact of this is brutal. In terms of memory consumption, we have seen reductions of up to 70% compared to traditional reading via pandas/SQLAlchemy. Transfer speed is limited only by network bandwidth, not by the speed at which Python can create objects. We are talking about saturating the line, drinking data directly from the source without spilling a drop.
But what really excites me isn’t just what we’ve achieved with PostgreSQL today, but what this means for the project’s future. By mastering the technique of implementing network protocols (”wire protocols”) directly in Rust, we have unlocked a universal master key.
If we can speak natively with Postgres, we can speak with anything.
This architecture is the cornerstone for what is coming in the next few months. I am already mapping out the bits to replicate this success with other giants. Next on the list is MySQL and its cousin MariaDB; the logic is the same: eliminate the driver and speak binary. Then we will go for the corporate ecosystem with SQL Server, implementing the TDS (Tabular Data Stream) protocol natively.
But we won’t stop at the traditional relational world. Rust’s flexibility allows us to dream of direct connectors for NoSQL databases like MongoDB, where BSON parsing can be massively accelerated if we avoid high-level JSON overhead.
And looking even further, toward the horizon where modern enterprise data lives, this technology opens the doors to the cloud. I am researching the implementation of Arrow Flight SQL, an emerging protocol that would allow pardoX to connect to Snowflake, AWS Redshift, or Databricks and pull millions of compressed rows, flying across the network, directly into your local laptop’s memory, bypassing the slow ODBC/JDBC drivers that have been the industry bottleneck for decades.
This is the real vision behind version 0.1: Independence. I want pardoX to be an autonomous tool. I don’t want it to force you to install 20 dependencies or configure OS drivers that always fail. I want it so that if you have the credentials, you have the data. Fast, clean, and without intermediaries. We have cut the landline cables and switched to direct fiber optics. And once you taste pure speed, it is impossible to go back.
Chapter 5. Persistence at Light Speed: The .prdx Format
In data engineering, there is a painful asymmetry we often ignore: we tend to put all our effort into optimizing reading, but we passively accept that writing is slow. It is the computational equivalent of having a chef who can chop vegetables at lightning speed but takes forever to put them in the pan. It was useless for me to have achieved pardoX reading 640 million rows in 3 minutes if, when it came time to save the processed results, I had to sit and wait 15 minutes while the system struggled to convert that efficient binary data back into a clumsy text format like CSV.
Writing to CSV in 2026 should be considered a crime against hardware. Converting floating-point numbers to text strings, handling quotes, escaping special characters... all of that is wasted CPU time. On the other hand, Parquet is fantastic and is the industry standard, but its encoding complexity (Snappy, dictionaries, RLE) sometimes imposes an overhead that, for fast local work, feels excessive.
I needed a middle ground. I needed a format that was, essentially, an organized memory dump. Thus, the .prdx format was born.
Without going into details that compromise the project’s intellectual property, I can tell you that the design of .prdx is based on two fundamental pillars: RowGroups and the Zstd (Zstandard) compression algorithm. The philosophy is simple: instead of treating the file as a continuous stream, we divide it into massive logical blocks. Each block is compressed independently and asynchronously using Zstd, which offers, in my experience, the world’s best balance between compression ratio and decompression speed.
But the real magic happens in the orchestration. While pardoX processes data in memory, it fills these buffers. At the exact moment a block is ready, a dedicated thread “freezes” it, compresses it, and shoots it to the NVMe disk. There is no complex serialization, no transformation to text. It is the binary state of your data, encapsulated and saved.
The result of this architecture was, honestly, hard to believe the first time I saw it. During stress tests on Linux, we recorded a sustained write throughput of 3.5 GB/s.
Let me repeat that: 3.5 Gigabytes per second.
To put that in context, we are almost completely saturating the theoretical bandwidth of a current-generation NVMe SSD. We are writing data as fast as storage physics allows. Saving a 20GB DataFrame is no longer a coffee break; it is a 6-second blink.
The utility of this goes beyond showing off high numbers. It radically transforms the way we work. In Data Science, work is iterative and prone to error. You do a cleanup, you make a mistake, you break a column, and you have to start over. With traditional tools, that “start over” means reloading the original CSV (10 minutes lost). With the .prdx format, I have implemented what I call “Instant Save Points.”
Imagine you are in a difficult video game and you save your progress before the final boss. That is .prdx. I do a massive load, save to .prdx in seconds, and then I can experiment with aggressive transformations. Did I mess up? It doesn’t matter. I reload the .prdx at 3.5 GB/s and I am back at the starting point instantly. We have turned disk persistence, which used to be the most tedious bottleneck, into a virtual extension of our RAM. I no longer fear closing the laptop or restarting the kernel; my data is safe and ready to come back to life at light speed.
Chapter 6. The Mathematical Engine: Vectorized Arithmetic
Until just a week ago, if I am brutally honest with myself, pardoX was an exceptionally fast messenger. It was the world’s best mailman: it could pick up a data package (CSV) and deliver it in another format (Parquet/Prdx) at breakneck speeds. But a mailman, however fast, doesn’t open the letters or rewrite their content. The real value in Data Engineering doesn’t lie just in moving information, but in transforming it. That is where the “T” in ETL (Extract, Transform, Load) resides.
Without the ability to mutate data, pardoX remained a support tool, a glorified “converter.” To graduate as a full-fledged ETL engine, I needed to teach it math. But not just any kind of math.
In pure Python, if you want to multiply two columns (say, Price * Quantity) in a list of objects, the interpreter has to iterate row by row. For 640 million rows, that’s 640 million individual instructions, 640 million type checks, and 640 million memory allocations. It is the definition of inefficiency.
To solve this, I had to implement a Vectorized Arithmetic Engine in the Rust core. The idea is to leverage the SIMD (Single Instruction, Multiple Data) instructions of modern processors. Instead of telling the CPU: “Take number A and multiply it by B,” we say: “Take this block of 64 A numbers and multiply them by this block of 64 B numbers in a single clock cycle.”
The image speaks for itself. In the Notebook, I execute a massive multiplication between two floating-point columns. The syntax is simple, identical to what you would do in Pandas, but what happens underneath is radically different. There are no Python for loops. The instruction travels straight to the metal.
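The contrast can be approximated in plain Python. This sketch is not true SIMD—that happens inside the Rust core over wide register lanes—but it shows the same principle: store columns as packed C doubles instead of lists of objects, and push the per-element loop out of interpreted bytecode:

```python
import operator
from array import array

# Two "columns" stored as packed C doubles, not lists of PyObjects.
price = array("d", [2.5, 4.0, 1.5])
qty = array("d", [3, 1, 7])

# Row-by-row interpreted loop: one bytecode dispatch and type check
# per element -- the pattern described above.
total_slow = [p * q for p, q in zip(price, qty)]

# Batch form: the loop runs inside C (map + operator.mul). The Rust
# engine goes much further, multiplying whole lanes per instruction,
# but the principle is the same: one instruction over entire blocks.
total_fast = array("d", map(operator.mul, price, qty))

print(list(total_fast))  # [7.5, 4.0, 10.5]
```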
The result is that mathematical operations feel instant, even with hundreds of millions of records. We can now add, subtract, multiply, and divide entire columns at low-level speed.
This functionality is the missing piece of the puzzle. It is the difference between a tool that is only good for making backups and a tool that is good for doing business. Now I can calculate taxes, sales projections, profit margins, or normalize metrics directly in the engine, while data flies from memory to disk.
The vision here is clear: I want pardoX to be able to absorb heavy business logic. I want it so that when you load your data, you aren’t just reading it, but you are already preparing it for final analysis. With vectorized arithmetic, we have ceased to be simple byte carriers. We are now information architects.
Chapter 7. The Last 100 Hours: The "Steering Wheel" and the "Pedals"
If you’ve read this far, you might think that pardoX is already finished, ready to conquer the world. The speed is there, the database connection is a marvel, and the .prdx format flies. But I will be honest with you: what we have right now is a Ferrari engine mounted on a wooden chassis. We have the raw power to go 300 km/h, but we are missing the steering wheel and the pedals to ensure that experience doesn’t end in a fatal crash at the first turn.
In these last 100 hours before launch, my focus has shifted radically. I am no longer looking at the speedometer; I am looking at ergonomics. It is useless to have an engine capable of multiplying columns in nanoseconds if, at the end of the calculation, the user doesn’t have a simple way to assign that result back to the DataFrame.
Currently, pardoX can calculate price * quantity, but the result is left “floating” in memory limbo. The immediate technical challenge—my obsession for the next 48 hours—is to implement mutating assignment logic, Python’s famous __setitem__. It seems trivial to write df['total'] = ..., but in a “Zero-Copy” memory system like ours, this implies major surgery: we have to resize the columnar structure on the fly, allocate new memory without fragmenting the existing one, and align pointers, all without stopping the engine.
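At the Python surface, the contract is small even though the surgery underneath is not. Here is a toy columnar frame—entirely my illustration, not pardoX internals—showing what __setitem__ must guarantee: accept a computed column, verify it aligns with the existing row count, and attach it without copying the other columns:

```python
from array import array

class ToyFrame:
    """Minimal columnar container: one packed array per column."""

    def __init__(self, **columns):
        self.cols = {name: array("d", vals) for name, vals in columns.items()}
        self.nrows = len(next(iter(self.cols.values())))

    def __getitem__(self, name):
        return self.cols[name]

    def __setitem__(self, name, values):
        # Materialize, then enforce alignment before attaching.
        values = array("d", values)
        if len(values) != self.nrows:
            raise ValueError(f"column length {len(values)} != {self.nrows} rows")
        self.cols[name] = values  # pointer swap; other columns untouched

df = ToyFrame(price=[2.5, 4.0], qty=[3, 2])
df["total"] = map(lambda p, q: p * q, df["price"], df["qty"])
print(list(df["total"]))  # [7.5, 8.0]
```

In the real engine the “pointer swap” is where the hard part lives—growing the columnar layout without fragmenting or stalling—but the user-facing behavior above is the target.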
The second missing pedal is the emergency brake for dirty data: fillna. Real-world data is ugly; it comes full of holes, nulls, and garbage. An engine that chokes on a null value is a toy. I am building the cleaning kernels so that pardoX can sweep through millions of rows, detect the gaps, and fill them with sentinel values (like 0 or "N/A") at the same breakneck speed at which it reads.
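The semantics of that cleaning kernel are simple to state, even if the fast path is not. A plain-Python reference version—mine, for illustration; the real engine does this over validity bitmaps in Rust—looks like this:

```python
def fillna(column, sentinel):
    """Single-pass cleaning kernel: replace missing values (None)
    with a sentinel such as 0 or "N/A"."""
    return [sentinel if value is None else value for value in column]

dirty = [10, None, 32, None, 7]
print(fillna(dirty, 0))            # [10, 0, 32, 0, 7]
print(fillna(["a", None], "N/A"))  # ['a', 'N/A']
```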
The goal for January 19th is non-negotiable. I don’t want to hand you just a “fast reader.” I want to hand you the full cycle. The success of version 0.1 will not be measured by how fast it loads, but by whether it allows the sacred flow of Data Engineering to be executed without interruptions: Load -> Clean -> Transform -> Save.
I know long nights and a lot of coffee await me. Building the engine was a physics challenge; building the steering wheel is a challenge of user empathy. But when Monday comes, I want you to feel in command of a complete machine, not a science experiment.
Final Reflection and Call to Action
The Loneliness of the Compiler
Often, when we read about major software launches, we imagine huge teams, glass offices in Silicon Valley, and strategy meetings over specialty coffee. But the reality of pardoX, and of most tools that truly change our daily lives, is quite different. This engine was born in solitude. It was born in the silence of the early morning, illuminated only by the blue glow of a monitor, while the rest of the world slept.
There is an invisible fraternity among us engineers. It is the fraternity of those who refuse to accept things as they are. pardoX didn’t emerge because I wanted to be famous or because I sought to reinvent the wheel for sport. It emerged from anger. It emerged from that exact moment, at 3:00 AM, staring at a progress bar frozen at 40%, knowing that my Python script had run out of memory for the umpteenth time. In that moment of solitary frustration, one has two choices: accept that “slow and heavy” is the norm and resign oneself, or decide that the norm is wrong and build something better.
I chose to build. And in that process, I discovered I wasn’t alone. Every message I have received from you during this series of articles has confirmed that the “loneliness of the compiler” is, in reality, a shared experience. We have all felt the helplessness of inefficient tools. We have all wanted to smash the keyboard when the database driver fails. pardoX is my answer to that collective pain. It is my way of saying: “It doesn’t have to be this way. We can do better. We can make it faster.”
The Release: January 19th
Next Monday, January 19th, I will stop talking and start delivering. I will release pardoX version 0.1 Beta for the Python ecosystem.
I want to be brutally transparent about what this means. It is a Beta. It is not a corporate Gold version polished by a marketing department. It is a racing engine we just rolled out of the shop. It is going to run fast, very fast. It is going to connect to PostgreSQL like there is no tomorrow. But it will also have sharp edges. You are likely to find bugs. You may find edge cases that I didn’t imagine in my lab.
And that is exactly what I need. I am not looking for tourists; I am looking for test co-pilots.
A Promise of Universality: The Multi-Language Roadmap
But the vision for pardoX was never to be “just another Python library.” Data doesn’t live only in Python. Data is the blood running through the veins of legacy systems, web backends, and old enterprise servers.
That is why today I make a public commitment to you. The Python launch on January 19th is just the starting gun.
Exactly two weeks later, I will fulfill the promise I made at the beginning of this journey for the “Forgotten Sector”: I will launch the official version for PHP. Because the engineers supporting the web with Laravel and Symfony also deserve to process millions of rows without blocking the server.
And we won’t stop there. The continuous release roadmap will follow an aggressive pace until the universal suite is complete. We will release the installable binary (CLI in PATH for Windows, Mac, and Linux) with native bindings for:
- JavaScript / Node.js (For the modern backend).
- Golang (For high-performance microservices).
- Java (For the corporate world that never sleeps).
- COBOL (yes, you read that right: there are mainframes moving the world economy, and they deserve modernity too).
If you have any suggestions for another language or environment we are ignoring, my ears are open. This engine is for everyone.
About the Noise and Opinions
On this path, I have learned to filter the noise. The internet is full of opinions on which tool is “the best.” Twitter and Reddit are battlefields where people theoretically discuss whether one language is superior to another.
But honestly, I try not to get distracted by theoretical debates or synthetic benchmark wars. I focus on what builds. If you come to tell me that Rust is better than C++ or vice versa just to be right, I probably won’t answer. I don’t have time for holy wars.
But if you come with an idea, with a strange use case, with a bug you found processing data from a pharmacy in a remote village with an unstable connection… then we are on the same team. If you come with your hands dirty from code and a desire to solve a real problem, this is your home.
Join the Resistance
This is my formal invitation. Join the beta. Help me break this to build it better. Download the engine, throw your worst CSVs at it, connect it to that database no one dares to touch, and tell me what happens.
The code is compiled. The tests have passed. The coffee is ready. See you at the launch.
Alberto Cárdenas.
📬 Contact Me: Tell Me Your Horror Story
I need to get out of my head and into your reality. Send me your use cases and your frustrations.
- Direct Email: iam@albertocardenas.com (I read all emails that provide value or propose solutions).
- LinkedIn: linkedin.com/in/albertocardenasd (Let’s connect. Mention you read the “pardoX” series so I accept you fast).
- X (Official PardoX): x.com/pardox_io (News and releases).
- X (Personal): x.com/albertocardenas (My day-to-day in the trenches).
- BlueSky: bsky.app/profile/pardoxio.bsky.social
See you in the compiler.







