DEV Community

Alberto Cardenas
The 16GB RAM Hell (And Why You Don’t Need a Cluster to Escape It)

Introduction: When Your Laptop Says “Enough”
In the daily trenches of Data Engineering, I constantly face complex technical challenges. But ironically, the highest wall I hit isn’t petabyte-scale Big Data, but “Mid Data.”

I’m talking about that awkward spot where you need to process 50 or 100 million records. It’s a treacherous amount of data: too big for Excel without crashing, yet too small to justify spinning up a Spark cluster and burning through cloud credits.

And then there’s the hardware reality. Not all of us have $5,000 workstations. The reality of the industry — especially for contractors or consultants — is that we are often assigned the standard “Lenovo ThinkPad Core i5 with 16GB of RAM” or, if you’re lucky, an M1 MacBook Air with the same memory.

These machines are great for browsing and emails, but when you try to load a 3GB CSV into Pandas, your RAM evaporates. You try Java, and the JVM eats 4GB just to say “Hello.” And there you are, staring at a frozen screen, thinking: “There has to be a better way to do this without asking my boss for a new server.”

pardoX wasn’t born on a Silicon Valley whiteboard seeking venture capital. It was born on that i5 laptop, out of frustration and curiosity.

I’m not here to sell you vaporware, nor to tell you to throw your current code in the trash. I’m not here to say Python is bad or that your stack is useless. Quite the opposite.

I’m here to tell you the story of how, in trying to solve my own headaches, I ended up building an engine in Rust capable of processing those 50 million rows in seconds, on the very same laptop that used to freeze. This is pardoX: a personal project on the verge of becoming an MVP, designed to give power back to your local machine.

Welcome to the quest for the Universal ETL.

1. I Come Not to Kill Your Stack, But to Save It (The Peace Treaty)

In tech, whenever someone announces a “revolutionary new engine,” experienced engineers instinctively shield their code. We know what comes next: a consultant telling us we must rewrite everything in the trendy language of the month.

That is why the first rule of pardoX is what I call “The Peace Treaty.”

I don’t want you to rewrite your PHP backend in Rust. I don’t want you to migrate your Python automation scripts to Go. And I definitely don’t want you to touch that COBOL mainframe that no one dares look in the eye.

pardoX isn’t here to replace your stack; it’s here to complete it.

The True Story Behind the Name: The Holy Trinity
I have a confession to make: while marketing might say pardoX solves the “paradox” of performance vs. cost (which is true), the name has a much geekier, more personal origin.

If you work with data, you know the two giants in the room:

Pandas: The classic. Flexible, friendly, the Python standard. (The Panda bear).
Polars: The new beast. Fast, written in Rust, efficient. (The Polar bear).
But I always felt one was missing to complete the family. If you’re an animation fan (specifically of We Bare Bears), you know the big brother is missing. The loud leader, the one who tries to keep everyone together, the one constantly trying to connect with the outside world.

We were missing Pardo (Grizzly).

pardoX was born to be that “Grizzly” in data engineering. While Pandas is comfort and Polars is pure analytical speed, pardoX is the engine of connection and brute force. It’s the bear that isn’t afraid to get its hands dirty diving into a legacy PHP server or talking to C++ binaries.

The “X”: The Intersection Factor
If “Pardo” is the muscle (the Rust engine), the “X” is the magic. The “X” represents the universal intersection. It is the point where languages that usually don’t speak to each other converge.

It’s the tool that allows a PHP script (which would normally choke on a 1GB CSV) to pass the baton to the Grizzly engine, let it crush the data in milliseconds using SIMD, and hand the clean result back to Python.

The Paradox We Solve (Even If It’s Not Our Name)
Even though the name comes from the bear, the mission is indeed to solve a historic contradiction in our industry. We are told we can only pick two:

Speed (Brutal performance)
Simplicity (Easy to write)
Low Cost (Runs on modest hardware)
pardoX breaks that triangle. It gives you the speed of a cluster, the simplicity of a local library, and it runs on that cheap laptop the consultancy gave you.

The Real Problem: The Migration Lie
We live in a bubble where it seems “Data Engineering” is just modern Python. But the reality in the trenches is different.

There are banks processing critical transactions in COBOL. There are giant e-commerce sites running on WooCommerce (PHP) with 80-million-row tables that suffer every time someone requests a report.

The industry arrogantly tells them: “Throw it all away and migrate to microservices.”

pardoX tells them: “Keep your stack. Just plug in this engine.”

Imagine strapping a nuclear battery to your old sedan. You keep driving the car you know, but now you have an engine underneath (“The Grizzly”) that processes 50 million rows in 12 seconds.

Welcome to the era of the Grizzly.

2. The Valley of Data Death (Where Laptops Go to Die)

There is a dark place in data engineering. A limbo where traditional tools stop working and “Enterprise” solutions are too expensive or complex to justify.

I call it “The 50 Million Valley of Death.”

It’s that awkward data range: between 50 and 500 million rows. It’s too big to double-click, but too small to justify spinning up a Databricks cluster and burning cloud budget.

And this is where the real nightmare begins, because the battlefield isn’t a 128-core server. It’s your desk.

The Scenario: The “Lenovo i5” Reality
Let’s be honest about hardware. On LinkedIn, everyone posts about Netflix or Uber architectures. But in real life, when you join a consultancy or take on a project as a contractor, they don’t give you the keys to the kingdom.

They hand you a standard corporate laptop:

An Intel Core i5 processor (or if you’re lucky, an M1/M2 Mac).
16 GB of RAM (which is actually 12 GB because Chrome and Teams eat the rest).
An SSD that is already half full.
That is your weapon. And with that weapon, you are asked to process the last 5 years of sales history.

The Pain: Pick Your Poison
When you try to cross this valley with your 16GB laptop, you face three fatal destinies:

Death by Excel: You try to open the file. Excel hits 1,048,576 rows and tells you: “That’s as far as I go.” The rest of the data is lost in the abyss. Game over.
Death by Spark (The Bazooka for a Mosquito): You decide to get serious. You install Spark locally. First, you have to install Java. Then configure Hadoop environment variables (winutils.exe on Windows, a classic headache). Finally, you run a simple spark.read.csv(). The JVM (Java Virtual Machine) starts up and swallows 4GB of your RAM just to say "Hello." Your laptop fan starts sounding like a jet turbine. You've spent more time configuring the environment than solving the problem.
Death by Memory (MemoryError): You go back to your trusty Python and Pandas. df = pd.read_csv('giant_sales.csv') You wait... you wait... the progress bar freezes. The mouse stops responding. Your screen goes white. Boom. MemoryError. Or worse, the OS kills the process (OOM Killer) to save itself.
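For the record, the classic band-aid for that third death is chunked reading: you survive, but you pay in speed and code complexity. A minimal pandas sketch, with a small in-memory CSV standing in for the multi-gigabyte file:

```python
import io

import pandas as pd

# Simulate a "giant" CSV (in reality this would be a multi-GB file on disk).
csv_data = "order_id,amount\n" + "\n".join(f"{i},{i % 100}" for i in range(10_000))

# Stream the file in fixed-size chunks so peak RAM stays bounded,
# instead of materializing one huge DataFrame all at once.
total = 0
for chunk in pd.read_csv(io.StringIO(csv_data), chunksize=1_000):
    total += chunk["amount"].sum()

print(total)  # 495000 -- computed without ever holding all rows in memory
```

It works, but now your "one-liner" is a loop, every aggregation has to be rewritten as an incremental one, and you are still bound by single-threaded Python.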
The Mission: Respect RAM Like It’s Gold
This is where the obsession for pardoX was born.

I knew there were incredible tools out there. Polars is fantastic, the current gold standard, but in my tests on limited machines, sometimes its execution strategy or certain complex joins can be aggressive with memory, leading to spikes that a 16GB laptop just can’t handle.

DuckDB is a technological marvel, but it is fundamentally an OLAP database. I didn’t want a database where I had to “load” data to then query it; I wanted a pipeline, a processing tube that let data pass through without holding onto it.

We needed an engine that understood a fundamental truth: On an engineering laptop, RAM is not a resource, it is a treasure.

The mission for pardoX became clear: Build an engine that could process files larger than the available physical memory, without touching the disk (swapping) and without making your computer feel like it’s about to take off.

3. Anatomy of Speed (Rust, SIMD & Zero-Copy)

When I tell someone that pardoX can make PHP process data at the same speed as C++, they look at me like I’m crazy. “PHP is slow,” they say. “Python has the GIL,” they argue.

And they are right. If you try to write a for loop in PHP to iterate over 50 million rows, you'll grow old waiting.

But here is the secret: pardoX doesn’t make Python fast. pardoX makes Python irrelevant for the 12 seconds that matter.

The Approach: The Plug-in Nuclear Battery (Rust & FFI)
Imagine you have a cheap plastic remote control. That is your Python or PHP script. It’s light, easy to use, but if you hit it against a wall, it breaks.

Now imagine that remote control drives a 50-ton industrial excavator. That excavator is Rust.

pardoX works on the principle of Foreign Function Interface (FFI). It’s not just another library that “runs on top” of your language; it’s a native binary, compiled to bare metal, that lives outside your host language’s memory management.

When you call pardox.load(), your language (Python/PHP) is just sending a signal: "Hey, wake up the beast and tell it to eat this file."

At that instant, control passes to the Rust binary. Your language’s “Garbage Collector” stops getting in the way. There is no GIL (Global Interpreter Lock). There are only machine instructions executing at light speed. Your script just waits for the “Ready” signal to receive the results.
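This hand-off is the same mechanism Python uses to call any native library. As an illustration of the mechanics only (not the actual pardoX API, which isn’t public yet), here is Python crossing the FFI boundary into the C standard library via `ctypes` — once the call starts, the interpreter just waits for native code to return:

```python
import ctypes
import ctypes.util

# Load the C standard library -- a stand-in for a compiled Rust engine (.so/.dylib/.dll).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the foreign function's signature so ctypes marshals arguments correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# The line below executes native machine code; Python's object model,
# garbage collector, and GIL play no part in the work itself.
length = libc.strlen(b"50 million rows")
print(length)  # 15
```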

SIMD: Eating by the Mouthful, Not the Grain
How do we process gigabytes in seconds? Enter SIMD (Single Instruction, Multiple Data).

Imagine you have a bowl of rice (your data) and you have to eat it all.

The Traditional Approach: You eat grain by grain. You take one grain (a number), process it, swallow. You take the next one. This is what most traditional for loops do.
The SIMD Approach (Vectorization): You use a giant spoon. In a single motion, you scoop up 64 grains and process them all at the same time.
pardoX uses your CPU’s modern instructions (AVX2, NEON on Mac M1) to “bite” data in vector blocks. Instead of adding numbers one by one, we add entire columns in a single clock cycle. It is brute force applied with surgical precision.
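You can feel the spoon-versus-grain difference from Python itself with NumPy, whose compiled kernels lean on those same vector units. A toy comparison (timings vary by machine; this illustrates the principle, it is not a pardoX benchmark):

```python
import time

import numpy as np

data = np.random.rand(10_000_000)

# Grain by grain: one interpreted Python operation per element.
t0 = time.perf_counter()
total_scalar = 0.0
for x in data[:100_000]:  # only a 1% slice -- the full loop would take far longer
    total_scalar += x
t_scalar = time.perf_counter() - t0

# By the mouthful: one vectorized call that sweeps the entire array.
t0 = time.perf_counter()
total_vector = data.sum()
t_vector = time.perf_counter() - t0

print(f"scalar (1% of data): {t_scalar:.4f}s | vectorized (100%): {t_vector:.4f}s")
```

On a typical laptop, the vectorized sum over all ten million elements finishes in a fraction of the time the interpreted loop needs for its 1% slice.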

The Crown Jewel: Zero-Copy (The Trade Secret)
This is where I have to be careful. I’ve spent months fine-tuning this and, honestly, I don’t want to give away the solution to engineering teams at other tools who are still struggling with RAM consumption.

The biggest bottleneck in ETL isn’t calculation, it’s memory.

Traditionally, when a tool reads a CSV:

1. It reads bytes from disk to a buffer.
2. It copies those bytes to convert them to Strings.
3. It copies those Strings to clean them.
4. It copies again for the output format.
Every copy duplicates RAM consumption. That’s why your 16GB laptop explodes with a 5GB file.

pardoX uses a radical “Zero-Copy” architecture.

Without going into the low-level details (which is where our competitive advantage lies), the philosophy is this: We never move data unless it is a matter of life and death.

Instead of “loading” the file into RAM, pardoX “looks” at it through a smart window. We manipulate pointers and references to the raw data, transforming it “on the fly” as it travels from source disk to target disk.

It’s like editing a movie. You don’t need to print every frame on paper to edit it. You just need a digital preview.

The Result: We can process a 50GB file on a laptop with 8GB of RAM, because we never try to fit the 50GB into memory at the same time. Data flows through pardoX like water through a high-pressure pipe, without stagnating.
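The “smart window” idea maps naturally onto OS-level memory mapping. Here is a simplified sketch of the general technique (the general one, not pardoX’s internals, which I’m keeping private): the file is never copied into Python objects; we scan the kernel’s page cache through a zero-copy `memoryview`:

```python
import mmap
import os
import tempfile

# Create a sample file standing in for a multi-GB CSV.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"A" * 1000 + b"B" * 24)

with open(path, "rb") as f:
    # Map the file: the OS pages bytes in on demand; nothing is loaded up front.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        view = memoryview(mm)  # a zero-copy window over the mapping
        count_b = sum(1 for byte in view if byte == ord("B"))
        view.release()  # release the buffer before the mapping closes

print(count_b)  # 24
```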

4. The .prdx Format — Your High-Speed Bridge

In software engineering, there is an unwritten rule: “Never invent your own file format.” Standards already exist. Use JSON, use CSV, use Parquet. Inventing something new is usually a symptom of arrogance or misunderstanding the problem.

So, why on earth did we create .prdx?

Believe me, I tried not to. But I realized that existing tools confused two very different concepts: Storage and Transit.

The Difference: Archiving vs. Moving
Imagine you are moving houses.

Parquet is like packing for long-term storage. You fold clothes perfectly, vacuum-seal them to save space, label the box, and tape it shut. It is efficient for keeping (low space usage), but slow to pack (CPU intensive) and slow to unpack.
The .prdx format is like throwing your clothes into the trunk of your car to go to your partner’s house. You don’t fold, you don’t compress, you don’t label. You just throw it in and drive. It takes up more space, yes, but loading and unloading time is practically zero.
Parquet is designed for Cold Storage (S3, Data Lakes). Its priority is compression.
.prdx is designed for Hot Transit (RAM to Disk). Its priority is write speed.

The Innovation: A Structured Memory Dump
Technically, .prdx is not a traditional file format. It is essentially an optimized memory dump.

When pardoX is processing data in RAM, that data has a specific binary structure (thanks to Rust). To create a Parquet file, we would have to take that structure, serialize it, apply Snappy or Gzip compression, and encode it with complex schemas. That costs valuable CPU cycles.

To create a .prdx, pardoX simply takes what it has in memory and dumps it onto the disk exactly as is.

The result?

Writing a 1GB Parquet file can take 10–20 seconds of CPU time.
Writing a 1GB .prdx takes however long your hard drive takes to write 1GB (sometimes less than a second on an NVMe SSD).
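Conceptually, a “structured memory dump” means the on-disk bytes are the in-memory bytes. The real `.prdx` layout isn’t published, but the idea can be sketched with NumPy: dump a typed buffer exactly as it sits in RAM, then map it back with no parsing step at all (the `.prdx` extension here is just a filename for illustration):

```python
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.mkdtemp(), "sales.prdx")

# In-memory columnar data with a fixed binary layout.
amounts = np.arange(1_000_000, dtype=np.int64)

# "Dump": write the raw buffer as-is -- no encoding, no compression,
# no schema serialization. Write time is essentially disk bandwidth.
amounts.tofile(path)

# "Load": memory-map the same bytes back; no parsing, just reinterpretation.
restored = np.memmap(path, dtype=np.int64, mode="r")

print(int(restored[-1]))                    # 999999
print(os.path.getsize(path) == amounts.nbytes)  # True: byte-for-byte identical
```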
The Tactical Advantage: The Polyglot Bridge
This is where the “X” in pardoX (the intersection) shines. The .prdx format acts as a universal “pause button” or an exchange point between languages.

Imagine this real workflow we implemented:

1. PHP (The Gatherer): PHP is great for connecting to legacy web systems (WordPress/Magento), but terrible at processing data. We use PHP only to extract raw data and dump it to .prdx. PHP doesn’t process, it just transports.
2. The Bridge: The .prdx file sits on the disk. It is a perfect frozen state of the pipeline.
3. Python (The Analyst): Milliseconds later, a Python script detects the file. Since .prdx is already binary-structured, Python doesn’t have to “parse” a CSV (which is slow and error-prone). It simply maps the file into memory and starts working instantly.
We eliminated the cost of serialization (converting to JSON/CSV text) and deserialization (converting back to objects).

With .prdx, we allow PHP and Python to share memory via the disk, enabling hybrid architectures that were previously impossible due to slowness.

5. David vs. Goliaths (The Battle of the Benchmarks)

In God we trust; all others must bring data.

It’s useless to talk about “Zero-Copy” or “Rust” if, at the end of the day, the script takes 10 minutes. So we took pardoX to the gym to pit it against the industry heavyweights.

But to make this fair, we didn’t use a cloud server with 128GB of RAM. We used the “Consultant Standard”: a Laptop i5 with 16GB of RAM. If it doesn’t work here, it doesn’t work in the real world.

The Ring
Hardware: Laptop Intel Core i5 / Mac M1 (16GB RAM).
The Challenge: Ingest, process, and save the “Consolidated Sales” dataset.
The Opponent: 50 Million rows (1.7 GB in raw CSV).
The bell rings.

Round 1: The Classics (Pandas and Spark)
There wasn’t much of a fight here. It was a massacre.

Pandas (Python):
The champion of light analysis entered the ring and… fainted in the first second.
Result: Instant MemoryError. Pandas tried to load the entire CSV into RAM, doubling its size due to Python object overhead. Technical K.O.
Apache Spark (Local):
The corporate giant. Spark is powerful, but on a single laptop, it’s like trying to park a semi-truck in your living room.
Result: It took 45 seconds just to start the session. Then, it fought against the Java Garbage Collector. Finally, it completed the task in minutes, or crashed due to Java Heap Space depending on the config. Too much overhead for “just” 1.7GB.
Round 2: The Moderns (DuckDB and Polars)
Here is where it gets serious. These are modern, optimized, brilliant engines.

DuckDB:
An incredible SQL engine. Robust like a tank.
Time: ~31 seconds.
Analysis: DuckDB is very fast at reading, but writing the final result (Parquet) costs it a bit more because it has to serialize from its internal database format. Solid, but not instant.
Polars:
The current King of Speed. The “Gold Standard.”
Time: ~13 seconds.
Analysis: Impressive. Polars flies. It is the benchmark against which we all measure ourselves.
Round 3: The Challenger (pardoX v0.1)
The moment of truth arrived. We ran the pardoX binary.

pardoX (v0.1):
Time: 15–20 seconds.
The Conclusion: Why Celebrate Second Place?
You might look at the numbers and say: “Hey, Polars is still 2 to 4 seconds faster.” And you’re right. Polars is an engineering masterpiece and has years of development head start.

But here is the crucial nuance:

We are breathing down the leader’s neck:
For a v0.1 (Beta) version, being just 2 seconds behind the world leader is a monumental technical achievement. We are in the same league of “Absurd Speed.”
Memory Stability:
During the test, Polars had aggressive RAM spikes to achieve that speed. pardoX remained flat and stable, thanks to our strict streaming approach. On a machine with 8GB of RAM, those Polars spikes could kill the process; pardoX would survive.
The Universal Victory:
Here is the real K.O.: Polars is a Python/Rust library. If your system is in PHP, Node.js, or Ruby, you can’t easily use Polars.
pardoX is an agnostic binary. Those 15–17 seconds are available to any language capable of spawning a process.
We didn’t win by being the fastest in the photo finish (yet). We won because we brought Formula 1 speed to cars that previously couldn’t even enter the race.

6. Real Universality — The “Last Mile” Challenge

There is a moment in every Data Engineer’s life that is devastating.

You just optimized an incredible pipeline. You processed 50 million rows in 15 seconds. You feel like a silicon god. You send the result to your boss or the client.

Five minutes later, you get an email:
“Hey, I tried opening this in Excel/PowerBI and I’m getting weird symbols. Also, the dates are giant numbers. Can you check it?”

In that instant, your processing speed is worthless.
Welcome to the Last Mile Challenge.

The Problem: Your Boss Lives in Excel
It’s useless to process at light speed if the result is incompatible with mortal tools. The real world doesn’t use Jupyter Notebooks to make decisions; it uses Excel, PowerBI, and Tableau.

Most high-performance engines (like Spark) assume the final consumer will be another engineering system. But pardoX had to be different. pardoX had to deliver data ready for human consumption.

War Stories: In the CSV Trenches
To achieve this, we had to get our hands dirty. We had to fight the demons of legacy formats.

1. The Invisible Enemy (The Carriage Return \r)

Modern systems (Linux/Mac) use \n to say “new line.” But the corporate world runs on Windows and old banking systems that use \r\n.

During early tests, pardoX was flying, but the output came out broken. Rows eating other rows. Shifted columns.
We discovered that many CSVs generated by legacy systems (or saved in old Excel) left “orphan” \r characters inside text fields. Traditional Rust parsers would explode or cut the line prematurely.

The Solution: We had to write a custom byte reader (practically at the assembly level) that could “smell” the difference between a real end-of-line \r and a dirty \r inside a product description. Now, pardoX cleans the byte stream before even attempting to parse it.
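The clean-before-parse idea boils down to this: treat `\r` as a line break only when it is followed by `\n`, and drop it when it appears mid-field. A simplified sketch (a real parser must also respect quoted fields, which this ignores):

```python
def clean_crlf(raw: bytes) -> bytes:
    """Normalize line endings: keep real \\r\\n breaks, drop orphan \\r bytes."""
    out = bytearray()
    i = 0
    while i < len(raw):
        b = raw[i]
        if b == 0x0D:  # carriage return
            if i + 1 < len(raw) and raw[i + 1] == 0x0A:
                out.append(0x0A)  # \r\n -> \n: a real end of line
                i += 2
                continue
            i += 1  # orphan \r inside a field: silently dropped
            continue
        out.append(b)
        i += 1
    return bytes(out)

# A row whose description field contains a dirty, orphan \r.
dirty = b"id,desc\r\n1,blue\rwidget\r\n2,red\r\n"
print(clean_crlf(dirty))  # b'id,desc\n1,bluewidget\n2,red\n'
```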

2. The Time Traveler (Excel Dates)

This was the biggest headache. In engineering, a date is a Timestamp (seconds since 1970). In Excel, a date is a floating-point number (days since January 1, 1900).

When we exported to standard Parquet, PowerBI read the dates as 1672531200. The user saw that and screamed.
“Why is my sales date 1.6 billion?”

The Solution: We had to manually implement the Logical Types of the Parquet specification. It wasn’t enough to save the number; we had to inject metadata into the binary file header to scream at PowerBI: “HEY! This 64-bit integer is not a number, IT IS A DATE! Treat it with respect.”

The Victory: The Purifying Filter
Today, pardoX is not just a speed engine; it is a sewage treatment plant.

Input: Dirty, poorly encoded CSVs (ISO-8859-1 mixed with UTF-8), with hidden characters and inconsistent date formats from COBOL or PHP.
Process: The Rust engine normalizes, cleans, and standardizes at violent speeds.
Output: An immaculate Parquet file, with strict data types, that your boss can drag into PowerBI and see the charts instantly.
That is real universality. It’s not just connecting programming languages; it’s connecting complex engineering with business reality.

7. The Launch — pardoX v0.1 Beta

We’ve talked about the paradox, the pain of dying laptops, and the engineering behind the speed. But at the end of the day, pardoX isn’t just about saving seconds.

It’s about Freedom.

It’s the freedom to accept a 50-million-row project knowing you can process it at your favorite coffee shop, on your regular laptop, while sipping a latte. It’s the freedom of not depending on budget approval for a Spark cluster. It’s the freedom to keep using PHP or your legacy system, but with a Ferrari engine under the hood.

The Road Ahead: Critical Roadmap
Launching v0.1 is just the first step. As you read this, I am already working on the following critical milestones:

Breaking the 12-Second Barrier:
We are at 15–17 seconds. I know we can get down to 12. We are optimizing SIMD vectorization to squeeze every last drop out of M1 processors and modern Intels. It’s a technical goal, almost a sport, but we will get there.
The Promise of Universality (Official Bindings):
Currently, integration is via processes (CLI). The next step is to create native “bridges” for PHP and Node.js, allowing pardoX to feel like a natural extension of the language, not an outsider.
THE ANNOUNCEMENT
I know January is a tough month. You come back from holidays to find a mountain of accumulated data from the year-end close.

That’s why I made a decision:
“You don’t have to wait for 2026. I’m working through the holidays so you don’t have to fight with Spark in January.”

While others rest, I will be compiling, testing, and polishing the binary so it’s ready when you return to the office.

📅 Launch Date: Monday, January 19, 2025
On that day, I will release:

The compiled pardoX v0.1 Beta binary (Windows, Mac, Linux).
The initial “Getting Started” documentation.
Integration examples with Python and PHP.

8. The Visual Evidence — Numbers Don’t Lie

Saying we are fast is marketing. Showing the terminal is engineering.

In this chapter, we open the testing lab. No tricks, no hot cache, no $10,000 cloud servers. Just a laptop, 50 million rows, and a stopwatch.

The Proving Ground: The Dataset
To make the test brutally honest, we built a scenario that simulates the real pain of a month-end close:

Volume: 50 CSV Files.
Size: 1 Million rows per file (50 Million total).
Weight: ~1.7 GB raw.
The Challenge: Read, Consolidate, Process, and Write to Parquet/Native Formats.
The Curiosity: For the COBOL test, we converted these CSVs into a flat .dat file (Fixed Width), the native format of mainframes.
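If you want to reproduce a scaled-down version of this proving ground, a few lines of Python will generate the file layout (column names are illustrative; raise `FILES` and `ROWS` to match the full 50 × 1M scenario):

```python
import csv
import os
import random
import tempfile

FILES, ROWS = 3, 1_000  # scaled down from 50 files x 1M rows each
outdir = tempfile.mkdtemp()

for n in range(FILES):
    with open(os.path.join(outdir, f"sales_{n:02d}.csv"), "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["order_id", "branch", "amount"])
        for i in range(ROWS):
            # Random sale amounts between 1 and 500 currency units.
            writer.writerow([i, f"branch_{n}", round(random.uniform(1, 500), 2)])

print(sorted(os.listdir(outdir)))  # ['sales_00.csv', 'sales_01.csv', 'sales_02.csv']
```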

1. The Battle of the Engines

The Robust Standard: DuckDB
We started with DuckDB, a tool we deeply admire. As the evidence shows, DuckDB got the job done in ~31 seconds.
The Verdict: It’s rock-solid, but the serialization cost when writing the final file takes a toll. It’s a tank: unstoppable, but not instant.

The Slow Giant: Apache Spark
Then, we brought the elephant into the room: Spark (Local).
The result was painful: 181 seconds.
The Verdict: Using Spark for 1.7GB is like using an 18-wheeler to go buy milk. The JVM overhead and local cluster setup eat up performance.

The Current King: Polars
The gold standard. Polars smashed the stopwatch with ~13 seconds.
The Verdict: It is the number to beat. Polars is pure Rust efficiency. If you only use Python and don’t need to leave that ecosystem, it is the best option today.

The Challenger: pardoX (Optimized)
Here is where we get excited. With the latest “Zero-Copy” adjustments and SIMD vectorization, pardoX clocked in at ~20 seconds.
The Analysis:

We are 11 seconds faster than DuckDB. That is 35% faster than one of the most popular engines in the world.
We are only 7 seconds behind Polars.
But the key isn’t the seconds, it’s the memory. pardoX maintained a flat RAM profile, without the aggressive spikes that “Eager” engines sometimes require.

2. Universality in Action (PHP + JS)

This is where pardoX stops competing and starts changing the game. We created a simple Web Interface (UI) with PHP and JavaScript.

Imagine you own a hardware store chain. You have 50 branches uploading their daily sales (CSVs) to a cheap PHP server. Normally, processing that would crash your server.
With pardoX integrated into the backend, the UI processed and generated the consolidated report in 25 seconds.

The Impact: A humble web server doing the work of a Big Data cluster, without blocking the webpage for the user.

3. The Crown Jewel: Native COBOL

This is the test I am most proud of. We entered the territory of dinosaurs. No intermediaries, no complex translation layers.

We made a COBOL program call the pardoX engine directly.

Input: .dat file (Mainframe).
Process: pardoX (via FFI).
Output: Modern .prdx file.
We did it. COBOL, a language from 1959, generating high-performance data formats from 2025. This is the ultimate bridge between the past and the future.

4. The Cherry on Top: The “Fake PostgreSQL” Gateway

Finally, the future. What good is data if you can’t see it in Tableau or PowerBI?

We developed an experimental interface: pardoX Gateway.
This tool tricks your BI tools into believing they are connected to a real PostgreSQL database. But behind the scenes, there is no database; pardoX is reading the .prdx files on the fly.

You connect PowerBI to port 9876, and pardoX serves the data instantly. No additional ETLs, no loading data into a Data Warehouse. Just drag and drop.

The Open Invitation — Beyond the Code
We have reached the end of this series, but the beginning of the journey.

Before I close the editor and go back to compiling, I want to be very clear about something. In tech, we sometimes fall into tribalism: “If you use X, you are my enemy.” “If you don’t use Y, you are obsolete.”

pardoX was not born to minimize the work of giants.

I deeply admire what Ritchie Vink has done with Polars; he has redefined what is possible in Python. I immensely respect the robustness the DuckDB team has brought to local SQL. And, of course, Spark remains the undisputed king when you have terabytes of data and a real cluster.

I am not here to tear down their statues. I stand on their shoulders to look towards a corner that they, by their very nature and scale, have had to overlook.

The Forgotten Sector
My fight is for that “forgotten sector.” It’s the engineers maintaining 20-year-old banking systems. It’s the PHP developers holding up an entire country’s e-commerce. It’s the analysts with no cloud budget whose “Data Lake” is a folder full of CSVs on a corporate laptop.

They deserve speed too. They deserve modern tools too. pardoX is my love letter to that sector.

A Note on Feedback
On this path, I have learned to filter the noise. The internet is full of opinions on which tool is “the best.” But honestly, I try not to get distracted by theoretical debates or benchmark wars.

I focus on what builds.

If you come to tell me Rust is better than C++ or vice versa, I probably won’t answer. But if you come with an idea, with a weird use case, with a bug you found processing data from a pharmacy in a remote town… then we are on the same team.

Join the Resistance (The Constructive One)
I am opening the doors. If this series resonated with you, if you have felt the pain of a frozen screen or the frustration of incompatibility, I invite you not to be just a spectator.

Do you have an idea to improve the date parser?
Do you want to help build the Node.js bindings?
Do you simply want to test the beta and break it with your data?
Let’s talk. Engineer to engineer. No corporate intermediaries.

📬 Contact Me
Direct Email: iam@albertocardenas.com (I read all emails that add value or propose solutions).
LinkedIn: linkedin.com/in/albertocardenasd (Let’s connect. Mention you read the “pardoX” series so I can accept you quickly).
Thank you for reading this far. See you in the compiler. Alberto Cárdenas.
