<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Borivoj Grujicic</title>
    <description>The latest articles on DEV Community by Borivoj Grujicic (@borivoj_grujicic_4d81cca0).</description>
    <link>https://dev.to/borivoj_grujicic_4d81cca0</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3426078%2Fa5e43bce-a1f3-4842-9264-8a0c7198e2fc.jpg</url>
      <title>DEV Community: Borivoj Grujicic</title>
      <link>https://dev.to/borivoj_grujicic_4d81cca0</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/borivoj_grujicic_4d81cca0"/>
    <language>en</language>
    <item>
      <title>Elusion v8.3.0 is out!</title>
      <dc:creator>Borivoj Grujicic</dc:creator>
      <pubDate>Sun, 05 Apr 2026 19:25:38 +0000</pubDate>
      <link>https://dev.to/borivoj_grujicic_4d81cca0/elusion-v830-is-out-5acl</link>
      <guid>https://dev.to/borivoj_grujicic_4d81cca0/elusion-v830-is-out-5acl</guid>
<description>&lt;p&gt;The Data Engineering library Elusion now has a built-in Medallion Architecture pipeline framework (Bronze / Silver / Gold) for building production data pipelines in pure Rust.&lt;br&gt;
No Python. No dbt. No Airflow.&lt;br&gt;
✅ DAG-based execution with parallel processing &lt;br&gt;
✅ Auto materialization to Parquet or Delta per layer &lt;br&gt;
✅ Microsoft Fabric / OneLake ready &lt;br&gt;
✅ Config-driven — elusion.toml + connections.toml &lt;br&gt;
✅ One file per model, clean separation of layers&lt;br&gt;
Single binary. Docker ready. Compile and ship.&lt;/p&gt;

&lt;p&gt;👇 Download the Starter Template Project from the links below! 👇&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://crates.io/crates/elusion" rel="noopener noreferrer"&gt;Crates.io&lt;/a&gt;&lt;br&gt;
🔗 &lt;a href="https://github.com/DataBora/elusion" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;br&gt;
🚀 &lt;a href="https://github.com/DataBora/elusion-project-startup" rel="noopener noreferrer"&gt;Starter Template&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04ru8nuoh9wo7hqe9fu3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04ru8nuoh9wo7hqe9fu3.jpg" alt=" " width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>dataengineering</category>
      <category>rust</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Elusion v8.0.0 is the best END-TO-END Data Engineering library written in RUST</title>
      <dc:creator>Borivoj Grujicic</dc:creator>
      <pubDate>Tue, 21 Oct 2025 14:53:51 +0000</pubDate>
      <link>https://dev.to/borivoj_grujicic_4d81cca0/elusion-v800-is-the-best-end-to-end-data-engineering-library-writen-in-rust-1b9g</link>
      <guid>https://dev.to/borivoj_grujicic_4d81cca0/elusion-v800-is-the-best-end-to-end-data-engineering-library-writen-in-rust-1b9g</guid>
      <description>&lt;p&gt;Elusion v8.0.0 just dropped with something I'm genuinely excited about: native SQL execution and a CopyData feature.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Functional API still going strong:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Write queries however you want. Unlike SQL, PySpark, or Polars, you can chain operations in ANY order. No more "wait, does filter go before group_by or after?" Just write what makes sense:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use elusion::prelude::*;
#[tokio::main]
async fn main() -&amp;gt; ElusionResult&amp;lt;()&amp;gt; {
    let sales = CustomDataFrame::new("sales.csv", "sales").await?;

    let result = sales
        .select(["customer_id", "amount", "order_date"])
        .filter("amount &amp;gt; 1000")
        .agg(["SUM(amount) AS total", "COUNT(*) AS orders"])
        .group_by(["customer_id"])
        .having("total &amp;gt; 50000")
        .order_by(["total"], ["DESC"])
        .limit(10)
        .elusion("top_customers")
        .await?;

    result.display().await?;
    Ok(())
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Raw SQL when you need it&lt;/strong&gt; - Sometimes you just want to write SQL. Now you can:&lt;/p&gt;

&lt;p&gt;There is a small sql! macro that simplifies usage and avoids writing &amp;amp;[&amp;amp;df] for each DataFrame included in the query.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use elusion::prelude::*;
#[tokio::main]
async fn main() -&amp;gt; ElusionResult&amp;lt;()&amp;gt; {
    let sales = CustomDataFrame::new("sales.csv", "sales").await?;
    let customers = CustomDataFrame::new("customers.csv", "customers").await?;
    let products = CustomDataFrame::new("products.csv", "products").await?;

    let result = sql!(
        r#"
        WITH monthly_totals AS (
            SELECT 
                DATE_TRUNC('month', s.order_date) as month,
                c.region,
                p.category,
                SUM(s.amount) as total
            FROM sales s
            JOIN customers c ON s.customer_id = c.id
            JOIN products p ON s.product_id = p.id
            GROUP BY month, c.region, p.category
        )
        SELECT 
            month,
            region,
            category,
            total,
            SUM(total) OVER (
                PARTITION BY region, category 
                ORDER BY month
            ) as running_total
        FROM monthly_totals
        ORDER BY month DESC, total DESC
        LIMIT 100
        "#,
        "monthly_analysis",
        sales,
        customers,
        products
    ).await?;

    result.display().await?;
    Ok(())
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;COPY DATA:&lt;/strong&gt;&lt;br&gt;
Now you can copy data between files in true streaming fashion:&lt;/p&gt;

&lt;p&gt;You can do it in two ways: with a custom configuration, or with simplified file conversion.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Custom Configuration
copy_data(  
    CopySource::File {
        path: "C:\\Borivoj\\RUST\\Elusion\\bigdata\\test.json",
        csv_delimiter: None,
    },
    CopyDestination::File {  
        path: "C:\\Borivoj\\RUST\\Elusion\\CopyData\\test.csv",
    },
    Some(CopyConfig {
            batch_size: 500_000, 
            compression: None,
            csv_delimiter: Some(b','), 
            infer_schema: true,  
            output_format: OutputFormat::Csv,
    }),
).await?;

// Simplified file conversion
copy_file_to_parquet(
    "input.json",
    "output.parquet",
    Some(ParquetCompression::Uncompressed), // or Snappy
).await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are hearing about Elusion for the first time, here are some of its core features:&lt;/p&gt;

&lt;p&gt;🏢 Microsoft Fabric - OneLake connectivity&lt;/p&gt;

&lt;p&gt;☁️ Azure Blob Storage connectivity&lt;/p&gt;

&lt;p&gt;📁 SharePoint connectivity&lt;/p&gt;

&lt;p&gt;📡 FTP/FTPS connectivity&lt;/p&gt;

&lt;p&gt;📊 Excel file operations&lt;/p&gt;

&lt;p&gt;🐘 PostgreSQL database connectivity&lt;/p&gt;

&lt;p&gt;🐬 MySQL database connectivity&lt;/p&gt;

&lt;p&gt;🌐 HTTP API integration&lt;/p&gt;

&lt;p&gt;📈 Dashboard data visualization&lt;/p&gt;

&lt;p&gt;⚡ CopyData high-performance streaming operations&lt;/p&gt;

&lt;p&gt;Built-in formats: &lt;strong&gt;CSV, JSON, Parquet, Delta Lake, XML, EXCEL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Plus:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Redis caching&lt;/strong&gt; + in-memory query cache&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pipeline scheduling&lt;/strong&gt; with tokio-cron-scheduler&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Materialized views&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To learn more about the crate, visit: &lt;a href="https://github.com/DataBora/elusion" rel="noopener noreferrer"&gt;https://github.com/DataBora/elusion&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dataengineering</category>
      <category>rust</category>
      <category>python</category>
    </item>
    <item>
      <title>Elusion Celebrates 50K+ Downloads: A Modern Alternative to Pandas and Polars for Data Engineering</title>
      <dc:creator>Borivoj Grujicic</dc:creator>
      <pubDate>Wed, 17 Sep 2025 08:53:36 +0000</pubDate>
      <link>https://dev.to/borivoj_grujicic_4d81cca0/elusion-celebrates-50k-downloads-a-modern-alternative-to-pandas-and-polars-for-data-engineering-2b85</link>
      <guid>https://dev.to/borivoj_grujicic_4d81cca0/elusion-celebrates-50k-downloads-a-modern-alternative-to-pandas-and-polars-for-data-engineering-2b85</guid>
      <description>&lt;p&gt;The Rust data ecosystem has reached another significant milestone, with the Elusion DataFrame Library surpassing 50,000 downloads on crates.io. As data engineers and analysts who love SQL syntax continue seeking alternatives to Pandas and Polars, Elusion has emerged as a compelling option that combines the familiarity of DataFrame operations with unique capabilities that set it apart from the competition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Makes Elusion Different&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While Pandas and Polars excel in their respective domains, Elusion brings several distinctive features that address gaps in the current data processing landscape:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Native Multi-Format File Support Including XML&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While Pandas and Polars support common formats like CSV, Excel, Parquet, and JSON, Elusion goes further by offering native XML parsing. Instead of requiring external libraries and manual parsing logic for XML files, Elusion automatically analyzes the XML file structure and chooses the optimal processing strategy:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// XML files work just like any other format
let xml_path = "C:\\path\\to\\sales.xml";
let df = CustomDataFrame::new(xml_path, "xml_data").await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;2. Flexible Query Construction Without Strict Ordering&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unlike DataFrame libraries that enforce specific operation sequences, Elusion allows you to build queries in ANY order that makes sense to your logic. Whether you want to filter before selecting, or aggregate before grouping, Elusion ensures consistent results regardless of function call order.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Write operations in the order that makes sense to you
sales_df
    .filter("amount &amp;gt; 1000")
    .join(customers_df, ["s.CustomerKey = c.CustomerKey"], "INNER")
    .select(["c.name", "s.amount"])
    .agg(["SUM(s.amount) AS total"])
    .group_by(["c.region"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The same result is achieved with a different function order:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sales_df
    .join(customers_df, ["s.CustomerKey = c.CustomerKey"], "INNER")
    .select(["c.name", "s.amount"])
    .agg(["SUM(s.amount) AS total"])
    .group_by(["c.region"])
    .filter("amount &amp;gt; 1000")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;3. Built-in External Data Source Integration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While Pandas and Polars require additional libraries for cloud storage and database connectivity, Elusion provides native support for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Azure Blob Storage with SAS token authentication&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SharePoint integration for enterprise environments&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;PostgreSQL and MySQL database connections&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;REST API data ingestion with customizable headers and pagination&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multi-format file loading from folders with automatic schema merging&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Advanced Caching Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Elusion offers sophisticated caching capabilities that go beyond what's available in Pandas or Polars:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Native caching for local development and single-instance applications&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Redis caching for distributed systems and production environments&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Materialized views with TTL management&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Query result caching with automatic invalidation&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Production-Ready Pipeline Scheduling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unlike Pandas and Polars, which focus primarily on data manipulation, Elusion includes a built-in pipeline scheduler for automated data engineering workflows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let scheduler = PipelineScheduler::new("5min", || async {
    // Your data pipeline logic here
    let df = CustomDataFrame::from_azure_with_sas_token(url, token, None, "data").await?;

    df.select(["*"]).write_to_parquet("overwrite", "output.parquet", None).await?;

    Ok(())
}).await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;6. Interactive Dashboard Generation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While Pandas requires additional libraries like Plotly or Matplotlib for visualization, Elusion includes built-in interactive dashboard creation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Generate HTML reports with interactive plots (TimeSeries, Bar, Pie, Scatter, etc.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create paginated, filterable tables with export capabilities&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Combine multiple visualizations in customizable layouts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No additional dependencies required&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;7. Streaming Processing Capabilities&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Elusion provides streaming processing options for handling large datasets, giving better performance when reading and writing data:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Stream processing for large files
big_file_df
    .select(["column1", "column2"])
    .filter("value &amp;gt; threshold")
    .elusion_streaming("results").await?;

// Stream writing directly to files
df.elusion_streaming_write("data", "output.parquet", "overwrite").await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;8. Advanced JSON Handling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Elusion offers specialized JSON functions for columns with JSON values, which simplify working with complex nested structures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Extract values from JSON arrays with pattern matching&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Handle multiple JSON formats automatically&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Convert REST API responses to JSON files, then to DataFrames&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let path = "C:\\RUST\\Elusion\\jsonFile.csv";

let json_df = CustomDataFrame::new(path, "j").await?;

let df_extracted = json_df.json([
        "ColumnName.'$Key1' AS column_name_1",
        "ColumnName.'$Key2' AS column_name_2",
        "ColumnName.'$Key3' AS column_name_3"
    ])
    .select(["some_column1", "some_column2"])
    .elusion("json_extract").await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Performance and Memory Management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Elusion is built on Apache Arrow and DataFusion, providing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Memory-efficient operations&lt;/strong&gt; through columnar storage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Redis caching&lt;/strong&gt; for optimized query execution&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic schema inference&lt;/strong&gt; across multiple file formats&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Parallel processing capabilities&lt;/strong&gt; through Rust's concurrency model&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let sales = "C:\\RUST\\Elusion\\SalesData2022.csv";
let products = "C:\\RUST\\Elusion\\Products.csv";
let customers = "C:\\RUST\\Elusion\\Customers.csv";

let sales_df = CustomDataFrame::new(sales, "s").await?;
let customers_df = CustomDataFrame::new(customers, "c").await?;
let products_df = CustomDataFrame::new(products, "p").await?;

// Connect to Redis (requires a running Redis server)
let redis_conn = CustomDataFrame::create_redis_cache_connection().await?;

// Use Redis caching for high-performance distributed caching
let redis_cached_result = sales_df
    .join_many([
        (customers_df, ["s.CustomerKey = c.CustomerKey"], "RIGHT"),
        (products_df, ["s.ProductKey = p.ProductKey"], "LEFT OUTER"),
    ])
    .select(["c.CustomerKey", "c.FirstName", "c.LastName", "p.ProductName"])
    .agg([
        "SUM(s.OrderQuantity) AS total_quantity",
        "AVG(s.OrderQuantity) AS avg_quantity"
    ])
    .group_by(["c.CustomerKey", "c.FirstName", "c.LastName", "p.ProductName"])
    .having_many([
        ("total_quantity &amp;gt; 10"),
        ("avg_quantity &amp;lt; 100")
    ])
    .order_by_many([
        ("total_quantity", "ASC"),
        ("p.ProductName", "DESC")
    ])
    .elusion_with_redis_cache(&amp;amp;redis_conn, "sales_join_redis", Some(3600)) // Redis caching with 1-hour TTL
    .await?;

redis_cached_result.display().await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Getting Started with Elusion: Easier Than You Think&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For SQL Developers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you write SQL queries, you already have 80% of the skills needed for Elusion. The mental model is identical - you're just expressing the same logical operations in Rust syntax:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Your SQL thinking translates directly:
df.select(["customer_name", "order_total"])         // SELECT
    .join(customers, ["id = customer_id"], "INNER") // JOIN
    .filter("order_total &amp;gt; 1000")                  // WHERE
    .group_by(["customer_name"])                    // GROUP BY
    .agg(["SUM(order_total) AS total"])             // Aggregation
    .order_by(["total"], ["DESC"])                  // ORDER BY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;For Python/Pandas Users&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Elusion feels familiar if you're coming from Pandas:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sales_df
    .join_many([
        (customers_df, ["s.CustomerKey = c.CustomerKey"], "INNER"),
        (products_df, ["s.ProductKey = p.ProductKey"], "INNER"),
    ])
    .select(["c.name", "p.category", "s.amount"])
    .filter("s.amount &amp;gt; 1000")
    .agg(["SUM(s.amount) AS total_revenue"])
    .group_by(["c.region", "p.category"])
    .order_by(["total_revenue"], ["DESC"])
    .elusion("quarterly_report")
    .await?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Installation and Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Adding Elusion to your Rust project takes just two lines:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[dependencies]
elusion = "6.2.0"
tokio = { version = "1.45.0", features = ["rt-multi-thread"] }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Enable only the features you need to keep dependencies minimal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start simple, add features as needed
elusion = { version = "6.2.0", features = ["postgres", "azure"] }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Then, your first Elusion program would look like this:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use elusion::prelude::*;

#[tokio::main]
async fn main() -&amp;gt; ElusionResult&amp;lt;()&amp;gt; {
    // Load any file format - CSV, Excel, JSON, XML, Parquet
    let df = CustomDataFrame::new("data.csv", "sales").await?;

    // Write operations that make sense to you
    let result = df
        .select(["customer", "amount"])
        .filter("amount &amp;gt; 100")
        .agg(["SUM(amount) AS total"])
        .group_by(["customer"])
        .elusion("analysis").await?;

    result.display().await?;
    Ok(())
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Perfect for SQL Developers and Python Users Ready to Embrace Rust&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you know SQL, you already understand most of Elusion's power. The library's approach mirrors SQL's flexibility - you can write operations in the order that makes logical sense to you, just like constructing SQL queries. Consider this familiar pattern:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SQL Query:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT c.name, SUM(s.amount) AS total
FROM sales s
JOIN customers c ON s.customer_id = c.id
WHERE s.amount &amp;gt; 1000
GROUP BY c.name
ORDER BY total DESC;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Elusion equivalent:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sales_df
    .join(customers_df, ["s.customer_id = c.id"], "INNER")
    .select(["c.name"])
    .agg(["SUM(s.amount) AS total"])
    .filter("s.amount &amp;gt; 1000")
    .group_by(["c.name"])
    .order_by(["total"], ["DESC"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The 50,000 download milestone reflects growing recognition that modern data processing needs tools designed for today's distributed, cloud-native environments. SQL developers and Python users are discovering that Rust doesn't have to mean starting from scratch - it can mean taking your existing knowledge and supercharging it.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Rust DataFrame Alternatives to Polars: Meet Elusion v4.0.0</title>
      <dc:creator>Borivoj Grujicic</dc:creator>
      <pubDate>Mon, 11 Aug 2025 04:44:48 +0000</pubDate>
      <link>https://dev.to/borivoj_grujicic_4d81cca0/rust-dataframe-alternatives-to-polars-meet-elusion-v400-2o1o</link>
      <guid>https://dev.to/borivoj_grujicic_4d81cca0/rust-dataframe-alternatives-to-polars-meet-elusion-v400-2o1o</guid>
      <description>&lt;p&gt;The Rust ecosystem has seen tremendous growth in data processing libraries, with Polars leading the charge as a blazingly fast DataFrame library. &lt;/p&gt;

&lt;p&gt;However, a new contender has emerged that takes a fundamentally different approach to data engineering and analysis: &lt;strong&gt;Elusion&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;While Polars focuses on pure performance and memory efficiency with its Apache Arrow-based columnar engine, Elusion (also built on Apache Arrow and DataFusion) is equally dedicated to performance and memory efficiency, while positioning itself as a comprehensive data engineering platform that prioritizes flexibility, ease of use, and integration capabilities alongside high performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture Philosophy: Different Approaches to the Same Goals
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Polars: Performance-First Design
&lt;/h4&gt;

&lt;p&gt;Polars is written from scratch in Rust, designed close to the machine and without external dependencies. It's based on Apache Arrow's memory model, providing very cache-efficient columnar data structures, and focuses on:&lt;/p&gt;

&lt;p&gt;Ultra-fast query execution with SIMD optimizations&lt;br&gt;
Memory-efficient columnar processing&lt;br&gt;
Lazy evaluation with query optimization&lt;br&gt;
Streaming for out-of-core processing&lt;/p&gt;
&lt;h4&gt;
  
  
  Elusion: Flexibility-First Design
&lt;/h4&gt;

&lt;p&gt;Elusion takes a different approach, prioritizing developer experience and integration capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Core Philosophy: &lt;strong&gt;"Elusion wants you to be you!"&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unlike traditional DataFrame libraries, Elusion does not enforce specific patterns or chaining orders. You can build your queries in ANY sequence that makes sense to you, writing functions in ANY order, and Elusion ensures consistent results regardless of the function call order.&lt;/p&gt;
&lt;h5&gt;
  
  
  Loading files into DataFrames:
&lt;/h5&gt;

&lt;p&gt;Regular Loading: ~4.95 seconds for complex queries on 900k rows&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CustomDataFrame::new()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Streaming Loading: ~3.62 seconds for the same operations&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CustomDataFrame::new_with_stream()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Performance improvement: (4.95 - 3.62) / 4.95 ≈ 26.9% faster with the streaming approach&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Polars approach:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let df = LazyFrame::scan_csv("data.csv", ScanArgsCSV::default())?
    .filter(col("amount").gt(100))
    .select([col("customer"), col("amount")])
    .collect()?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Elusion approach - flexible ordering:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let df = CustomDataFrame::new("data.csv", "sales").await?
    .filter("amount &amp;gt; 100")           
    .select(["customer", "amount"]) 
    .elusion("result").await?;

// Or reorder as you find fit - same result
let df = CustomDataFrame::new("data.csv", "sales").await?
    .select(["customer", "amount"])   // Select first
    .filter("amount &amp;gt; 100")           // Filter second
    .elusion("result").await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Polars Basic file loading:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let df = LazyFrame::scan_csv("data.csv", ScanArgsCSV::default())?
    .collect()?;

// Parquet with options
let df = LazyFrame::scan_parquet("data.parquet", ScanArgsParquet::default())?
    .collect()?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Elusion Data Loading - Comprehensive Sources:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use elusion::prelude::*;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Local files with auto-recognition&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let df = CustomDataFrame::new("data.csv", "sales").await?;
let df = CustomDataFrame::new("data.xlsx", "sales").await?;  // Excel support
let df = CustomDataFrame::new("data.parquet", "sales").await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Streaming for large files (currently only supports .csv files)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let df = CustomDataFrame::new_with_stream("large_data.csv", "sales").await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Load entire folders&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let df = CustomDataFrame::load_folder(
    "/path/to/folder",
    Some(vec!["csv", "xlsx"]), // Filter file types or `None` for all types
    "combined_data"
).await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Azure Blob Storage (currently supports csv and json files)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let df = CustomDataFrame::from_azure_with_sas_token(
    "https://account.blob.core.windows.net/container",
    "sas_token",
    Some("folder/file.csv"), //or keep `None` to take everything from folder
    "azure_data"
).await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;SharePoint&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let df = CustomDataFrame::load_from_sharepoint(
    "tenant-id",
    "client-id", 
    "https://company.sharepoint.com/sites/Site",
    "Documents/data.xlsx",
    "sharepoint_data"
).await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;REST API to DataFrame&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let api = ElusionApi::new();

api.from_api_with_headers(
    "https://api.example.com/data",
    headers,
    "/path/to/output.json"
).await?;

let df = CustomDataFrame::new("/path/to/output.json", "api_data").await?;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Database connections&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let postgres_df = CustomDataFrame::from_postgres(&amp;amp;conn, query, "pg_data").await?;

let mysql_df = CustomDataFrame::from_mysql(&amp;amp;conn, query, "mysql_data").await?;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Polars: Structured Approach
&lt;/h4&gt;

&lt;p&gt;Polars requires logical ordering&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let result = df
    .lazy()
    .filter(col("amount").gt(100))
    .group_by([col("category")])
    .agg([col("amount").sum().alias("total")])
    .sort("total", SortMultipleOptions::default())
    .collect()?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Elusion: Any-Order Flexibility
&lt;/h4&gt;

&lt;p&gt;All of these produce the same result:&lt;/p&gt;

&lt;p&gt;Traditional order:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let result1 = df
    .select(["category", "amount"])
    .filter("amount &amp;gt; 100")
    .agg(["SUM(amount) as total"])
    .group_by(["category"])
    .order_by(["total"], ["DESC"])
    .elusion("result").await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Filter first&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let result2 = df
    .filter("amount &amp;gt; 100")
    .agg(["SUM(amount) as total"])
    .select(["category", "amount"])
    .group_by(["category"])
    .order_by(["total"], ["DESC"])
    .elusion("result").await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Aggregation first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let result3 = df
    .agg(["SUM(amount) as total"])
    .filter("amount &amp;gt; 100")
    .group_by(["category"])
    .select(["category", "amount"])
    .order_by(["total"], ["DESC"])
    .elusion("result").await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All produce identical results!&lt;/p&gt;
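&lt;p&gt;One way such any-order chaining can be implemented (a minimal sketch of the general technique, not Elusion's actual internals) is a builder whose methods merely record their clause, with the clauses assembled into the canonical SQL order only at build time:&lt;/p&gt;

```rust
/// Minimal any-order query builder: each method records a clause,
/// and build() emits them in fixed SQL order regardless of call order.
#[derive(Default)]
struct Query {
    select: Option<String>,
    filter: Option<String>,
    group_by: Option<String>,
    agg: Option<String>,
    order_by: Option<String>,
}

impl Query {
    fn select(mut self, cols: &str) -> Self { self.select = Some(cols.into()); self }
    fn filter(mut self, cond: &str) -> Self { self.filter = Some(cond.into()); self }
    fn agg(mut self, expr: &str) -> Self { self.agg = Some(expr.into()); self }
    fn group_by(mut self, cols: &str) -> Self { self.group_by = Some(cols.into()); self }
    fn order_by(mut self, cols: &str) -> Self { self.order_by = Some(cols.into()); self }

    /// Assemble clauses canonically: SELECT .. WHERE .. GROUP BY .. ORDER BY.
    fn build(&self, table: &str) -> String {
        let mut select_list = self.select.clone().unwrap_or_else(|| "*".into());
        if let Some(agg) = &self.agg {
            select_list = format!("{select_list}, {agg}");
        }
        let mut sql = format!("SELECT {select_list} FROM {table}");
        if let Some(f) = &self.filter { sql += &format!(" WHERE {f}"); }
        if let Some(g) = &self.group_by { sql += &format!(" GROUP BY {g}"); }
        if let Some(o) = &self.order_by { sql += &format!(" ORDER BY {o}"); }
        sql
    }
}

fn main() {
    // Same clauses, two different call orders...
    let q1 = Query::default().filter("amount > 100").group_by("category")
        .select("category").agg("SUM(amount) AS total").order_by("total DESC");
    let q2 = Query::default().order_by("total DESC").agg("SUM(amount) AS total")
        .select("category").filter("amount > 100").group_by("category");
    // ...produce identical SQL.
    assert_eq!(q1.build("sales"), q2.build("sales"));
    println!("{}", q1.build("sales"));
}
```

Because the builder is just state, the call order carries no semantics; only the final assembly step does.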

&lt;h4&gt;
  
  
  Advanced Features: Where Elusion Shines
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Built-in visualization and reporting: create interactive dashboards
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let plots = [
    (&amp;amp;line_plot, "Sales Timeline"),
    (&amp;amp;bar_chart, "Category Performance"),
    (&amp;amp;histogram, "Distribution Analysis"),
];

let tables = [
    (&amp;amp;summary_table, "Summary Stats"),
    (&amp;amp;detail_table, "Transaction Details")
];

CustomDataFrame::create_report(
    Some(&amp;amp;plots),
    Some(&amp;amp;tables),
    "Sales Analysis Dashboard",
    "dashboard.html",
    Some(layout_config),
    Some(table_options)
).await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
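&lt;p&gt;A dashboard like this is ultimately just an HTML file on disk. As a toy illustration of the idea (plain std, unrelated to Elusion's real templates), a minimal report renderer could look like:&lt;/p&gt;

```rust
/// Render a trivial HTML report: a title plus one two-column table.
/// Toy sketch of the idea behind report generation; not Elusion's template.
fn render_report(title: &str, rows: &[(&str, f64)]) -> String {
    let mut html = format!("<html><body><h1>{title}</h1><table>");
    for (name, value) in rows {
        html += &format!("<tr><td>{name}</td><td>{value}</td></tr>");
    }
    html + "</table></body></html>"
}

fn main() {
    let html = render_report("Sales Analysis", &[("books", 150.0), ("games", 420.0)]);
    // A real pipeline would persist it, e.g.:
    // std::fs::write("dashboard.html", &html).unwrap();
    println!("{html}");
}
```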



&lt;ul&gt;
&lt;li&gt;Automated pipeline scheduling: run data engineering pipelines on a schedule
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let scheduler = PipelineScheduler::new("5min", || async {
    // Load from Azure
    let df = CustomDataFrame::from_azure_with_sas_token(
        azure_url, sas_token, Some("folder/"), "raw_data"
    ).await?;

    // Process data
    let processed = df
        .select(["date", "amount", "category"])
        .agg(["SUM(amount) as total", "COUNT(*) as transactions"])
        .group_by(["date", "category"])
        .order_by(["date"], ["ASC"])
        .elusion("processed").await?;

    // Write results
    processed.write_to_parquet(
        "overwrite",
        "output/processed_data.parquet",
        None
    ).await?;

    Ok(())
}).await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
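&lt;p&gt;The "5min" argument is a compact interval spec. Parsing such a string into a std::time::Duration can be sketched as follows (a hypothetical helper for illustration, not Elusion's actual parser; the accepted unit names are assumptions):&lt;/p&gt;

```rust
use std::time::Duration;

/// Parse interval strings like "30sec", "5min", "2hours" into a Duration.
/// Hypothetical helper illustrating the idea; not Elusion's parser.
fn parse_interval(spec: &str) -> Option<Duration> {
    let digits: String = spec.chars().take_while(|c| c.is_ascii_digit()).collect();
    let unit = &spec[digits.len()..]; // safe byte slice: digits are ASCII
    let n: u64 = digits.parse().ok()?;
    let secs = match unit {
        "sec" | "s" => n,
        "min" | "m" => n * 60,
        "hours" | "h" => n * 3600,
        _ => return None, // unknown unit
    };
    Some(Duration::from_secs(secs))
}

fn main() {
    assert_eq!(parse_interval("5min"), Some(Duration::from_secs(300)));
    println!("5min = {:?}", parse_interval("5min").unwrap());
}
```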



&lt;p&gt;Advanced JSON Processing&lt;br&gt;
Elusion can handle complex JSON structures with nested arrays and objects:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let df = CustomDataFrame::new("complex_data.json", "json_data").await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your files contain JSON fields/columns, you can extract values from them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extract simple JSON fields:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let simple = df.json([
    "metadata.'$timestamp' AS event_time",
    "metadata.'$user_id' AS user",
    "data.'$amount' AS transaction_amount"
]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Extract from JSON arrays:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let complex = df.json_array([
    "events.'$value:id=purchase' AS purchase_amount",
    "events.'$timestamp:id=login' AS login_time",
    "events.'$status:type=payment' AS payment_status"
]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
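&lt;p&gt;Conceptually, a path expression such as metadata.'$user_id' is a lookup through nested key/value structures. Here is a toy std-only sketch of that lookup (a simplified Json enum; not Elusion's implementation or its '$' syntax):&lt;/p&gt;

```rust
use std::collections::HashMap;

/// Simplified JSON value: just enough to demonstrate field extraction.
enum Json {
    Str(String),
    Num(f64),
    Obj(HashMap<String, Json>),
}

/// Walk a dotted path like "metadata.user_id" through nested objects.
fn extract<'a>(root: &'a Json, path: &str) -> Option<&'a Json> {
    let mut cur = root;
    for key in path.split('.') {
        match cur {
            Json::Obj(map) => cur = map.get(key)?,
            _ => return None, // path descends into a non-object
        }
    }
    Some(cur)
}

/// Build {"metadata": {"user_id": "u42", "amount": 99.5}} by hand.
fn sample() -> Json {
    let mut metadata = HashMap::new();
    metadata.insert("user_id".to_string(), Json::Str("u42".to_string()));
    metadata.insert("amount".to_string(), Json::Num(99.5));
    let mut root = HashMap::new();
    root.insert("metadata".to_string(), Json::Obj(metadata));
    Json::Obj(root)
}

fn main() {
    let root = sample();
    if let Some(Json::Str(user)) = extract(&root, "metadata.user_id") {
        println!("user = {user}"); // prints: user = u42
    }
}
```

The real library flattens such lookups into DataFrame columns, but the traversal idea is the same.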



&lt;h4&gt;
  
  
  When to Choose Which
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Choose Polars When:&lt;br&gt;
Pure performance is the top priority&lt;br&gt;
You prefer structured, optimized query patterns&lt;br&gt;
Memory efficiency is critical&lt;br&gt;
You need minimal dependencies&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Elusion When:&lt;br&gt;
You need integration flexibility (cloud storage, APIs, databases)&lt;br&gt;
Developer experience and query flexibility matter&lt;br&gt;
You want built-in visualization and reporting&lt;br&gt;
You need automated pipeline scheduling&lt;br&gt;
Working with diverse data sources (Excel, SharePoint, REST APIs)&lt;br&gt;
You prefer intuitive, any-order query building&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Installation and Getting Started
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Polars
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[dependencies]
polars = { version = "0.50.0", features = ["lazy"] }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Elusion
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[dependencies]
elusion = "4.0.0"
tokio = { version = "1.45.0", features = ["rt-multi-thread"] }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Elusion With specific features
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;elusion = { version = "4.0.0", features = ["dashboard", "azure", "postgres"] }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Rust version requirements:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Polars: &amp;gt;= 1.80
Elusion: &amp;gt;= 1.81
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Real-World Example: Sales Data Analysis
&lt;/h4&gt;

&lt;p&gt;Polars Implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use polars::prelude::*;

let df = LazyFrame::scan_csv("sales.csv", ScanArgsCSV::default())?
    .filter(col("amount").gt(100))
    .group_by([col("category")])
    .agg([
        col("amount").sum().alias("total_sales"),
        col("amount").mean().alias("avg_sale"),
        col("customer_id").n_unique().alias("unique_customers")
    ])
    .sort("total_sales", SortMultipleOptions::default().with_order_descending(true))
    .collect()?;

println!("{}", df);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Elusion Implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use elusion::prelude::*;

#[tokio::main]
async fn main() -&amp;gt; ElusionResult&amp;lt;()&amp;gt; {
    // Load data (flexible source)
    let df = CustomDataFrame::new("sales.csv", "sales").await?;

    // Build query in any order that makes sense to you
    let analysis = df
        .filter("amount &amp;gt; 100")                
        .agg([                                    
            "SUM(amount) as total_sales",
            "AVG(amount) as avg_sale", 
            "COUNT(DISTINCT customer_id) as unique_customers"
        ])
        .group_by(["category"])                
        .order_by(["total_sales"], ["DESC"])       
        .elusion("sales_analysis").await?;

    // Display the result if you like
    analysis.display().await?;

    // Create visualization
    let bar_chart = analysis.plot_bar(
        "category",
        "total_sales", 
        Some("Sales by Category")
    ).await?;

    // Generate report
    CustomDataFrame::create_report(
        Some(&amp;amp;[(&amp;amp;bar_chart, "Sales Performance")]),
        Some(&amp;amp;[(&amp;amp;analysis, "Summary Table")]),
        "Sales Analysis Report",
        "sales_report.html",
        None,
        None
    ).await?;

    Ok(())
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;Elusion v4.0.0 takes a different approach to DataFrame library design, prioritizing developer experience, integration flexibility, and end-to-end data engineering capabilities.&lt;br&gt;
The choice between Polars and Elusion comes down to your priorities:&lt;/p&gt;

&lt;p&gt;For raw computational performance and memory efficiency: Polars&lt;br&gt;
For comprehensive data engineering with flexible development: Elusion&lt;/p&gt;

&lt;p&gt;Elusion's "any-order" query building, extensive integration capabilities, built-in visualization, and automated scheduling make it particularly attractive for teams that need to work with diverse data sources and want a more intuitive development experience.&lt;br&gt;
Both libraries showcase the power of Rust in the data processing space, offering developers high-performance alternatives to traditional Python-based solutions. The Rust DataFrame ecosystem is thriving, and having multiple approaches ensures that different use cases and preferences are well-served.&lt;/p&gt;

&lt;p&gt;Try Elusion v4.0.0 today:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cargo add elusion@4.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more information and examples, visit the Elusion github repository: &lt;a href="https://github.com/DataBora/elusion" rel="noopener noreferrer"&gt;Elusion repository&lt;/a&gt; and join the growing community of Rust data engineers who are discovering the flexibility and power of any-order DataFrame operations.&lt;/p&gt;

</description>
      <category>data</category>
      <category>dataengineering</category>
      <category>rust</category>
      <category>python</category>
    </item>
  </channel>
</rss>
