Transfer SQL -> analytics 30x faster with ConnectorX + Arrow + dlt

ConnectorX + Arrow + dlt

dlt is a recently released Python library for data extraction and loading, the EL in ETL. At dltHub we are big fans of optimising things and integrating those optimisations into our toolkit so others can re-use them.

Speed boosts and schema from Arrow, dlt for loading with schema evolution

In this example, we combine ConnectorX + Arrow + dlt to extract data and load it into a strongly typed environment 30x faster than classic data transfer via SQLAlchemy.
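To make the setup concrete, here is a minimal sketch of what the fast path can look like. The connection string, query, table names and destination are placeholders, not taken from our benchmark:

```python
# Minimal sketch: ConnectorX reads the query result directly into a
# PyArrow table (no row-by-row iteration), then dlt loads it with
# schema inference and evolution. All names/DSN below are placeholders.
import connectorx as cx
import dlt

# Pull the full result set into an Arrow table in one shot.
arrow_table = cx.read_sql(
    "postgresql://user:password@localhost:5432/mydb",  # placeholder DSN
    "SELECT * FROM chat_message",                      # placeholder query
    return_type="arrow",
)

# dlt picks the schema up from the Arrow table and handles the load.
pipeline = dlt.pipeline(
    pipeline_name="arrow_example",
    destination="duckdb",    # any supported destination works here
    dataset_name="chat_data",
)
info = pipeline.run(arrow_table, table_name="chat_message")
print(info)
```

Because dlt can consume the Arrow table directly, the expensive per-row normalisation step is skipped entirely, which is where most of the speedup comes from.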

Result: Much faster, but mind the memory usage

In this example we see a 30x overall speedup on extraction and normalisation with Arrow: the process took 16 seconds with Arrow vs 8 minutes with SQLAlchemy + dlt's JSON normaliser for 10m rows.

The output of both methods is the same (Parquet files or loaded data) with schema evolution. However, in the case of Arrow, we are not iterating row by row, so we cannot apply the optimisations available when streaming from SQLAlchemy, such as microbatching to keep memory use low.
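For contrast, here is a hedged sketch of the streaming path with microbatching; again, the DSN, names and batch size are illustrative assumptions rather than our actual benchmark code:

```python
# Sketch of the streaming alternative: SQLAlchemy with server-side
# cursors lets a dlt resource yield rows in small batches, trading
# speed for bounded memory. Names and batch size are illustrative.
import dlt
import sqlalchemy as sa

@dlt.resource(table_name="chat_message")
def chat_messages(batch_size: int = 10_000):
    engine = sa.create_engine("postgresql://user:password@localhost:5432/mydb")
    with engine.connect() as conn:
        # stream_results=True keeps only one batch in memory at a time.
        result = conn.execution_options(stream_results=True).execute(
            sa.text("SELECT * FROM chat_message")
        )
        while batch := result.fetchmany(batch_size):
            yield [dict(row._mapping) for row in batch]

pipeline = dlt.pipeline(
    pipeline_name="sqlalchemy_example",
    destination="duckdb",
    dataset_name="chat_data",
)
pipeline.run(chat_messages())
```

The tradeoff is exactly the one described above: the streaming path keeps memory flat regardless of table size, while the Arrow path holds the whole result set in memory in exchange for the 30x speedup.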

Read more about it, plus the implementation docs, on our blog.
