🪝 Teaser (for the impatient)
Do you have a repository that relies on CSV
files... and want to:
- 🛡️ Protect your data with quality and integrity checks before it gets corrupted
- 🔬 Check data quality as part of your project lifecycle
- 📊 Get operational KPIs reporting
- ♾️ Automate the release process to show your contributors what has been achieved
- 📦 Deliver data
- 🤯 Endless usages
🫵 Don't look any further: this short post covers all these aspects with a practical and highly understandable workflow.
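To make the first bullet concrete, here is a minimal, DuckDB-free sketch of the kind of integrity check meant here: a shell guard that rejects a CSV whose rows don't all have the expected column count. The file name, its contents, and the expected column count are assumptions for illustration, not taken from the article.

```shell
# Minimal CSV integrity check (sketch). In the actual workflow the same
# idea would be expressed as SQL run by the DuckDB CLI.
cat > /tmp/data.csv <<'EOF'
id,name
1,alice
2,bob
EOF

expected_cols=2
# Count rows whose field count differs from the expected one
bad=$(awk -F',' -v n="$expected_cols" 'NF != n { c++ } END { print c+0 }' /tmp/data.csv)
if [ "$bad" -eq 0 ]; then
  echo "CSV OK"
else
  echo "CSV has $bad malformed row(s)" >&2
  exit 1
fi
```

Run as a pre-commit hook or CI step, a non-zero exit code is enough to block a corrupting change from landing.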
🍿 Demo
Enough talk, let's jump in:
🦆💻🔁🔧♾️🦆 opt-nc/setup-duckdb-action
🦆 Blazing fast and highly customizable GitHub Action to set up a DuckDB runtime
ℹ️ Setup DuckDB Action
This action installs DuckDB with the version provided as input.
📜 Inputs
version
Not required. The version you want to install. If no version is defined, the latest version will be installed.
🚀 Example usage

With a pinned DuckDB version:

```yaml
uses: opt-nc/setup-duckdb-action@v1.0.0
with:
  version: v0.8.1
```

Or with the latest version:

```yaml
uses: opt-nc/setup-duckdb-action@v1.0.0
```
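Putting the action to work end to end, a complete workflow file could look like the following sketch. The workflow name, trigger, checkout step, and CSV path are assumptions for illustration, not taken from the repository.

```yaml
# .github/workflows/csv-quality.yml -- a sketch, not the repo's actual file
name: csv-quality
on: [push]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: opt-nc/setup-duckdb-action@v1.0.0
        with:
          version: v0.8.1
      - name: Check data quality with DuckDB
        run: |
          duckdb :memory: "SELECT count(*) AS rows FROM read_csv_auto('data.csv');"
```

The quality check itself is just SQL, so any rule you can express as a query (duplicate keys, NULLs in required columns, out-of-range values) can fail the build.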
🔭 Further 🎀
Once CI (Continuous Integration) is done... you can also think (without much effort) about delivering that data to third-party services as part of your DevOps pipeline.
At this point I see two easy options:
- Upload to MinIO (S3 API over HTTPS)
- Use MotherDuck
... which makes your data available for new use cases, at no additional effort.
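As a sketch of the first option, a delivery step could push the checked CSV to a MinIO bucket through its S3-compatible API using the AWS CLI. The bucket name, endpoint URL, file path, and secret names below are assumptions for illustration, not from the article.

```yaml
# Hypothetical delivery step -- endpoint, bucket and secrets are assumptions
- name: Deliver CSV to MinIO
  run: |
    aws s3 cp data.csv s3://my-bucket/data.csv \
      --endpoint-url https://minio.example.com
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.MINIO_ACCESS_KEY }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.MINIO_SECRET_KEY }}
```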
A very cool place to watch:
doc(article) : Add demo on usecase #46