You don’t notice the problem right away.
Everything runs smoothly in MySQL… until a new report shows up. Then queries slow down, dashboards lag, and you realize you’re stretching the database beyond what it’s good at.
That’s usually when BigQuery enters the picture.
So the real question becomes:
How do you actually move data between them without turning it into a side project?
Let’s walk through the three approaches that people actually use — and where each one makes sense.
First, a quick reality check
Before tools, there are two ideas worth keeping in mind.
ELT > ETL (most of the time)
Old approach: transform before loading.
Modern approach: load first, transform later.
BigQuery is built for heavy lifting. MySQL isn’t. So it usually makes more sense to move raw data first and shape it inside the warehouse.
Most teams end up here anyway, even if they don’t call it ELT.
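As a rough sketch of the pattern with the google-cloud-bigquery Python client (the dataset and column names here are made up for illustration):

```python
# Minimal ELT sketch: the raw table was already loaded by one of the
# methods below; the "transform" step runs entirely inside BigQuery.
# mydataset.raw_orders / orders_clean are hypothetical names.
from google.cloud import bigquery

client = bigquery.Client()

transform_sql = """
CREATE OR REPLACE TABLE mydataset.orders_clean AS
SELECT
  id,
  CAST(amount AS NUMERIC) AS amount,
  DATE(created_at) AS order_date
FROM mydataset.raw_orders
WHERE amount IS NOT NULL
"""
client.query(transform_sql).result()  # .result() waits for the job to finish
```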
CDC (change data capture) is what changes everything
If you only remember one thing from this article, make it this.
Batch pipelines reload data.
CDC pipelines move only what changed.
Instead of re-running queries every hour, CDC reads MySQL’s binary log and streams updates as they happen.
That’s the difference between:
- “data updates every night”
- and “data is actually usable during the day”

Option 1: Manual export (mysqldump)
This is the simplest way to get data across.
Export → upload → done.
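One wrinkle: mysqldump emits SQL statements, which BigQuery can’t load directly, so a common variant is exporting delimited text instead. A one-off sketch, assuming the mysql CLI and the google-cloud-bigquery package are installed; hosts, credentials, and table names are placeholders:

```python
# One-off export/load sketch. Placeholders throughout.
import subprocess
from google.cloud import bigquery

# Export: `mysql --batch` prints tab-separated rows with a header line.
# (Password inline only for brevity; don't do this in real scripts.)
with open("orders.tsv", "w") as out:
    subprocess.run(
        ["mysql", "-h", "db-host", "-u", "reporter", "-psecret",
         "--batch", "-e", "SELECT * FROM shop.orders"],
        stdout=out, check=True,
    )

# Load: hand the file to BigQuery and let it infer the schema.
client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    field_delimiter="\t",
    skip_leading_rows=1,  # skip the header row
    autodetect=True,
)
with open("orders.tsv", "rb") as f:
    client.load_table_from_file(
        f, "mydataset.raw_orders", job_config=job_config
    ).result()
```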
When it works
If you just need to move data once, this is fine.
Same if the dataset is small or you’re just trying something out and don’t want to spend time setting things up properly.
What’s not great
It starts getting annoying the moment you try to reuse it.
Something changes in the schema and the next run just fails with no warning.
If the load dies halfway through, you’re the one fixing it.
And if you put it on a schedule, it slowly turns into that one script nobody wants to touch but everyone depends on.
Honest take
Fine for one-offs. Painful as a process.
Option 2: BigQuery Data Transfer Service
This is Google’s managed way of doing scheduled imports.
You configure it once, and it runs on a schedule.
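If you prefer code to clicking, here’s a hedged sketch using the google-cloud-bigquery-datatransfer client. The data_source_id and params values are connector-specific assumptions; take the real ones from the transfer setup screen in the console:

```python
# Creating a scheduled transfer config. Connector-specific values are
# placeholders; confirm them in the Google Cloud console.
from google.cloud import bigquery_datatransfer

client = bigquery_datatransfer.DataTransferServiceClient()

transfer_config = bigquery_datatransfer.TransferConfig(
    destination_dataset_id="mydataset",
    display_name="Nightly MySQL import",
    data_source_id="mysql",        # assumption: check the console for the exact ID
    params={"database": "shop"},   # placeholder fields, connector-specific
    schedule="every 24 hours",
)

created = client.create_transfer_config(
    parent="projects/my-project/locations/us",
    transfer_config=transfer_config,
)
print(f"Created transfer: {created.name}")
```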
When it works
If you’re already deep in GCP, this is usually the first thing to reach for.
It makes sense when daily updates are enough and you don’t feel like building or maintaining pipelines yourself.
What’s good
Setup is pretty quick, nothing too fancy.
Monitoring is already there, so you’re not wiring alerts from scratch.
And you don’t really have to think about infrastructure at all, which is a big plus.
Where it starts to feel limited
It’s still batch at the end of the day.
No real sense of what changed between runs.
No near real-time updates.
And once you need something more custom, you start feeling the limits pretty quickly.
Honest take
It’s a comfortable middle ground.
Works well while batch is good enough.
Once you need fresher data or more control, you’ll probably start looking elsewhere.
Option 3: CDC (real-time sync)
This is where things start to feel different.
Instead of moving full tables every time, you’re only dealing with what actually changed. New rows, updates, deletes — that’s it. No full reloads, no unnecessary queries hitting your database over and over.
How it works (in plain terms)
MySQL already keeps track of every change in its binary log. CDC tools just read from that stream and replay those changes on the BigQuery side.
So instead of constantly checking “what’s new?”, you’re just picking up changes as they come in and moving them along.
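Here’s what the reading side can look like with the open-source python-mysql-replication library, as a sketch. It assumes the server runs with binlog_format=ROW; the connection details are placeholders, and a real pipeline would forward events to BigQuery instead of printing them:

```python
# Tail the MySQL binlog and react to row-level changes.
from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import (
    DeleteRowsEvent, UpdateRowsEvent, WriteRowsEvent,
)

stream = BinLogStreamReader(
    connection_settings={"host": "db-host", "port": 3306,
                         "user": "repl", "passwd": "secret"},
    server_id=100,        # must be unique among the server's replicas
    only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
    blocking=True,        # keep the stream open and wait for new events
    resume_stream=True,
)

for event in stream:
    for row in event.rows:
        if isinstance(event, WriteRowsEvent):
            print("INSERT", event.table, row["values"])
        elif isinstance(event, UpdateRowsEvent):
            print("UPDATE", event.table, row["after_values"])
        else:
            print("DELETE", event.table, row["values"])
```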
If you want to see how this looks in practice, here’s a real use case with the Skyvia platform:
https://skyvia.com/learn/mysql-and-bigquery-integration
Why teams switch to this
- Less load on MySQL
- Faster updates in BigQuery
- Smaller, more efficient pipelines
Once you’ve used this approach, batch starts to feel… slow.
Do you build it yourself, or use a tool?
You can build CDC pipelines yourself.
You’ll need:
- binlog readers
- state tracking
- retry logic
- schema handling
- monitoring
Or you can use something like Skyvia and skip most of that.
The idea is the same:
- connect MySQL
- connect BigQuery
- enable incremental sync
- let it run
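For a feel of the “apply” half, here’s a minimal sketch using BigQuery streaming inserts; a managed tool adds batching, retries, and deduplication on top of this, and the table and row shapes below are invented:

```python
# Push small batches of change rows into BigQuery via streaming inserts.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.mydataset.orders_changes"  # hypothetical table

def apply_changes(rows):
    """rows: list of plain dicts matching the destination schema."""
    errors = client.insert_rows_json(table_id, rows)
    if errors:
        # insert_rows_json reports failures per row; don't drop them silently
        raise RuntimeError(f"Some rows were rejected: {errors}")

apply_changes([
    {"op": "INSERT", "id": 42, "amount": "19.99",
     "changed_at": "2024-01-01T00:00:00Z"},
])
```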
Quick Comparison

| Approach | Best for | Data freshness | Maintenance |
| --- | --- | --- | --- |
| Manual export (mysqldump) | One-time moves, small datasets | Stale as soon as it lands | All on you |
| Data Transfer Service | Recurring batch on GCP | Daily-ish | Low, but limited flexibility |
| CDC | Anything that needs fresh data | Near real-time | Low with a tool, high if self-built |
What usually goes wrong
No matter which route you take, the same things tend to break. It’s rarely something new — just the usual suspects popping up in slightly different forms.
Connection issues
Almost always:
- credentials
- firewall
- network config
Data mismatches
Usually:
- type differences
- schema drift
- timezone issues
Slow MySQL
Caused by:
- full table scans
- missing indexes
- pulling from production instead of replicas
Data gaps
Often:
- failed incremental runs
- CDC misconfiguration
- filters excluding data
Nothing here is exotic. It’s just stuff that gets overlooked.
After the data is in BigQuery
This part matters more than people expect.
Validate early
- row counts
- key fields
- spot checks
Don’t assume it worked.
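A minimal row-count check, assuming mysql-connector-python is installed and with placeholder names throughout; extend it with key-field checksums and spot checks as needed:

```python
# Compare row counts on both sides after a load.
import mysql.connector
from google.cloud import bigquery

mysql_conn = mysql.connector.connect(
    host="db-host", user="reporter", password="secret", database="shop")
cur = mysql_conn.cursor()
cur.execute("SELECT COUNT(*) FROM orders")
mysql_count = cur.fetchone()[0]

bq = bigquery.Client()
bq_count = next(iter(
    bq.query("SELECT COUNT(*) AS n FROM mydataset.raw_orders").result()
)).n

assert mysql_count == bq_count, (
    f"MySQL has {mysql_count} rows, BigQuery has {bq_count}")
```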
Monitor continuously
Pipelines rarely fail loudly. They just stop updating.
Set alerts and watch freshness.
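One cheap freshness probe, assuming the table carries an updated_at timestamp column (the threshold is up to you):

```python
# How stale is the newest row? Alert when the lag crosses a threshold.
from google.cloud import bigquery

client = bigquery.Client()
row = next(iter(client.query("""
    SELECT TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), MAX(updated_at), MINUTE)
           AS lag_minutes
    FROM mydataset.orders_clean
""").result()))

if row.lag_minutes > 60:  # hypothetical threshold
    print(f"Pipeline may have stalled: data is {row.lag_minutes} minutes old")
```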
Control costs
BigQuery charges for data scanned.
- partition your tables
- cluster frequently filtered columns
- avoid SELECT *
Small changes make a big difference here.
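All three tips can land in one DDL statement; the column names below are illustrative:

```python
# Partitioned + clustered copy, selecting explicit columns (no SELECT *).
from google.cloud import bigquery

client = bigquery.Client()
client.query("""
    CREATE TABLE mydataset.orders_optimized
    PARTITION BY order_date      -- scans only the dates a query touches
    CLUSTER BY customer_id       -- cheaper filters on a hot column
    AS
    SELECT id, customer_id, amount, order_date
    FROM mydataset.orders_clean
""").result()
```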
What to use (and when)
If you strip it down:
- One-time job → manual export
- Recurring batch → DTS
- Anything that needs fresh data → CDC
The method matters, but what really determines success is everything around it:
- schema handling
- monitoring
- maintenance effort
Final thought
Most teams don’t switch approaches because they want to.
They switch because:
- data gets bigger
- queries get slower
- expectations get higher

If you already know you’ll end up needing fresher data, it’s usually better to plan for it early instead of rebuilding later.
