<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Andrew Tan</title>
    <description>The latest articles on DEV Community by Andrew Tan (@andrew_tan_layline).</description>
    <link>https://dev.to/andrew_tan_layline</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3880780%2Fa0095aa9-e581-4d26-a573-4c327e5f52ea.jpeg</url>
      <title>DEV Community: Andrew Tan</title>
      <link>https://dev.to/andrew_tan_layline</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/andrew_tan_layline"/>
    <language>en</language>
    <item>
      <title>Financial Data Integration: A Practical Guide</title>
      <dc:creator>Andrew Tan</dc:creator>
      <pubDate>Thu, 16 Apr 2026 10:34:17 +0000</pubDate>
      <link>https://dev.to/andrew_tan_layline/why-real-time-data-integration-matters-for-modern-applications-cim</link>
      <guid>https://dev.to/andrew_tan_layline/why-real-time-data-integration-matters-for-modern-applications-cim</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally published on the &lt;a href="https://layline.io/resources/blog/2026-04-20-financial-data-integration" rel="noopener noreferrer"&gt;layline.io blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Financial data integration is harder than regular ETL because the constraints are tighter, the stakes are higher, and the systems you're integrating are often decades old. At a typical mid-size bank, a data integration project gets delayed for months not because of technical problems, but because nobody can agree on what "the single source of truth" actually means.&lt;/p&gt;

&lt;p&gt;This guide covers the three integration patterns that actually work in financial services — event-driven backbones, API gateway layers, and hybrid architectures — plus the hidden challenges that catch teams off guard.&lt;/p&gt;




&lt;h2&gt;The compliance problem nobody talks about&lt;/h2&gt;

&lt;p&gt;At a typical mid-size bank, a data integration project gets delayed for months. Not because of technical problems. Not because of budget. Because nobody can agree on what "the single source of truth" actually means.&lt;/p&gt;

&lt;p&gt;The trading desk has one definition. Risk management has another. Regulatory reporting needs a third. Each team has built their own pipelines over the years — some in Python, some in SQL stored procedures, one terrifying COBOL script that nobody dares touch. Getting them to agree on unified data models feels like negotiating a peace treaty.&lt;/p&gt;

&lt;p&gt;This is financial data integration in a nutshell. It's not just about moving data from A to B. It's about reconciling decades of accumulated business logic, dealing with regulatory minefields, and somehow making it all work in real-time without taking down systems that process billions in transactions daily.&lt;/p&gt;

&lt;h2&gt;Why financial data is different&lt;/h2&gt;

&lt;p&gt;Most ETL articles assume you're working with relatively clean data in modern formats, processed in batches overnight. Financial services breaks every one of those assumptions.&lt;/p&gt;

&lt;p&gt;The data formats are ancient and proprietary. While the rest of the world moved to JSON and REST APIs, financial services still runs on FIX protocol, SWIFT messages, ISO 20022 XML, and a dizzying array of vendor-specific binary formats. A single trading firm might receive market data in one format, execute orders in another, and settle trades in a third — all for the same transaction.&lt;/p&gt;

&lt;p&gt;Latency requirements are brutal. In high-frequency trading, microseconds matter. A retail bank's fraud detection system needs to score transactions in under 100 milliseconds or customers get annoyed waiting for their card to work. Traditional batch ETL, with its hourly or daily windows, simply doesn't work here.&lt;/p&gt;

&lt;p&gt;Regulatory requirements are non-negotiable. MiFID II in Europe requires trade reporting within minutes. Basel III demands real-time risk calculations. GDPR means you need to track exactly where personal data flows and be able to delete it on request. Get this wrong and you're not just debugging a pipeline — you're explaining yourself to regulators.&lt;/p&gt;

&lt;p&gt;The stakes are higher. A failed ETL job at an e-commerce company means delayed reports. A failed pipeline at a bank can mean failed trades, regulatory breaches, or incorrect risk exposure calculations. Recovery time objectives are measured in seconds, not hours.&lt;/p&gt;

&lt;h2&gt;The three integration patterns that actually work&lt;/h2&gt;

&lt;p&gt;Across the financial services industry, three approaches consistently succeed. The key is matching the pattern to your actual constraints, not what you'd prefer them to be.&lt;/p&gt;

&lt;h3&gt;Pattern 1: The event-driven backbone&lt;/h3&gt;

&lt;p&gt;This is becoming the standard for modern financial infrastructure. Instead of polling databases every few minutes, you stream events as they happen.&lt;/p&gt;

&lt;p&gt;A trade executes? That's an event. A payment clears? Another event. Risk thresholds breached? Event. Each system subscribes to the events it cares about and reacts in real-time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feltcyz782ov6lx829d0p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feltcyz782ov6lx829d0p.jpg" alt="Event-driven architecture with CDC, Kafka, and stream processors" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture usually looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CDC (Change Data Capture) connectors watch legacy databases and emit events when rows change&lt;/li&gt;
&lt;li&gt;Kafka or similar is the central nervous system, durably storing events&lt;/li&gt;
&lt;li&gt;Stream processors handle transformations, aggregations, and routing&lt;/li&gt;
&lt;li&gt;Target systems consume exactly what they need, when they need it&lt;/li&gt;
&lt;/ul&gt;
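&lt;p&gt;A minimal sketch of the subscribe-and-react flow, with an in-memory list standing in for Kafka and made-up event shapes:&lt;/p&gt;

```python
from collections import defaultdict

# In-memory stand-in for a Kafka topic: a durable, ordered event log.
event_log = [
    {"type": "trade.executed", "id": "T-1001", "amount": 250_000},
    {"type": "payment.cleared", "id": "P-2001", "amount": 9_800},
    {"type": "risk.threshold_breached", "id": "R-3001", "desk": "rates"},
]

# Each downstream system registers for only the event types it cares about.
subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

processed = []
subscribe("trade.executed", lambda e: processed.append(("ledger", e["id"])))
subscribe("trade.executed", lambda e: processed.append(("risk", e["id"])))
subscribe("payment.cleared", lambda e: processed.append(("recon", e["id"])))

# Replaying the log delivers each event to every interested consumer;
# events with no subscribers are simply retained until someone cares.
for event in event_log:
    for handler in subscribers[event["type"]]:
        handler(event)

print(processed)
```

&lt;p&gt;The real systems add durability, partitioning, and offsets, but the consumer-side shape is the same: react to the events you subscribe to, ignore the rest.&lt;/p&gt;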

&lt;p&gt;Many fintechs use this pattern to connect modern microservices with legacy mainframes. The mainframe continues running the core ledger (too risky to migrate), but CDC connectors stream every transaction change to Kafka within milliseconds. New services build on this event stream without ever touching the legacy database directly.&lt;/p&gt;

&lt;p&gt;The downside? Event-driven systems are harder to reason about than batch jobs. When something goes wrong, you can't just "re-run yesterday's job." You need to understand the event topology, replay strategies, and exactly-once semantics.&lt;/p&gt;

&lt;h3&gt;Pattern 2: The API gateway layer&lt;/h3&gt;

&lt;p&gt;For teams dealing with external data sources — market data feeds, counterparty APIs, regulatory reporting services — an API gateway pattern often works better than pure streaming.&lt;/p&gt;

&lt;p&gt;The idea is simple: create a unified abstraction layer that normalizes all those different data sources into a consistent internal format. Your trading systems don't need to know that Bloomberg speaks one protocol and Refinitiv speaks another. They just call your internal API.&lt;/p&gt;

&lt;p&gt;This pattern shines when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You're integrating with many external vendors who each have their own quirks&lt;/li&gt;
&lt;li&gt;You need to cache and fan-out data to multiple internal consumers&lt;/li&gt;
&lt;li&gt;You want to enforce security, rate limiting, and audit logging in one place&lt;/li&gt;
&lt;li&gt;You need to switch vendors without rewriting downstream systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Wealth management firms often use this approach for market data. They normalize feeds from multiple providers into a single internal format, add real-time validation and entitlements, then expose it via GraphQL or REST. Portfolio managers get exactly the data they need, formatted consistently, regardless of which vendor supplied the underlying feed.&lt;/p&gt;
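&lt;p&gt;A toy version of that normalization layer. The vendor field names here are illustrative, not the actual Bloomberg or Refinitiv schemas:&lt;/p&gt;

```python
# Hypothetical raw quotes as two vendors might deliver them.
vendor_a_quote = {"SYM": "AAPL", "PX_LAST": "187.33", "CCY": "USD"}
vendor_b_quote = {"ric": "AAPL.O", "last_price": 187.35, "currency": "USD"}

def normalize_vendor_a(raw):
    return {
        "symbol": raw["SYM"],
        "price": float(raw["PX_LAST"]),  # this vendor sends prices as strings
        "currency": raw["CCY"],
        "source": "vendor_a",
    }

def normalize_vendor_b(raw):
    return {
        "symbol": raw["ric"].split(".")[0],  # strip the exchange suffix
        "price": raw["last_price"],
        "currency": raw["currency"],
        "source": "vendor_b",
    }

# Downstream consumers only ever see the canonical shape.
quotes = [normalize_vendor_a(vendor_a_quote), normalize_vendor_b(vendor_b_quote)]
print(quotes)
```

&lt;p&gt;Swapping a vendor then means writing one new adapter, not touching every downstream system.&lt;/p&gt;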

&lt;p&gt;The catch is operational complexity. You're now running a critical piece of infrastructure that everything depends on. When the gateway has issues, everything has issues.&lt;/p&gt;

&lt;h3&gt;Pattern 3: The hybrid compromise&lt;/h3&gt;

&lt;p&gt;Most mature financial institutions end up here. You keep batch processing for the workloads that genuinely don't need real-time — regulatory reports, end-of-day reconciliation, historical analytics. You add streaming for the latency-sensitive workflows — fraud detection, risk monitoring, customer-facing dashboards.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvp1lxqpdalxzrxcqsegu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvp1lxqpdalxzrxcqsegu.jpg" alt="Hybrid batch and streaming architecture" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The key is being intentional about the boundary. Not everything needs to be real-time, and trying to force streaming on batch-appropriate workloads just creates unnecessary complexity.&lt;/p&gt;

&lt;p&gt;Trading platforms typically keep overnight risk calculations in batch (the math is complex and doesn't need to be instant), but move position monitoring to streaming (traders need to know their exposure immediately). The two systems coexist, with the streaming layer feeding into the batch layer for end-of-day reconciliation.&lt;/p&gt;

&lt;h2&gt;The hidden challenges nobody talks about&lt;/h2&gt;

&lt;p&gt;Beyond the architectural patterns, there are specific problems that catch teams off guard.&lt;/p&gt;

&lt;p&gt;Reference data is a nightmare. Every trade references securities, counterparties, and market identifiers that exist in master data systems. Those master systems update on their own schedules. If your trade data references a security that hasn't been loaded into your local cache yet, what happens? Financial data integration requires sophisticated reference data management — caching strategies, fallback logic, and tolerance for temporarily incomplete data.&lt;/p&gt;
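&lt;p&gt;A sketch of the cache-with-fallback idea, using hypothetical identifiers: the trade is enriched when the reference record exists and parked for later enrichment when it doesn't, instead of failing the pipeline:&lt;/p&gt;

```python
# Local reference-data cache, fed asynchronously by the master-data system.
security_master_cache = {
    "US0378331005": {"name": "Apple Inc.", "asset_class": "equity"},
}
pending_enrichment = []

def enrich_trade(trade):
    ref = security_master_cache.get(trade["isin"])
    if ref is None:
        # Tolerate temporarily incomplete data instead of failing the trade;
        # a background job re-enriches once the master record arrives.
        pending_enrichment.append(trade)
        return dict(trade, security_name="UNKNOWN", status="pending_refdata")
    return dict(trade, security_name=ref["name"], status="enriched")

t1 = enrich_trade({"trade_id": "T-1", "isin": "US0378331005"})
t2 = enrich_trade({"trade_id": "T-2", "isin": "XS9999999999"})
print(t1["status"], t2["status"])
```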

&lt;p&gt;Time zones and market hours. A global trading operation spans Tokyo, London, and New York. Each market opens and closes at different times. Some instruments trade 24/7. Your data pipelines need to handle "end of day" concepts that vary by instrument, geography, and market regime. The simple notion of "yesterday's data" becomes surprisingly complex.&lt;/p&gt;
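&lt;p&gt;A small illustration of why "end of day" is a per-market concept, using Python's zoneinfo. The close times are illustrative regular-session closes:&lt;/p&gt;

```python
from datetime import datetime, time, date
from zoneinfo import ZoneInfo

# "End of day" is a property of the market, not of the server clock.
MARKET_CLOSE = {
    "XNYS": (time(16, 0), ZoneInfo("America/New_York")),
    "XLON": (time(16, 30), ZoneInfo("Europe/London")),
    "XTKS": (time(15, 0), ZoneInfo("Asia/Tokyo")),
}

def end_of_day_utc(mic, trading_date):
    close, tz = MARKET_CLOSE[mic]
    local_close = datetime.combine(trading_date, close, tzinfo=tz)
    return local_close.astimezone(ZoneInfo("UTC"))

# Three different "same day" cutoffs for the same trading date.
d = date(2026, 4, 16)
for mic in ("XTKS", "XLON", "XNYS"):
    print(mic, end_of_day_utc(mic, d).isoformat())
```

&lt;p&gt;Daylight-saving transitions shift these UTC cutoffs twice a year, which is exactly the kind of detail that silently breaks a naive "yesterday's data" query.&lt;/p&gt;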

&lt;p&gt;Data quality at scale. When you're processing millions of transactions per hour, even 0.01% bad data is hundreds of errors to investigate. Financial data integration requires automated quality checks — schema validation, range checks, referential integrity — that can run in real-time and route suspicious data to human review queues without blocking the pipeline.&lt;/p&gt;
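&lt;p&gt;A minimal in-line quality gate along those lines. The rules are illustrative; real checks would cover schema, ranges, and referential integrity:&lt;/p&gt;

```python
# Validate each record and route failures to a review queue
# without blocking the rest of the stream.
REQUIRED = {"trade_id", "isin", "quantity", "price"}

def check(record):
    missing = REQUIRED - set(record)
    if missing:
        return "missing_fields: " + ",".join(sorted(missing))
    if record["price"] != abs(record["price"]):  # negative price
        return "negative_price"
    if record["quantity"] == 0:
        return "zero_quantity"
    return None

clean, review_queue = [], []
for rec in [
    {"trade_id": "T-1", "isin": "US0378331005", "quantity": 100, "price": 187.3},
    {"trade_id": "T-2", "isin": "US0378331005", "quantity": 0, "price": 187.3},
    {"trade_id": "T-3", "quantity": 50, "price": -1.0},
]:
    reason = check(rec)
    if reason is None:
        clean.append(rec)
    else:
        review_queue.append((rec["trade_id"], reason))

print(len(clean), review_queue)
```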

&lt;p&gt;Testing in production. You can't exactly spin up a copy of a global trading system to test your new pipeline. Teams often use techniques like shadow mode (run new and old pipelines in parallel, compare outputs) or synthetic transactions (inject test trades that get processed but not settled) to validate changes.&lt;/p&gt;
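&lt;p&gt;Shadow mode in miniature: run both transforms over the same input and diff the outputs before cutting over. The pipelines here are toys:&lt;/p&gt;

```python
def legacy_pipeline(trade):
    return {"id": trade["id"], "notional": trade["qty"] * trade["px"]}

def new_pipeline(trade):
    # Candidate rewrite; should be behaviorally identical to the legacy path.
    return {"id": trade["id"], "notional": trade["px"] * trade["qty"]}

trades = [
    {"id": "T-1", "qty": 100, "px": 187.3},
    {"id": "T-2", "qty": 5, "px": 42.0},
]

# Any id in this list is a divergence to investigate before cutover.
mismatches = [t["id"] for t in trades if legacy_pipeline(t) != new_pipeline(t)]
print("mismatches:", mismatches)
```

&lt;p&gt;In practice the comparison runs continuously against production traffic, and cutover waits until the mismatch list stays empty for long enough to build confidence.&lt;/p&gt;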

&lt;h2&gt;What good looks like&lt;/h2&gt;

&lt;p&gt;When financial data integration works, you notice it in the operational metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reconciliation exceptions drop. When data flows consistently across systems, the daily "why don't these numbers match" investigations become rare.&lt;/li&gt;
&lt;li&gt;Time-to-insight shrinks. A risk manager can see their current exposure without waiting for the overnight batch. A compliance officer can generate regulatory reports on demand, not on schedule.&lt;/li&gt;
&lt;li&gt;System outages become isolated. When one system has issues, it doesn't cascade through brittle batch dependencies.&lt;/li&gt;
&lt;li&gt;New projects move faster. Teams spend less time figuring out how to get data and more time using it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But getting there requires more than technology. It requires organizational agreement on data ownership, quality standards, and change management processes. The technical solution is often the easy part.&lt;/p&gt;

&lt;h2&gt;Where layline.io fits in&lt;/h2&gt;

&lt;p&gt;If you're evaluating platforms for financial data integration, here's where layline.io is worth considering:&lt;/p&gt;

&lt;p&gt;It handles both batch and streaming in the same platform. This matters because most financial institutions need both — and having separate tools for each creates unnecessary complexity and context switching.&lt;/p&gt;

&lt;p&gt;The visual workflow designer helps with the organizational challenge. When compliance, trading, and IT teams can all see and understand the data flows, agreement becomes easier. You spend less time in meetings explaining what the pipeline does and more time improving it.&lt;/p&gt;

&lt;p&gt;It includes built-in handling for the operational concerns that matter in finance: exactly-once processing guarantees, stateful operations with checkpointing, backpressure management when downstream systems slow down. These aren't afterthoughts — they're core features.&lt;/p&gt;

&lt;p&gt;The infrastructure-agnostic deployment means you can run it where your compliance team is comfortable: on-premises, in your existing cloud environment, or air-gapped if that's what your security requirements demand.&lt;/p&gt;

&lt;p&gt;For teams that need financial-grade data integration without building a dedicated platform engineering team, this is the gap it fills.&lt;/p&gt;




&lt;h2&gt;The bottom line&lt;/h2&gt;

&lt;p&gt;Financial data integration is harder than regular ETL because the constraints are tighter, the stakes are higher, and the systems you're integrating are older and more complex. But the patterns that work are well understood: event-driven architectures for real-time needs, API gateways for external integration, and hybrid approaches that don't force streaming on batch-appropriate workloads.&lt;/p&gt;

&lt;p&gt;The teams that succeed focus first on understanding their actual requirements — latency needs, regulatory constraints, data quality standards — before choosing technology. They invest in reference data management and testing strategies that work at financial scale. And they accept that some problems are organizational, not technical.&lt;/p&gt;

&lt;p&gt;Start with one high-value pipeline. Prove the pattern. Then expand. Whether you build it yourself or use a platform like layline.io, the key is being intentional about where real-time actually matters and where batch is still the right answer.&lt;/p&gt;




&lt;h2&gt;What's next&lt;/h2&gt;

&lt;p&gt;If you're wrestling with financial data integration, the best next step is mapping your actual data flows. Not the architecture diagrams — the real flows, including the Excel exports, the email attachments, and the scripts that run on Bob's desktop because nobody else knows how they work.&lt;/p&gt;

&lt;p&gt;Once you see the full picture, you can identify which integrations would benefit most from modernization. Start there.&lt;/p&gt;

&lt;p&gt;For &lt;a href="https://layline.io" rel="noopener noreferrer"&gt;layline.io&lt;/a&gt; users, the Community Edition is free to try — no credit card required. You can prototype a streaming pipeline against your existing data sources and see how it handles your specific formats and requirements.&lt;/p&gt;




</description>
      <category>architecture</category>
      <category>dataengineering</category>
      <category>systemdesign</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Why Real-Time Data Integration Matters for Modern Applications</title>
      <dc:creator>Andrew Tan</dc:creator>
      <pubDate>Thu, 16 Apr 2026 10:22:24 +0000</pubDate>
      <link>https://dev.to/andrew_tan_layline/why-real-time-data-integration-matters-for-modern-applications-34mf</link>
      <guid>https://dev.to/andrew_tan_layline/why-real-time-data-integration-matters-for-modern-applications-34mf</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally published on the &lt;a href="https://layline.io/resources/blog/2026-04-13-why-real-time-data-integration-matters" rel="noopener noreferrer"&gt;layline.io blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The difference between "near-real-time" and actually-real-time is wider than most teams realize — and it's getting wider as customer expectations accelerate. A major European retailer lost €4.7 million on Black Friday 2024 not because their website crashed, but because their "real-time" inventory system was running four hours behind.&lt;/p&gt;

&lt;p&gt;This post explains what "real-time" actually means, why the shift from batch to streaming is accelerating, and what well-architected real-time systems look like in practice.&lt;/p&gt;




&lt;h2&gt;The €4.7 million delay&lt;/h2&gt;

&lt;p&gt;A major European retailer lost €4.7 million on Black Friday 2024. Not because their website crashed. Not because they ran out of stock. Because their "real-time" inventory system was running four hours behind.&lt;/p&gt;

&lt;p&gt;340,000 customers placed orders for items that had already sold out. The system showed availability. The warehouse had none. By the time the discrepancy surfaced, the damage was done. Refunds issued. Customer service overwhelmed. Brand reputation dented. The post-mortem revealed something awkward: the pipeline was never designed for real-time. It was designed for "near-real-time," a distinction that sounded technical in architecture reviews and turned out to be catastrophic in production.&lt;/p&gt;

&lt;p&gt;I've heard versions of this story dozens of times. The gap between what "real-time" promises and what most systems deliver is wider than most teams realize. And it's getting wider, not narrower, as customer expectations accelerate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgoc88lv8ka37ci4qjnbm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgoc88lv8ka37ci4qjnbm.jpg" alt="Formula 1 pit crew synchronizing data streams in real-time" width="800" height="420"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Like a Formula 1 pit stop, real-time data processing requires precision, coordination, and the right infrastructure.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;What "real-time" actually means (and doesn't)&lt;/h2&gt;

&lt;p&gt;The industry has muddied the waters. Three categories get conflated under the same label.&lt;/p&gt;

&lt;p&gt;Batch means hours or days between updates. Your nightly ETL job. Your weekly report. Clear boundaries, predictable windows, well-understood failure modes.&lt;/p&gt;

&lt;p&gt;Near-real-time means minutes between updates. The system checks every five, fifteen, thirty minutes. Most "real-time dashboards" fall here. Good for many use cases. Not good for the ones that matter most.&lt;/p&gt;

&lt;p&gt;Real-time means seconds or sub-second. The event happens. The system knows. The downstream action triggers immediately.&lt;/p&gt;

&lt;p&gt;The retailer didn't have a real-time problem. They had a near-real-time system marketed as real-time, and nobody questioned the difference until it cost them €4.7 million.&lt;/p&gt;

&lt;h2&gt;Three forces driving the shift&lt;/h2&gt;

&lt;p&gt;The Amazon effect. Customers expect instant everything. Not because they analyzed the technical requirements. Because that's what they've been trained to expect. A 2022 Shopify study of 12,000 consumers found 73% expect checkout, inventory, and shipping updates in real time. Not "within the hour." Real time.&lt;/p&gt;

&lt;p&gt;Operational windows are shrinking. Fraud detection after the transaction isn't detection. It's notification. The money's already gone. Manufacturing lines that wait for batch quality reports produce bad units for hours before someone notices. The cost of delay compounds faster than most spreadsheets capture.&lt;/p&gt;

&lt;p&gt;Competitive pressure. If your competitor updates pricing every thirty seconds and you update every six hours, you're not competing. You're spectating. This isn't theoretical. E-commerce platforms, travel aggregators, financial services. The companies winning in these spaces made real-time data infrastructure a strategic priority, not a technical nice-to-have.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lpyvs1yktkyu4ktx874.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lpyvs1yktkyu4ktx874.jpg" alt="Formula 1 race car leaving a trail of streaming data" width="800" height="420"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Speed without control is dangerous. Real-time systems need to handle velocity while maintaining accuracy and reliability.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;The hidden complexity&lt;/h2&gt;

&lt;p&gt;Moving from batch to streaming is harder than it looks. The surface seems simple: instead of waiting, react immediately. Underneath, everything changes.&lt;/p&gt;

&lt;p&gt;State management. Batch jobs process bounded datasets. You know the input size when you start. Streaming processes unbounded streams. You need to track windows, handle late-arriving data, manage state across events that may arrive out of order.&lt;/p&gt;

&lt;p&gt;Exactly-once processing. Run a batch job twice by accident? You get duplicate output, fix it, move on. Run a streaming pipeline twice? You double-charge customers, double-count inventory, double-notify systems. The semantics matter in ways they didn't before.&lt;/p&gt;
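&lt;p&gt;One common defense is idempotency: give every event a stable id and make redelivery a no-op. A sketch, with hypothetical event shapes:&lt;/p&gt;

```python
# The consumer records which event ids it has already applied, so a
# redelivered event produces no second side effect.
applied_ids = set()
balance = {"acct": 0}

def apply_once(event):
    if event["event_id"] in applied_ids:
        return False  # duplicate delivery: skip the side effect
    applied_ids.add(event["event_id"])
    balance["acct"] += event["amount"]
    return True

deliveries = [
    {"event_id": "E-1", "amount": 100},
    {"event_id": "E-2", "amount": -30},
    {"event_id": "E-1", "amount": 100},  # redelivered after a retry
]
results = [apply_once(e) for e in deliveries]
print(balance["acct"], results)
```

&lt;p&gt;In a real system the applied-id set and the side effect must be committed atomically (or the dedup store must be durable), which is where most of the actual difficulty lives.&lt;/p&gt;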

&lt;p&gt;Backpressure. What happens when your source produces faster than your sink can consume? In batch, this shows up as a slow job. In streaming, it shows up as dropped messages, cascading failures, or systems that simply stop responding.&lt;/p&gt;
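&lt;p&gt;A bounded queue makes the backpressure decision explicit. This sketch sheds load when the consumer falls behind, rather than growing memory without bound:&lt;/p&gt;

```python
import queue

# Capacity of 3 stands in for whatever buffer the system can afford.
buf = queue.Queue(maxsize=3)
accepted, shed = [], []

for msg_id in range(6):  # producer runs faster than the consumer
    try:
        buf.put_nowait(msg_id)
        accepted.append(msg_id)
    except queue.Full:
        shed.append(msg_id)  # dropped, or redirected to a spill store

# The slow consumer drains whatever survived.
drained = []
while not buf.empty():
    drained.append(buf.get_nowait())

print(accepted, shed, drained)
```

&lt;p&gt;Shedding is only one policy; blocking the producer or spilling to durable storage are the other two, and choosing among them per pipeline is the real design decision.&lt;/p&gt;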

&lt;p&gt;These aren't rare edge cases. They're Tuesday. Teams that underestimate this complexity end up with pipelines that work in demos and fail in production.&lt;/p&gt;

&lt;h2&gt;What good looks like&lt;/h2&gt;

&lt;p&gt;Well-architected real-time systems share a common set of traits.&lt;/p&gt;

&lt;p&gt;Resilience by default. Not bolted on. The system expects components to fail and continues operating. Circuit breakers. Graceful degradation. Bounded queues that shed load rather than crash.&lt;/p&gt;

&lt;p&gt;Observable. You need to see what's happening inside a pipeline that processes thousands of events per second. Metrics that matter. Tracing that follows events through the system. Alerting that fires on symptoms, not just component failures.&lt;/p&gt;

&lt;p&gt;Growth-ready. The system that handles ten thousand events per minute should handle ten million without a rewrite. Horizontal scaling. Partition-aware design. No single points of contention.&lt;/p&gt;

&lt;p&gt;Accessible. Real-time data integration shouldn't require a PhD in distributed systems. The tools exist. The documentation is clear. The concepts are learnable. Teams should be productive in days, not quarters.&lt;/p&gt;

&lt;p&gt;This last point matters more than the others. The teams that succeed with real-time infrastructure aren't the ones with the most sophisticated technology. They're the ones that made it approachable enough for their existing teams to operate.&lt;/p&gt;

&lt;h2&gt;The accessibility gap&lt;/h2&gt;

&lt;p&gt;There's a two-tier market forming. Tier one: companies with dedicated streaming teams, Kafka expertise, infrastructure engineers who understand partition rebalancing and exactly-once semantics. Tier two: everyone else, stuck with batch because real-time seems too complex to attempt.&lt;/p&gt;

&lt;p&gt;This is backwards. Real-time data integration should be as accessible as batch processing. Same team. Same skill level. Same time-to-production. The technology is there. What's missing is the packaging. Tools that handle the complexity so teams don't have to.&lt;/p&gt;

&lt;p&gt;At layline.io, we're building for the second tier. Unified workflows that handle both batch and streaming with the same interfaces. Resilience and observability built in. Scaling that happens automatically. The goal isn't to make streaming simple. It's complex, and pretending otherwise helps nobody. The goal is to make it accessible.&lt;/p&gt;

&lt;p&gt;Because the retailers and manufacturers and financial services companies that need real-time data already have smart teams. They don't need different people. They need better tools.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I am a founder of &lt;a href="https://layline.io" rel="noopener noreferrer"&gt;layline.io&lt;/a&gt;, building enterprise data processing infrastructure for batch and real-time workloads.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>data</category>
      <category>dataengineering</category>
      <category>systemdesign</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
