Discussion on: How to handle BigData?

Volkmar Rigo

Disclaimer: I'm not a big data expert! To my understanding, Apache Spark is an analytics engine that helps you process large datasets in "real time". Processing happens in memory, so it is fast, and as your data volume grows, you can add more nodes to your Spark cluster to distribute the workload. Lucky for you, there is a connector for MongoDB and also a .NET SDK.
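
To make the idea concrete, here is a minimal PySpark sketch of reading a MongoDB collection into a distributed DataFrame and running an aggregation on the cluster. The connection URI, database/collection names, connector package coordinates, and option keys are assumptions for illustration; check the MongoDB Spark Connector docs for the version you actually install.

```python
from pyspark.sql import SparkSession

# Build a Spark session with the MongoDB Spark Connector on the classpath.
# Package coordinates and the connection URI below are placeholders.
spark = (
    SparkSession.builder
    .appName("mongo-spark-example")
    .config("spark.jars.packages",
            "org.mongodb.spark:mongo-spark-connector_2.12:10.3.0")
    .config("spark.mongodb.read.connection.uri", "mongodb://localhost:27017")
    .getOrCreate()
)

# Load a collection as a DataFrame; Spark partitions the read across
# the cluster, so adding nodes spreads the workload.
orders = (
    spark.read.format("mongodb")
    .option("database", "shop")
    .option("collection", "orders")
    .load()
)

# A simple in-memory aggregation distributed over the cluster.
totals = orders.groupBy("customerId").sum("amount")
totals.show()
```

The same pattern is available from the .NET side via the .NET for Apache Spark bindings, but the Python version keeps the sketch short.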