Table schemas in Spark data pipelines: How to handle large, nested & growing ones
In this post, we describe how we built a pipeline for this kind of incoming-data scenario, and how we eventually arrived at a good solution.
Get to know the #VLteam better! – The Data Team
In this article, we want to shed some light on how we work in the Data Space and how, over the last couple of years, we used our Scala experience to become proficient in Apache Spark and to extend our capabilities to Python and Machine Learning.
The story of importing a large dataset into an Akka Cluster
Learn how we imported a large dataset into an Akka Cluster and the pitfalls we encountered along the way.
Meet us there! Online tech events we’re attending in July
We say hello to summer and invite you to the online tech events we're attending in July. Stay safe at home & enjoy the opportunity to expand your knowledge!