


Kettle (PDI), Project Hop

Visually Design Your Data Pipelines

Kettle, the open source project also known as Pentaho Data Integration (PDI), is used by thousands of organisations of all sizes, all over the world.

In Kettle, data engineers and developers visually design data pipelines using metadata.
Kettle's metadata-driven approach and its integration with all components of a modern data architecture let you build data pipelines that are flexible and easy to develop, test, deploy and maintain.

Kettle's visual development IDE makes data developers far more productive than they would be with a code-based approach. We have been Kettle experts for many years. We'll continue to work with Kettle, but we're also investing heavily in the development of Project Hop. With Hop, we're ready to explore the future of data engineering.

With Kettle and Hop, you can focus on what you want to do with your data instead of how it needs to be done. 
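To make the metadata-driven idea concrete, here is a minimal, hypothetical Python sketch. It is not Kettle's or Hop's actual pipeline format (those are designed visually and stored as metadata files); it only illustrates the concept of describing a pipeline as data rather than hard-coding the logic.

```python
# Illustrative only: the pipeline is metadata (an ordered list of step names),
# and the engine runs whatever steps the metadata describes.

def read_rows():
    # stand-in for a real input step (database, file, API, ...)
    return [{"name": "alice", "amount": 10}, {"name": "bob", "amount": 25}]

def uppercase_names(rows):
    return [{**r, "name": r["name"].upper()} for r in rows]

def filter_large(rows):
    return [r for r in rows if r["amount"] > 15]

# the pipeline itself is metadata, not code
pipeline = ["uppercase_names", "filter_large"]
steps = {"uppercase_names": uppercase_names, "filter_large": filter_large}

def run(pipeline, rows):
    for step_name in pipeline:
        rows = steps[step_name](rows)
    return rows

print(run(pipeline, read_rows()))  # [{'name': 'BOB', 'amount': 25}]
```

Because the pipeline is just metadata, changing what the pipeline does means changing data, not rewriting code; this is what makes such pipelines easy to test, deploy and maintain.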



Graphs are everywhere!

When analyzing your data, there is often more value in how your data points are related than in the individual data points themselves.

There are numerous use cases where graphs can provide new insights: fraud detection, social network analysis, path finding and many more. 
With Neo4j's Cypher query language and the powerful algorithms that can be applied to your graphs, you'll be ready to look at your data in entirely new ways. 
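As a small illustration of the path-finding use case: find the shortest chain of relationships between two people. In Neo4j you would express this in Cypher (e.g. with `shortestPath()`); the pure-Python sketch below, on a toy adjacency list, only shows the underlying idea.

```python
from collections import deque

# toy social graph: who is connected to whom
graph = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave", "eve"],
    "dave": ["frank"],
    "eve": ["frank"],
    "frank": [],
}

def shortest_path(graph, start, goal):
    """Breadth-first search: returns the shortest path as a list of nodes."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no connection between start and goal

print(shortest_path(graph, "alice", "frank"))  # ['alice', 'bob', 'dave', 'frank']
```

In a real graph database, the same question is answered declaratively over millions of relationships, which is exactly where graph algorithms shine over row-by-row analysis.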

The combination of Neo4j and Kettle/PDI/Hop makes loading data into and extracting data from your graphs a breeze. We're ready to help you model, load and query your data in Neo4j graphs.


Cloud Architecture

analytics without boundaries

Cloud platforms are here to stay in a modern data and analytics architecture. Whether you need to process data in (relatively) small or huge volumes, in streaming or in batch, in real time or at regular intervals, the major cloud platforms have the right tool for each task.
We assist you in designing and implementing analytics and machine learning projects on Amazon Web Services (AWS) and Google Cloud Platform (GCP). 



Vertica

lightning fast analytics

When data volumes and the analytics that need to be performed on that data grow, so does the time that is spent loading and querying or analyzing that data. 
Vertica, as a distributed database, stores data on a cluster of machines and integrates seamlessly with external data (e.g. Hadoop or your data lake). On top of that, machine learning algorithms can be applied directly in the database. All of this is done through familiar SQL syntax, avoiding steep learning curves.
Vertica is used by some of the biggest and most data-intensive companies on the planet, but can easily add value for smaller organisations as well.