Processing highly connected data as graphs is becoming increasingly important in many different domains. Prominent examples are social networks, e.g., Facebook and Twitter, as well as information networks like the World Wide Web, or biological networks. One important similarity of these domain-specific datasets is their inherent graph structure, which makes them amenable to analysis with graph algorithms. Beyond that, the datasets share two more characteristics: they are huge, making it hard or even impossible to process them on a single machine, and they grow over time, which makes them dynamic graphs. With the objective of analyzing these large-scale, dynamic datasets, we started developing a framework called “Gradoop” (Graph Analytics on Hadoop) with three main goals:
- developing a graph data model, including operators for defining analytical pipelines,
- integrating data from heterogeneous source systems into a unified graph, and
- distributing and replicating data efficiently to optimize the execution of distributed graph operators.
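The first goal, a graph data model with composable operators, can be sketched in a few lines. The example below is an illustrative assumption only: the class and method names (`PropertyGraph`, `subgraph`) do not reflect Gradoop's actual API, but show the idea of an operator that consumes a labeled graph and produces a new one, so that operators can be chained into an analytical pipeline.

```java
import java.util.*;
import java.util.function.Predicate;

// Minimal sketch of a labeled graph with one composable operator.
// All names are illustrative; they are not Gradoop's real API.
public class PropertyGraph {
    // vertex id -> label
    final Map<Long, String> vertices = new HashMap<>();
    // edges as (source id, target id) pairs
    final List<long[]> edges = new ArrayList<>();

    void addVertex(long id, String label) { vertices.put(id, label); }
    void addEdge(long src, long dst) { edges.add(new long[]{src, dst}); }

    // subgraph operator: keep vertices whose label matches the predicate
    // and edges whose endpoints both survive; returning a new graph
    // allows operators to be chained into a pipeline
    PropertyGraph subgraph(Predicate<String> vertexLabelFilter) {
        PropertyGraph result = new PropertyGraph();
        vertices.forEach((id, label) -> {
            if (vertexLabelFilter.test(label)) result.addVertex(id, label);
        });
        for (long[] e : edges) {
            if (result.vertices.containsKey(e[0]) && result.vertices.containsKey(e[1])) {
                result.addEdge(e[0], e[1]);
            }
        }
        return result;
    }
}
```

Because each operator returns a graph again, calls like `g.subgraph(...).subgraph(...)` compose naturally, which is the essence of an analytical pipeline.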
Our prototype is built on top of Apache Flink and Apache HBase. The data model has been designed and a set of sample operators has been implemented. A first use case is the BIIIG project for graph analytics in business information networks. In our ongoing work, we will look into different methods of graph partitioning to accelerate the execution of analytical pipelines.
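To make the partitioning question concrete, the sketch below shows the simplest baseline, hash partitioning of vertex ids, together with the metric such work typically tries to minimize: the number of edges whose endpoints land in different partitions (the edge cut). This is a self-contained illustration, not code from the Gradoop prototype.

```java
import java.util.*;

// Baseline vertex partitioning: hash partitioning. It balances vertices
// across workers but ignores graph structure, so many edges may cross
// partition boundaries -- the cost smarter partitioning methods reduce.
public class HashPartitioner {
    // map a vertex id to one of numPartitions partitions
    static int partitionOf(long vertexId, int numPartitions) {
        // normalize the modulo so the result is non-negative
        return ((Long.hashCode(vertexId) % numPartitions) + numPartitions) % numPartitions;
    }

    // count edges whose endpoints fall into different partitions
    static long edgeCut(List<long[]> edges, int numPartitions) {
        long cut = 0;
        for (long[] e : edges) {
            if (partitionOf(e[0], numPartitions) != partitionOf(e[1], numPartitions)) {
                cut++;
            }
        }
        return cut;
    }
}
```

A structure-aware partitioner would place densely connected vertices on the same worker, lowering the edge cut and with it the network traffic of distributed graph operators.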
- Kevin Gomez
- Niklas Teichmann
Competence Center for Scalable Data Services and Solutions (ScaDS)