Database Group Leipzig

within the department of computer science

Dedoop: Entity Matching for Big Data



Description

Overview:

Automatic matching of entities (objects) and ontologies is a key technology for semantically integrating heterogeneous data. Match techniques are needed to identify equivalent data objects (duplicates) or semantically equivalent metadata elements (ontology concepts, schema attributes). These techniques demand very high resources, which limits their applicability to large-scale (Big Data) problems unless a powerful cloud infrastructure can be utilized. The reason is that (fuzzy) match approaches have essentially quadratic complexity, since all elements to be matched must be compared with each other. Moreover, sufficient match quality usually requires applying and combining multiple match algorithms within so-called match workflows, which adds further resource requirements as well as a significant optimization problem: selecting matchers and configuring their combination.
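To make the quadratic growth concrete, a tiny sketch (our illustration, not project code) of how the number of pairwise comparisons explodes with the input size:

```python
def num_comparisons(n: int) -> int:
    """Number of distinct entity pairs a naive all-pairs matcher must compare."""
    return n * (n - 1) // 2

print(num_comparisons(1_000))      # 499500
print(num_comparisons(1_000_000))  # 499999500000 (~5 * 10^11 comparisons)
```

Already at a million entities, an unrestricted all-pairs comparison is far beyond what a single machine can handle, which motivates both blocking and parallel cloud execution.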

Goals:

  • Efficient parallel execution of match workflows in the cloud
  • Efficient application of machine learning models for entity/ontology matching

General Entity Matching Workflow:

The common approach to improve efficiency is to reduce the search space by adopting blocking techniques. They utilize a blocking key on the values of one or several entity attributes to partition the input data into multiple partitions (called blocks) and restrict the subsequent matching to entities of the same block. For example, product entities may be partitioned by manufacturer values such that only products of the same manufacturer are evaluated to find matching entity pairs.

[Figure: general entity matching workflow]

Entity Matching with MapReduce:

The MapReduce model is well suited to execute blocking-based entity matching in parallel within several map and reduce tasks. In particular, several map tasks can read the input entities in parallel and redistribute them among several reduce tasks based on the blocking key. This guarantees that all entities of the same block are assigned to the same reduce task so that different blocks can be matched in parallel by multiple reduce tasks.
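The map/shuffle/reduce division of labor can be sketched as a small local simulation (names and the similarity function are ours; a real deployment runs on Hadoop):

```python
from collections import defaultdict
from itertools import combinations

def map_phase(entity):
    # Map: emit (blocking key, entity); here the key is the manufacturer.
    yield entity["manufacturer"], entity

def similar(a, b):
    # Placeholder similarity test standing in for a real (fuzzy) matcher.
    return a["title"].replace("-", " ").lower() == b["title"].replace("-", " ").lower()

def reduce_phase(key, entities):
    # Reduce: all entities of one block arrive at the same reduce task,
    # which matches them pairwise, independently of other blocks.
    for a, b in combinations(entities, 2):
        if similar(a, b):
            yield a["id"], b["id"]

entities = [
    {"id": 1, "manufacturer": "acme", "title": "Widget 100"},
    {"id": 2, "manufacturer": "acme", "title": "Widget-100"},
    {"id": 3, "manufacturer": "globex", "title": "Gadget Pro"},
]

groups = defaultdict(list)              # simulated shuffle: group by blocking key
for e in entities:
    for key, value in map_phase(e):
        groups[key].append(value)

matches = [m for key, vals in groups.items() for m in reduce_phase(key, vals)]
print(matches)  # [(1, 2)]
```

Because the shuffle phase routes each blocking key to exactly one reducer, different blocks can be matched in parallel without any coordination between reduce tasks.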

Skew Handling / Load Balancing: A basic MapReduce implementation of entity matching is susceptible to severe load imbalances due to skewed block sizes, since the match work of an entire block is assigned to a single reduce task. We are developing effective load-balancing approaches for data-skew handling in MapReduce-based entity matching (and, in general, any kind of pairwise similarity computation). For a quick overview, see our CIKM 2011 poster.
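A much-simplified illustration of the load-balancing problem (ours, not the project's actual BlockSplit/PairRange algorithms): blocks are assigned to reduce tasks greedily by their comparison count rather than naively by key, so one huge block does not dominate a single task:

```python
import heapq

block_sizes = {"a": 1000, "b": 10, "c": 500, "d": 12, "e": 700}
num_reducers = 3

def pairs(n: int) -> int:
    # Match work of a block grows quadratically with its size.
    return n * (n - 1) // 2

# Greedy longest-processing-time assignment: always give the next-largest
# block to the currently least-loaded reduce task.
heap = [(0, r) for r in range(num_reducers)]   # (load in pairs, reducer id)
heapq.heapify(heap)
assignment = {r: [] for r in range(num_reducers)}
for name, size in sorted(block_sizes.items(), key=lambda kv: -pairs(kv[1])):
    load, r = heapq.heappop(heap)
    assignment[r].append(name)
    heapq.heappush(heap, (load + pairs(size), r))

print(assignment)  # {0: ['a'], 1: ['e'], 2: ['c', 'd', 'b']}
```

Even this crude scheme keeps the three small blocks together on one task instead of letting the 1000-entity block's ~500,000 comparisons share a task with further work; the project's actual approaches go further and split the pair sets of large blocks across several tasks.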

Redundancy-free comparisons: Modern entity matching approaches assign entities to more than one block. For example, multi-pass approaches use several blocking keys to achieve high recall even in the presence of noisy data. Similarly, token-based matching approaches (e.g., PPJoin) generate a list of tokens (i.e., blocks) for each entity. Such overlapping blocks may lead to multiple comparisons of the same entity pair, which in turn decreases efficiency. We therefore develop redundancy-free approaches that ensure that every entity pair is processed by only one reduce task.
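One common way to achieve redundancy-freedom, sketched here as our own illustration of the general idea (not the exact method of the "Don't Match Twice" paper): a pair sharing several blocks is compared only in the smallest shared block, so every reducer can decide locally whether the pair is "its" pair:

```python
from itertools import combinations

# Each entity belongs to several overlapping blocks, e.g. one per title token.
entity_blocks = {
    "e1": {"widget", "100"},
    "e2": {"widget", "100", "pro"},
    "e3": {"pro", "gadget"},
}

compared = []
all_blocks = sorted({b for bs in entity_blocks.values() for b in bs})
for block in all_blocks:
    members = [e for e, bs in entity_blocks.items() if block in bs]
    for a, b in combinations(sorted(members), 2):
        shared = entity_blocks[a] & entity_blocks[b]
        # Process the pair only in its lexicographically smallest shared block.
        if block == min(shared):
            compared.append((a, b))

print(compared)  # [('e1', 'e2'), ('e2', 'e3')] -- each pair exactly once
```

Without the `min(shared)` check, the pair (e1, e2) would be compared twice (once in block "widget", once in "100"); with it, every overlapping pair is evaluated exactly once, and no reducer needs to communicate with any other.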

[Poster: CIKM 2011]

Dedoop:

We summarized our latest research results in a prototype called Dedoop (Deduplication with Hadoop), a powerful and easy-to-use tool for MapReduce-based entity resolution of large datasets.

 

[Materials: Dedoop slideshow · Dedoop paper · Dedoop poster · Dedoop presentation · Dedoop parallel ER]
Dedoop…

  • Provides a Web interface to specify entity resolution strategies for match tasks
  • Automatically transforms the specification into an executable MapReduce workflow and manages its submission and execution on Hadoop clusters
  • Is designed to serve multiple users that may simultaneously execute multiple workflows on the same or different Hadoop clusters
  • Provides load balancing strategies to evenly utilize all nodes of the cluster
  • Includes a graphical HDFS and S3 file manager
  • Supports the repeated execution of jobs within a MapReduce workflow until a stop condition is fulfilled
  • Automatically spawns and terminates SOCKS proxy servers to pass connections to Hadoop nodes on Amazon EC2
  • Supports configuring and launching new Hadoop clusters on Amazon EC2
  • Can be adapted to configure, schedule, and monitor arbitrary MapReduce workflows
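The iterative-execution feature listed above amounts to a driver loop around job submission. A minimal sketch under our own assumptions (`run_job` and `converged` are placeholders, not Dedoop's API; iterative tasks such as connected-component computation follow this shape):

```python
def run_job(state: int) -> int:
    # Placeholder for submitting one MapReduce job and reading back its
    # counters/output; here we just shrink a number to force convergence.
    return state // 2

def converged(state: int) -> bool:
    # Placeholder stop condition, e.g. "no component labels changed".
    return state == 0

state, rounds = 37, 0
while not converged(state):
    state = run_job(state)
    rounds += 1

print(rounds)  # 6 iterations until the stop condition held
```

The point is that the driver, not the individual MapReduce job, owns the stop condition, so arbitrary fixpoint-style workflows can be expressed.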

Project members

  • Dr. Lars Kolb
  • Prof. Dr. Andreas Thor
  • Prof. Dr. Erhard Rahm

Funding / Cooperation

This project was supported by an AWS in Education research grant award.

Publications (22)

  • A Clustering-Based Framework to Control Block Sizes for Entity Resolution. Fisher, J.; Christen, P.; Wang, Q.; Rahm, E. Proc. 21st ACM SIGKDD Conf. on Knowledge Discovery and Data Mining (KDD), 279-288, 2015 (2015/8)
  • Privacy Preserving Record Linkage with PPJoin. Sehili, Z.; Kolb, L.; Borgs, C.; Schnell, R.; Rahm, E. Proc. 16th GI-Fachtagung für Datenbanksysteme in Business, Technologie und Web (BTW), 2015 (2015/3)
  • Effiziente MapReduce-Parallelisierung von Entity Resolution-Workflows. Kolb, L. Dissertation, Universität Leipzig, 2014 (2014/9)
  • Iterative Computation of Connected Graph Components with MapReduce. Kolb, L.; Sehili, Z.; Rahm, E. Datenbank-Spektrum 14 (2), 2014 (2014/7)
  • Evolution von ontologiebasierten Mappings in den Lebenswissenschaften. Groß, A. Dissertation, Universität Leipzig, 2014 (2014/3)
  • Optimizing Similarity Computations for Ontology Matching - Experiences from GOMMA. Hartung, M.; Kolb, L.; Groß, A.; Rahm, E. Proc. 9th Intl. Conference on Data Integration in the Life Sciences (DILS), 2013 (2013/7)
  • Don't Match Twice: Redundancy-free Similarity Computation with MapReduce. Kolb, L.; Thor, A.; Rahm, E. Proc. 2nd Intl. Workshop on Data Analytics in the Cloud (DanaC), 2013 (2013/6)
  • When to Reach for the Cloud: Using Parallel Hardware for Link Discovery. Kolb, L.; Heino, N.; Hartung, M.; Auer, S.; Rahm, E. Proc. 10th Intl. Extended Semantic Web Conference (ESWC), 2013 (2013/5)
  • Parallel Entity Resolution with Dedoop. Kolb, L.; Rahm, E. Datenbank-Spektrum 13 (1), 2013 (2013/2)
  • Dedoop: Efficient Deduplication with Hadoop. Kolb, L.; Thor, A.; Rahm, E. Proc. 38th Intl. Conference on Very Large Databases (VLDB) / Proc. of the VLDB Endowment 5(12), 2012 (2012/8)


