Dedoop: Entity Matching for Big Data


Description

Overview:

The automatic matching of entities (objects) and ontologies is a key technology for the semantic integration of heterogeneous data. Such match techniques are needed to identify equivalent data objects (duplicates) or semantically equivalent metadata elements (ontology concepts, schema attributes). They demand very high resources, which limits their applicability to large-scale (Big Data) problems unless a powerful cloud infrastructure can be utilized. This is because the (fuzzy) match approaches have essentially quadratic complexity: all elements to be matched must be compared with each other. Moreover, sufficient match quality usually requires applying and combining multiple match algorithms within so-called match workflows, which adds further resource requirements as well as a significant optimization problem, namely selecting the matchers and configuring their combination.

Goals:

  • Efficient parallel execution of match workflows in the cloud
  • Efficient application of machine learning models for entity/ontology matching

General Entity Matching Workflow:

The common approach to improving efficiency is to reduce the search space with blocking techniques. These utilize a blocking key on the values of one or several entity attributes to partition the input data into multiple partitions (called blocks) and restrict the subsequent matching to entities of the same block. For example, product entities may be partitioned by manufacturer so that only products of the same manufacturer are compared when searching for matching entity pairs.

[Figure: general entity matching workflow]
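To make the effect concrete, here is a minimal sketch in plain Python (toy records and a hypothetical blocking_key function, not Dedoop's API): blocking shrinks the quadratic comparison space to pairs that share a block.

```python
from itertools import combinations

# Toy records for illustration; "manufacturer" serves as the blocking key.
products = [
    {"id": 1, "manufacturer": "Acme", "title": "Acme Phone X"},
    {"id": 2, "manufacturer": "Acme", "title": "Acme Phone X (refurbished)"},
    {"id": 3, "manufacturer": "Globex", "title": "Globex Tablet"},
]

def blocking_key(entity):
    # Hypothetical key function: normalize one attribute value.
    return entity["manufacturer"].strip().lower()

# Partition the input data into blocks.
blocks = {}
for p in products:
    blocks.setdefault(blocking_key(p), []).append(p)

# Match only within blocks: sum over blocks of |b|*(|b|-1)/2 comparisons
# instead of n*(n-1)/2 over the whole input.
candidate_pairs = [
    (a["id"], b["id"])
    for block in blocks.values()
    for a, b in combinations(block, 2)
]
print(candidate_pairs)  # [(1, 2)] -- the cross-manufacturer pair is pruned
```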

Entity Matching with MapReduce:

The MapReduce model is well suited to execute blocking-based entity matching in parallel within several map and reduce tasks. In particular, several map tasks can read the input entities in parallel and redistribute them among several reduce tasks based on the blocking key. This guarantees that all entities of the same block are assigned to the same reduce task so that different blocks can be matched in parallel by multiple reduce tasks.
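The following single-machine sketch simulates this map/shuffle/reduce flow in Python; the similarity function and the 0.8 threshold are stand-ins for illustration, not Dedoop's actual matchers or configuration.

```python
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a, b):
    # Stand-in fuzzy matcher: string similarity of the title attribute.
    return SequenceMatcher(None, a["title"], b["title"]).ratio()

def map_phase(entity):
    # Map: emit (blocking key, entity). The shuffle phase routes all
    # entities with the same key to the same reduce task.
    yield entity["manufacturer"].strip().lower(), entity

def reduce_phase(key, entities):
    # Reduce: match all entities of one block pairwise.
    for a, b in combinations(entities, 2):
        if similarity(a, b) >= 0.8:  # threshold chosen for illustration
            yield a["id"], b["id"]

def run_job(records):
    # Driver simulating map -> shuffle -> reduce on one machine.
    shuffled = {}
    for record in records:
        for key, value in map_phase(record):
            shuffled.setdefault(key, []).append(value)
    matches = []
    for key, group in shuffled.items():
        matches.extend(reduce_phase(key, group))
    return matches
```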

Skew Handling / Load Balancing: A basic MapReduce implementation of entity matching is susceptible to severe load imbalances due to skewed block sizes, since the match work of an entire block is assigned to a single reduce task. We are developing effective load balancing approaches for handling data skew in MapReduce-based entity matching (and, in general, any kind of pairwise similarity computation). For a quick overview, see our CIKM 2011 poster.
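A simplified sketch of the underlying block-splitting idea (not the published BlockSplit algorithm; the cost threshold and split count are invented for illustration): an oversized block is cut into sub-blocks, and each sub-block pair becomes an independent match task that a separate reduce task can execute, while every original entity pair still falls into exactly one task.

```python
from itertools import combinations

def build_match_tasks(blocks, threshold=1000, num_splits=2):
    # Turn each block into one or more match tasks of bounded cost.
    tasks = []
    for key, entities in blocks.items():
        num_pairs = len(entities) * (len(entities) - 1) // 2
        if num_pairs <= threshold:
            tasks.append((key, entities, entities))
        else:
            # Split the skewed block: matching each sub-block with itself
            # and with every later sub-block covers each original pair
            # exactly once.
            subs = [entities[i::num_splits] for i in range(num_splits)]
            for i in range(num_splits):
                for j in range(i, num_splits):
                    tasks.append((f"{key}.{i}x{j}", subs[i], subs[j]))
    return tasks

def run_match_task(left, right):
    # Each task can be assigned to a different reduce task.
    if left is right:
        return list(combinations(left, 2))            # within one (sub-)block
    return [(a, b) for a in left for b in right]      # across two sub-blocks
```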

Redundancy-free comparisons: Modern entity matching approaches assign entities to more than one block. For example, multi-pass approaches use several blocking keys to achieve high recall even in the presence of noisy data. Similarly, token-based matching approaches (e.g., PPJoin) generate a list of tokens (i.e., blocks) for each entity. Such overlapping blocks may lead to multiple comparisons of the same entity pair, which in turn decreases efficiency. We therefore develop redundancy-free approaches that ensure that every entity pair is processed by only one reduce task.
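One common way to achieve this, sketched below under the assumption that each entity carries its full set of blocking keys (the general idea behind such schemes, not necessarily Dedoop's exact implementation): a pair is compared only in one canonical common block, e.g., the lexicographically smallest shared key.

```python
def should_compare(current_key, keys_a, keys_b):
    # Both entities know every block they were assigned to (multi-pass
    # blocking keys or tokens). The pair is matched only in its smallest
    # common block, so overlapping blocks never yield duplicates.
    common = set(keys_a) & set(keys_b)
    return current_key == min(common)

# The pair below shares the blocks {"cable", "usb"}; it is compared in
# block "cable" only, although both blocks contain it.
assert should_compare("cable", {"usb", "cable"}, {"cable", "usb", "3m"})
assert not should_compare("usb", {"usb", "cable"}, {"cable", "usb", "3m"})
```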

[Poster: Block-based Load Balancing for Entity Resolution with MapReduce, CIKM 2011]

Dedoop:

We have summarized our research results in a prototype called Dedoop (Deduplication with Hadoop), a powerful and easy-to-use tool for MapReduce-based entity resolution of large datasets.

 

[Materials: Dedoop slideshow · Dedoop paper · Dedoop poster · Dedoop presentation · Dedoop parallel ER]
Dedoop…

  • Provides a Web interface to specify entity resolution strategies for match tasks
  • Automatically transforms the specification into an executable MapReduce workflow and manages its submission and execution on Hadoop clusters
  • Is designed to serve multiple users that may simultaneously execute multiple workflows on the same or different Hadoop clusters
  • Provides load balancing strategies to evenly utilize all nodes of the cluster
  • Includes a graphical HDFS and S3 file manager
  • Supports the repeated execution of jobs within a MapReduce workflow until a stop condition is fulfilled
  • Automatically spawns and terminates SOCKS proxy servers to pass connections to Hadoop nodes on Amazon EC2
  • Supports configuring and launching new Hadoop clusters on Amazon EC2
  • Can be adapted to configure, schedule, and monitor arbitrary MapReduce workflows

Project members

  • Dr. Lars Kolb
  • Prof. Dr. Andreas Thor
  • Prof. Dr. Erhard Rahm

Funding / Cooperation

This project was supported by an AWS in Education research grant award.

Publications (22)

  • Kolb, L.; Thor, A.; Rahm, E.: Load Balancing for MapReduce-based Entity Resolution. Proc. 28th Intl. Conference on Data Engineering (ICDE), 2012
  • Kolb, L.; Thor, A.; Rahm, E.: Multi-pass Sorted Neighborhood Blocking with MapReduce. Computer Science - Research and Development 27(1), 2012
  • Kirsten, T.; Groß, A.; Hartung, M.; Rahm, E.: GOMMA: A Component-based Infrastructure for managing and analyzing Life Science Ontologies and their Evolution. Journal of Biomedical Semantics 2:6, 2011
  • Kolb, L.; Thor, A.; Rahm, E.: Block-based Load Balancing for Entity Resolution with MapReduce. Proc. 20th Intl. Conference on Information and Knowledge Management (CIKM), 2011
  • Kolb, L.; Köpcke, H.; Thor, A.; Rahm, E.: Learning-based Entity Resolution with MapReduce. Proc. 3rd Intl. Workshop on Cloud Data Management (CloudDB), 2011
  • Groß, A.; Hartung, M.; Kirsten, T.; Rahm, E.: Mapping Composition for Matching Large Life Science Ontologies. Proc. 2nd International Conference on Biomedical Ontology (ICBO), 2011
  • Kolb, L.; Thor, A.; Rahm, E.: Parallel Sorted Neighborhood Blocking with MapReduce. Proc. 14th GI-Fachtagung für Datenbanksysteme in Business, Technologie und Web (BTW), 2011
  • Rahm, E.: Towards large-scale schema and ontology matching. In: Schema Matching and Mapping, Springer-Verlag, 2011
  • Köpcke, H.; Thor, A.; Rahm, E.: Evaluation of entity resolution approaches on real-world match problems. Proc. 36th Intl. Conference on Very Large Databases (VLDB) / Proceedings of the VLDB Endowment 3(1), 2010
  • Kirsten, T.; Kolb, L.; Hartung, M.; Groß, A.; Köpcke, H.; Rahm, E.: Data Partitioning for Parallel Entity Matching. Proc. 8th Intl. Workshop on Quality in Databases (QDB), 2010


