Dedoop: Efficient Deduplication with Hadoop

Kolb, L.; Thor, A.; Rahm, E.

Proc. 38th Intl. Conference on Very Large Data Bases (VLDB) / Proc. of the VLDB Endowment 5(12), 2012

2012 / 08

Paper

Abstract

We demonstrate a powerful and easy-to-use tool called Dedoop (Deduplication with Hadoop) for MapReduce-based entity resolution (ER) of large datasets. Dedoop supports a browser-based specification of complex ER workflows including blocking and matching steps as well as the optional use of machine learning for the automatic generation of match classifiers. Specified workflows are automatically translated into MapReduce jobs for parallel execution on different Hadoop clusters. To achieve high performance, Dedoop supports several advanced load balancing strategies.

Please visit our project website (/dedoop#dedoop) for further information about Dedoop.
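
The following is a minimal sketch, not Dedoop's generated code, of the blocking/matching pattern the abstract describes, using the standard Hadoop MapReduce Java API: the map phase assigns each record a blocking key, and the reduce phase compares all records that share a key pair-wise. The class name, CSV input layout, prefix-based blocking key, Jaccard similarity, and 0.8 threshold are illustrative assumptions; Dedoop's machine-learned match classifiers and load-balancing strategies are omitted.

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical class name; this is not part of Dedoop.
public class BlockingMatchSketch {

  // Map phase: emit (blockingKey, record). The blocking key here is simply
  // a lower-cased 3-character prefix of the first CSV field (an assumption
  // for illustration, e.g. a surname prefix).
  public static class BlockingMapper extends Mapper<Object, Text, Text, Text> {
    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split(",");
      if (fields.length == 0 || fields[0].isEmpty()) {
        return; // skip malformed records
      }
      String blockingKey = fields[0].toLowerCase();
      blockingKey = blockingKey.substring(0, Math.min(3, blockingKey.length()));
      context.write(new Text(blockingKey), value);
    }
  }

  // Reduce phase: pair-wise comparison of all records within one block.
  public static class MatchReducer extends Reducer<Text, Text, Text, Text> {
    private static final double THRESHOLD = 0.8; // assumed match threshold

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      // Buffer all records of the block, then compare each unordered pair once.
      List<String> block = new ArrayList<>();
      for (Text v : values) {
        block.add(v.toString());
      }
      for (int i = 0; i < block.size(); i++) {
        for (int j = i + 1; j < block.size(); j++) {
          if (jaccard(block.get(i), block.get(j)) >= THRESHOLD) {
            context.write(new Text(block.get(i)), new Text(block.get(j)));
          }
        }
      }
    }

    // Placeholder similarity measure: Jaccard over whitespace-separated tokens.
    private static double jaccard(String a, String b) {
      Set<String> sa = new HashSet<>(Arrays.asList(a.toLowerCase().split("\\s+")));
      Set<String> sb = new HashSet<>(Arrays.asList(b.toLowerCase().split("\\s+")));
      Set<String> union = new HashSet<>(sa);
      union.addAll(sb);
      sa.retainAll(sb); // sa now holds the intersection
      return union.isEmpty() ? 0.0 : (double) sa.size() / union.size();
    }
  }

  public static void main(String[] args) throws Exception {
    // args[0]: input directory with CSV records, args[1]: output directory
    Job job = Job.getInstance(new Configuration(), "blocking-match-sketch");
    job.setJarByClass(BlockingMatchSketch.class);
    job.setMapperClass(BlockingMapper.class);
    job.setReducerClass(MatchReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

In this naive form, one oversized block keeps a single reducer busy while the others idle, which is the data-skew problem that motivates the load balancing strategies mentioned in the abstract.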

Poster@VLDB 2012: /file/PosterDedoop.pdf

Keywords

  • MapReduce, Hadoop
  • Entity Resolution, Object matching, Similarity Join, Pair-wise comparison
  • Clustering, Blocking
  • Overlapping Clusters, Redundant-free comparisons
  • Data Skew, Load Balancing

BibTeX

@article{DBLP:journals/pvldb/KolbTR12,
  author    = {Lars Kolb and Andreas Thor and Erhard Rahm},
  title     = {{Dedoop: Efficient Deduplication with Hadoop}},
  journal   = {PVLDB},
  volume    = {5},
  number    = {12},
  year      = {2012},
  pages     = {1878-1881},
  ee        = {http://vldb.org/pvldb/vol5/p1878_larskolb_vldb2012.pdf},
  bibsource = {DBLP, http://dblp.uni-trier.de}
}
