The heterogeneity of Knowledge Graphs is problematic for conventional data integration frameworks. A
possible solution to this issue is using Knowledge Graph Embeddings (KGEs) to encode entities into a lower-dimensional embedding space. However, recent findings suggest that KGEs suffer from the so-called hubness phenomenon. A dataset that suffers from hubness contains a few popular entities that are nearest neighbors of a highly disproportionate number of other entities. Because the calculation of nearest neighbors is an integral part of entity alignment with KGEs, hubness reduces the accuracy of the matching result. We therefore investigate a variety of hubness reduction techniques and utilize approximate nearest neighbor (ANN) approaches to offset the increase in time complexity stemming from hubness reduction. Our results suggest that hubness reduction combined with ANN techniques significantly improves the quality of nearest neighbor results compared to using no hubness reduction and exact nearest neighbor approaches. Furthermore, this advantage comes without sacrificing the speed advantage of ANNs on large datasets.
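The hubness phenomenon described above can be made concrete with a small sketch. The following is an illustrative example (not taken from the paper) that measures hubness on random high-dimensional points via the k-occurrence distribution: how often each point appears among the k nearest neighbors of the other points. All parameters (`n`, `dim`, `k`) are arbitrary choices for the demonstration.

```python
import numpy as np

# Illustrative sketch: quantifying hubness via k-occurrence on random data.
rng = np.random.default_rng(0)
n, dim, k = 500, 100, 10
X = rng.normal(size=(n, dim))

# Squared Euclidean distances via the dot-product identity; squaring does not
# change the neighbor ordering. Exclude self-matches on the diagonal.
sq = (X ** 2).sum(axis=1)
d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
np.fill_diagonal(d2, np.inf)

# k-occurrence N_k(x): how often x appears among another point's k nearest
# neighbors. Under hubness, a few points accumulate very large N_k values
# while many points are neighbors of almost nothing ("anti-hubs").
knn = np.argsort(d2, axis=1)[:, :k]
n_k = np.bincount(knn.ravel(), minlength=n)

# Positive skewness of the N_k distribution is a standard hubness indicator;
# the mean of N_k is always exactly k, so large outliers above k are hubs.
mean, std = n_k.mean(), n_k.std()
skewness = ((n_k - mean) ** 3).mean() / std ** 3
print(f"max k-occurrence: {n_k.max()} (mean is k={k}), skewness: {skewness:.2f}")
```

In an entity alignment setting, the rows of `X` would instead be KGE vectors of entities from the two graphs, and hub entities would be spuriously proposed as matches for many unrelated entities.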