
Publication:

Better Hit the Nail on the Head than Beat around the Bush: Removing Protected Attributes with a Single Projection

P. Haghighatkhah, Antske Fokkens, Pia Sommerauer, B. Speckmann, Kevin Verbeek • Conference on Empirical Methods in Natural Language Processing • 08 December 2022

TLDR: A comparison between MP and INLP shows that a single MP projection removes linear separability of the target attribute, that MP has less impact on the overall space, and that applying random projections after MP produces the same overall effects on the embedding space as INLP's multiple projections.

Citations: 5
Abstract: Bias elimination and recent probing studies attempt to remove specific information from embedding spaces. Here it is important to remove as much of the target information as possible, while preserving any other information present. INLP is a popular recent method which removes specific information through iterative nullspace projections. Multiple iterations, however, increase the risk that information other than the target is negatively affected. We introduce two methods that find a single targeted projection: Mean Projection (MP, more efficient) and Tukey Median Projection (TMP, with theoretical guarantees). Our comparison between MP and INLP shows that (1) one MP projection removes linear separability based on the target and (2) MP has less impact on the overall space. Further analysis shows that applying random projections after MP leads to the same overall effects on the embedding space as the multiple projections of INLP. Applying one targeted (MP) projection hence is methodologically cleaner than applying multiple (INLP) projections that introduce random effects.
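The single-projection idea described in the abstract can be illustrated with a minimal numpy sketch: compute the direction connecting the two class means of the protected attribute and project every embedding onto the hyperplane orthogonal to it. This is an illustrative reconstruction of the mean-projection concept, not the authors' code; the function name and interface are hypothetical.

```python
import numpy as np

def mean_projection(X, y):
    """Sketch of a single mean-based projection (hypothetical interface).

    X: (n, d) embedding matrix.
    y: binary (0/1) labels for the protected attribute.
    Returns embeddings with the component along the mean-difference
    direction removed.
    """
    # Direction connecting the two class means.
    v = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    v = v / np.linalg.norm(v)
    # Subtract each embedding's component along v, i.e. project
    # onto the hyperplane orthogonal to v.
    return X - np.outer(X @ v, v)
```

After this projection the two class means coincide along the removed direction, so a linear classifier can no longer separate the classes using that direction alone; the rest of the space is left untouched, which is the contrast the paper draws with INLP's repeated projections.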
