Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types.
Concerns about gender bias in word embedding models have captured substantial attention in the algorithmic bias research literature. Other bias types, however, have received far less scrutiny. This work describes a large-scale analysis of sentiment associations in popular word embedding models along the lines of gender and ethnicity but also a