Women in Technology Series: Gender-Preserving Debiasing for Pre-Trained Word Embeddings

1:00pm - 2:30pm / Wednesday 23rd October 2019 / Venue: ELT, First Floor Electrical Engineering & Electronics
Type: Other / Category: Research / Series: WiT (Women in Technology) Lecture Series
  • Suitable for: Staff and students in the School, Faculty or University with an interest in the subject or gender equality
  • Admission: Free; book via https://www.eventbrite.co.uk/e/women-in-technology-lecture-series-professor-danushka-bollegala-tickets-75121679967

Overview: The event will be hosted by the Chair of Athena SWAN, Dr. Munira Raja (Electrical Engineering and Electronics). Prof. Danushka Bollegala (Computer Science) will give a talk on his recent paper "Gender-Preserving Debiasing for Pre-Trained Word Embeddings". Please see further details below:
Authors: Masahiro Kaneko (Tokyo Metropolitan University, Japan) and Danushka Bollegala (University of Liverpool, UK)
Abstract: Word embeddings learnt from massive text collections have demonstrated significant levels of discriminative biases such as gender, racial or ethnic biases, which in turn bias the down-stream NLP applications that use those word embeddings. Taking gender-bias as a working example, we propose a debiasing method that preserves non-discriminative gender-related information, while removing stereotypical discriminative gender biases from pre-trained word embeddings. Specifically, we consider four types of information: feminine, masculine, gender-neutral and stereotypical, which represent the relationship between gender vs. bias, and propose a debiasing method that (a) preserves the gender-related information in feminine and masculine words, (b) preserves the neutrality in gender-neutral words, and (c) removes the biases from stereotypical words. Experimental results on several previously proposed benchmark datasets show that our proposed method can debias pre-trained word embeddings better than existing SoTA methods proposed for debiasing word embeddings while preserving gender-related but non-discriminative information.
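For readers curious how goals (a)-(c) might look in practice, the sketch below is a minimal, hypothetical illustration only: it applies a simple gender-direction projection selectively per word category. This is not the paper's method, which learns a debiasing encoder with a composite loss over the four word categories; the embeddings, word lists, and gender direction here are toy placeholders.

import numpy as np

def gender_direction(emb, pairs):
    # Average difference vector over definitional pairs, e.g. ("she", "he"),
    # normalised to unit length.
    diffs = [emb[f] - emb[m] for f, m in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def debias(emb, g, feminine, masculine, neutral, stereotypical):
    out = {}
    for w, v in emb.items():
        if w in feminine or w in masculine:
            out[w] = v                    # (a) preserve gender information
        elif w in neutral or w in stereotypical:
            out[w] = v - (v @ g) * g      # (b)/(c) project out the gender component
        else:
            out[w] = v                    # leave uncategorised words unchanged
    return out

# Toy usage with random placeholder vectors (real use would load
# pre-trained embeddings and curated word lists).
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in
       ["she", "he", "woman", "man", "nurse", "doctor", "table"]}
g = gender_direction(emb, [("she", "he"), ("woman", "man")])
clean = debias(emb, g, {"she", "woman"}, {"he", "man"},
               {"table"}, {"nurse", "doctor"})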
Discussions: Following the talk, the audience will have the opportunity to put questions to the speaker and to members of the discussion panel. The confirmed panel members are as follows:
Prof. Simon Maskell (Electrical Engineering and Electronics)
Dr. Rebecca Davnall (Philosophy)
Dr. Zainab Hussain (Health Sciences), Chair of BAME Staff Network