
This 2023 paper by Wan et al. investigates how large language models (LLMs) may perpetuate social biases when writing recommendation letters. It is highly regarded for its systematic approach to examining language style and lexical content.

It critically examines gender biases in reference letters generated by LLMs such as GPT.

: "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" – A study comparing pretrained multilingual models against monolingual ones. This 2023 paper by Wan et al

: "Crossroads, Buildings and Neighborhoods: A Dataset for Fine-grained Location Recognition" – A 2022 paper introducing a new dataset for improved location identification in text. : Critically examines gender biases in reference letters

Recommended Paper: "Gender Biases in LLM-Generated Reference Letters"

The authors found that LLMs often use different descriptive terms depending on the candidate's gender, for example describing female candidates as "warm" while calling male candidates "role models".

The paper uses social science-inspired evaluation methods to track how bias propagates across language style and lexical content (a rough sketch of this kind of lexical comparison appears after the resource list).

Resources:
- Read the Full Paper (PDF)
- Watch the Presentation (243.mp4) (Direct Video Link)
- Other Related Papers (Index 243)
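As a rough illustration of the kind of lexical-content analysis described above (not the authors' actual code), the sketch below compares descriptor frequencies between letters generated for female-name and male-name prompts using a smoothed odds ratio. The descriptor lists, helper functions, and toy letters are hypothetical placeholders chosen only for this example.

```python
import math
import re
from collections import Counter

# Hypothetical descriptor lexicons; a real study would use validated word lists.
COMMUNAL = {"warm", "kind", "caring", "pleasant", "helpful"}
AGENTIC = {"leader", "ambitious", "confident", "exceptional", "model"}

def token_counts(letters):
    """Lowercase each letter, split into word tokens, and count them."""
    counts = Counter()
    for letter in letters:
        counts.update(re.findall(r"[a-z']+", letter.lower()))
    return counts

def odds_ratio(word, counts_a, counts_b, smoothing=0.5):
    """Smoothed odds ratio of `word` occurring in corpus A versus corpus B."""
    a = counts_a[word] + smoothing                            # word in A
    b = sum(counts_a.values()) - counts_a[word] + smoothing   # other tokens in A
    c = counts_b[word] + smoothing                            # word in B
    d = sum(counts_b.values()) - counts_b[word] + smoothing   # other tokens in B
    return (a / b) / (c / d)

def report(letters_female, letters_male):
    """Print the log odds ratio (female vs. male letters) for each descriptor."""
    f_counts = token_counts(letters_female)
    m_counts = token_counts(letters_male)
    for word in sorted(COMMUNAL | AGENTIC):
        log_or = math.log(odds_ratio(word, f_counts, m_counts))
        print(f"{word:>12}: log odds ratio (female vs. male) = {log_or:+.2f}")

if __name__ == "__main__":
    # Toy stand-ins for LLM-generated reference letters.
    female_letters = ["Kelly is a warm, kind and pleasant colleague who is always helpful."]
    male_letters = ["Joseph is a confident leader and an exceptional role model."]
    report(female_letters, male_letters)
```

In a real evaluation one would generate many letters per prompt set and use validated descriptor lexicons; the toy inputs here only demonstrate the direction of the comparison.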