243.mp4

Recommended Paper: "Gender Biases in LLM-Generated Reference Letters"

This 2023 paper by Wan et al. critically examines how large language models (LLMs) such as GPT may perpetuate gender biases when writing recommendation letters. It is highly regarded for its systematic approach to analyzing both language style and lexical content.

"A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation" – A 2023 paper addressing gender bias specifically in machine translation.

Uses social science-inspired evaluation methods to track bias propagation across language style and lexical content.

Resources:
Read the Full Paper (PDF)
Watch the Presentation (243.mp4) (Direct Video Link)
Other Related Papers (Index 243)

In academic circles, "243" often refers to a paper's identifier in a specific conference track. Depending on your interest, you might also be looking for:

"How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" – A study comparing pretrained multilingual models against monolingual ones.