Recent Talks
2023
ferret: A Framework for Benchmarking Explainers on Transformers
This talk provides a comprehensive overview of ferret, a novel Python library for benchmarking interpretability techniques on …
Mar 23, 2023
Turin, Italy
Giuseppe Attanasio, Eliana Pastor, Chiara Di Bonaventura, Debora Nozza
Code
Slides
From Recurrent Models to the advent of Attention
All recent language technologies, such as BERT, GPT-3, or ChatGPT, rely on complex neural networks called Transformers. But what …
Feb 15, 2023 6:30 PM
eDreams ODIGEO Tech Hub
Slides
2022
Social Biases in LMs: Why they are there, how do we measure and mitigate them
In the blooming age of language technologies, the impact of social biases encoded in our models has surged to unprecedented levels. In this half-talk, half-discussion, we go through fundamental definitions of social bias and social harm in computer systems, with a focus on language models.
Oct 28, 2022 1:00 PM
Slides
Contrastive Language–Image Pre-training for the Italian Language
Feb 25, 2022 6:38 PM
Slides
2020
From Recurrent Models to the Advent of Attention: A Recap
In this talk, I present a chronological overview of modeling solutions for time series and natural language. I go through seminal papers on the applications of recurrent models to tasks like language modeling, neural machine translation, image captioning, and multi-modal text generation.
Dec 18, 2020 5:00 PM
Giuseppe Attanasio
Slides
Video
Reading Group talks
List of papers I presented at reading groups (latest first). Slides are free to use with attribution; sharing and remixing are permitted under CC BY-SA 4.0.
- Toolformer: Language Models Can Teach Themselves to Use Tools
- Training Language Models to Follow Instructions with Human Feedback
- The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks
- How Gender Debiasing Affects Internal Model Representations, and Why It Matters
- Multitask Prompted Training Enables Zero-Shot Task Generalization
- How Does the Pre-training Objective Affect What Large Language Models Learn About Linguistic Properties?
- Vision-and-Language or Vision-for-Language?