Ciao visitor, this is Cristina! 👋

I’m a senior researcher at Meta Superintelligence Labs, where I’m part of the Preparedness team and lead loss-of-control risk assessments for frontier AI models. Working in AI now is not just exciting, but also a huge responsibility. My work aims to steer AI development in the right direction and to assess catastrophic risks that might arise from AI deployment in both the short and the long run.
I joined Meta recently, after the Scale AI investment. During my time at Scale, I focused on building evaluations, including the Remote Labor Index (RLI), MASK, EnigmaEval, and VISTA, and broadly led the development of the SEAL leaderboards.
Previously, I did my postdoc at the BATS lab 🦇 and the Data Science Institute at Brown University 🇺🇸, where I worked with Prof. Stephen Bach on model adaptation with limited labeled data.
I arrived at Brown as a visiting Ph.D. student and worked with Eli Upfal and Matteo Riondato. In July 2021, I defended my Ph.D. thesis in Computer Engineering at Sapienza University 🇮🇹, advised by Aris Anagnostopoulos and Stefano Leonardi.
I earned my master’s degree in Data Science at Sapienza University, after a one-year exchange in the School of Computer and Communication Sciences at EPFL 🇨🇭, where I joined the Data Science Lab led by Robert West. My life after EPFL would not have been the same without meeting Tiziano Piccardi and Michele Catasta. Before that, I got a bachelor’s degree in Statistics, Economics, and Finance at Sapienza University.
📻 News
- (Summer ‘25) I joined Meta Superintelligence Labs as a senior research scientist. Super excited to keep working with many teammates from Scale AI, Summer Yue, and Julian Michael!
- (Winter ‘25) MASK and EnigmaEval are out - work done with CAIS!
- (Fall ‘24) VISTA, the first rubric-based visual reasoning evaluation, launches as part of the SEAL Leaderboard!
- (Summer ‘24) Very proud to see Nat’s internship concluding with such good work: LLM Defenses Are Not Robust to Multi-Turn Human Jailbreaks Yet
- (Spring ‘24) If CLIP could talk is accepted at EMNLP! Great work with Reza and Steve!
- (Spring ‘24) LexC-Gen: Generating Data for Extremely Low-Resource Languages with Large Language Models and Bilingual Lexicons has been accepted to EMNLP Findings! Super fun work with Yong Zheng-Xin and Stephen Bach!
- (Feb ‘24) I joined Scale AI as a research scientist!
- (Sept ‘23) We found another good reason why we shouldn’t leave low-resource languages behind: they jailbreak GPT-4!
- (Sept ‘23) I’ll see you all in New Orleans! Our work on exploring strategies for using CLIP as a pseudolabeler for prompt tuning will appear at NeurIPS 2023!
- (Sept ‘23) I joined the Data Science Institute at Brown University as a postdoctoral research associate!
- (May ‘23) I studied for a while how we can exploit pseudolabels in many learning settings to improve vision-language models like CLIP. Check out what we found in this new paper!
📝 Publications
Need to update the list :)
- Low-Resource Languages Jailbreak GPT-4
NeurIPS 2023, SoLaR Workshop - 🏆 Best Paper Award (Spotlight)
Z.-X. Yong, C. Menghini, S. H. Bach
[pdf]
- Enhancing CLIP with CLIP: Exploring Pseudolabeling for Limited-Label Prompt Tuning
NeurIPS 2023
C. Menghini, A. Delworth, S. H. Bach
[pdf][code]
- Reducing polarization and increasing diverse navigability in graphs by inserting edges and swapping edge weights
Data Mining and Knowledge Discovery 2022
S. Haddadan, C. Menghini, M. Riondato, E. Upfal
[pdf]
- Tight Lower Bounds on Worst-Case Guarantees for Zero-Shot Learning with Attributes
NeurIPS 2022
A. Mazzetto*, C. Menghini*, A. Yuan, E. Upfal, S. H. Bach
[pdf]
- The Drift of #MyBodyMyChoice Discourse on Twitter
WebSci 2022 - 🏆 Best Paper Award Honorable Mention
C. Menghini, J. Uhr, S. Haddadan, A. Champagne, B. Sandstede, S. Ramachandran
[pdf]
- TAGLETS: A System for Automatic Semi-Supervised Learning with Auxiliary Data
Machine Learning and Systems 2022
W. Piriyakulkij, C. Menghini, R. Briden, N. V. Nayak, J. Zhu, E. Raisi, S. H. Bach
[pdf]
- Algorithms for fair k-clustering with multiple protected attributes
Operations Research Letters 2021
M. Bohm, A. Fazzone, S. Leonardi, C. Menghini, C. Schwiegelshohn
[pdf]
- RePBubLik: Reducing polarized bubble radius with link insertions
WSDM 2021 - 🏆 Best Paper Award Honorable Mention
S. Haddadan, C. Menghini, M. Riondato, E. Upfal
[pdf]
- How Inclusive Are Wikipedia’s Hyperlinks in Articles Covering Polarizing Topics?
Big Data 2021
C. Menghini, A. Anagnostopoulos, E. Upfal
[pdf]
- Spectral Relaxations and Fair Densest Subgraphs
CIKM 2021
A. Anagnostopoulos, L. Becchetti, A. Fazzone, C. Menghini, C. Schwiegelshohn
[pdf]
- Wikipedia Polarization and Its Effects on Navigation Paths
Big Data 2019
C. Menghini, A. Anagnostopoulos, E. Upfal
[pdf]
- Compiling Questions into Balanced Quizzes about Documents
CIKM 2018
C. Menghini, J. Dehler-Zufferey, R. West
[pdf]
📻 Past news
- (Feb ‘23) Register for the Women in Data Science datathon organized by the DSI at Brown University!
- (Dec ‘22) I gave a talk about Wikipedia’s structural bias at COSC-355 Network Science @ Amherst College!
- (Sept ‘22) Our paper on theoretical limits of zero-shot learning has been accepted at NeurIPS 2022!
- (Aug ‘22) I presented TAGLETS at MLSys 2022 in Santa Clara!
- (Jun ‘22) Our work demonstrating that #MyBodyMyChoice is not uniquely associated with women’s rights after Covid-19 has received an honorable mention for the best paper award at WebSci 2022! 🏆