Lecture 15: Large language models and their implications
Notes
Recording
Readings
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜, Bender et al., 2021
- Are Emergent Abilities of Large Language Models a Mirage?, Schaeffer et al., NeurIPS 2023
- Universal and Transferable Adversarial Attacks on Aligned Language Models, Zou et al., 2023
- Carbon Emissions and Large Neural Network Training, Patterson et al., 2021
- Energy and Policy Considerations for Modern Deep Learning Research, Strubell et al., AAAI 2020
Useful Links
- You are not a Parrot (article about Emily Bender)
- Unpredictable Black Boxes are Terrible Interfaces by Maneesh Agrawala
- What Can You Do When A.I. Lies About You?, New York Times 2023
- How to Prepare for the Deluge of Generative AI on Social Media, Kapoor and Narayanan
- Universal and Transferable Adversarial Attacks on Aligned Language Models project page
- On the Genealogy of Machine Learning Datasets: A Critical History of ImageNet, Denton et al., 2021
- Datasheets for Datasets, Gebru et al., 2018
- Model Cards for Model Reporting, Mitchell et al., 2019
- OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic, Time Magazine 2023
- ‘Thirsty’ AI: Training ChatGPT Required Enough Water to Fill a Nuclear Reactor’s Cooling Tower, Study Finds (Gizmodo)
- The Carbon Footprint of Artificial Intelligence, Kirkpatrick CACM 2023