Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Chris Olah
So people, when it comes to AI alignment, will ask things like, well, whose values should it be aligned to? What does alignment even mean? And there's a sense in which I have all of that in the back of my head. I'm like, you know, there's social choice theory. There are all the impossibility results there.