Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
– Yahoo Finance: https://yahoofinance.com
– MasterClass: https://masterclass.com/lexpod to get 15% off
– NetSuite: http://netsuite.com/lex to get a free product tour
– LMNT: https://drinkLMNT.com/lex to get a free sample pack
– Eight Sleep: https://eightsleep.com/lex to get $350 off
Transcript: https://lexfridman.com/roman-yampolskiy-transcript
EPISODE LINKS:
Roman’s X: https://twitter.com/romanyam
Roman’s Website: http://cecs.louisville.edu/ry
Roman’s AI book: https://amzn.to/4aFZuPb
PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips
SUPPORT & CONNECT:
– Check out the sponsors above; it’s the best way to support this podcast
– Support on Patreon: https://www.patreon.com/lexfridman
– Twitter: https://twitter.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Medium: https://medium.com/@lexfridman
OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
() – Introduction
() – Existential risk of AGI
() – Ikigai risk
() – Suffering risk
() – Timeline to AGI
() – AGI Turing test
() – Yann LeCun and open source AI
() – AI control
() – Social engineering
() – Fearmongering
() – AI deception
() – Verification
() – Self-improving AI
() – Pausing AI development
() – AI Safety
() – Current AI
() – Simulation
() – Aliens
() – Human mind
() – Neuralink
() – Hope for the future
() – Meaning of life