Lex Fridman Podcast
#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters
Nathan Lambert
caches, which are shared between more compute elements. Then you have memory, right, like HBM or DRAM, like DDR memory or whatever it is, and that's shared between the whole chip. And then you can have pools of memory that are shared between many chips, and then storage, and it keeps zoning out. The access latency across data centers, within the data center, and within a chip is different, so you're obviously always going to have different programming paradigms for this. It's not going to be easy. Programming this stuff is going to be hard. Maybe AI can help, right?
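To make the hierarchy he describes concrete, here is a minimal Python sketch. The latency figures are rough, order-of-magnitude assumptions for illustration, not measurements of any specific chip or data center:

```python
# Illustrative latencies (nanoseconds) for tiers of the memory hierarchy
# described above. These are assumed order-of-magnitude figures, not
# measurements of any real hardware.
MEMORY_HIERARCHY_NS = {
    "L1 cache (per compute element)": 1,
    "Shared cache (across cores)": 10,
    "HBM/DRAM (shared by the whole chip)": 100,
    "Pooled memory (shared across chips)": 1_000,
    "Storage / across the data center": 100_000,
}

def latency_ratio(tier: str, baseline: str = "L1 cache (per compute element)") -> float:
    """Return how many times slower a tier is than the baseline tier."""
    return MEMORY_HIERARCHY_NS[tier] / MEMORY_HIERARCHY_NS[baseline]

for tier, ns in MEMORY_HIERARCHY_NS.items():
    print(f"{tier}: ~{ns} ns ({latency_ratio(tier):.0f}x L1)")
```

Each step outward multiplies latency by roughly an order of magnitude, which is why, as he says, no single programming paradigm covers every tier.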