Guillaume Verdon
Podcast Appearances
And the world is jiggling about, there's cosmic radiation from outer space that occasionally flips your quantum bits, and so what you do is encode information non-locally through a process called quantum error correction.
And by encoding information non-locally, any local fault, say hitting some of your quantum bits with a proverbial hammer, can't destroy it: if your information is sufficiently delocalized, it is protected from that local fault. And to me, I think that humans fluctuate, right? They can get corrupted, they can get bought out.
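A rough sketch of the fault-tolerance intuition he's describing, using a classical 3-bit repetition code rather than an actual quantum code (the function names and the single-bit-flip fault model are illustrative assumptions, not anything from the conversation):

```python
# Toy illustration (not Verdon's actual scheme): a classical 3-bit
# repetition code showing how redundantly encoded information survives
# a single local fault, the same intuition behind quantum error correction.
import random

def encode(logical_bit: int) -> list[int]:
    """Spread one logical bit across three physical bits."""
    return [logical_bit] * 3

def local_fault(codeword: list[int]) -> list[int]:
    """Flip one physical bit at random: the 'proverbial hammer'."""
    faulty = codeword.copy()
    i = random.randrange(len(faulty))
    faulty[i] ^= 1
    return faulty

def decode(codeword: list[int]) -> int:
    """Majority vote recovers the logical bit despite one flipped bit."""
    return int(sum(codeword) > len(codeword) // 2)

if __name__ == "__main__":
    for logical in (0, 1):
        corrupted = local_fault(encode(logical))
        assert decode(corrupted) == logical
        print(f"logical {logical} -> {corrupted} -> decoded {decode(corrupted)}")
```

Real quantum error correction is more involved (codes like the surface code diagnose errors via syndrome measurements without reading out the encoded state), but the underlying point is the same: no single local fault touches enough of the delocalized information to corrupt it.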
And if you have a top-down hierarchy where very few people control many nodes of many systems in our civilization, that is not a fault-tolerant system. You corrupt a few nodes and suddenly you've corrupted the whole system. Just like we saw at OpenAI: a couple of board members had enough power to potentially collapse the organization.
And at least to me, you know, I think making sure that power for this AI revolution doesn't concentrate in the hands of the few is one of our top priorities, so that we can maintain progress in AI and we can maintain a nice, stable economy, an adversarial equilibrium of powers, right?
I think we get painted as reckless, trying to go as fast as possible. I mean, the reality is that whoever deploys an AI system is liable for, or should be liable for, what it does. And so if the organization or person deploying an AI system does something terrible, they're liable.
And ultimately, the thesis is that the market will positively select for AIs that are more reliable, safer, and more aligned. They do what you want them to do, right? Because customers, if they're liable for the product they put out that uses this AI, won't want to buy AI products that are unreliable.
So for reliability engineering, we just think that the market is much more efficient at achieving this sort of reliability optimum than heavy-handed regulations that are written by the incumbents and, in a subversive fashion, serve them to achieve regulatory capture.