Tim Sweeney
Podcast Appearances
And so the challenge is how to simplify every component of the rendering, the geometry, the lighting, and so on, down to real-time techniques that are efficient and still capture a realistic view of what's around you. And so when an object is up close to you, you want to render it with a lot more polygons than when it's far away.
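For a rough feel of how that distance-based detail selection can work, here is a minimal C++ sketch that picks a level of detail from how many screen pixels an object covers. The function, its parameters, and the one-level-per-halving scheme are illustrative assumptions, not how any particular engine does it.

```cpp
// Minimal sketch of distance-based LOD selection (illustrative assumptions).
#include <algorithm>
#include <cmath>

// Choose a LOD index so that the object's projected size on screen
// determines how much geometry we spend on it. LOD 0 is the full-detail
// mesh; each step up is used when the object covers roughly half as many
// pixels as the previous threshold.
int ChooseLod(float objectRadius, float distanceToCamera,
              float screenHeightPixels, float verticalFovRadians,
              int numLods)
{
    // Approximate projected half-height of the object in pixels.
    float projected = (objectRadius * screenHeightPixels)
                    / (distanceToCamera * 2.0f * std::tan(verticalFovRadians * 0.5f));

    // Drop one LOD level each time the object shrinks by half on screen.
    float lod = std::log2(std::max(screenHeightPixels / std::max(projected, 1.0f), 1.0f));
    return std::min(static_cast<int>(lod), numLods - 1);
}
```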
But one of the cool principles of mathematics is the Nyquist sampling theorem. It says if you're trying to reconstruct a signal, there's a limit to the amount of data you need to bother capturing. If you want to render a texture at a certain resolution, then you never need more than twice the pixels in the texture that you have on the screen. And that's called the Nyquist limit.
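As a concrete illustration of that limit, the sketch below picks a texture mipmap level so that the chosen level never has more than about twice as many texels as the screen pixels it covers. The helper name and its parameters are assumptions made for illustration.

```cpp
// Sketch of Nyquist-style mip selection (illustrative, not a GPU's exact rule).
#include <algorithm>
#include <cmath>

// texelsPerScreenPixel: how many texels of mip 0 map onto one screen pixel
// along one axis. Up to ~2 texels per pixel we are within the Nyquist limit;
// beyond that, coarser mips lose nothing visible, so step down a level each
// time the density doubles again.
int ChooseMipLevel(int baseTextureSize, float texelsPerScreenPixel)
{
    float level = std::log2(std::max(texelsPerScreenPixel / 2.0f, 1.0f));
    int maxLevel = static_cast<int>(std::log2(static_cast<float>(baseTextureSize)));
    return std::min(static_cast<int>(level), maxLevel);
}
```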
One of the challenges of computer graphics is the need to render objects at extreme close-up distances and at extreme faraway distances. You always want to generate just the right amount of geometry: enough to be indistinguishable from reality, but no more than necessary.
And with geometry, the idea is that if you render about two triangles per pixel, you should get an image that is indistinguishable from one rendered with thousands of triangles per pixel. If you render fewer than two triangles per pixel, you're going to start to see visible artifacts of the loss. And GPUs have this amazing hardware across a lot of different pipelines, but it's all very fixed-function.
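A minimal sketch of that budget, applying the same Nyquist-style argument to geometry: for a mesh covering some number of screen pixels, cap the triangle count at roughly two per pixel and pick the coarsest precomputed LOD that still meets it. The MeshLod struct and the fine-to-coarse ordering are assumptions for illustration.

```cpp
// Sketch: pick the coarsest LOD that still delivers ~2 triangles per pixel.
#include <cstdint>
#include <vector>

struct MeshLod {
    uint32_t triangleCount;   // triangles in this level; LODs sorted fine -> coarse
};

int ChooseGeometryLod(const std::vector<MeshLod>& lods, float coveredPixels)
{
    const float budget = 2.0f * coveredPixels;   // Nyquist-style upper bound
    int chosen = 0;
    for (int i = 0; i < static_cast<int>(lods.size()); ++i) {
        if (lods[i].triangleCount >= budget) {
            chosen = i;       // this coarser LOD still has enough triangles
        } else {
            break;            // going any coarser would drop below the budget
        }
    }
    return chosen;
}
```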
There's pixel shader hardware, there's geometry processing hardware, and then there's triangle rasterization hardware. One of the limits of GPUs is that the triangle rasterizers are built for pretty large triangles. If you're rendering a triangle that covers 10 pixels, that's pretty efficient. But if you're rendering a triangle that covers one pixel, it's very inefficient.
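One common reason a one-pixel triangle is so costly is that hardware pixel shading runs on 2x2 quads, so a triangle that touches a single pixel still pays for four shader invocations. The numbers below are a back-of-the-envelope model of that effect, not measurements of any specific GPU.

```cpp
// Rough model of quad-shading overhead for large vs. tiny triangles.
#include <cstdio>

int main()
{
    // A 10-pixel triangle might touch ~4 quads -> 16 invocations for 10 useful pixels.
    double largeTriangleEfficiency = 10.0 / 16.0;   // ~63% of shading work useful

    // A 1-pixel triangle still occupies one full quad -> 4 invocations for 1 pixel.
    double tinyTriangleEfficiency = 1.0 / 4.0;      // 25% useful

    std::printf("10-pixel triangle: %.0f%% of shading useful\n", largeTriangleEfficiency * 100.0);
    std::printf(" 1-pixel triangle: %.0f%% of shading useful\n", tinyTriangleEfficiency * 100.0);
    return 0;
}
```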
So one of the breakthroughs Brian made was to design an entire pipeline that avoids the rasterization hardware in the GPU and goes straight to pixels, calculating what should be done with each pixel as a result of ray tracing and geometry intersection calculations done in a pixel shader.
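The sketch below illustrates the general principle of bypassing the hardware rasterizer and deciding each pixel directly: pack depth and a triangle id into one 64-bit value and use an atomic min as the depth test, the way a compute-style visibility buffer can. This is a CPU-side illustration under assumed names, not Epic's actual implementation.

```cpp
// Sketch of a software visibility-buffer write (illustrative, not Nanite's code).
#include <atomic>
#include <cstdint>
#include <vector>

// Each entry packs depth in the high bits and a triangle id in the low bits,
// so an atomic min performs the depth test and records the winning triangle
// in one operation. Entries are assumed pre-filled with UINT64_MAX (far plane).
using VisBufferEntry = std::atomic<uint64_t>;

void WritePixel(std::vector<VisBufferEntry>& visBuffer, int width,
                int x, int y, float depth, uint32_t triangleId)
{
    // depth is assumed normalized to [0, 1], smaller meaning closer.
    uint64_t packed = (static_cast<uint64_t>(depth * 4294967295.0) << 32)
                    | triangleId;
    VisBufferEntry& entry = visBuffer[y * width + x];

    // Lock-free atomic min: keep the closest triangle. Shading happens later
    // per pixel, using the surviving triangle id to fetch its material.
    uint64_t current = entry.load(std::memory_order_relaxed);
    while (packed < current &&
           !entry.compare_exchange_weak(current, packed, std::memory_order_relaxed)) {
        // current was reloaded by compare_exchange_weak; retry while still closer.
    }
}
```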
So instead of using the triangle pipeline, we're just using the pixel pipeline and getting a better result.