A few weeks ago, I tried tackling this issue and came up with a solution inspired by this, which proposes more than one approach. At the time I went with RTC (relative to center). That approach works fine up to a certain world size; if we want to go larger, we need something else. Additionally, RTC does not fit well with the plans I have for the future viewer. In the near future we’ll change the batching system completely and aggressively batch geometries together. An RTC implementation requires the vertex position attributes to be defined relative to the mesh’s local origin, which is fine on its own, since bringing the geometry back into its original world space would only take a simple transformation. However, because we’ll be batching geometries, we won’t be able to use any additional transformation to bring them back into the original world space, unless we do crazy stuff like finding each batch’s center of mass and baking the positions relative to that, which is just too much work and a complication we neither want nor need.
Right, so RTC is off the table. We’ll be turning towards RTE (relative to eye), a technique also known as floating origin. With RTE, you don’t need to alter the original vertex positions in any way. Instead, you treat the viewer (camera) as the stationary origin of space, and everything else moves around it. Unity uses this approach to render large worlds. In essence, what RTE does is reduce the size of the numbers involved in the typical GPU transformation pipeline when the camera is relatively close to the affected (by jitter) meshes. When the camera is far away, the numbers become large again and you would expect the geometry to jitter, but here’s the catch: a distant mesh projects onto a much smaller area of the screen, so the precision you actually need is also much smaller, and the jitter is no longer visible.
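To get a feel for the numbers: a 32-bit float carries roughly 7 significant decimal digits, so around a coordinate of 1,000,000 the spacing between representable values is 0.0625 units, while near the origin it is vanishingly small. A quick illustration in JavaScript (Math.fround rounds a number to float32 precision):
// Near 1,000,000 the float32 grid is 0.0625 units wide,
// so a 0.03 offset is simply rounded away.
Math.fround(1000000.03); // -> 1000000
// Near the origin (i.e. after subtracting the camera position),
// the same offset survives almost exactly.
Math.fround(0.03);       // -> 0.029999999329447746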
So let’s see how things work, starting from the original problem. Here’s how the jitter originally manifested itself:
bandicam 2022-05-16 14-36-37-392.mp4
Now let’s use simple RTE. We need to change the vertex program from three.js’s stock version to:
// Note: This is a simplification. I kept only what's relevant for example's sake.
// There's more stuff going on, but not relevant for this.
attribute vec3 position; // The vertex position attribute
uniform vec3 uViewer; // The camera position
// vec4 mvPosition = vec4(position.xyz, 1.); Three.js stock
vec4 mvPosition = vec4(position.xyz - uViewer.xyz, 1.); // RTE. Inverse translate the world by the camera position
mvPosition = modelViewMatrix * mvPosition;
gl_Position = projectionMatrix * mvPosition;
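For context, the uViewer uniform has to be refreshed from the camera’s world position every frame. Here’s a minimal, self-contained sketch of that wiring (a plain ShaderMaterial with a flat fragment shader, purely illustrative and not the viewer’s actual material); note that the translation-zeroing step described next still applies:
import * as THREE from 'three';

// With ShaderMaterial, three.js injects the declarations for `position`,
// `modelViewMatrix` and `projectionMatrix` automatically.
const material = new THREE.ShaderMaterial({
  uniforms: {
    uViewer: { value: new THREE.Vector3() } // camera world position
  },
  vertexShader: `
    uniform vec3 uViewer;
    void main() {
      // RTE: inverse-translate the world by the camera position
      vec4 mvPosition = vec4(position - uViewer, 1.0);
      mvPosition = modelViewMatrix * mvPosition;
      gl_Position = projectionMatrix * mvPosition;
    }
  `,
  fragmentShader: `
    void main() {
      gl_FragColor = vec4(1.0);
    }
  `
});

// Refresh the uniform from the camera's world position once per frame.
function updateViewerUniform(camera) {
  camera.getWorldPosition(material.uniforms.uViewer.value);
}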
Additionally, we need the modelView matrix to use a view matrix whose translation is zeroed out, as if the camera sat at (0, 0, 0). Remember, the camera is the world origin in RTE, and its translation has already been applied in the shader by subtracting uViewer, so it must not be applied again here.
// Indices 12, 13, 14 hold the translation component
object.modelViewMatrix.elements[12] = 0
object.modelViewMatrix.elements[13] = 0
object.modelViewMatrix.elements[14] = 0
Let’s see what we get:
bandicam 2022-05-16 14-37-04-097.mp4
Right, so there is less jitter; however, it’s not completely gone! We need to augment the simple RTE method. According to the main reference, we can go a bit further and increase the precision of our positions by encoding each double value as two floats. To do this we’ll have to change more things. First, we’ll need to split the position attribute into two separate attributes, low and high, corresponding to the two floats computed as per the article. We’ll also need to encode the camera position as two floats, as sketched below.
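As a rough sketch of what that split looks like on the CPU side (this follows the encoding commonly used for this technique, e.g. in Cesium, not necessarily the viewer’s exact code), each double is decomposed into a high part that is a multiple of 65536 and a low part holding the remainder:
// Split a double-precision value into two float32-friendly parts whose sum
// reconstructs the original value with far better precision than one float.
function doubleToTwoFloats(value) {
  if (value >= 0) {
    const high = Math.floor(value / 65536) * 65536;
    return [high, value - high];
  } else {
    const high = Math.floor(-value / 65536) * 65536;
    return [-high, value + high];
  }
}
The split is applied per component to every vertex position (baked into the two attributes) and to the camera position every frame. With that in place, the vertex program becomes: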
// Note: This is a simplification. I kept only what's relevant for example's sake.
// There's more stuff going on, but not relevant for this.
attribute vec3 position_low; // The vertex position low attribute
attribute vec3 position_high; // The vertex position high attribute
uniform vec3 uViewer_low; // The camera position low component
uniform vec3 uViewer_high; // The camera position high component
// vec4 mvPosition = vec4(position.xyz, 1.); Three.js stock
vec3 highDifference = vec3(position_high.xyz - uViewer_high);
vec3 lowDifference = vec3(position_low.xyz - uViewer_low);
vec4 mvPosition = vec4(highDifference.xyz + lowDifference.xyz, 1.);
mvPosition = modelViewMatrix * mvPosition;
gl_Position = projectionMatrix * mvPosition;
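On the CPU side, here’s a rough sketch of how the encoded attributes and camera uniforms could be fed to the shader, using the doubleToTwoFloats helper from above (again illustrative, and assuming a ShaderMaterial that declares uViewer_high and uViewer_low as Vector3 uniforms):
import * as THREE from 'three';

// Build position_high / position_low attributes from double-precision positions.
// `positions` is a Float64Array of xyz triplets in world space.
function setEncodedPositions(geometry, positions) {
  const high = new Float32Array(positions.length);
  const low = new Float32Array(positions.length);
  for (let i = 0; i < positions.length; i++) {
    const [h, l] = doubleToTwoFloats(positions[i]);
    high[i] = h;
    low[i] = l;
  }
  geometry.setAttribute('position_high', new THREE.BufferAttribute(high, 3));
  geometry.setAttribute('position_low', new THREE.BufferAttribute(low, 3));
}

// Refresh the encoded camera position uniforms once per frame.
function updateViewerUniforms(material, camera) {
  const p = camera.getWorldPosition(new THREE.Vector3());
  const [hx, lx] = doubleToTwoFloats(p.x);
  const [hy, ly] = doubleToTwoFloats(p.y);
  const [hz, lz] = doubleToTwoFloats(p.z);
  material.uniforms.uViewer_high.value.set(hx, hy, hz);
  material.uniforms.uViewer_low.value.set(lx, ly, lz);
}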
With these changes we get:
bandicam 2022-05-16 15-26-21-082.mp4
The jitter is completely gone!