Conversation
The API in #18371 sure is prettier...
Any other ideas for names instead of
Why not add the volume to the scene like in #18371?
Hmm, that'll be better indeed. I guess the way to do it would be by checking which volume the object being rendered is inside of.
if ( _shMaterial !== null ) _shMaterial.dispose();
_shMaterial = new ShaderMaterial( {
Instead of using a new shader material, could you also use the existing method LightProbeGenerator.fromCubeRenderTarget() instead like in #18371? The input render target would be the cube camera render target as in the existing code.
LightProbeGenerator.fromCubeRenderTarget() computes only one light probe and it's on the CPU.
This PR computes many of them on the GPU. The sponza one computes 147 light probes (7x7x3).
LightProbeGenerator.fromCubeRenderTarget() computes only one light probe and it's on the CPU.
Ah right, it indeed evaluates the data on the CPU side.
The light probe volume iterates through all its light probes, updates the cube camera and then extracts the light probe data into the batch render target. There is a render call per light probe, and that logic could be placed in LightProbeGenerator. However, given how the data are organized in the batch render target, it's maybe too use-case specific for LightProbeGenerator.
I wonder how useful LightProbe and LightProbeGenerator are... 🤔
I think without something like LightProbeVolume, their utility is indeed questionable. If they are not integrated in this PR, they may become obsolete.
That aside, #18371 was never finished because baking in LDR was considered incorrect in the context of PBR, see #18371 (comment). Hence the intent was a baking solution in Blender or an external tool, so LightProbeVolume would just read in the exported baked lighting.
There happened a lot of discussion around https://github.com/gillesboisson/threejs-probes-test but I'm not sure about its state.
In general, asking users to do the baking in an external tool isn't ideal, imo. It would fit three.js better if this could be done directly with LightProbeVolume.
This PR has a different approach but I'm not yet sure if HDR is correctly implemented. The feedback of @donmccurdy and @WestLangley is important here.
This PR has a different approach but I'm not yet sure if HDR is correctly implemented.
I've searched around and also asked a colleague from my former university. He recommended reading An Efficient Representation for Irradiance Environment Maps.
This paper is some sort of gold standard for SH-based diffuse light probes. The link to the PDF is: https://graphics.stanford.edu/papers/envmap/envmap.pdf
One outcome of this analysis: the implementation in this PR uses L1 spherical harmonics, which are insufficient for HDR (and diffuse light in general). The industry standard for diffuse GI with SHs is based on the above paper and requires L2 spherical harmonics, as used by our SphericalHarmonics3 math class and thus by our current LightProbe implementation. L2 can retain almost all characteristics of an irradiance environment map while L1 can't.
If you ask Gemini why L1 does not properly work with HDR, you get this:
Gemini
While L1 Spherical Harmonics (SH) can store HDR data and provide a basic directional sense to your lighting, they are generally considered insufficient for high-quality real-time rendering, even for indirect diffuse light.
Here is the breakdown of why L2 is the industry standard and where L1 falls short.
1. The "Ramamoorthi" Standard
In their seminal 2001 paper, Ravi Ramamoorthi and Pat Hanrahan demonstrated that the diffuse reflection (Lambertian BRDF) acts as a low-pass filter on the environment.
L0 (1 coefficient): Represents the average ambient color (DC term).
L1 (3 additional coefficients): Represents the "linear" directionality. It tells you which side is brighter.
L2 (5 additional coefficients): Adds the quadratic "shape."
Ramamoorthi showed that L2 SH (9 coefficients total) captures roughly 99% of the energy of the diffuse irradiance map. Dropping down to L1 loses the subtle "clamping" and shaping that makes indirect light look grounded.
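The quoted result can be stated precisely (this is the standard form from the paper, not something from this thread): diffuse irradiance is a convolution of radiance with the clamped cosine, which is diagonal in the SH basis:

```latex
E(\mathbf{n}) \;=\; \sum_{l,m} \hat{A}_l \, L_{lm} \, Y_{lm}(\mathbf{n}),
\qquad
\hat{A}_0 = \pi, \quad \hat{A}_1 = \tfrac{2\pi}{3}, \quad \hat{A}_2 = \tfrac{\pi}{4}
```

$\hat{A}_l$ vanishes for odd $l > 1$ and falls off as $l^{-2}$ for even $l$, which is why truncating at $l = 2$ (9 coefficients) keeps roughly 99% of the energy while stopping at $l = 1$ (4 coefficients) does not.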
2. Why L1 is usually "Not Enough"
The "Ringing" and Negative Light Problem
L1 SH effectively models lighting as a constant term plus a vector. If your HDR light probe has a very bright localized light source (like a window or a sun-bounce), the L1 representation can often result in negative radiance on the opposite side of the sphere. This leads to "black holes" or weird artifacts in your shadows unless you apply heavy-handed (and often inaccurate) rescaling or windowing.
Lack of "Flatness"
Diffuse irradiance isn't just a directional gradient; it has a specific "lobed" shape caused by the cosine term in the rendering equation, E(n) = ∫ L(ω) max(0, n·ω) dω.
L1 is too "pointy." It struggles to represent the way light wraps around a curved surface, often making objects look more like they are lit by a single weak directional light rather than a rich indirect environment.
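The negative-radiance problem described above is easy to reproduce numerically. A minimal sketch (plain JS, not a three.js API; the constants are the standard real SH basis values):

```javascript
// Project a single unit-radiance delta light into L1 SH, then
// reconstruct on the bright side and on the opposite side.
const Y0 = 0.282095; // Y_0^0 basis constant
const Y1 = 0.488603; // scale shared by Y_1^{-1}, Y_1^0, Y_1^1

// L1 projection of a delta light from unit direction d: [ c00, c1-1, c10, c11 ].
function projectL1( d ) {

	return [ Y0, Y1 * d[ 1 ], Y1 * d[ 2 ], Y1 * d[ 0 ] ];

}

// Evaluate the L1 reconstruction in unit direction n.
function evalL1( c, n ) {

	return c[ 0 ] * Y0 + Y1 * ( c[ 1 ] * n[ 1 ] + c[ 2 ] * n[ 2 ] + c[ 3 ] * n[ 0 ] );

}

const d = [ 0, 0, 1 ];
const c = projectL1( d );
console.log( evalL1( c, d ).toFixed( 3 ) );            // 0.318 (= 1/π): bright side
console.log( evalL1( c, [ 0, 0, - 1 ] ).toFixed( 3 ) ); // -0.159: negative "light"
```

For this delta light, adding the five even-order L2 terms would push the opposite direction back above zero, which is the "shaping" the quote refers to.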
I'll give it a go!
Asked Claude about performance implications:
The cost breaks down into three parts:
Bake time: ~same. The dominant cost is rendering cubemaps at each probe position — that's unchanged. The SH projection shader does 9 multiply-adds instead of 4 per texel, but that's trivial compared to the cubemap renders. Repacking writes 7 slices instead of 3, also negligible.
Memory: ~2.3x more texture memory, but tiny in absolute terms. 7 RGBA float 3D textures instead of 3. For a 6×6×6 grid, that's ~24KB vs ~10KB. Even a 12×12×12 grid would be under 200KB. Not a concern.
Per-frame shader cost: the only real difference. The evaluation shader goes from 3 texture() calls + 4 multiply-adds to 7 texture() calls + 9 multiply-adds. The 3D texture samples with LinearFilter (trilinear interpolation) are the bottleneck — going from 3 to 7 samples per fragment. That said, this runs only on surfaces inside a probe volume, and it's just one part of the overall lighting calculation.
In practice, the per-frame cost increase is modest. The 4 extra 3D texture samples are the only thing that matters, and trilinear 3D lookups are well-optimized on modern GPUs. Most game engines (Unreal, Unity) use L2 for their probes without concern — the quality improvement far outweighs the cost.
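The memory figures above are easy to verify with back-of-the-envelope arithmetic (RGBA float = 4 channels × 4 bytes = 16 bytes per texel):

```javascript
// Texture memory for an n×n×n probe grid stored as `textures`
// RGBA float 3D textures (4 channels × 4 bytes each = 16 bytes per texel).
const probeGridBytes = ( textures, n ) => textures * n ** 3 * 4 * 4;

console.log( probeGridBytes( 7, 6 ) );   // 24192  -> ~24 KB (L2, 6×6×6)
console.log( probeGridBytes( 3, 6 ) );   // 10368  -> ~10 KB (L1, 6×6×6)
console.log( probeGridBytes( 7, 12 ) );  // 193536 -> <200 KB (L2, 12×12×12)
```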
Sure does look better:
L1
https://raw.githack.com/mrdoob/three.js/gi-l1/examples/webgl_lightprobes.html
https://raw.githack.com/mrdoob/three.js/gi-l1/examples/webgl_lightprobes_complex.html
https://raw.githack.com/mrdoob/three.js/gi-l1/examples/webgl_lightprobes_sponza.html
L2
https://raw.githack.com/mrdoob/three.js/gi/examples/webgl_lightprobes.html
https://raw.githack.com/mrdoob/three.js/gi/examples/webgl_lightprobes_complex.html
https://raw.githack.com/mrdoob/three.js/gi/examples/webgl_lightprobes_sponza.html
Performance seems pretty good and sponza looks much better yep!
GPU-resident L1 SH probe grid with hardware trilinear interpolation. Cubemap rendering, SH projection and texture packing run entirely on the GPU with zero CPU readback. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Make LightProbeVolume extend Object3D so it can be added to the scene graph with scene.add(), enabling multiple volumes per scene and per-object volume lookup based on bounding box containment. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
#ifdef USE_LIGHT_PROBE_VOLUME
	vec3 probeWorldPos = ( inverse( viewMatrix ) * vec4( geometryPosition, 1.0 ) ).xyz;
This can be written without the inverse since the view matrix is orthogonal.
point_transformed_with_inverse = vec4( vec3( ( point - matrix[ 3 ] ) * matrix ), 1.0 );
However, to be glTF-compliant, the scale of the view matrix pushed to the GPU is now 1, so the inverse here is not necessarily cameraMatrixWorld. /ping @Mugen87
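A quick numeric sanity check of the inverse-free form (plain JS sketch, modeling only the 3×3 rotation part of a rigid matrix; helper names are made up for illustration):

```javascript
// For a rigid transform M (rotation R + translation t, no scale),
// inverse( M ) * q equals transpose( R ) * ( q - t ) -- the same math as
// the quoted ( point - matrix[ 3 ] ) * matrix trick in GLSL.
const a = 0.7; // arbitrary rotation angle about Z
const R = [
	[ Math.cos( a ), - Math.sin( a ), 0 ],
	[ Math.sin( a ), Math.cos( a ), 0 ],
	[ 0, 0, 1 ]
];
const t = [ 1, 2, 3 ];

const mul = ( M, v ) => M.map( row => row[ 0 ] * v[ 0 ] + row[ 1 ] * v[ 1 ] + row[ 2 ] * v[ 2 ] );
const transpose = M => M[ 0 ].map( ( _, i ) => M.map( r => r[ i ] ) );

const p = [ 4, - 5, 6 ];
const q = mul( R, p ).map( ( x, i ) => x + t[ i ] ); // q = M * p
const back = mul( transpose( R ), q.map( ( x, i ) => x - t[ i ] ) ); // Rᵀ ( q − t )
console.log( back ); // ≈ [ 4, -5, 6 ]: p recovered without inverse()
```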
I think this should address this? bd1f65f
@mrdoob see x chat
Checking 👀
Related issues: #16228, #18371
Description
Adds LightProbeVolume, a 3D grid of Spherical Harmonic irradiance probes that provides position-dependent diffuse global illumination for WebGLRenderer.
- LinearFilter for hardware trilinear interpolation
- scene.add( volume )
Usage
Examples
http://raw.githack.com/mrdoob/three.js/gi/examples/webgl_lightprobes.html
http://raw.githack.com/mrdoob/three.js/gi/examples/webgl_lightprobes_complex.html
http://raw.githack.com/mrdoob/three.js/gi/examples/webgl_lightprobes_sponza.html