
LightProbeVolume: Add position-dependent diffuse Global Illumination#33125

Draft
mrdoob wants to merge 9 commits into dev from gi

Conversation

Owner

@mrdoob mrdoob commented Mar 5, 2026

Related issue: #16228 #18371

Description

Adds LightProbeVolume, a 3D grid of Spherical Harmonic irradiance probes that provides position-dependent diffuse global illumination for WebGLRenderer.

  • Baking is fully GPU-resident: cubemap rendering, SH projection, and texture packing all run on the GPU with zero CPU readback
  • Probe data is stored in three RGBA 3D textures with LinearFilter for hardware trilinear interpolation
  • Integrates into the existing lighting pipeline via scene.add( volume )

Usage

const bounds = new THREE.Box3(
    new THREE.Vector3( -5, 0, -5 ),
    new THREE.Vector3( 5, 5, 5 )
);

const volume = new LightProbeVolume( bounds, new THREE.Vector3( 8, 4, 8 ) );
volume.bake( renderer, scene );

scene.add( volume );

Examples

http://raw.githack.com/mrdoob/three.js/gi/examples/webgl_lightprobes.html
http://raw.githack.com/mrdoob/three.js/gi/examples/webgl_lightprobes_complex.html
http://raw.githack.com/mrdoob/three.js/gi/examples/webgl_lightprobes_sponza.html

@mrdoob mrdoob added this to the r184 milestone Mar 5, 2026
Owner Author

mrdoob commented Mar 5, 2026

The API in #18371 sure is prettier...
I mostly focused on learning the subject as well as the limitations (light leaking is hard to solve).

@mrdoob mrdoob changed the title IrradianceProbeGrid: Add position-dependent diffuse Global Illumination LightProbeVolume: Add position-dependent diffuse Global Illumination Mar 5, 2026
Owner Author

mrdoob commented Mar 5, 2026

The LightProbeVolume API is still wip.

Any other ideas for names instead of scene.lightProbeVolume though?

Collaborator

Mugen87 commented Mar 5, 2026

Why not add the volume to the scene like in #18371?

scene.lightProbeVolume would only allow you to define a single volume per scene. But what if you have a scene with multiple rooms and you want a different volume per room?

Owner Author

mrdoob commented Mar 5, 2026

Hmm, that would indeed be better.

I guess the way to do it would be to check which volume the object being rendered is inside of.
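A minimal sketch of that lookup, in plain JavaScript with hypothetical names (`findVolumeFor`, `volume.bounds` are illustrative, not the PR's actual API); the real integration would presumably hook into the renderer's per-object setup:

```javascript
// Pick the volume whose world-space bounds contain the object's position.
// Hypothetical sketch — names and data layout are illustrative only.
function containsPoint( bounds, p ) {

	return p.x >= bounds.min.x && p.x <= bounds.max.x &&
		p.y >= bounds.min.y && p.y <= bounds.max.y &&
		p.z >= bounds.min.z && p.z <= bounds.max.z;

}

function findVolumeFor( object, volumes ) {

	for ( const volume of volumes ) {

		if ( containsPoint( volume.bounds, object.position ) ) return volume;

	}

	return null; // outside every volume; fall back to the other lights

}
```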


if ( _shMaterial !== null ) _shMaterial.dispose();

_shMaterial = new ShaderMaterial( {
Collaborator

@Mugen87 Mugen87 Mar 5, 2026

Instead of using a new shader material, could you also use the existing method LightProbeGenerator.fromCubeRenderTarget() instead like in #18371? The input render target would be the cube camera render target as in the existing code.

Owner Author

@mrdoob mrdoob Mar 5, 2026

LightProbeGenerator.fromCubeRenderTarget() computes only one light probe and it's on the CPU.

This PR computes many of them on the GPU. The sponza one computes 147 light probes (7x7x3).

Collaborator

LightProbeGenerator.fromCubeRenderTarget() computes only one light probe and it's on the CPU.

Ah right, it indeed evaluates the data on the CPU side.

The light probe volume iterates through all its light probes, updates the cube camera and then extracts the light probe data into the batch render target. There is a render call per light probe, and the logic for that could be placed in LightProbeGenerator. However, given how the data are organized in the batch render target, it may be too use-case specific for LightProbeGenerator.

Owner Author

I wonder how useful LightProbe and LightProbeGenerator are... 🤔

Collaborator

@Mugen87 Mugen87 Mar 5, 2026

I think that without something like LightProbeVolume, their utility is indeed questionable. If they are not integrated in this PR, they may become obsolete.

That aside, #18371 was never finished because baking in LDR was considered incorrect in the context of PBR, see #18371 (comment). Hence a baking solution in Blender or an external tool was intended, so LightProbeVolume would just read in the exported baked lighting.

There was a lot of discussion around https://github.com/gillesboisson/threejs-probes-test but I'm not sure about its current state.

In general, asking users to do the baking in an external tool isn't ideal, imo. It would fit three.js better if this could be done directly with LightProbeVolume.

This PR has a different approach but I'm not yet sure if HDR is correctly implemented. The feedback of @donmccurdy and @WestLangley is important here.

Collaborator

@Mugen87 Mugen87 Mar 6, 2026

This PR has a different approach but I'm not yet sure if HDR is correctly implemented.

I've searched around and also asked a colleague from my former university. He recommended reading An Efficient Representation for Irradiance Environment Maps.

This paper is some sort of gold standard for SH-based diffuse light probes. The link to the PDF is: https://graphics.stanford.edu/papers/envmap/envmap.pdf

One outcome of this analysis is: the implementation in this PR uses L1 spherical harmonics, which are insufficient for HDR (and diffuse light in general). The industry standard for diffuse GI with SHs is based on the above paper and requires L2 spherical harmonics, as provided by our SphericalHarmonics3 math class and thus by our current LightProbe implementation. L2 can retain almost all characteristics of an irradiance environment map while L1 can't.

If you ask Gemini why L1 does not properly work with HDR, you get this:

Gemini

While L1 Spherical Harmonics (SH) can store HDR data and provide a basic directional sense to your lighting, they are generally considered insufficient for high-quality real-time rendering, even for indirect diffuse light.

Here is the breakdown of why L2 is the industry standard and where L1 falls short.


1. The "Ramamoorthi" Standard

In their seminal 2001 paper, Ravi Ramamoorthi and Pat Hanrahan demonstrated that the diffuse reflection (Lambertian BRDF) acts as a low-pass filter on the environment.

  • L0 (1 coefficient): Represents the average ambient color (DC term).

  • L1 (3 additional coefficients): Represents the "linear" directionality. It tells you which side is brighter.

  • L2 (5 additional coefficients): Adds the quadratic "shape."

Ramamoorthi showed that L2 SH (9 coefficients total) captures roughly 99% of the energy of the diffuse irradiance map. Dropping down to L1 loses the subtle "clamping" and shaping that makes indirect light look grounded.


2. Why L1 is usually "Not Enough"

The "Ringing" and Negative Light Problem

L1 SH effectively models lighting as a constant term plus a vector. If your HDR light probe has a very bright localized light source (like a window or a sun-bounce), the L1 representation can often result in negative radiance on the opposite side of the sphere. This leads to "black holes" or weird artifacts in your shadows unless you apply heavy-handed (and often inaccurate) rescaling or windowing.

Lack of "Flatness"

Diffuse irradiance isn't just a directional gradient; it has a specific "lobed" shape caused by the cosine term in the rendering equation:

$$E(\mathbf{n}) = \int_{\Omega} L(\omega) (\mathbf{n} \cdot \omega) d\omega$$

L1 is too "pointy." It struggles to represent the way light wraps around a curved surface, often making objects look more like they are lit by a single weak directional light rather than a rich indirect environment.
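For reference, the L2 irradiance evaluation from the Ramamoorthi/Hanrahan paper can be sketched in plain JavaScript. This is just the paper's 9-coefficient polynomial, not the PR's actual coefficient packing; coefficients are scalar here for brevity (RGB in practice), and the `L.*` names are illustrative:

```javascript
// Irradiance E(n) from the 9 SH radiance coefficients, per
// Ramamoorthi & Hanrahan, "An Efficient Representation for
// Irradiance Environment Maps" (2001), eq. 13.
const c1 = 0.429043, c2 = 0.511664, c3 = 0.743125, c4 = 0.886227, c5 = 0.247708;

function irradiance( L, n ) {

	const { x, y, z } = n; // unit normal

	return c1 * L.L22 * ( x * x - y * y )
		+ c3 * L.L20 * z * z
		+ c4 * L.L00
		- c5 * L.L20
		+ 2 * c1 * ( L.L2m2 * x * y + L.L21 * x * z + L.L2m1 * y * z )
		+ 2 * c2 * ( L.L11 * x + L.L1m1 * y + L.L10 * z );

}
```

With only the DC term set, this reduces to a constant ambient value for every normal, which is the L0 behavior described above; the L1 and L2 terms add the directional and quadratic shaping.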


Owner Author

I'll give it a go!

Asked Claude about performance implications:


The cost breaks down into three parts:

Bake time: ~same. The dominant cost is rendering cubemaps at each probe position — that's unchanged. The SH projection shader does 9 multiply-adds instead of 4 per texel, but that's trivial compared to the cubemap renders. Repacking writes 7 slices instead of 3, also negligible.

Memory: ~2.3x more texture memory, but tiny in absolute terms. 7 RGBA float 3D textures instead of 3. For a 6×6×6 grid, that's ~24KB vs ~10KB. Even a 12×12×12 grid would be under 200KB. Not a concern.

Per-frame shader cost: the only real difference. The evaluation shader goes from 3 texture() calls + 4 multiply-adds to 7 texture() calls + 9 multiply-adds. The 3D texture samples with LinearFilter (trilinear interpolation) are the bottleneck — going from 3 to 7 samples per fragment. That said, this runs only on surfaces inside a probe volume, and it's just one part of the overall lighting calculation.

In practice, the per-frame cost increase is modest. The 4 extra 3D texture samples are the only thing that matters, and trilinear 3D lookups are well-optimized on modern GPUs. Most game engines (Unreal, Unity) use L2 for their probes without concern — the quality improvement far outweighs the cost.
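The memory figures quoted above can be checked with quick arithmetic, assuming 32-bit float RGBA texels (16 bytes each):

```javascript
// Total bytes for a grid of probe textures: texels × 4 channels × 4 bytes × textures.
function probeTextureBytes( nx, ny, nz, textureCount ) {

	return nx * ny * nz * 4 /* channels */ * 4 /* bytes per float */ * textureCount;

}

probeTextureBytes( 6, 6, 6, 7 );    // 24192 bytes ≈ 24 KB (L2, 7 textures)
probeTextureBytes( 6, 6, 6, 3 );    // 10368 bytes ≈ 10 KB (L1, 3 textures)
probeTextureBytes( 12, 12, 12, 7 ); // 193536 bytes, under 200 KB
```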

Owner Author

@mrdoob mrdoob Mar 6, 2026

Performance seems pretty good and sponza looks much better yep!

@mrdoob mrdoob force-pushed the gi branch 2 times, most recently from c8fb251 to f12f6a8 Compare March 6, 2026 09:40

github-actions bot commented Mar 6, 2026

📦 Bundle size

Full ESM build, minified and gzipped.

Before → After (minified / gzipped, kB):

WebGL: 359.27 / 85.31 → 362.96 / 86.16 (+3.69 kB / +845 B)
WebGPU: 630.13 / 174.94 → 630.13 / 174.94 (+0 B / +0 B)
WebGPU Nodes: 628.72 / 174.69 → 628.72 / 174.69 (+0 B / +0 B)

🌳 Bundle size after tree-shaking

Minimal build including a renderer, camera, empty scene, and dependencies.

Before → After (minified / gzipped, kB):

WebGL: 491.13 / 119.74 → 494.83 / 120.58 (+3.7 kB / +838 B)
WebGPU: 703.8 / 190.02 → 703.8 / 190.02 (+0 B / +0 B)
WebGPU Nodes: 653.03 / 177.44 → 653.03 / 177.44 (+0 B / +0 B)

@mrdoob mrdoob force-pushed the gi branch 2 times, most recently from b1bb579 to dc0fbdd Compare March 6, 2026 14:40
mrdoob and others added 6 commits March 6, 2026 23:41
GPU-resident L1 SH probe grid with hardware trilinear interpolation.
Cubemap rendering, SH projection and texture packing run entirely on
the GPU with zero CPU readback.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Make LightProbeVolume extend Object3D so it can be added to the scene
graph with scene.add(), enabling multiple volumes per scene and
per-object volume lookup based on bounding box containment.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
#ifdef USE_LIGHT_PROBE_VOLUME
vec3 probeWorldPos = ( inverse( viewMatrix ) * vec4( geometryPosition, 1.0 ) ).xyz;
Collaborator

@WestLangley WestLangley Mar 6, 2026

This can be written without the inverse since the view matrix is orthogonal.

point_transformed_with_inverse = vec4( vec3( ( point - matrix[ 3 ] ) * matrix ), 1.0 );

However, to be glTF-compliant, the scale of the view matrix pushed to the GPU is now 1, so the inverse here is not necessarily cameraMatrixWorld. /ping @Mugen87
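The trick relies on the rotation part of the view matrix being orthonormal, so its inverse equals its transpose; in GLSL, `( point - matrix[ 3 ] ) * matrix` is the vector-times-matrix product, which dots the vector with each column and therefore computes Rᵀ · ( p - t ). A small plain-JS check of that identity (column-major 3×3 rotation plus translation, mirroring the GLSL convention; illustrative only):

```javascript
// Forward transform: M · p where M = [ R | t ], R column-major.
function transformPoint( R, t, p ) {

	return [
		R[ 0 ] * p[ 0 ] + R[ 3 ] * p[ 1 ] + R[ 6 ] * p[ 2 ] + t[ 0 ],
		R[ 1 ] * p[ 0 ] + R[ 4 ] * p[ 1 ] + R[ 7 ] * p[ 2 ] + t[ 1 ],
		R[ 2 ] * p[ 0 ] + R[ 5 ] * p[ 1 ] + R[ 8 ] * p[ 2 ] + t[ 2 ],
	];

}

// Inverse without inverting: Rᵀ · ( p - t ), valid only when R is orthonormal.
function inverseTransformPoint( R, t, p ) {

	const d = [ p[ 0 ] - t[ 0 ], p[ 1 ] - t[ 1 ], p[ 2 ] - t[ 2 ] ];

	return [
		R[ 0 ] * d[ 0 ] + R[ 1 ] * d[ 1 ] + R[ 2 ] * d[ 2 ],
		R[ 3 ] * d[ 0 ] + R[ 4 ] * d[ 1 ] + R[ 5 ] * d[ 2 ],
		R[ 6 ] * d[ 0 ] + R[ 7 ] * d[ 1 ] + R[ 8 ] * d[ 2 ],
	];

}
```

This is exactly why a non-unit scale in the matrix breaks the shortcut: with scale baked into R, Rᵀ is no longer R⁻¹.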

Collaborator

@mrdoob see x chat

Owner Author

@mrdoob mrdoob Mar 6, 2026

This can be written without the inverse since the view matrix is orthogonal.

point_transformed_with_inverse = vec4( vec3( ( point - matrix[ 3 ] ) * matrix ), 1.0 );

However, to be glTF-compliant, the scale of the view matrix pushed to the GPU is now 1, so the inverse here is not necessarily cameraMatrixWorld.

I think this should address this? bd1f65f

@mrdoob see x chat

Checking 👀

@mrdoob mrdoob force-pushed the gi branch 2 times, most recently from e77f366 to efa28bd Compare March 6, 2026 22:21
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
mrdoob and others added 2 commits March 7, 2026 08:12
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>