The CIE XYZ tristimulus values of a spectral radiance L are inner products with the colour matching functions x, y, z:

X = dot(L, x)
Y = dot(L, y)
Z = dot(L, z)
The spectrum of the light reflected from a surface (Lᵣ) is the pointwise product of the incoming spectral radiance (L) and the spectral reflectance curve of the surface (R): Lᵣ = L * R
What RGB rendering does is replace this equation (and likewise for y and z): X = dot(Lᵣ, x) = dot(L * R, x) (eq I)
with this: X = dot(L, x) * dot(R, x) (eq II)
These are not equal in the general case. On the other hand, they are equal up to a scaling factor if L is a constant function, since in that case: dot(L * R, x) = C * dot(R, x) (from eq I)
dot(L, x) * dot(R, x) = D * dot(R, x) (from eq II)
where C and D are scaling factors that depend solely on L. The interpretation of this is that as long as the light source has a sufficiently flat spectrum (like sunlight!), the difference between spectral rendering and RGB rendering will be negligible.

This is correct; however, not everything is lit by sunlight, especially on set in the motion picture industry, where solid-state lighting and its narrow-band irradiance sources, e.g. LED lights or LED walls, are prevalent. Those emission sources induce a lot of unexpected appearance problems, e.g. metameric failures, that RGB rendering cannot possibly solve. Our spectral renderer, Manuka [1], was specifically designed to address those issues whilst imaging with a virtual motion picture camera.
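To make the flat-versus-narrow-band point concrete, here is a minimal numerical sketch, with synthetic Gaussian spectra standing in for real CIE matching functions and measured light sources: under the flat illuminant the eq I / eq II ratio is the same constant for every material, while under the LED-like source it varies per material.

```python
import numpy as np

wl = np.linspace(380, 730, 351)  # wavelength samples in nm
gauss = lambda mu, sig: np.exp(-0.5 * ((wl - mu) / sig) ** 2)

x_bar = gauss(600, 40)           # crude stand-in for a CIE colour matching function
R1 = 0.2 + 0.6 * gauss(550, 60)  # two hypothetical surface reflectances
R2 = 0.1 + 0.8 * gauss(620, 30)

L_flat = np.ones_like(wl)        # idealised flat ("sunlight-like") illuminant
L_led = gauss(450, 10)           # narrow-band, LED-like illuminant

for name, L in [("flat", L_flat), ("led", L_led)]:
    # Ratio of eq I to eq II for each material. A constant ratio means RGB
    # rendering only differs from spectral rendering by a global scale factor.
    ratios = [np.dot(L * R, x_bar) / (np.dot(L, x_bar) * np.dot(R, x_bar))
              for R in (R1, R2)]
    print(name, ratios)
# flat -> both ratios identical: eq II is fine up to scaling
# led  -> ratios differ per material: RGB rendering distorts appearance
```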
One recent example I have in mind is The Batman (2022): there are sequences/shots lit with high-pressure sodium (HPS) lights, which have a mostly flat continuum with strong spikes between 500 and 600 nm. The appearance of objects lit under them is very specific and can only be accurately reproduced with a spectral renderer such as Manuka, Mitsuba [2], or ART [3].
- [1] https://www.wetafx.co.nz/research-and-tech/technology/manuka...
- [2] https://www.mitsuba-renderer.org/
- [3] https://cgg.mff.cuni.cz/ART/
Just to spark discussion, here is one example of research aiming to reduce that gap and bring more physicality: https://ssteinberg.xyz/2023/03/27/rtplt/
I do consulting work in this area, in case you ever have questions or could use a hand developing something.
(Great comment, nitpick ahead)
...to the human eye, that is. For industrial applications spectral rendering can still be useful.
I played my part in this back in the 2010s maintaining the Blender integration, fun times :)
But both the renderer and the integrations were pretty much entirely rewritten in the move to GPU compute shortly after that time.
Our eyes have three different types of cone cells with different responses[2], so the fact that three basis functions[3] that roughly align with the cone responses work well as an approximation isn't that surprising to me.
That said, way back when I was doing rendering I was toying with the idea of trying to find optimal basis functions for rendering, e.g. whether scene-specific ones could improve things (a rough sketch of that idea follows the footnote below).
[1]: https://www.psych.mcgill.ca/misc/fda/ex-basis-a1.html
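A rough sketch of that scene-specific basis idea, with made-up random spectra standing in for real scene measurements: stacking sampled spectra into a matrix and keeping the top three singular vectors yields the best three-dimensional linear basis for that set in the least-squares sense.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(380, 730, 351)  # wavelength samples in nm

def random_spectrum():
    # Smooth synthetic spectrum: a random mixture of Gaussian bumps.
    mus, sigs, amps = rng.uniform(400, 700, 4), rng.uniform(20, 80, 4), rng.uniform(0, 1, 4)
    return sum(a * np.exp(-0.5 * ((wl - m) / s) ** 2)
               for a, m, s in zip(amps, mus, sigs))

S = np.stack([random_spectrum() for _ in range(200)])  # 200 spectra x 351 bins

# The top three right singular vectors form the best rank-3 linear basis for
# these spectra (Eckart-Young), i.e. a "scene-specific" analogue of RGB.
_, _, Vt = np.linalg.svd(S, full_matrices=False)
basis = Vt[:3]                  # shape (3, 351), rows are orthonormal

coeffs = S @ basis.T            # project each spectrum onto the basis
S_hat = coeffs @ basis          # reconstruct from just three coefficients
print("relative error:", np.linalg.norm(S - S_hat) / np.linalg.norm(S))
```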
Show me! I want to see the difference!
> ... I accept that it is just a domain-specific term that shouldn't be taken too literally.
Actually it is supposed to be taken literally. That's why it's physically based rendering and not physically correct rendering. The goal is to create a system based on real world physics to get as close as possible within the constraints of current computer hardware.
I believe this article gets it wrong:
> The primary goal of physically-based rendering (PBR) is to create a simulation that accurately reproduces the imaging process of electro-magnetic spectrum radiation incident to an observer. This simulation should be indistinguishable from reality for a similar observer.
I don't think the stated goal of PBR is to be indistinguishable from reality for a similar observer. At least, I have never seen this stated anywhere.
However, it's worth noting that PBR is a loaded term and includes both physically based lighting and physically based shading (surface properties). If we are purely talking about lighting, that is, modelling the way light bounces in a physical way, it might be possible to get an exact match to physical reality. Light bounces in a relatively simple way.
The problem is that surfaces are not simple. They absorb, reflect, diffract, scatter, and so on. So even if the equations modelling the reaction of light are 100% physically correct, it's an entirely different story to model a complex surface like skin in a physically correct way.
This article is about light, so it's talking about physically based lighting. In that case the phrase "this simulation should be indistinguishable from reality for a similar observer" is more reasonable, but I think it should still be qualified for clarity.
Differentiable rendering: http://rgl.epfl.ch/publications/Jakob2022DrJit
Volumetric rendering: http://rgl.epfl.ch/publications/NimierDavid2022Unbiased
Optimization of light path choice via AD: http://rgl.epfl.ch/publications/Vicini2021PathReplay
I'll certainly keep you in mind if I need/want some help in the future.
But Mitsuba is not really a complete system for making pretty pictures; it is more of a framework for researchers to prototype algorithms. So while many state-of-the-art methods have been implemented on top of Mitsuba, not nearly all of them have been merged into the mainline. If you want comparisons, you probably need to look up specific papers to see comparisons for the specific problems they solve.
For example, in the early days people created reflection models without considering Helmholtz reciprocity. Well, that's a physical principle that's very well established in physics[2], so any physically based renderer should only have reflection models that obey that principle.
> I don't think the stated goal of PBR is to be indistinguishable from reality for a similar observer. At least, I have never seen this stated anywhere.
I'm, admittedly, highly biased, as I work for Wētā FX, a high-end visual effects vendor. For over a decade we have been developing a spectral renderer, Manuka [1], whose design principles and philosophy are aimed at recreating how a motion picture camera, its lenses, filters, and even colour science image the real world, so that we can integrate virtual CG elements into plates as if they were part of the original photography. This hopefully gives some context to your quote. Whether we are successful at this is a different topic, but it is a core business offering we are hired for.
- [1] https://www.wetafx.co.nz/research-and-tech/technology/manuka...
I think we could do better, even in RGB, by treating translucency as a 3x3 matrix instead of a single alpha scalar. Per-channel transparency could be represented by a diagonal matrix, but there is no reason the current RGB basis would be adequate for representing transparent materials, hence the generalization to a full 3x3 matrix.
Then a tint wouldn't need to be defined additively: green-tinted glass could simply not let most of the red and blue through.
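A hypothetical sketch of the difference, with illustrative numbers rather than measured transmittance data:

```python
import numpy as np

light = np.array([1.0, 1.0, 1.0])  # incoming RGB radiance (white light)

# Scalar alpha: a single number attenuates all channels equally.
out_scalar = 0.5 * light

# Per-channel transparency: a diagonal matrix. Green-tinted glass simply
# lets little red and blue through; no additive tint term is needed.
T_diag = np.diag([0.1, 0.8, 0.1])
out_diag = T_diag @ light

# General 3x3 transmittance: off-diagonal terms allow "cross-talk" between
# channels, for materials whose transmission is not aligned with the RGB basis.
T_full = np.array([[0.10, 0.05, 0.00],
                   [0.05, 0.75, 0.05],
                   [0.00, 0.05, 0.10]])
out_full = T_full @ light

print(out_scalar, out_diag, out_full)
```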
However, as you point out, very few materials and lights are like this, which is why the approximation works well.
> I’m, admittedly, highly biased as I work for Wētā FX, a high-end visual effects vendor
That's fair. I can see it being a stated goal of PBR at Weta. However, I'm at the extreme other end of the rendering spectrum. I work with WebGL (I contribute a lot to three.js) and being indistinguishable from reality is most assuredly NOT a goal of a real time in browser rendering engine. I mean, we can dream. But it will be a long, long time before we could realize that dream so our time is better spent finding approximations that can work even on a midrange mobile device.
I wonder what the pathological cases are here. I'm imagining some setup where an object could appear variably green, yellow, or red; this could maybe be accomplished by different mixes of red/green lasers and white light. The paint might be a bigger problem :)
But the key to PBR, in my view, is that the whole pipeline is based on physical principles and math, while the old renderers were not. So previous renderers would not bother properly normalizing the Lambertian coefficients, for example, leading to violations of energy conservation (see the sketch after the footnotes below).
[1]: https://en.wikipedia.org/wiki/Color_constancy
[2]: https://vimeo.com/11932120 (sadly poor quality, better rips can be found)
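To make the normalization point above concrete, here is a small Monte Carlo sketch (assumed albedo of 0.8, uniform hemisphere sampling): a Lambertian BRDF must be albedo/π so that the hemispherical reflectance, the integral of f * cos(theta) over the hemisphere, equals the albedo and never exceeds 1; dropping the 1/π factor makes the surface reflect roughly 2.5x the energy it receives.

```python
import numpy as np

rng = np.random.default_rng(0)
albedo = 0.8
f = albedo / np.pi          # properly normalised Lambertian BRDF (constant)

# Monte Carlo estimate of the hemispherical reflectance:
#   integral over the hemisphere of f * cos(theta) d(omega)
n = 1_000_000
cos_theta = rng.uniform(0.0, 1.0, n)  # uniform hemisphere sampling: cos(theta) ~ U(0, 1)
pdf = 1.0 / (2.0 * np.pi)             # solid-angle pdf of a uniform direction
estimate = np.mean(f * cos_theta / pdf)
print(estimate)  # ~0.8 = albedo; the unnormalised f = albedo would give ~2.51 > 1
```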