Delivering physically correct lighting computations (it is real!)

Talk about anything related to Unvanquished.
illwieckz
Project Head
Posts: 768
Joined: Sat Aug 11, 2012 7:22 pm UTC
Location: France

Delivering physically correct lighting computations (it is real!)

Post by illwieckz »

For 10 years I have been doing research and spending time and effort to improve our lighting. We are finally close to achieving something I have been pursuing for a long time…

We are close to getting linear lighting working.

This required changes in two places:

  • In the map compiler, those changes were already implemented by the Xonotic guys in q3map2.
  • In the game engine, this is what we are now close to merging in Dæmon.
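
For reference, the "correct computation" boils down to decoding sRGB-encoded values to linear light before doing any lighting math, and encoding back afterwards. Here is a minimal C++ sketch of the two standard transfer functions (just the reference IEC 61966-2-1 formulas, not the actual q3map2 or Dæmon code):

    #include <cmath>

    // Decode an sRGB-encoded channel value in [0, 1] to linear light,
    // so that lighting math (adding lights, multiplying by albedo) is physical.
    float srgbToLinear( float c )
    {
        return ( c <= 0.04045f ) ? c / 12.92f
                                 : std::pow( ( c + 0.055f ) / 1.055f, 2.4f );
    }

    // Encode a linear channel value back to sRGB for display.
    float linearToSrgb( float c )
    {
        return ( c <= 0.0031308f ) ? c * 12.92f
                                   : 1.055f * std::pow( c, 1.0f / 2.4f ) - 0.055f;
    }

Doing the additions and multiplications between those two conversions, instead of directly on the encoded values, is what "linear lighting" means here.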

I think we may follow this roadmap:

  1. We test, review and merge Dæmon#1687.
  2. Once Dæmon#1687 is merged, we publish a point release to distribute the feature to users as soon as possible. It would only ship the feature in the engine; the stock maps would not be re-released (to keep things simple).
  3. I re-release NetRadiant with an updated build menu enabling the correct light computations, so mappers can start distributing maps built with the new correct lighting computations, people can host those maps, and everyone knows players have a compatible engine to display them.
  4. We prepare and publish the 0.56.0 release, with all stock maps rebuilt with the correct lighting computations and maybe some other asset changes (like including the atcshd map and texture set as stock content, with new heightmaps, etc.).

I'm currently working on making a new NetRadiant release very soon, but it would be stupid to release a new NetRadiant right before releasing the feature in the Dæmon engine, as that would force me to publish yet another NetRadiant release right after the engine release…

So I'm OK with putting the NetRadiant release on hold while waiting for the 0.55.5 point release, but for that we need to merge the Dæmon#1687 pull request soon.

This comment is licensed under cc by 4 and antecedent. The Crunch tool is awesome!

killing time
Programmer
Posts: 178
Joined: Wed Jul 04, 2012 7:55 am UTC

Re: Delivering physically correct lighting computations (it is real!)

Post by killing time »

There is the issue that letting the map determine the color space for the entire rendering pipeline can mess up rendering of our game assets (i.e. players, buildables, weapons, and their effects). I don't think it's nice for a model to look totally different depending on which map is loaded. Last time we discussed sRGB, I found the example of the rifle smoke puff appearing much brighter than it should when in sRGB mode. Any multi-stage shaders, as well as dynamic lights, would likely be affected. In any case it would be nice to merge some code soon so that we can test more easily, but IMO it's not production ready if the game assets don't look consistent and good.

I still think it would be interesting to try the following algorithm to partially support mixing content designed for different color spaces:

  1. Render all opaque sRGB-unaware surfaces without doing sRGB-to-linear conversion on image textures. Dynamic lights would need to be adjusted to make sure the result is the same as if the surface were sRGB-aware.
  2. Run a screen-space shader to convert the color buffer from sRGB to linear space.
  3. Render all opaque sRGB-aware surfaces, with sRGB-to-linear conversion of image textures enabled.
  4. Render all translucent surfaces with sRGB-to-linear conversion of image textures enabled.
  5. cameraEffects shader converts color buffer to sRGB in preparation for 2d rendering.
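
A rough driver-side sketch of that ordering, in C++ (every function name here is a made-up stand-in, not the actual Dæmon renderer API; it only illustrates where the conversions would sit):

    // Hypothetical pass ordering for mixing sRGB-unaware and sRGB-aware content.
    void DrawOpaqueSurfaces( bool linearizeTextures );      // stand-in
    void DrawTranslucentSurfaces( bool linearizeTextures ); // stand-in
    void RunFullscreenSrgbToLinear();                       // stand-in screen-space pass
    void RunCameraEffects();                                // existing cameraEffects pass

    void RenderFrameMixedColorspace()
    {
        // 1. Opaque sRGB-unaware surfaces, textures sampled without linearization.
        //    Dynamic light contributions would need to be re-encoded so the result
        //    matches what an sRGB-aware surface would produce after step 2.
        DrawOpaqueSurfaces( /* linearizeTextures = */ false );

        // 2. Screen-space pass: convert the whole color buffer from sRGB to linear.
        RunFullscreenSrgbToLinear();

        // 3. Opaque sRGB-aware surfaces, with linearized texture reads.
        DrawOpaqueSurfaces( /* linearizeTextures = */ true );

        // 4. All translucent surfaces, blended in linear space.
        DrawTranslucentSurfaces( /* linearizeTextures = */ true );

        // 5. cameraEffects converts the color buffer back to sRGB for 2D rendering.
        RunCameraEffects();
    }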

Note that this means translucent sRGB-unaware surfaces would not be rendered correctly. In our goal state where all our game assets are sRGB-aware, this would mean that just the translucent surfaces in legacy maps have incorrect blending. But hey, transparent surfaces are FUBAR anyway, between being rendered in the wrong order and using wacked-out blending modes which don't play nicely with the floating point framebuffer. I think accepting potentially wrong blending for those would be superior to the alternatives of having to eternally maintain our game assets such that they look good in both color space pipelines, or accepting that they are blended way differently depending on the map. To be specific, multi-stage shaders and all dynamic lights would work differently depending on the map, if we let the map's color space determine how blending works for the entire pipeline.

We could implement a q3shader parameter to mark a shader as being sRGB-ready or not. But that would probably not be useful since it would only affect opaque shaders with multiple stages. Such shaders are common in legacy maps (e.g. lightmap stages which we don't manage to auto-collapse, or light styles), but hardly occur at all in our game assets. Probably we would just assume either that all game assets are ready or that all are unready.

For "turning on/off" sRGB-to-linear conversion for image texture reads, we would need to be able to register more than one instance of an image, since the conversion is effected by setting the GL image format. But this ability already exists with SHADER_2D / SHADER_3D_DYNAMIC / SHADER_3D_STATIC. In fact I believe that will need to be used even for a straight sRGB pipeline since we want linearization to occur for 3D rendering, but not 2D.

If we're too lazy to migrate the game assets at first, we could also consider the obverse of the algorithm described above:

  1. Render all opaque sRGB-aware surfaces, with sRGB-to-linear conversion of images enabled. Dynamic lights would need to be adjusted to make sure the result is the same as if the surface were sRGB-unaware.
  2. Run a screen-space shader to convert the color buffer from linear to sRGB space.
  3. Render all opaque sRGB-unaware surfaces, with sRGB-to-linear conversion of images disabled.
  4. Render all translucent surfaces, with sRGB-to-linear conversion of images disabled.
    In that case the translucent surfaces of new sRGB maps are what would have blending that does not work as it should.

illwieckz
Project Head
Posts: 768
Joined: Sat Aug 11, 2012 7:22 pm UTC
Location: France

Re: Delivering physically correct lighting computations (it is real!)

Post by illwieckz »

killing time wrote: Tue Jul 08, 2025 4:19 am UTC

There is the issue that letting the map determine the color space for the entire rendering pipeline can mess up rendering of our game assets (i.e. players, buildables, weapons, and their effects). I don't think it's nice for a model to look totally different depending on which map is loaded. Last time we discussed sRGB, I found the example of the rifle smoke puff appearing much brighter than it should when in sRGB mode

The rifle smoke puff was probably too bright because I had not yet taken care of that kind of surface.

killing time wrote: Tue Jul 08, 2025 4:19 am UTC

Any multi-stage shaders, as well as dynamic lights, would likely be affected. In any case it would be nice to merge some code soon so that we can test more easily, but IMO it's not production ready if the game assets don't look consistent and good.

Yes, but from what I have seen, the difference is usually very minor. Only the nano light flare really displeases me: it obviously does not look translucent enough.

Other things may be adjusted over time, but I haven't noticed anything obviously wrong in the base game.

killing time wrote: Tue Jul 08, 2025 4:19 am UTC

I still think it would be interesting to try the following algorithm to partially support mixing content designed for different color spaces:

  1. Render all opaque sRGB-unaware surfaces without doing sRGB-to-linear conversion on image textures. Dynamic lights would need to be adjusted to make sure the result is the same as if the surface were sRGB-aware.
  2. Run a screen-space shader to convert the color buffer from sRGB to linear space.
  3. Render all opaque sRGB-aware surfaces, with sRGB-to-linear conversion of image textures enabled.
  4. Render all translucent surfaces with sRGB-to-linear conversion of image textures enabled.
  5. cameraEffects shader converts color buffer to sRGB in preparation for 2d rendering.

Note that this means translucent sRGB-unaware surfaces would not be rendered correctly. In our goal state where all our game assets are sRGB-aware, this would mean that just the translucent surfaces in legacy maps have incorrect blending. But hey, transparent surfaces are FUBAR anyway, between being rendered in the wrong order and using wacked-out blending modes which don't play nicely with the floating point framebuffer. I think accepting potentially wrong blending for those would be superior to the alternatives of having to eternally maintain our game assets such that they look good in both color space pipelines, or accepting that they are blended way differently depending on the map. To be specific, multi-stage shaders and all dynamic lights would work differently depending on the map, if we let the map's color space determine how blending works for the entire pipeline.

We could implement a q3shader parameter to mark a shader as being sRGB-ready or not. But that would probably not be useful since it would only affect opaque shaders with multiple stages. Such shaders are common in legacy maps (e.g. lightmap stages which we don't manage to auto-collapse, or light styles), but hardly occur at all in our game assets. Probably we would just assume either that all game assets are ready or that all are unready.

For "turning on/off" sRGB-to-linear conversion for image texture reads, we would need to be able to register more than one instance of an image, since the conversion is effected by setting the GL image format. But this ability already exists with SHADER_2D / SHADER_3D_DYNAMIC / SHADER_3D_STATIC. In fact I believe that will need to be used even for a straight sRGB pipeline since we want linearization to occur for 3D rendering, but not 2D.

If we're too lazy to migrate the game assets at first, we could also consider the obverse of the algorithm described above:

  1. Render all opaque sRGB-aware surfaces, with sRGB-to-linear conversion of images enabled. Dynamic lights would need to be adjusted to make sure the result is the same as if the surface were sRGB-unaware.
  2. Run a screen-space shader to convert the color buffer from linear to sRGB space.
  3. Render all opaque sRGB-unaware surfaces, with sRGB-to-linear conversion of images disabled.
  4. Render all translucent surfaces, with sRGB-to-linear conversion of images disabled.
    In that case the translucent surfaces of new sRGB maps are what would have blending that does not work as it should.

I would prefer that we avoid complex heuristics if we can. I find it better that, starting with some upcoming release, we declare that proper colorspace support is the official way the game is expected to be rendered. Then we would consider it OK that official assets like our player models, buildables and weapons may not look exactly as expected when playing legacy maps, as long as it is not obviously ugly.

That said, I would not oppose a simple enough trick that improves the look of official assets when playing legacy maps, but that's not something we are obligated to do.

This comment is licensed under cc by 4 and antecedent. The Crunch tool is awesome!

illwieckz
Project Head
Posts: 768
Joined: Sat Aug 11, 2012 7:22 pm UTC
Location: France

Re: Delivering physically correct lighting computations (it is real!)

Post by illwieckz »

So my idea is that the next major release will ship the colorspace code, and loading maps built the new way with the -sRGB q3map2 switch will be officially supported. We may ship all our stock maps rebuilt.

This doesn't prevent us from shipping an engine-only point release before that which enables the feature without it being officially supported yet, so people may start exploring and testing the new option. I'm not afraid of people hosting new maps for clients using older engine builds, because rendering a newly built map with an old engine doesn't look that bad and is not unplayable. In fact, the math used to mistakenly render such a new map the old way isn't fundamentally more broken than what we always did until now (that was always broken anyway).

So having people host test servers with newly built maps even before the major release is published isn't something I consider we should avoid.

This comment is licensed under cc by 4 and antecedent. The Crunch tool is awesome!

killing time
Programmer
Posts: 178
Joined: Wed Jul 04, 2012 7:55 am UTC

Re: Delivering physically correct lighting computations (it is real!)

Post by killing time »

illwieckz wrote: Fri Jul 25, 2025 8:54 pm UTC

I find it better that, starting with some upcoming release, we declare that proper colorspace support is the official way the game is expected to be rendered. Then we would consider it OK that official assets like our player models, buildables and weapons may not look exactly as expected when playing legacy maps, as long as it is not obviously ugly.

In practice, online players spend only a fraction of their time playing official maps. There is no prospect of getting the majority of maps using sRGB-aware builds. It doesn't make sense to design our game models so that they will look bad 80% of the time and good 20% of the time. If we are going to have the map determine the naive/linear rendering mode for all surfaces, then our assets need to look good in both modes. Of course if we don't change any of our shaders, our assets will just look better in naive mode since that's what they were designed against.

Supposing we want assets to look good in both modes, we probably need some q3shader directive to conditionally choose shaders or shader stages based on the colorspace. For example, a stage-level colorspace naive or colorspace linear directive could disable the stage if the respective colorspace is not being used.
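
A minimal sketch of what the engine side of such a directive could look like, assuming a hypothetical stage-level "colorspace naive|linear" keyword (the enum, struct and function names below are made up, not the actual material parser):

    #include <cstring>

    enum class StageColorspace { any, naive, linear };

    struct ShaderStage
    {
        StageColorspace colorspace = StageColorspace::any; // default: keep the stage in both modes
        // ... the rest of the stage state ...
    };

    // Called by the (hypothetical) stage parser when the "colorspace" keyword is read.
    bool ParseColorspaceKeyword( ShaderStage& stage, const char* arg )
    {
        if ( std::strcmp( arg, "naive" ) == 0 )  { stage.colorspace = StageColorspace::naive;  return true; }
        if ( std::strcmp( arg, "linear" ) == 0 ) { stage.colorspace = StageColorspace::linear; return true; }
        return false; // unknown argument: warn and ignore
    }

    // At draw time: drop stages written for the other colorspace.
    bool StageIsActive( const ShaderStage& stage, bool linearPipeline )
    {
        switch ( stage.colorspace )
        {
            case StageColorspace::naive:  return !linearPipeline;
            case StageColorspace::linear: return linearPipeline;
            default:                      return true;
        }
    }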

If we want to get maps out using the sRGB-aware precomputed lighting as fast as possible, it would make sense to start with an approach like the slipher/srgb-map-old-colorspace branch. Then we wouldn't have to migrate any shaders at all to start with. We could implement a worldspawn key to request naive blending, add it to maps for their initial sRGB-aware builds, and remove it from a map once its shaders are migrated.
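
On the engine side, checking such a worldspawn key could be as small as the following (the key name "legacyBlending" and the accessor are made up for illustration; the actual branch may name and plumb this differently):

    #include <string>

    // Hypothetical accessor for worldspawn key/value pairs parsed from the
    // BSP entity lump; the real engine has its own entity parsing helpers.
    std::string GetWorldspawnValue( const std::string& key );

    // Maps built with sRGB-aware lighting that still rely on old-style blending
    // would set "legacyBlending" "1" in worldspawn, and drop the key once their
    // shaders have been migrated.
    bool MapRequestsNaiveBlending()
    {
        return GetWorldspawnValue( "legacyBlending" ) == "1";
    }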
