Delivering physically correct lighting computations (it is real!)

Talk about anything related to Unvanquished.
illwieckz
Project Head
Posts: 777
Joined: Sat Aug 11, 2012 7:22 pm UTC
Location: France

Delivering physically correct lighting computations (it is real!)

Post by illwieckz »

For 10 years I have done research and spent time and effort improving our lighting. We are finally close to achieving something I have been pursuing for a long time…

We are close to getting linear lighting working.

This is something that required changes in two places:

  • In the map compiler: those changes were already implemented by the Xonotic folks in q3map2.
  • In the game engine: this is what we are now close to merging in Dæmon.

I think we may follow this roadmap:

  1. We test, review and merge Dæmon#1687.
  2. Once Dæmon#1687 is merged, we publish a point release to distribute the feature to users as soon as possible. That release would only ship the feature in the engine; the stock maps would not be re-released (to keep it simple).
  3. I re-release NetRadiant with an updated build menu enabling the correct light computations, so mappers can start distributing maps that use the new, correct lighting computations and people can host those maps, knowing players have a compatible engine to display them.
  4. We prepare and publish the 0.56.0 release, with all stock maps rebuilt with the correct lighting computations and maybe some other asset changes (like including the atcshd map and texture set as stock assets, with new heightmaps, etc.).

I'm currently working on making a new NetRadiant release very soon, but it would be stupid to release a new NetRadiant right before releasing the feature in the Dæmon engine, since that would force me to publish yet another NetRadiant release right after the engine release…

So I'm OK with putting the NetRadiant release on hold while waiting for the 0.55.5 point release, but for that we need to merge the Dæmon#1687 pull request soon.

This comment is licensed under CC BY 4 and antecedent. The Crunch tool is awesome!

killing time
Programmer
Posts: 180
Joined: Wed Jul 04, 2012 7:55 am UTC

Re: Delivering physically correct lighting computations (it is real!)

Post by killing time »

There is the issue that letting the map determine the color space for the entire rendering pipeline can mess up the rendering of our game assets (i.e. players, buildables, weapons, and their effects). I don't think it's nice for a model to look totally different depending on which map is loaded. Last time we discussed sRGB, I found the example of the rifle smoke puff appearing much brighter than it should when in sRGB mode. Any multi-stage shaders, as well as dynamic lights, would likely be affected. In any case it would be nice to merge some code soon so that we can test more easily, but IMO it's not production ready if the game assets don't look consistent and good.

I still think it would be interesting to try the following algorithm to partially support mixing content designed for different color spaces:

  1. Render all opaque sRGB-unaware surfaces without doing sRGB-to-linear conversion on image textures. Dynamic lights would need to be adjusted to make sure the result is the same as if the surface were sRGB-aware.
  2. Run a screen-space shader to convert the color buffer from sRGB to linear space.
  3. Render all opaque sRGB-aware surfaces, with sRGB-to-linear conversion of image textures enabled.
  4. Render all translucent surfaces with sRGB-to-linear conversion of image textures enabled.
  5. cameraEffects shader converts color buffer to sRGB in preparation for 2d rendering.
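
For reference, the conversions in steps 2 and 5 are the standard IEC 61966-2-1 piecewise sRGB curves. Here is a minimal C++ sketch of the per-channel math (the in-engine versions would of course be the GLSL equivalents):

    #include <cmath>

    // sRGB -> linear (step 2): undo the sRGB encoding of a [0,1] channel.
    float SRGBToLinear(float c) {
        return c <= 0.04045f ? c / 12.92f
                             : std::pow((c + 0.055f) / 1.055f, 2.4f);
    }

    // Linear -> sRGB (step 5): re-encode the color buffer for 2D rendering.
    float LinearToSRGB(float c) {
        return c <= 0.0031308f ? c * 12.92f
                               : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
    }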

Note that this means translucent sRGB-unaware surfaces would not be rendered correctly. In our goal state, where all our game assets are sRGB-aware, this would mean that only the translucent surfaces in legacy maps have incorrect blending. But hey, transparent surfaces are FUBAR anyway, between being rendered in the wrong order and using whacked-out blending modes which don't play nicely with the floating-point framebuffer. I think accepting potentially wrong blending for those would be superior to the alternatives of having to eternally maintain our game assets so that they look good in both color-space pipelines, or accepting that they are blended very differently depending on the map. To be specific, multi-stage shaders and all dynamic lights would work differently depending on the map if we let the map's color space determine how blending works for the entire pipeline.
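
To make that concrete with the helpers sketched above: an additive two-stage shader that samples the same mid-grey (sRGB 0.5) texture twice comes out very differently in the two pipelines (my numbers, rounded):

    // Additive blend of two mid-grey (sRGB 0.5) samples.
    float naive  = 0.5f + 0.5f;                             // = 1.00 -> pure white
    float linear = LinearToSRGB(2.0f * SRGBToLinear(0.5f)); // ~= 0.69 -> a bright grey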

We could implement a q3shader parameter to mark a shader as being sRGB-ready or not. But that would probably not be useful, since it would only affect opaque shaders with multiple stages. Such shaders are common in legacy maps (e.g. lightmap stages which we don't manage to auto-collapse, or light styles), but hardly occur at all in our game assets. Probably we would just assume either that all game assets are ready or that all are unready.

For "turning on/off" sRGB-to-linear conversion for image texture reads, we would need to be able to register more than one instance of an image, since the conversion is effected by setting the GL image format. But this ability already exists with SHADER_2D / SHADER_3D_DYNAMIC / SHADER_3D_STATIC. In fact I believe that will need to be used even for a straight sRGB pipeline since we want linearization to occur for 3D rendering, but not 2D.

If we're too lazy to migrate the game assets at first, we could also consider the inverse of the algorithm described above:

  1. Render all opaque sRGB-aware surfaces, with sRGB-to-linear conversion of images enabled. Dynamic lights would need to be adjusted to make sure the result is the same as if the surface were sRGB-unaware.
  2. Run a screen-space shader to convert the color buffer from linear to sRGB space.
  3. Render all opaque sRGB-unaware surfaces, with sRGB-to-linear conversion of images disabled.
  4. Render all translucent surfaces, with sRGB-to-linear conversion of images disabled.
    In that case, it is the translucent surfaces of new sRGB maps that would have blending that does not work as it should.
illwieckz
Project Head
Posts: 777
Joined: Sat Aug 11, 2012 7:22 pm UTC
Location: France

Re: Delivering physically correct lighting computations (it is real!)

Post by illwieckz »

killing time wrote: Tue Jul 08, 2025 4:19 am UTC

There is the issue that letting the map determine the color space for the entire rendering pipeline can mess up the rendering of our game assets (i.e. players, buildables, weapons, and their effects). I don't think it's nice for a model to look totally different depending on which map is loaded. Last time we discussed sRGB, I found the example of the rifle smoke puff appearing much brighter than it should when in sRGB mode

The rifle smoke puff was probably too bright because I had not taken care of that kind of surface yet.

Any multi-stage shaders, as well as dynamic lights, would likely be affected. In any case it would be nice to merge some code soon so that we can test more easily, but IMO it's not production ready if the game assets don't look consistent and good.

Yes, but from what I have seen, the difference is usually very minor. Only the nano light flare really displeases me; it obviously doesn't look translucent enough.

Other things can be adjusted in time, but I haven't noticed anything obviously wrong in the base game.

I still think it would be interesting to try the following algorithm to partially support mixing content designed for different color spaces:

  1. Render all opaque sRGB-unaware surfaces without doing sRGB-to-linear conversion on image textures. Dynamic lights would need to be adjusted to make sure the result is the same as if the surface were sRGB-aware.
  2. Run a screen-space shader to convert the color buffer from sRGB to linear space.
  3. Render all opaque sRGB-aware surfaces, with sRGB-to-linear conversion of image textures enabled.
  4. Render all translucent surfaces with sRGB-to-linear conversion of image textures enabled.
  5. cameraEffects shader converts color buffer to sRGB in preparation for 2d rendering.

Note that this means translucent sRGB-unaware surfaces would not be rendered correctly. In our goal state, where all our game assets are sRGB-aware, this would mean that only the translucent surfaces in legacy maps have incorrect blending. But hey, transparent surfaces are FUBAR anyway, between being rendered in the wrong order and using whacked-out blending modes which don't play nicely with the floating-point framebuffer. I think accepting potentially wrong blending for those would be superior to the alternatives of having to eternally maintain our game assets so that they look good in both color-space pipelines, or accepting that they are blended very differently depending on the map. To be specific, multi-stage shaders and all dynamic lights would work differently depending on the map if we let the map's color space determine how blending works for the entire pipeline.

We could implement a q3shader parameter to mark a shader as being sRGB-ready or not. But that would probably not be useful, since it would only affect opaque shaders with multiple stages. Such shaders are common in legacy maps (e.g. lightmap stages which we don't manage to auto-collapse, or light styles), but hardly occur at all in our game assets. Probably we would just assume either that all game assets are ready or that all are unready.

For "turning on/off" sRGB-to-linear conversion for image texture reads, we would need to be able to register more than one instance of an image, since the conversion is effected by setting the GL image format. But this ability already exists with SHADER_2D / SHADER_3D_DYNAMIC / SHADER_3D_STATIC. In fact I believe that will need to be used even for a straight sRGB pipeline since we want linearization to occur for 3D rendering, but not 2D.

If we're too lazy to migrate the game assets at first, we could also consider the inverse of the algorithm described above:

  1. Render all opaque sRGB-aware surfaces, with sRGB-to-linear conversion of images enabled. Dynamic lights would need to be adjusted to make sure the result is the same as if the surface were sRGB-unaware.
  2. Run a screen-space shader to convert the color buffer from linear to sRGB space.
  3. Render all opaque sRGB-unaware surfaces, with sRGB-to-linear conversion of images disabled.
  4. Render all translucent surfaces, with sRGB-to-linear conversion of images disabled.
    In that case, it is the translucent surfaces of new sRGB maps that would have blending that does not work as it should.

I would prefer that we avoid complex heuristics if we can. I find it better that, starting with some upcoming release, we declare that proper colorspace support is the official way the game is expected to be rendered. Then we would consider it OK that official assets like our player models, buildables, and weapons may not look exactly as expected when playing legacy maps, as long as it is not obviously ugly.

Then, I would not oppose a simple enough trick that would improve the look of official assets when playing legacy maps, but that's not something we are obliged to do.

This comment is licensed under CC BY 4 and antecedent. The Crunch tool is awesome!

illwieckz
Project Head
Posts: 777
Joined: Sat Aug 11, 2012 7:22 pm UTC
Location: France

Re: Delivering physically correct lighting computations (it is real!)

Post by illwieckz »

So my idea is that the next major release will ship the colorspace code, and loading maps built the new way with the -sRGB q3map2 switch will be officially supported. We may ship all our stock maps rebuilt that way.

This doesn't prevent us from shipping an engine-only point release before that which enables the feature, without it being officially supported yet, so people can start exploring and testing the new option. I'm not afraid of having people host new maps for clients using older engine builds, because rendering a newly made map with an old engine doesn't look that bad and is not unplayable. In fact, the math used to mistakenly render such a new map the old way isn't fundamentally more broken than what we always did until now (it was always broken anyway).

So having people host test servers with newly built maps, even if the major release is not published yet, isn't something I consider we should avoid.

This comment is licensed under CC BY 4 and antecedent. The Crunch tool is awesome!

killing time
Programmer
Posts: 180
Joined: Wed Jul 04, 2012 7:55 am UTC

Re: Delivering physically correct lighting computations (it is real!)

Post by killing time »

illwieckz wrote: Fri Jul 25, 2025 8:54 pm UTC

I find it better that, starting with some upcoming release, we declare that proper colorspace support is the official way the game is expected to be rendered. Then we would consider it OK that official assets like our player models, buildables, and weapons may not look exactly as expected when playing legacy maps, as long as it is not obviously ugly.

In practice, online players spend only a fraction of their time playing official maps. There is no prospect of getting the majority of maps using sRGB-aware builds. It doesn't make sense to design our game models so that they will look bad 80% of the time and good 20% of the time. If we are going to have the map determine the naive/linear rendering mode for all surfaces, then our assets need to look good in both modes. Of course if we don't change any of our shaders, our assets will just look better in naive mode since that's what they were designed against.

Supposing we want assets to look good in both modes, we probably need some q3shader directive to conditionally choose shaders or shader stages based on the colorspace. For example, a stage-level colorspace naive or colorspace linear keyword could disable the stage if the respective colorspace is not being used.
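
Something like the following, where the stage-level colorspace keyword is hypothetical (it does not exist yet) and the texture paths are made up for illustration:

    textures/example/glow_panel
    {
        {
            map textures/example/panel.tga
        }
        {
            map textures/example/panel_glow.tga
            blendFunc add
            colorspace naive    // hypothetical: stage dropped in the linear pipeline
        }
        {
            map textures/example/panel_glow_linear.tga
            blendFunc add
            colorspace linear   // hypothetical: stage dropped in the naive pipeline
        }
    }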

If we want to get maps out using the sRGB-aware precomputed lighting as fast as possible, it would make sense to start with an approach like the slipher/srgb-map-old-colorspace branch. Then we wouldn't have to migrate any shaders at all to start with. We could implement a worldspawn key to request naive blending, add it to maps for their initial sRGB-aware builds, and remove it from a map once its shaders are migrated.
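
For instance, the map source's worldspawn could carry a key like this (the key name is invented here, whatever we would actually call it):

    // Hypothetical key name, for illustration only:
    {
    "classname" "worldspawn"
    "message" "Example Map"
    "naiveBlending" "1"
    }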

illwieckz
Project Head
Posts: 777
Joined: Sat Aug 11, 2012 7:22 pm UTC
Location: France

Re: Delivering physically correct lighting computations (it is real!)

Post by illwieckz »

Our models were made at a time when the software used to make them was already colorspace aware. Our models are already expected to be rendered with linear operations; what this means is that those models aren't rendered correctly yet.

Of course we cannot exclude that we may have slightly broken some of our model textures as a way to work around our naive renderer, but we should expect those model textures to have been made with colorspace-aware tools.

Our best example may be the human model, which has looked ugly in our engine since the day we got it. It is ugly because it is meant to be rendered the linear way. I don't care if, after we merge the srgb-native branch, this model still looks as bad as it has looked for 10 years when playing a legacy Tremulous map that has never been rebuilt since. We have endured that for 10 years; I don't mind if it continues. But I want this model to look correct after we merge the code making our engine colorspace aware.

All of the following screenshots were taken in this order:

  • 0.55.4 release with the naive renderer, currently released map build, half lambert lighting off
  • 0.55.4 release with the naive renderer, currently released map build, half lambert lighting on
  • srgb-native branch with the colorspace-aware renderer, preview map build, half lambert lighting off
  • srgb-native branch with the colorspace-aware renderer, preview map build, half lambert lighting on

In all cases I disabled the rim lighting; that's something to evaluate separately.

I know that half lambert lighting is mostly a workaround for lighting models in a naive renderer, but I still wanted to show how it displays with the linear renderer; the difference is subtle enough that we need not bother making half lambert conditional for now.
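
For context, half lambert (a trick popularized by Valve) remaps the Lambertian N·L term so that geometry facing away from the light never goes fully black. A minimal sketch of the two diffuse terms (C++ here; the shader code would be the GLSL equivalent):

    #include <algorithm>

    // Standard Lambert diffuse term: black wherever the surface faces away.
    float Lambert(float NdotL) {
        return std::max(NdotL, 0.0f);
    }

    // Half lambert: remap N.L from [-1,1] to [0,1], then square so the
    // falloff doesn't look completely flat.
    float HalfLambert(float NdotL) {
        float t = NdotL * 0.5f + 0.5f;
        return t * t;
    }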

The human model

So let's look at that human model, first in vega, where the human looks very bad:

[two screenshots: half lambert off, then on]

Of course it's better with the linear renderer:

[two screenshots: half lambert off, then on]

But you may say this is an unfair comparison, because we know the vega map suffers a lot from unwanted darkness with the naive renderer. So let's look at the same human model in atcshd, outdoors under strong sunlight. With the naive renderer it's even worse under such strong sunlight: at least on vega the darkness could fool our minds, but here the wrongness is seen in plain light, no pun intended:

[two screenshots: half lambert off, then on]

And with the linear renderer, finally, our human model has human-colored skin; the sun stops shining on the forehead as if it were wax instead of skin; the clothing no longer looks like fish scales but like actual fabric; and the armour is no longer some chromium bling-bling bumper from a tuning meet in the local supermarket's parking lot, but a sturdy material made to withstand both shocks and scratches:

[two screenshots: half lambert off, then on]

The firebomb

So now let's look at our lovely firebomb under the atcshd sun. That firebomb in the naive renderer has some weird reflections; it looks like it is made of chocolate, yum yum! …but you're not supposed to eat it:

[two screenshots: half lambert off, then on]

Now it looks like the firebomb is made of a solid material whose parts you definitely don't want sent at your face as fast as possible. I remind you that the body of the firebomb uses the same specular map as the regular grenade:

[two screenshots: half lambert off, then on]

The armoury

Now let's look at the armoury; it looks so bad with the naive renderer in atcshd:

[two screenshots: half lambert off, then on]

With the linear renderer, it looks good:

[two screenshots: half lambert off, then on]

Some human base

In maps like plat23, such a human base looks acceptable with both the naive and linear renderers:

[two screenshots: half lambert off, then on]

We already know the blending should be reworked for the medistation effect; that's the only problem so far:

[two screenshots: half lambert off, then on]

Some Alien base

In maps like parpax, here is how an alien base looks with the naive renderer:

[two screenshots: half lambert off, then on]

Now with the linear renderer, it looks like this. And in fact it's funny, because here the half lambert lighting still helps improve the look of the overmind:

[two screenshots: half lambert off, then on]

This comment is licensed under CC BY 4 and antecedent. The Crunch tool is awesome!

illwieckz
Project Head
Posts: 777
Joined: Sat Aug 11, 2012 7:22 pm UTC
Location: France

Re: Delivering physically correct lighting computations (it is real!)

Post by illwieckz »

So the previous message mostly answered this:

killing time wrote: Sun Jul 27, 2025 2:40 am UTC

Supposing we want assets to look good in both modes, we probably need some q3shader directive to conditionally choose shaders or shader stages based on the colorspace. For example, a stage-level colorspace naive or colorspace linear keyword could disable the stage if the respective colorspace is not being used.

What is missing from my answer is that I'm not against the idea of implementing specific shader switches for improving the look of models in legacy maps, but that's not something to make mandatory for merging the linear renderer and reaching that long-awaited milestone.

So to sum it up:

  • we have to merge the linear renderer and make the linear pipeline the only officially supported pipeline as soon as possible,
  • we may accept workarounds to make assets made for the linear pipeline look better with the legacy naive renderer.

We have to merge the linear renderer and make the linear pipeline the only officially supported pipeline as soon as possible because we should not continue the current way: when producing and maintaining maps, we are wasting time and effort testing lighting that is broken. This waste should stop, as soon as possible, and once and for all.

We should reach the point where someone producing something with our engine, be it a map or a whole game, uses a linear pipeline without having to mind any broken pipeline from the year 1999. And we should not expect new contributors to learn the undocumented dark-wizard tricks that we required to get maps that don't look too obviously broken. We need to reach that very important milestone as soon as possible.

In practice, online players spend only a fraction of their time playing official maps. There is no prospect of getting the majority of maps using sRGB-aware builds. It doesn't make sense to design our game models so that they will look bad 80% of the time and good 20% of the time. If we are going to have the map determine the naive/linear rendering mode for all surfaces, then our assets need to look good in both modes. Of course if we don't change any of our shaders, our assets will just look better in naive mode since that's what they were designed against.

The stock maps aren't the only maps that actually get rebuilt. Our mapping scene is active, and a lot of its maps can be rebuilt to benefit from the new pipeline. The ones not rebuilt would just look like they did before. But anyway, our models look better in the linear pipeline, so old maps will look broken like before, and models will look broken in old maps like before. Merging the linear pipeline changes nothing in that regard.

If we want to get maps out using the sRGB-aware precomputed lighting as fast as possible, it would make sense to start with an approach like the slipher/srgb-map-old-colorspace branch. Then we wouldn't have to migrate any shaders at all to start with. We could implement a worldspawn key to request naive blending, add it to maps for their initial sRGB-aware builds, and remove it from a map once its shaders are migrated.

The srgb-native branch is ready to merge. There is nothing to wait for before merging it. We don't need to do extra development before merging it.

If we want to get maps out using the sRGB-aware precomputed lighting as fast as possible, we just have to merge that branch, nothing more.

We can then spend time polishing the experience a bit, like adjusting some blending materials.

We may even create some new shader keywords if needed.

But nothing prevents us from merging the srgb-native branch. All that is needed for a mapper to produce a new map for the linear pipeline, and get it rendered with the linear pipeline, is for that engine branch to be merged and for the -sRGB option to be added to the q3map2 light stage.
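
For example, a light stage would then look something like this (the other switches are illustrative, mappers keep whatever settings they already use; -sRGB is the only new part):

    q3map2 -light -fast -patchshadows -samples 3 -bounce 8 -sRGB maps/mymap.map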

The q3map2 code is already available in current builds, and the engine code is ready to merge.

This comment is licensed under CC BY 4 and antecedent. The Crunch tool is awesome!

Redsky
Posts: 10
Joined: Sat Apr 28, 2012 6:47 pm UTC

Re: Delivering physically correct lighting computations (it is real!)

Post by Redsky »

You described here less distinct highlights and less harsh shadows.

Both are effects of more diffuse light. It can be the expected and wanted effect, depending on how light/textures are treated, but here's the question: are we able to recreate harsh dynamic lighting in the sRGB branch?

I don't want to throw water onto the fire, but this post undercuts the goal of showing off the importance of accurate, non-naive handling of sRGB assets. That's because with any major change to the renderer, assets become de-synced from the engine.

After tonemapping was introduced by Reaper, map geometry is less obscured by shadows and lights have become much more saturated. The feature is a big win in my book, but I don't know if I could say that this is how maps were "supposed to look". The way I see it, the update provides a way to improve upon the assets that the old rendering system didn't. To be more in line with the original spirit of those maps, one might start by desaturating lights and perhaps tweaking their strength, and see what can be done from there; but the fact of the matter is that the assets (maps) need to be updated.

You assert the sRGB branch fixes lighting errors in the assets. I'm not convinced these 'lighting errors' aren't quirks of the naive implementation and a matter of taste. Even if textures were created 100% correctly by a trustworthy process, they may still have been intentionally tweaked (eyeballed) to match their visual representation in the old engine 10+ years ago. I can say for sure that some of the textures were definitely put together haphazardly and deemed 'good enough', while others were never finished.

To conclude: presenting the full potential of the changes you introduced requires assets tailor-made for the task.
A proposed solution:

  1. produce or procure an asset with a trusted specular map

  2. find light/exposure conditions in which the naive implementation would obviously fail

  3. optionally, recreate renders in analogous conditions in software that is capable of rendering without mangling sRGB textures/light

illwieckz
Project Head
Posts: 777
Joined: Sat Aug 11, 2012 7:22 pm UTC
Location: France

Re: Delivering physically correct lighting computations (it is real!)

Post by illwieckz »

Hmm, sorry @Redsky, it looks like I misclicked and clicked the “edit” button instead of the “quote” one. 🤦‍♀️ I restored your post based on my quotes.


So here was my first post

You described here less distinct highlights and less harsh shadows.

As a preamble, I want to point out that we are delivering a game to users; we are not delivering screenshots. Screenshots seen out of the gaming context and displayed in reduced form on a forum may give a different feeling than actually roaming within the rendered world while playing.

I didn't describe less distinct highlights and less harsh shadows; I described the result of a computation.

To make a comparison: I'm not discussing the fact that people may perceive the dress as white and gold, but the fact that it is blue and black, so we should sample it as blue and black before applying the light over it. Even if the computation could produce the same pixel color we would get by sampling gold, we should implement the computation that produces that pixel color by sampling black and not gold.

Someone wanting more distinct highlights or harsher shadows should set up the lighting accordingly in the source materials, not rely on broken computations that may happen to look pleasing when framed in a selective screenshot while the render breaks everywhere else as soon as we look around.

Both are effects of more diffuse light. It can be the expected and wanted effect, depending on how light/textures are treated, but here's the question: are we able to recreate harsh dynamic lighting in the sRGB branch?

We never had harsh dynamic lighting; we had broken short light falloff, broken light bouncing, and broken light attenuation (and we still have broken backsplash-as-a-point-light, but that is more minor).

We sometimes had the lucky perception of harsh dynamic lighting, when the stars aligned and a goat had been sacrificed, and it only worked if we didn't look around or behind.

Q3map2 provides multiple lighting options when making maps: some for hard direct sunlight (like a sunny day), some for more diffuse lighting (like a cloudy day), the ability to mix them, and so on. We also have an in-engine color grader that allows some artistic creativity, like modifying the saturation, contrast, or tone of a whole scene. Those are the mechanisms a creator should leverage to get what they have in mind.
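
For example, a map's sky shader can mix both kinds of light; all values here are purely illustrative:

    textures/mymap/sky
    {
        surfaceparm sky
        surfaceparm noimpact
        surfaceparm nolightmap
        q3map_sun 1.0 0.98 0.9 120 210 35   // hard direct sunlight: R G B intensity degrees elevation
        q3map_skylight 80 4                 // diffuse sky dome light: amount iterations
        skyparms env/mymap/sky - -
    }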

I don't want to throw water onto the fire, but this post undercuts the goal of showing off the importance of accurate, non-naive handling of sRGB assets. That's because with any major change to the renderer, assets become de-synced from the engine.

It is true that some of our assets were tested against a broken engine; that is obvious for the transparent textures.

I also suspect that some of the specular maps from our oldest texture packs may have been too. Or maybe I'm still too accustomed to brokenness, like most of us, to fully accept the truth yet.

For example, I may still believe something is wrong because it's not like it was before, when before was wrong but I was accustomed to it. A model like the overmind falls into that range where I still have doubts. The overmind doesn't look broken in the new linear pipeline if I unlearn the way I look at it, but I have been accustomed to seeing it overly shiny for years, so my first thought was that the linear pipeline was breaking it. Then I spent more time experimenting with the branch and comparing more models, and now I'm starting to perceive the old overmind's over-shininess as buggy.

Also, I know we modified some textures that did not look in-game like they looked in the modeler's software, so some select assets may even be in an in-between state.

After tonemapping was introduced by Reaper, map geometry is less obscured by shadows and lights have become much more saturated. The feature is a big win in my book, but I don't know if I could say that this is how maps were "supposed to look". The way I see it, the update provides a way to improve upon the assets that the old rendering system didn't. To be more in line with the original spirit of those maps, one might start by desaturating lights and perhaps tweaking their strength, and see what can be done from there; but the fact of the matter is that the assets (maps) need to be updated.

I share the feeling of "win in the book, mixed opinion on the result". I like the idea of tonemapping on paper, but my experience of our current implementation makes me feel it breaks both maps done for the non-linear pipeline and maps done for the linear one; I suspect it is not calibrated properly yet. I also believe tonemapping is a good thing to have because, if I'm right, it is a required foundation for the upcoming adaptive lighting. But right now there is something that really feels unnatural when looking at maps with the tonemapper enabled. I'll probably start a dedicated thread on the tonemapper.

You assert the sRGB branch fixes lighting errors in the assets. I'm not convinced these 'lighting errors' aren't quirks of the naive implementation and a matter of taste. Even if textures were created 100% correctly by a trustworthy process, they may still have been intentionally tweaked (eyeballed) to match their visual representation in the old engine 10+ years ago. I can say for sure that some of the textures were definitely put together haphazardly and deemed 'good enough', while others were never finished.

Of course many of our assets were just "put together haphazardly and deemed 'good enough' while others were never finished", but something like the human model is obviously wrong with the old pipeline; it always has been, and my brain was never fooled into believing that the human model was meant to look like this.

Also, we have to keep in mind that even when a texture is not very good, it's already a win if the lighting itself is good, whatever the texture.

Some of our textures may look less good with the linear pipeline, but that is not an argument in favor of keeping the lighting broken. Once we get proper lighting, we have a first win; when we then fix such a texture, we have a second win.

To conclude: presenting the full potential of the changes you introduced requires assets tailor-made for the task.
A proposed solution:

  1. produce or procure an asset with a trusted specular map

  2. find light/exposure conditions in which the naive implementation would obviously fail

  3. optionally, recreate renders in analogous conditions in software that is capable of rendering without mangling sRGB textures/light

So,

produce or procure an asset with a trusted specular map

Yes, we should favor trusted specular maps or, better, PBR maps (we definitely need to fix our PBR pipeline too).

find light/exposure conditions in which the naive implementation would obviously fail

The Vega map is a good example where the naive implementation is obviously failing.

optionally, recreate renders in analogous conditions in software that is capable of rendering without mangling sRGB textures/light

Do you mean something like rendering our maps in Blender to check whether the newly implemented pipeline in our engine does it right?

I'm concerned about the ability to cross-check what I implemented (I may have introduced bugs and be taking for granted something that is unknowingly broken). That's why I used Xonotic maps in my testing process: I can compare the same Xonotic map built for both DarkPlaces and the Dæmon engine. This only allows me to compare the pipeline in the renderer though, not in the lightmapper.


And here was my second post

I want to remind everyone that the purpose of the engine is to bring newer pipelines to the game for newly produced content.

That doesn't conflict with our other goal of supporting legacy assets done the old way.

What is not a purpose of the engine, and is not part of our goals, is to provide a way to produce newer assets the old way, rendered by a newer pipeline, mixing today's fixes with yesterday's brokenness. That goal would conflict both with the goal of bringing newer pipelines for newly produced content the way it is done today, and with the goal of supporting old assets the way they were done before.

So to rephrase it: we have a goal of bringing a modern and accurate linear pipeline supporting PBR workflows and the like, and we have a goal of loading existing maps done with techniques from 1999, but we do not have a goal of mixing a PBR workflow with broken lighting from 1999.

Regarding old assets, we maintain the legacy behavior, and I'm known to strongly advocate for that. But when producing new content, we should not handicap ourselves with legacy brokenness, and we should not handicap contributors and newcomers with such legacy brokenness either.

Someone coming tomorrow and starting to create a map, a mod, or a whole game on the Unvanquished game or the Dæmon engine should never have to care about the legacies. Such a person should be able to start with a fully linear pipeline and PBR workflow without having to know anything about the legacy.

So yes, after merging the linear pipeline branch and starting to use it with newly created or rebuilt maps, we will have to make some iterative minor changes to improve the look of our assets. But we have to keep in mind that those changes may not even be fixes, but just us leveraging the engine to finally do things we could not do before because of brokenness.

The matter is not new. The same concern arose when we switched away from the broken fast light falloff, and the concern will arise again when we switch backsplash lights to being area lights instead of the broken point-light way Q3 did it in 1999. Of course, every time we fix something in the lighting, we may have to reconsider what we have. But we will end up there anyway, today or tomorrow, so we had better do it as soon as we can. We can't avoid that fate, so we had better not delay it. The only way to avoid it would be for the project to die before we achieve the correctness of the engine, and that is not what we want.

This comment is licensed under CC BY 4 and antecedent. The Crunch tool is awesome!

Redsky
Posts: 10
Joined: Sat Apr 28, 2012 6:47 pm UTC

Re: Delivering physically correct lighting computations (it is real!)

Post by Redsky »

Can you restore my original comment, please? If you cannot, please repost your commentary under your own name, as I do not recall cosigning it.
Plus, it was already a hard post to read and understand (on account of me not being that good at English) without it being recreated from pieces interspersed within your response.

A TL;DR from my previous comment:

  1. I claim the assets were created for the broken implementation, and therefore can't be used to prove correctness

  2. I call for preparing assets that are actively created for correct sRGB handling and, IF you want to show off its usefulness, finding tests/conditions where old-style assets would be obviously broken

  3. I suggest the new look of old assets is a matter of taste, and I asked: if someone wanted to recreate the old look with harsh lighting, could they do it in the new implementation, or is it permanently soft, diffuse lighting?

Edit: Thank you for restoring my original post.
