There is the issue that letting the map determine the color space for the entire rendering pipeline can mess up rendering of our game assets (i.e. players, buildables, weapons, and their effects). I don't think it's nice for a model to look totally different depending on which map is loaded. Last time we discussed sRGB, I found the example of the rifle smoke puff appearing much brighter than it should in sRGB mode. Any multi-stage shaders, as well as dynamic lights, would likely be affected. In any case it would be nice to merge some code soon so that it can be tested more easily, but IMO it's not production ready if the game assets don't look consistent and good.
I still think it would be interesting to try the following algorithm to partially support mixing content designed for different color spaces (a rough sketch of the pass ordering follows the list):
- Render all opaque sRGB-unaware surfaces without doing sRGB-to-linear conversion on image textures. Dynamic lights would need to be adjusted to make sure the result is the same as if the surface were sRGB-aware.
- Run a screen-space shader to convert the color buffer from sRGB to linear space.
- Render all opaque sRGB-aware surfaces, with sRGB-to-linear conversion of image textures enabled.
- Render all translucent surfaces with sRGB-to-linear conversion of image textures enabled.
- Run the cameraEffects shader to convert the color buffer to sRGB in preparation for 2D rendering.
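To make the ordering concrete, here is a minimal C++ sketch of that pipeline. Everything in it (`DrawSurfaces`, `RunScreenSpacePass`, the `linearizeSRGB` shader name) is hypothetical and just stands in for the corresponding engine passes; the only point is where the screen-space conversion sits relative to the opaque and translucent passes.

```cpp
#include <string>

// Hypothetical stand-ins for engine render passes; not real engine API.
enum class SurfaceSet { OpaqueSRGBUnaware, OpaqueSRGBAware, Translucent };
void DrawSurfaces( SurfaceSet set, bool linearizeTextures );
void RunScreenSpacePass( const std::string& shaderName );

void RenderSceneMixed()
{
    // 1. Opaque sRGB-unaware surfaces: sample textures raw. Dynamic
    //    lights must be adjusted here so the output matches what an
    //    sRGB-aware surface would have produced.
    DrawSurfaces( SurfaceSet::OpaqueSRGBUnaware, /*linearizeTextures=*/ false );

    // 2. One screen-space pass converts the whole color buffer from
    //    sRGB to linear, so everything after this blends in linear space.
    RunScreenSpacePass( "linearizeSRGB" );

    // 3. Opaque sRGB-aware surfaces, with sRGB-to-linear conversion
    //    of image textures enabled.
    DrawSurfaces( SurfaceSet::OpaqueSRGBAware, /*linearizeTextures=*/ true );

    // 4. All translucent surfaces, also with conversion enabled; this is
    //    the step where sRGB-unaware translucent content comes out wrong.
    DrawSurfaces( SurfaceSet::Translucent, /*linearizeTextures=*/ true );

    // 5. cameraEffects converts the color buffer back to sRGB for 2D.
    RunScreenSpacePass( "cameraEffects" );
}
```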
Note that this means translucent sRGB-unaware surfaces would not be rendered correctly. In our goal state where all our game assets are sRGB-aware, this would mean that just the translucent surfaces in legacy maps have incorrect blending. But hey, transparent surfaces are FUBAR anyway, between being rendered in the wrong order and using whacked-out blending modes which don't play nicely with the floating-point framebuffer. I think accepting potentially wrong blending for those would be superior to the alternatives: eternally maintaining our game assets such that they look good in both color space pipelines, or accepting that they are blended very differently depending on the map. To be specific, if we let the map's color space determine how blending works for the entire pipeline, multi-stage shaders and all dynamic lights would work differently depending on the map.
We could implement a q3shader parameter to mark a shader as being sRGB-ready or not. But that would probably not be useful since it would only affect opaque shaders with multiple stages. Such shaders are common in legacy maps (e.g. lightmap stages which we don't manage to auto-collapse, or light styles), but hardly occur at all in our game assets. Probably we would just assume either that all game assets are ready or that all are unready.
For "turning on/off" sRGB-to-linear conversion for image texture reads, we would need to be able to register more than one instance of an image, since the conversion is effected by setting the GL image format. But this ability already exists with SHADER_2D / SHADER_3D_DYNAMIC / SHADER_3D_STATIC. In fact I believe that will need to be used even for a straight sRGB pipeline since we want linearization to occur for 3D rendering, but not 2D.
If we're too lazy to migrate the game assets at first, we could also consider the converse of the algorithm described above (the transfer functions used by the conversion passes in both variants are sketched below):
- Render all opaque sRGB-aware surfaces, with sRGB-to-linear conversion of images enabled. Dynamic lights would need to be adjusted to make sure the result is the same as if the surface were sRGB-unaware.
- Run a screen-space shader to convert the color buffer from linear to sRGB space.
- Render all opaque sRGB-unaware surfaces, with sRGB-to-linear conversion of images disabled.
- Render all translucent surfaces, with sRGB-to-linear conversion of images disabled.
In that case it is the translucent surfaces of new sRGB-aware maps that would have incorrect blending.
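For reference, here is a minimal sketch of the two conversions those screen-space passes would apply, following the standard sRGB transfer function (IEC 61966-2-1). It's written as plain C++ for readability; in practice this math would live in the GLSL of the conversion shaders.

```cpp
#include <cmath>

// Decode: sRGB-encoded component in [0,1] -> linear.
float SRGBToLinear( float c )
{
    return ( c <= 0.04045f ) ? c / 12.92f
                             : std::pow( ( c + 0.055f ) / 1.055f, 2.4f );
}

// Encode: linear component in [0,1] -> sRGB (exact inverse of the above).
float LinearToSRGB( float c )
{
    return ( c <= 0.0031308f ) ? c * 12.92f
                               : 1.055f * std::pow( c, 1.0f / 2.4f ) - 0.055f;
}
```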