A naive, physically approximate atmospheric scattering technique, combined with equally approximate shadow volumes, in forward rendering. That means everything happens in the fragment shader, in a single pass.
It all started with the need to place the sun in the sky, to see the direction of the light source, and to confirm the accuracy of a procedural displacement mapping technique with regard to the additionally generated normals. That's why this video still keeps a close focus on rocks, pebbles and cracks, which appear correctly displaced and lit. The camera flies down to a few centimeters from the surfaces, and yet triangles/texels are barely visible.
This procedural displacement mapping resembles the well-known "micropolygons" technique used at Pixar for film rendering. The difference is that here the micropolygons cover a screen area that is on average close to the pixel area, but never less than it. This small difference makes a huge difference for correct per-vertex normal calculation, which cannot be simplified as in the original micropolygons technique. Nevertheless, the quality of the displacement is still appreciable and the performance affordable on modern gaming video cards. Another big difference is that this procedural (on-the-fly) displacement mapping runs in real time.
The sky is currently rendered as a quad. Part of the light scattering code is present in both the sky shader and the terrain shader. This solution allows a single pass, although it rules out deferred rendering and the other interesting effects it would enable, such as glow or (maybe) lens flare. However, even in a form that is not strictly accurate, gdevice rendering currently features four important aspects of light: direct radiation, indirect radiation, reflected radiation, and volumetric shadows. A cubemap may come later, together with procedural cloud generation, which I intend to be volumetric as well (that's why it will plausibly end up in a 3D texture instead).
At this point gdevice runs only on the OpenGL 4.3 profile. Other profiles have been dismissed and their code branches progressively removed. However, the terrain engine is still the one I used on OpenGL 1.2 for multitextured terrain: up to a 30 km view, 16x16 vertices per square meter (about 10^11 vertices of coverage in total), with no impostoring and no culling at all (not yet, as long as this LOD technique suffices).
Terrain is now generated on the GPU by means of a compute shader, and the pipeline is purely attributeless. And yes, I'm slowly getting my feet wet with modern rendering techniques.
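In an attributeless pipeline there are no vertex buffers at all: the vertex shader reconstructs each grid position from `gl_VertexID` alone. The same index arithmetic is shown here in plain C for a grid of quads drawn as two triangles each (six vertices per quad, as one might submit with a bare `glDrawArrays` call); the struct and function names are illustrative.

```c
/* Grid coordinates recovered purely from a vertex index, with no
   vertex attributes, for a grid of quads_per_row quads per row. */
typedef struct { int x, y; } GridPos;

static GridPos grid_vertex(int vertex_id, int quads_per_row) {
    /* Corner offsets of the 6 vertices forming a quad's two triangles. */
    static const int cx[6] = {0, 1, 1, 0, 1, 0};
    static const int cy[6] = {0, 0, 1, 0, 1, 1};
    int quad   = vertex_id / 6;
    int corner = vertex_id % 6;
    GridPos p = { quad % quads_per_row + cx[corner],
                  quad / quads_per_row + cy[corner] };
    return p;
}
```

The GLSL version is the same arithmetic with `gl_VertexID` in place of `vertex_id`; the compute shader then only has to fill a heightfield texture that the vertex shader samples at the reconstructed (x, y).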