10 September 2011

Behind the engine

I don’t even remember when it actually started. Maybe many years ago, when I was still a student and covered the distance from the village to the city by train. The landscape flowing beyond the window always made me fantasize, from the perspective of a 3D graphics programmer, about rendering huge open spaces.

In late 2007, in my very spare time, I started to plant the first seeds of a framework for 3D rendering, and some time later I started to play with meshes and levels of detail. I had to face a number of subjects, and every little door opened into new worlds to explore. Also, as the framework grew, in place of answers some new questions arose in my mind, about OOP effectiveness, reusing versus reinventing, scrum in R&D, and so on.

The idea for a level-of-detail algorithm for terrains came in late 2008 and was tested, together with the algorithm that generates the terrain itself, in 2009. That is when I also started to leave some messages in a bottle, by means of this blog and some videos on YouTube/Vimeo, but nothing came of it. The world seems to be crowded with pioneers of procedural generation, some of notable excellence, but all working as isolated cells in their own procedural worlds.

Since 2010 I have hardly been able to work on gdevice, except for some bug fixing, minor changes and a couple of new videos. What’s happening? Hard to explain. The crisis (Crysis?) has made jobs harsher and more draining. Also, it seems that in my area there is no money to hire senior programmers. Nice. Anyway..

What is currently in gdevice?

A number of components, in embryonic form, that are common to 3D rendering frameworks. To name a few: a windowing system (like GLFW), a GLSL-compliant BLAS (like GLM), scene graph management (like OSG), a 3D engine (a.k.a. graphics engine), and a piece of code that eases procedural generation of assets, which I wanted to name the "procedural engine". And so forth.
Everything has been rewritten from the ground up; everything is neat, compact and fast, but nothing stands out. Except maybe for one thing:

Gdevice is OpenGL SC compatible

Most (if not all) terrain LOD techniques that don’t precompute and don’t sandbox the world, and are thus suitable for continuously streaming real-time applications (such as synthetic vision systems), take advantage of pipeline programmability. That explains why the 2000s were so prolific in new terrain LOD algorithms. However, those algorithms generally don’t fit very well into the OpenGL 1 fixed-function pipeline.

Only after having read lots of the related literature can I say that the algorithm used in gdevice somewhat resembles the well-known Geometry Clipmaps, with some critical modifications. The major one is that whole terrain tiles take the place of the quads. This gives worse visual popping, better performance (VBO and impostor affinity) and, above all, compliance with OpenGL 1, so it can apply to resource-constrained devices, embedded devices and military devices (most of which must still use OpenGL SC).
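To make the tile idea concrete, here is a minimal sketch of how a clipmap-like scheme can assign a level of detail to each tile from its ring distance to the viewer, doubling the grid step per ring. The function name and the exact ring-to-LOD mapping are my own assumptions for illustration, not gdevice's actual code.

```cpp
#include <cmath>
#include <cstdlib>

// Pick a tile's LOD from its Chebyshev (ring) distance to the viewer's
// tile: the center tile is full detail, and detail halves roughly each
// time the ring index doubles, clipmap-style.
int tileLod(int tileX, int tileY, int viewX, int viewY, int maxLod) {
    int dx = std::abs(tileX - viewX);
    int dy = std::abs(tileY - viewY);
    int ring = dx > dy ? dx : dy;                       // ring index around the viewer
    int lod = (ring == 0) ? 0
            : (int)std::floor(std::log2((double)ring)) + 1;
    return lod < maxLod ? lod : maxLod;                 // clamp to the coarsest level
}
```

Because the unit is a whole tile rather than a single quad, each tile can be kept in its own VBO (or replaced by an impostor far away), which is what makes the scheme viable on a fixed-function pipeline.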

The video

At that time gdevice had the first version of the algorithm, with its main defects: cracks and visual popping.

In the video above you can watch the rendering of a huge, dense terrain in real time, on the OpenGL 1 profile, on very modest hardware (a laptop equipped with an old Intel GMA graphics chip).

How huge and how dense is this terrain?

The observer is at the middle point of a square scene that is, according to the textures, about 32 km per side (so you have a 16 km view distance), and every square meter contains 1024 samples of terrain. That means about 10¹² samples. Considering that each sample comes with vertex position, normal, color and material parameters, it would be an enormous amount of data that wouldn’t fit in any storage device available today.
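A quick back-of-envelope check of those figures. The scene side and sample density are from the text above; the 32 bytes per sample is my own rough guess for position, normal, color and material parameters, not a number from the post.

```cpp
#include <cstdint>

// Total terrain samples for a square scene of `sideMeters` per side
// with `samplesPerSqMeter` samples in every square meter.
uint64_t totalSamples(uint64_t sideMeters, uint64_t samplesPerSqMeter) {
    return sideMeters * sideMeters * samplesPerSqMeter;
}

// Raw storage this would take at a given per-sample size.
uint64_t totalBytes(uint64_t samples, uint64_t bytesPerSample) {
    return samples * bytesPerSample;
}
```

With 32000 m per side and 1024 samples per square meter, `totalSamples` gives about 1.05 × 10¹²; at an assumed 32 bytes per sample that is roughly 30 TiB, which indeed would not fit on any consumer storage device of the time.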

While the observer moves, new stripes of terrain enter the scene and the corresponding stripes at the opposite side get discarded. The cool thing is that these stripes of terrain are procedurally generated at runtime, both height maps and texture maps. A few minutes of exploration (say, by flight) can make the engine generate (and discard) several tens of gigabytes of terrain data.
At this point I can state that gdevice has turned out to be explicitly meant for procedural generation of assets, and rendering itself is rather an accessory feature so far.
Another cool thing is that the executable is 64 KB uncompressed, and there are no dependencies except for opengl.dll and libc. Think of it as a sort of demoscene product.
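The stripe-streaming scheme described above can be sketched with a toroidal (wrapping) tile cache; the names and the cache layout here are my own illustration, not necessarily how gdevice stores its tiles. The point is that when the viewer crosses a tile boundary, the entering stripe overwrites exactly the cache slots that the leaving stripe occupied, so nothing is copied and only one stripe is regenerated.

```cpp
// Map a world tile coordinate into a fixed-size toroidal cache.
int wrapIndex(int tileCoord, int cacheSize) {
    int m = tileCoord % cacheSize;
    return m < 0 ? m + cacheSize : m;      // positive modulo
}

// Cache column to (re)generate when the viewer's tile moves from
// oldViewX to oldViewX + 1, keeping `radius` tiles on each side.
// The entering column lands where the discarded one used to live.
int enteringColumn(int oldViewX, int radius, int cacheSize) {
    return wrapIndex(oldViewX + 1 + radius, cacheSize);
}
```

The same logic applies symmetrically along y, and the per-stripe regeneration cost is what lets the engine churn through tens of gigabytes of terrain while only ever holding a small window of it in memory.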

Why is the terrain procedurally generated?

Because once the graphics engine was set up, I soon realized that I could feed it with over 1 million vertices and still keep the average frame rate above 20 per second, on that very modest hardware. So I needed data. Where to find terrain data? There are lots of maps out there, actually; some detailed, some even more detailed if one can afford to pay for them. I considered that, if I got one of these, I would then have to convert those formats to gdevice formats and somehow supply the missing data.

I found it convenient to write a procedural terrain generator instead. That is simply a formula z = f(x,y) that returns the altitude z at location (x,y). The craft of making this generated terrain as believable as possible, together with the power of formulas to hold (compress?) all this fine information, soon became a ground of interest for my mind.
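As a minimal stand-in for such a formula, here is a classic construction: a few octaves of value noise summed as fractional Brownian motion. This is purely illustrative; gdevice's actual formula is not published here, and the hash function and constants below are my own arbitrary choices.

```cpp
#include <cmath>
#include <cstdint>

// Cheap integer hash -> pseudo-random value in [0, 1).
static double hash2(int x, int y) {
    uint32_t h = (uint32_t)x * 374761393u + (uint32_t)y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (double)(h ^ (h >> 16)) / 4294967296.0;
}

// Value noise: smooth interpolation of the four corner hashes.
static double valueNoise(double x, double y) {
    int xi = (int)std::floor(x), yi = (int)std::floor(y);
    double tx = x - xi, ty = y - yi;
    double sx = tx * tx * (3 - 2 * tx);    // smoothstep weights
    double sy = ty * ty * (3 - 2 * ty);
    double a = hash2(xi, yi),     b = hash2(xi + 1, yi);
    double c = hash2(xi, yi + 1), d = hash2(xi + 1, yi + 1);
    double top = a + (b - a) * sx;
    double bot = c + (d - c) * sx;
    return top + (bot - top) * sy;         // in [0, 1)
}

// z = f(x, y): fractional Brownian motion, 6 octaves, normalized to [0, 1).
double terrainHeight(double x, double y) {
    double z = 0, amp = 1, freq = 1, norm = 0;
    for (int octave = 0; octave < 6; ++octave) {
        z += amp * valueNoise(x * freq, y * freq);
        norm += amp;
        amp *= 0.5;                        // each octave half as strong,
        freq *= 2.0;                       // and twice as fine
    }
    return z / norm;
}
```

Real believability comes from what is layered on top of a base like this: domain warping, erosion terms, altitude-dependent materials, and so on, each controlled by the kind of parameters the post mentions below.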

The procedural engine can (could already a year ago, actually) generate terrain with several erosion features, as well as water and snow. It is not the best you can find out there, but it is one noticeable thing.
There is an entire world made of continents, flatlands, mountains, snowy peaks, rivers, valleys.. All coming from a single formula and a few dozen parameters. You change one parameter and the world changes.

What is done, what is left to do

At the time of writing, cracks and visual popping have been solved in a neat, natural way, while still preserving performance and compliance with OpenGL 1.
I plan to make a new video only at the next stage, where I wish to show you procedural displacement mapping and procedural bump mapping. The overall architecture will hopefully feature deferred rendering to enable, eventually, shadow mapping, atmospheric phenomena and all sorts of post-processing effects. Also, procedural generation of content will be moved from the CPU to the GPU. Additional objects such as vegetation, foliage and human artifacts will come at a later stage. Can't say when.