01 April 2014


"Perfecting oneself is as much unlearning as it is learning."
Edsger W. Dijkstra

16 November 2013

Atmospheric scattering

A naive, physically inaccurate atmospheric scattering technique, together with approximate shadow volumes, in forward rendering. That means everything happens in the fragment shader in a single pass.

It all started with the need to place the sun in the sky, to see the direction of the light source, and to confirm the accuracy of a procedural displacement mapping technique with regard to the additionally generated normals. That's why this video still focuses closely on rocks, pebbles and cracks, which appear correctly displaced and lit. The camera flies to within a few centimeters of surfaces, and yet triangles/texels are scarcely visible.

This procedural displacement mapping is a technique that resembles the notable "micropolygons" technique in use at Pixar for cinematographic rendering. The difference is that here these micropolygons cover a screen area on average close to the pixel area, but never less than the pixel area. This little difference makes a huge difference in the correct per-vertex normal calculation, which can't be simplified as in the original micropolygons technique. Nevertheless, the quality of displacement is still appreciable and the performance is affordable on modern gaming video cards. And yes, another huge difference is that this procedural (on-the-fly) displacement mapping technique is real-time.
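The per-vertex normal idea can be sketched in a few lines. This is only my illustration of the principle, not the gdevice code: the heightfield and its derivatives here are an arbitrary analytic stand-in for the real procedural displacement.

```cpp
#include <array>
#include <cmath>

// Hypothetical heightfield standing in for the procedural displacement
// (sin*cos is an assumption for illustration, not the gdevice noise):
static double height(double x, double y) { return std::sin(x) * std::cos(y); }

// Analytic partial derivatives of the heightfield:
static double height_dx(double x, double y) { return  std::cos(x) * std::cos(y); }
static double height_dy(double x, double y) { return -std::sin(x) * std::sin(y); }

// Per-vertex normal of the displaced surface (x, y, height(x,y)):
// N = normalize(-dh/dx, -dh/dy, 1). With analytic derivatives there is
// no need to average the facet normals of neighbouring micropolygons.
std::array<double,3> displaced_normal(double x, double y) {
    double nx = -height_dx(x, y), ny = -height_dy(x, y), nz = 1.0;
    double len = std::sqrt(nx*nx + ny*ny + nz*nz);
    return { nx/len, ny/len, nz/len };
}
```

This is precisely why the correct normals can't be skipped as in the pixel-sized micropolygon case: each vertex carries its own analytically derived normal rather than inheriting a facet normal.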

The sky is currently rendered as a quad. Part of the light scattering code is present in both the sky shader and the terrain shader. This solution allows a single pass, although it's not possible to take advantage of deferred rendering for other interesting effects, such as glow or lens flare (maybe). However, even if in a form that is not strictly accurate, gdevice rendering currently has 4 important features of light: direct radiation, indirect radiation, reflected radiation, volumetric shadows. A cubemap may come later, together with procedural cloud generation, which I intend to be volumetric too (that's why it will plausibly end up in a 3D texture instead).

At this point gdevice runs only on the OpenGL 4.3 profile. Other profiles have been dismissed and their code branches progressively eliminated. However the terrain engine is still the one I used on OpenGL 1.2 for multitextured terrain, up to a 30 km view, 16x16 vertices per square meter (about 10^11 vertices of total coverage), with no impostoring and no culling at all (not yet, as long as this LOD technique suffices).

Terrain is now GPGPU-generated by means of a compute shader. The pipeline is purely attributeless. And yes, I'm slowly getting my feet wet with modern rendering techniques.

12 May 2013

Procedural displacement mapping

Game players may not care about details. But I always wanted to get close to surfaces and see fine displacement mapping instead of large, flat polygons.
It's not hard to believe that Pixar's micropolygons technique is about to become feasible in real time in the near future.
But I could not wait. And I've tried out my rough near-micropolygons technique, working in real time, today.

The following video shows how, with this technique, every single textured stone gets extruded and lit properly, with its own geometry and proper vertex normals. Every crack gets carved automagically, by means of a fine increase in geometry detail, so that no polygon is visible any more at close distances:

This technique works on current gaming video cards, such as the GeForce GTX 560, on the OpenGL 4.0 profile. The tessellation shaders tessellate and also displace vertices procedurally, according to an analytically interpreted shape of the texture. And this texture can be either procedurally generated or a regular human-made one.

Unlike Pixar's micropolygons, this "near-micropolygons" technique doesn't split triangles until their projected screen areas are about 1 pixel, and thus can't conveniently approximate vertex normals with facet normals (which would be a big saving in math headaches and GPU power). Yet it works. And it is, in my opinion, a great improvement in rendering. I hope some technique like this will show up soon in modern games.

This feature is currently in gdevice and comes with a 30% FPS loss. There is a bunch of data still computed in-shader that could be wisely precomputed, so there is some margin for optimization. This will be food for my mind in the next few days.

10 September 2011

Behind the engine

I don't even remember when it actually started; maybe many years ago, when I was still a student and covered the distance from the village to the city by train. The landscape that flowed beyond the window always made me fantasize, from the perspective of a 3D graphics programmer, about the rendering of huge open spaces.

In late 2007, in my very spare time, I started to plant the first seeds of a framework for 3D rendering, and some time later started to play with meshes and levels of detail. I had to face a number of subjects, and every little door opened into new worlds to explore. Also, as the framework grew, in place of answers, new questions about OOP effectiveness, reusing versus reinventing, scrum in R&D etc. arose in my mind.

The idea for a level-of-detail algorithm for terrains came in late 2008, and it was tested together with the algorithm to generate the terrain itself in 2009. That's when I also started to leave some messages in a bottle by means of this blog and some videos on youtube/vimeo, but nothing. The world seems to be crowded with pioneers of procedural generation, some of notable excellence, but all working as isolated cells in their own procedural worlds.

Since 2010 I have no longer been able to work on gdevice, except for some bug fixing, minor changes and a couple of new videos. What's happening? Hard to explain. The crisis (?) has made jobs harsher and more squeezing. And it also seems that in my area there is no money to hire senior programmers. Nice. Anyway..

What is currently in gdevice?

A number of components in embryonic form that are common to 3D rendering frameworks. To name a few: a windowing system (like GLFW), a GLSL-compliant BLAS (like GLM), scene graph management (like OSG), a 3D engine (aka graphics engine), and a piece of code that eases procedural generation of assets, which I wanted to name the "procedural engine". And so forth.
Everything has been rewritten from the ground up; everything is neat, compact, fast, but nothing stands out. Except maybe for one thing:

Gdevice is OpenGL SC compatible

Most (if not all) LOD mechanics for terrain that don't precompute and don't sandbox the world, and thus are suitable for continuous, streaming real-time applications (such as SVS), take advantage of pipeline programmability. That explains why the 2000s were so prolific in new LOD algorithms for terrains. However those algorithms generally won't fit very well into an OpenGL 1 fixed-function pipeline.

I can tell, only after having read a lot of the related literature, that the algorithm used in gdevice somewhat resembles the well-known Geometry Clipmaps, with some critical modifications. The major one is that whole terrain tiles take the place of the quads. This gives worse visual popping, better performance (VBO and impostor affinity) and, above all, OpenGL 1 compliance, so it can be applied to resource-constrained devices, embedded devices and military devices (most of which must still use OpenGL SC).
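The clipmap-like level selection can be sketched in a few lines. This is my illustration of the general idea, not the gdevice implementation: each doubling of the distance from the viewer drops one level of detail, so covering a view range of N tiles needs only O(log N) distinct levels.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical clipmap-style LOD selection over whole tiles: the tile at
// a given distance from the viewer uses level = floor(log2(d / tile_size)),
// clamped to the available levels. Level 0 is full detail.
int tile_lod(double distance, double tile_size, int max_level) {
    double d = std::max(distance / tile_size, 1.0);  // never below one tile
    int level = (int)std::floor(std::log2(d));
    return std::min(level, max_level);
}
```

With tiles instead of individual quads, a level transition swaps an entire precomputed VBO, which is where the performance (and the popping) comes from.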

The video

At that time gdevice had the first version of the algorithm with its main defects: cracks and visual popping.

In the video above, you've seen the rendering of a huge and dense terrain in real time, on the OpenGL 1 profile, on very modest hardware (a laptop equipped with an old Intel GMA graphics card).

How huge and how dense is this terrain?

The observer is at the middle point of a square scene that is, according to the textures, about 32 km wide (so you have a 16 km view distance), and every square meter contains 1024 samples of terrain. That would mean 10^12 samples. Considering that each sample comes with vertex position, normal, color and material params, it'd be an enormous amount of data that wouldn't fit in any storage device today.
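The 10^12 figure can be checked with a little arithmetic. The 32 bytes per sample below is my own guess for position + normal + color + material params, just to put a rough order of magnitude on the data:

```cpp
// Back-of-the-envelope check of the scene above: a 32 km x 32 km square
// with 1024 samples per square meter.
long long terrain_samples(long long side_m, long long samples_per_m2) {
    return side_m * side_m * samples_per_m2;
}

// Total storage if every sample were actually kept around:
long long terrain_bytes(long long samples, long long bytes_per_sample) {
    return samples * bytes_per_sample;
}
```

32000 x 32000 x 1024 gives about 1.05 x 10^12 samples; at a few tens of bytes each, that is tens of terabytes, which is why the data must be generated rather than stored.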

While the observer moves, new strips of terrain enter the scene and the corresponding strips at the opposite side get discarded. The cool thing is that these strips of terrain are procedurally generated at runtime, both height maps and texture maps. A few minutes of exploration (say, by flight) can make the engine generate (and discard) several tens of gigabytes of terrain data.
At this point I can state that gdevice has become explicitly meant for procedural generation of assets, and rendering itself is rather an accessory feature so far.
Another cool thing is that the executable is 64 KB uncompressed and there are no dependencies except opengl32.dll and libc. Think of it as a sort of demoscene product.

Why is the terrain procedurally generated?

Because once the graphics engine was set up, I soon realized that I could feed it over 1 million vertices and still keep the average frame rate above 20 per second, even on that very modest hardware. So I needed data. Where to find terrain data? There are actually lots of maps out there, some detailed, some more detailed if one can afford to pay for them. I considered that, if I got one of these, I would then have to convert those formats to gdevice formats and somehow supply the missing data.

I found it more convenient to write a procedural terrain generator instead. That is simply a formula z = f(x,y) that returns the altitude z at the location (x,y). The artwork of making this generated terrain as believable as possible, together with the power of formulas to hold (compress?) all this fine information, soon became a ground of interest for my mind.
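The simplest possible z = f(x,y) in this spirit might look like the toy below: bilinearly interpolated value noise over an integer hash. The hash constants are arbitrary choices of mine; the real generator composes far richer formulas (fBm, multifractals, erosion) on top of a building block like this.

```cpp
#include <cmath>

// Pseudo-random value in [0,1] at an integer lattice point (hash constants
// are arbitrary, for illustration only):
static double lattice(int x, int y) {
    unsigned n = (unsigned)x * 1619u + (unsigned)y * 31337u;
    n = (n << 13) ^ n;
    n = n * (n * n * 15731u + 789221u) + 1376312589u;
    return (n & 0x7fffffffu) / (double)0x7fffffff;
}

// z = f(x,y): smooth value noise, bilinearly interpolated between the four
// surrounding lattice values with a smoothstep fade.
double altitude(double x, double y) {
    int xi = (int)std::floor(x), yi = (int)std::floor(y);
    double tx = x - xi, ty = y - yi;
    tx = tx*tx*(3 - 2*tx);  ty = ty*ty*(3 - 2*ty);   // smoothstep fade
    double a = lattice(xi,   yi),   b = lattice(xi+1, yi);
    double c = lattice(xi,   yi+1), d = lattice(xi+1, yi+1);
    return a + (b-a)*tx + (c-a)*ty + (a - b - c + d)*tx*ty;
}
```

Everything is a pure function of (x,y), so any strip of the world can be produced, discarded and reproduced identically at will.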

The procedural engine can (and actually already could 1 year ago) generate terrain with several erosion features, as well as water and snow. It is not the best you can find out there, but it's a noticeable thing.
There is an entire world made of continents, flatlands, mountains, snowy peaks, rivers, valleys.. All coming from one single formula and a few dozen parameters. You change one parameter and the world changes.

What is done, what is to do

At the time of writing, cracks and visual popping have been solved in a neat, natural way, still preserving performance and compliance with OpenGL 1.
I plan to make a new video only at the next stage, where I wish to show you procedural displacement mapping and procedural bump mapping. The overall architecture will hopefully feature deferred rendering to enable, eventually, shadow mapping, atmospheric phenomena and any sort of post-processing effects. Also, procedural generation of content will be moved to the GPU instead of the CPU. Additional objects such as vegetation, foliage and human artifacts will come at a later stage. Can't say when.

19 June 2010

Texture mapping

At last I had to make my way to texture blending, in order to dress the naked Gouraud-shaded thingy. But I wanted to experiment with a compositing approach to textures where 4 luma-only patterns (say "desaturated" textures) get blended together with a weighted sum, and then colorized. This way less memory is touched (only 3 fetches). It's thus faster, yet enables a variety of materials.
This weird choice is not accidental. Indeed I had in mind to stick this stuff into a fixed-function configuration, in order to push the old OpenGL API (1.2.1) to its limits and see what I could get. Well, the texture blending I can get is shown in this video:

So now there is another 3D engine in the world, one that would look pretty good if it had been developed 15 years ago, but is kind of ugly compared with modern stuff. It is tiny, coarse, fast. It is built around a procedural terrain generator and a LOD algorithm for terrains that doesn't require precomputed data and enables continuous streaming exploration at runtime.
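The weighted-sum compositing described above can be sketched as follows. This is only my reading of the idea, not the actual fixed-function setup: four desaturated pattern samples are blended with normalized weights and the result is colorized by a per-material tint.

```cpp
#include <array>

using rgb = std::array<float,3>;

// Blend 4 luma-only pattern samples with a weighted sum, then colorize.
// Weights are assumed to be normalized (they sum to 1).
rgb composite(const std::array<float,4>& luma,     // desaturated samples
              const std::array<float,4>& weight,   // blend weights
              const rgb& tint) {                   // material color
    float l = 0.0f;
    for (int i = 0; i < 4; i++) l += luma[i] * weight[i];
    return { l * tint[0], l * tint[1], l * tint[2] };
}
```

Because the patterns are single-channel, the blend works on scalars and the color only enters once at the end, which is what keeps the memory traffic low.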

Here are a couple of further videos for the archive:

05 May 2010


It's been hard to persuade myself to put down a few words about the latest progress. I've never been a good speaker, especially by means of human languages.
Indeed very little progress has been made, and many weeks have been spent on whatever is demanded by day-by-day life, including rest. I will forever complain that this is not my job, because my actual job takes almost all my resources away, making this an occasional, snatchy hobby and practically a lost cause.

This has its bright side though: this research is not subordinated to a production schedule, and there is room to mind the what and the how.

This little progress has nevertheless been a chance to think things over, to take stock of the situation, to review the road map and also to realize how the long way is even longer than I could have imagined. I sense this will be a constant.

However, this post is here as a pit stop around the structure and design of this little piece of code. As I have always stated, I do believe that good design is the first important feature to mind for better directing one's efforts. And such a poor, resource-starved project (1 developer, 2-3 hours per month) couldn't be more in need of good design.

In the very beginning, for a brief while, I bred a rather ambitious structure that separated frameworks, engines and libraries. This was because I wanted to fantasize about all the elements I could be dealing with in a prosperous future, such as a game application, an editing application, a web application, an audio engine, a physics engine, AI..

But this layer of abstraction was disturbing the original simplicity, and my inspiration as well. So I soon reverted to the original structure, where there is a unique framework (the windowed application, designed for interactive real-time 3D graphics) playing with a unique engine (the 3D engine) and a few general-purpose libraries (currently the I/O and Math libraries).

My care has then gone into extricating elements in such a way as to make them independent entities, as abstract and machine-independent as possible. I have taken special care to circumscribe the parts that are strictly OS dependent and the ones that are renderer dependent. The rest is code completely unaware of the underlying hardware.

It's 287 KB of C++ code in total, still not cross-platform but well arranged to become so. In detail:

  • 5% of platform dependent code (Win32)
  • 45% of renderer dependent code (OpenGL)
  • 50% of cross-platform code

The Win32 code is for Mutex, Thread, Timer, Window.. The Window object is also the gateway to interaction with the OS, basically for handling events and operations such as input, drawing etc. This part is quite stable and only needs to be translated to the other platforms.

The OpenGL part is the renderer-dependent part of the 3D engine which, though still an early version, already includes most of the key features of a 3D engine, such as camera, scene manager, lighting, LOD systems, VBOs, shaders (GLSL, ASM, ARB multitexturing).. This part is also the depository of all my knowledge about the old and new OpenGL APIs so far.

The rest is plain cross-platform code for application logic (including the terrain generation) and libraries. And it's indeed the only part meant to keep growing.

08 March 2010


"We are all shaped by the tools we use, in particular: the formalisms we use shape our thinking habits, for better or for worse, and that means that we have to be very careful in the choice of what we learn and teach, for unlearning is not really possible."
Edsger W. Dijkstra

04 January 2010

Chunked LOD

Some mean wind is still blowing over me, resulting in much less spare time and a bad start to the new year. However, during these holidays, I could fulfill a wish expressed in the last post: I did implement the new LOD system in gdevice.

I have ported the essential Java code of the prototype, and it was quite easy thanks to the fact that the way I code C++ in gdevice is similar to Java/C# in some key aspects. Anyway, details: there are two threads, one for receiving user input and handling the rendering process, the other for generating terrain chunks (tiles) by means of fractional Brownian motion processes, computing derivatives as well.. A first test ran with one single fBm made of 4 octaves, then with a mild multifractal system, still 4 octaves but computationally equivalent to a 6-octave fBm.
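The two-thread arrangement can be sketched with a blocking chunk queue. The names and the 16x16 chunk size are mine, not from the gdevice source; the generator thread stands in for the fBm tile generation:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Chunk { int x, y; std::vector<float> heights; };

Chunk generate_chunk(int x, int y) {
    // the fBm evaluation (with derivatives) would happen here
    return Chunk{x, y, std::vector<float>(16 * 16, 0.0f)};
}

// Thread-safe queue: the generator thread pushes finished chunks, the
// render/input thread pops them when it is ready to upload and draw.
class ChunkQueue {
    std::queue<Chunk> q;
    std::mutex m;
    std::condition_variable cv;
public:
    void push(Chunk c) {
        { std::lock_guard<std::mutex> lock(m); q.push(std::move(c)); }
        cv.notify_one();
    }
    Chunk pop() {                       // blocks until a chunk is available
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this]{ return !q.empty(); });
        Chunk c = std::move(q.front());
        q.pop();
        return c;
    }
};
```

The point of the split is that a slow tile (many octaves) only delays new geometry, never the frame rate of what is already on screen.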

Indeed this test is intended to evaluate the new LOD system with terrains. The plausibility of the terrain itself is not crucial at this stage. Also there is no texture mapping, no shaders, no occlusion culling, not even frustum culling here.

Watch the video:

The demo ran at 25 fps on my machine, showing a runtime-generated terrain made of over 500k Gouraud-shaded vertices on a video card with very basic graphics capabilities. Runtime-generated means that while the camera proceeds, a piece of the world is removed from the scene and a newly generated piece is added. So you can roam ad infinitum. I definitely would not try to find periodicity in this function :)

Why chunked? Because without baking VBOs I don't feel I'm getting the best from today's video cards. Why this chunked LOD? Because I wanted to reduce complexity to O(log(N)) while still approaching dynamic geometry data without precomputing anything.
Considerations about the name: this LOD system has no name, not from me at least, as far as I know. It is similar to Chunked LOD and Geomipmapping, yet it is neither of them.

02 December 2009


Yes, better to do the job with the right tools. Actually, dealing with 3D stuff without a neat set of types and operators for linear algebra, such as in GLSL, would make writing, improving and debugging everything an endless pain. You will bloat your code, make it prone to errors, fail to notice the meaning of what you are doing, perhaps the beauty, and of course the ways to improve it and invent something new.

Java is a neat, easy language. It's a pleasure programming in Java. It is like coming back to Basic, but with objects.. Ok but, truth be told, when it comes to tougher tasks, you can feel how Java is still confined to dumb toys. It is much limited in performance, and still limited in abstraction in a manner that is not acceptable.

Look at this:

vec3 fBm( vec2 point, int octaves, float persistence, mat2 lacunarity )
{
    vec3 value = vec3(0);
    float magnitude = 0;
    float amplitude = 1;
    mat2 frequency = mat2(1);

    for( int i=0; i<octaves; i++ )
    {
        vec3 signal = amplitude * dnoise( frequency*point );
        signal.xy *= frequency;

        value += signal;
        magnitude += amplitude;

        amplitude *= persistence;
        frequency *= lacunarity;
    }
    return value/magnitude;
}

GLSL? Well, it is. But it's not. It is actually a scrap of plain C++ code. You still see C++, plus some new C++ types and operators properly forged to mimic GLSL. The right tools...

If you know what I'm talking about, you will see how concise and clean this code is. This fBm speaks GLSL. dnoise returns a vec3 where z holds the 2D value noise and xy (a vec2) holds the derivatives dx and dy. Then all of them can be weighted, added and normalized using a neat, fair expression. It is now painless, natural and elegant to cope with vectors and matrices, just as if they were scalars, just like in GLSL or Matlab. It is also natural to raise the frequency concept to a domain transformation concept, and then apply the transformation to the derivatives as well, according to the fact that

D[ f(g(x)) ] = g'(x) f'(g(x))

Ok, this is correct only when the transformation is a rotation matrix, so that the transpose is also its inverse. Otherwise we would have to deal with a Jacobian matrix. But this is no longer worrying, since this library features all GLSL functions and some nice extras. Now I can write code intended for the CPU with the same expressivity given by GLSL.
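The rotation case can be checked numerically. This is my own verification sketch, with an arbitrary smooth test function standing in for the noise: for a 2D domain rotation R, the gradient of g(p) = f(R p) is R^T (grad f)(R p), and for a rotation R^T is just the rotation by the opposite angle.

```cpp
#include <array>
#include <cmath>

using vec2 = std::array<double,2>;

// Arbitrary smooth 2D field and its analytic gradient (stand-in for dnoise):
static double field(vec2 p) { return std::sin(p[0]) * std::cos(2*p[1]); }
static vec2 field_grad(vec2 p) {
    return {  std::cos(p[0]) * std::cos(2*p[1]),
             -2 * std::sin(p[0]) * std::sin(2*p[1]) };
}

// Rotation of the 2D domain by angle a:
vec2 rotate(vec2 p, double a) {
    return { std::cos(a)*p[0] - std::sin(a)*p[1],
             std::sin(a)*p[0] + std::cos(a)*p[1] };
}

// Chain rule: gradient of the warped field g(p) = field(rotate(p, a))
// is R^T applied to the gradient at the warped point, i.e. rotation by -a.
vec2 warped_grad(vec2 p, double a) {
    vec2 g = field_grad(rotate(p, a));
    return rotate(g, -a);
}
```

For a general matrix the `rotate(g, -a)` step would have to become multiplication by the transposed Jacobian, which is exactly the caveat above.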

See this:

vec3 forward = vec3( -cos(a.x)*sin(a.z), cos(a.x)*cos(a.z), sin(a.x) );
vec3 strafe = vec3( -cos(a.z), -sin(a.z), 0 );
camera.position += forward*(key('W')? +speed: key('S')? -speed: 0)
                  + strafe*(key('A')? +speed: key('D')? -speed: 0);

Or this:

vec3 sky_color = vec3(0, 1.0/3, 1) * dot( normalize(sun_dir), upvector );
glFogfv( GL_FOG_COLOR, (sky_color+albedo.rgb).array );

I couldn't avoid making this library as GLSL-compliant as possible, because I will move pieces of code to the GPU and back to the CPU at will. Though it also turns out a bit embarrassing when you wonder which side you are working on, whether CPU or GPU. I have even set up a mechanism for coding GLSL shaders just like C++ classes. Definitely the two worlds are tightly coupled in gdevice, even without CUDA..

Anyway, why write yet another BLAS library? There are so many out there, well suited, well performing... I simply wanted to learn by coding my own, GLSL-compliant, performing at its best (with SSE2 optimizations). Also, I must say, I didn't like what I saw inspecting their code; I could not accept wasting all that code, making the whole thing confusing and not performing as it should. Above all, I expected that types (and operators) would descend from these two, just as it is taught in algebra:

template <typename T, int N> class vec;
template <typename T, int N, int M=N> class mat;

But again I was disappointed, so in the end I wrote my own..
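A minimal sketch of what descending from those two templates can look like (illustration only, far from the real library): component access, one component-wise operator, and a GLSL-style column-major mat * vec.

```cpp
// Vector of N components of type T:
template <typename T, int N> struct vec {
    T v[N];
    T& operator[](int i)             { return v[i]; }
    const T& operator[](int i) const { return v[i]; }
};

// Component-wise addition:
template <typename T, int N>
vec<T,N> operator+(const vec<T,N>& a, const vec<T,N>& b) {
    vec<T,N> r;
    for (int i = 0; i < N; i++) r[i] = a[i] + b[i];
    return r;
}

// NxM matrix as M columns of vec<T,N>, column-major as in GLSL:
template <typename T, int N, int M = N> struct mat {
    vec<T,N> col[M];
};

// mat * vec, the workhorse of a GLSL-style library:
template <typename T, int N, int M>
vec<T,N> operator*(const mat<T,N,M>& m, const vec<T,M>& x) {
    vec<T,N> r{};                    // zero-initialized accumulator
    for (int j = 0; j < M; j++)
        for (int i = 0; i < N; i++)
            r[i] += m.col[j][i] * x[j];
    return r;
}
```

With vec2/vec3/mat2 etc. as aliases of these two roots, every operator is written once for all sizes and types, which is the algebraic cleanliness the post is asking for.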

By the way, I still like playing with Java when prototyping new algorithms. It is relaxing :)

Thanks to Iñigo Quilez for sharing the idea of derivatives calculated together with value noise.

11 November 2009

Yet another discrete level of detail

The picture shows my first attempt at generating a terrain height map. It's a non-normalized fractional Brownian motion applied over value noise, rendered using color gradients and fake shadows.
Indeed, this shot comes from an interactive prototype I coded not for generating terrains but for testing an idea for a DLOD system. It is different from the previous DLODs I developed, since this one handles unbounded models, doesn't need much precalculation and also achieves a nice computational complexity of O(log2(N)). I've also found out that it is somewhat similar to what is known in the literature as Chunked LOD. Though in practice it is quite different, I didn't find any valid reason to think of it by a different name, because it is still actually a chunked LOD.
The prototype is a 2D viewer, written in plain Java, fast enough to allow exploration and... roaming around that infinite world, I couldn't help fantasizing about an imaginary procedural dungeon master. Of course it is missing a number of advisable features. But the terrain it depicts, though still monotonous, doesn't have much to envy in those its human counterparts may conceive. And it's still a prototype.
I have to stuff all this into the engine, in C++, using VBOs and shaders, otherwise I won't believe it myself either. I will do this right after cleaning up the framework, rearranging those parts I have neglected for months.

03 October 2009

Automating human activities

As both player and programmer, I was simply astonished the first time I saw Wolfenstein 3D, the forefather of the Doom and Quake series, which just addicted me. I had never seen anything similar in games before. That first-person action play, those 3D worlds to explore.. A new chapter in the evolution of videogame entertainment was coming, and I was there experiencing it.

Ok but, from a game mechanics standpoint, there was not much new. It was the usual predefined walkthrough: facing enemies and bosses, getting keys, opening doors, or solving some tricky puzzle. All stuffed with some nice storytelling in between. A walkthrough that starts and ends, just like a book. And when the book ends, you go back to the shelf to buy another one.

When I came out of the sewer system of the Imperial City in the game Oblivion and saw the world outside, I felt free to ignore the assigned goal and go everywhere, doing everything. That was weird and amazing at the same time.. Well, soon I found out that I couldn't actually go everywhere doing everything, but that initial illusion made me think it over.

In my conception of games, a game should be playable repeatedly without loss of fun, and I believe a videogame should be no less. Limited fruition has more to do with books and movies, and is of course a convenient business model, but it's not what I expect from a game. I have definitely come to the conclusion that a game shouldn't be conceived as a predefined walkthrough.

In the last decade we've been observing an increase of realism in videogames. Though realism is not exactly all that entertains, it does make the experience more immersive.
There are several aspects contributing to realism. But it is the visual side that has mainly been fed, probably due to graphics hardware capacity growing year by year.

People buy powerful new video cards and expect the upcoming games to fit them tightly. So soon we will see visually very realistic NPCs. But what happens when those NPCs speak or act in some way?

I believe that visual realism has been favoured excessively.. Probably to satisfy market expectations. Probably to avoid the risk of improving other aspects and trying new game mechanics. Anyhow, this has been requiring ever-increasing hordes of artists. Teams have been growing and growing, and so have management complexity and costs.. Videogames have become more expensive and more in need of reaching a wider consumer market. And this has somehow led to dumber games (where more is less..).

To solve this, some try to keep teams small and improve the handling of content, aiming at more massive content creation (John Carmack, MegaTexture). Others try to control resources and costs by managing better and globalizing production.
Anyhow, over time a slight divergence has been observed between the growth of graphics hardware and the ability of teams to create the required content, which leads one to think that this development model is somewhat unsustainable in the long run.

Realism makes the experience more immersive. But content cosmetics is only one aspect contributing to overall realism.
When I play a game I expect the world not to be bounded, and NPCs to answer realistically and convincingly, without repeating themselves. Because reality doesn't repeat. Predefined behavioural patterns are far from realistic. The sandbox effect and predefined goals are not realistic either.

We want freeform and realistic gameplay. But how do you define things which are not actually predefined and yet convincing? I believe the technological answer lies somewhere between human pre-made content and procedurally generated content. Let's say, things that are somehow forged by procedures but that lie within the boundaries of real things. The definition won't be expressed in detail but by means of abstraction. For instance, I won't place a forest leaf by leaf; I will just say there is a forest made of those classes of plants, arranged in those classes of patterns.

Procedural content generation is problematic and has well-known cons. But I believe computer programming is about automating human activities. And procedural content generation is actually about automating human activities.

This post is just another mind rambling of mine where I'm not going to suggest any recipe explicitly. But I want to point out some pros of procedural content generation:

  • Typically 70% of development time is spent coding tools (developer side) and creating content with those tools (artist side).
  • Procedural generation can lower time and resource needs, so that management complexity and costs can be kept within more sustainable boundaries.
  • Procedural generation enables small teams to start development.
  • Small teams can invent new game mechanics better than huge ones.
  • Procedural generation reduces the memory touched and increases performance, letting you get more from the hardware.
  • Procedural generation enables developers to invent open game grounds where exploration and interaction are not bounded, behavioural patterns are not predefined, and the overall experience is thus more realistic.

19 September 2009

A few words about debugging

Whether your design is neat or messy, you will have to put up with bugs anyway.

Of course a neat design helps in localizing bugs. But debugging will forever be the dark side of development, and will test your smartness and imagination, sometimes sorely.
There are many approaches and many books about debugging, ranging from breakpointing and logging to test-oriented methodologies. I don't want to go in depth here, but you can do that yourself by reading about Agile and, more specifically, eXtreme Programming.

Anyhow, I have persuaded myself of how important it is to prevent bugs, well before facing them.

Software development appears to be one of those things where less is more. Actually, the less code you write, the lower the probability of running into bugs. So be minimalist: achieve your functionality with the least amount of code, avoiding redundancy and duplication. Don't Repeat Yourself, and also Keep It Simple, Stupid.. All these recommendations, indeed, point at the Abstraction principle, a minimalistic principle essential to designing software (and the basis of OOP).
Imagine you have two parts of your engine where triangles are rendered. When you need to add a new rendering feature, you will have to modify both parts. Also, if a rendering bug occurs, you will have to check both parts again.

Have you ever heard of Occam's razor? Well, if you still can't see how much a neat design has to do with minimalism, I think it can be worth reading a bit about it.

Regarding finding bugs, I believe there is nothing better than a prototyping approach. Simply split your development into separate sessions. In each session, changes are applied (say, refactoring) to a bug-free version of your software in order to evolve it into a better, yet still bug-free, version. This means you need some kind of tool (say, a regression test) to verify that a certain version is still bug-free. When a new bug comes out, you will certainly have a better clue as to where it may reside.

The prototyping approach is OOP compliant and also brings more benefits, such as:
  • There is always a working release to show (though prototypal)
  • You have a better vision of your design and can better direct your efforts
As an example of bad prototyping, you can observe how many coders out there prefer to focus on the latest stunning effects of their 3D engines first, instead of getting the game working as soon as possible and coming back later to polish the details. They hardly have a clear vision of what is to be done, to be recycled, or to be reinvented from scratch. In most cases they don't even need to code a new 3D engine, as there are already others that fit their requirements, perhaps open source (see devmaster.net).
A curious yet convincing read about this phenomenon is Write Games, Not Engines.

The prototyping approach also gives one more valuable benefit: you can get a prototypal version of your game working and sell it as if it were the final version, even if not all the intended requirements are accomplished (the above-mentioned details to be polished). You make some money and can then keep working on a new prototype, say a new chapter of the game.
It can then be worthwhile to split development into separate workflows, one for the making of a new game and others for the improvement of the engines in use. This means that today's game can be played with a better 3D engine, AI engine or physics engine developed in the future. This is made possible by good software design.

13 September 2009

A few words about designing

The first time I read about Object Oriented Programming I didn't get it. I thought it was the next useless thing they wanted to sell me. So for a while I kept on avoiding it.

I think I was already a smart coder, experienced through a few years of hobbyist development. But I was used to thinking about software from a low-level, algorithm-feasibility perspective. I was (and still am) a greedy consumer of the Demoscene of the 1990s, when Amiga computers sat on our desks. And I was amazed by what could be brought to the screen with hardware tweaks and well-crafted low-level optimizations. Of course, that was a founding era for present-day computer graphics.
Above all, I think I was too proud of myself and wasn't used to thinking about feasibility in terms of resources. In front of the screen, I was the man with a vision in mind, the code within his fingers' reach and no deadline for it.

It was through some professional experience that I learned to cope with larger projects and to evaluate feasibility in terms of people and time against deadlines.
Algorithms and performance were typically not a concern for that kind of application, so I could focus on the benefits of reusing code. By reusing, you can take advantage, in a neat and rational manner, of what has been done by you or others, accomplishing goals you wouldn't have expected to reach under certain deadlines.

Through this experience I grew to like objects, because the object way is a way to achieve reusability of code. And it can be applied to both huge and small projects, in team or one-man development.
OOP is shaped around the idea that each piece of code should be intended to be reused and extended sooner or later, as requirements grow. For instance, I can write a basic Vector class for a 3D math exercise now and some time later extend and reuse it, under new requirements, in functionality and performance, for a more demanding project.

How to develop object oriented?

If you just use objects, you probably won't get the philosophy behind them until the day you start designing objects. Then you will learn (and delight in) how parts of your software can be wisely isolated so that each part gets its own logical identity (as a feature, a concern, etc.) and can thus be reused and evolved independently, by you or other programmers, for your current project or for others.
You have to isolate a subset of your data and functions that work tightly together to solve a specific concern of your application and put it all in an object, so that the object represents the part of your software that handles that specific concern from a user's perspective.
Now imagine that the user of this object may be an external system or another object of the same system. Also, following a top-down approach, you can let a more abstract object itself be a user of more concrete objects (say, lower-level or more specialized objects). And those more concrete objects may initially not be complete, fulfilling all intended functionality; they may just be dummy objects to be perfected later.

This approach trains you to separate concerns from the beginning. Basically, you have to sort things out by how tightly coupled their functionality is, then put together what is tightly coupled and separate what is loosely coupled, minimizing the overlap of functionality as much as possible.
By separating diligently you will be applying the so-called Separation of Concerns, which is one more thing Prof. Dijkstra has been teaching me after his death.

I guess you have gathered that this is not intended to be the nth tutorial about OOP, and probably what I've been saying only rings true for those who already know exactly what I'm talking about. I just wanted you to see this: 3D engines should never integrate hardware-specific logic, because that would tie your software to a certain hardware configuration at a certain time. Graphics hardware APIs change across platforms and over time. So if you want your software to survive, even while using a specific hardware API, you have to account for a painless port to other hardware APIs sooner or later.
The same goes for the type of content. Imagine your 3D engine takes advantage of BSPs or octrees specifically: what happens when you start working with content generated on the fly, or content loaded into memory piece by piece as the player proceeds? You know, BSPs and octrees were designed for static content. You may then find yourself reinventing the wheel once again, which is exactly what OOP is meant to avoid.

However, 3D engines sometimes turn out to be tightly integrated with their OpenGL/Direct3D layer, and the reason may be not bad OOP design but performance. In fact, OOP abstraction carries a small overhead even for a plain Adapter design pattern, due to the indirection of virtual calls. So you may have a neat modular design yet pay a toll in performance, which can be undesirable in real-time applications. And this is where integration comes to play a role somewhat opposite to separation.

Fortunately there are a few solutions to get rid of run-time virtual-table lookups, such as template-based static polymorphism (see the Curiously Recurring Template Pattern), which shows how C++ is still far ahead of most languages, enabling coders to write for performance while following a great high-level design. It's too bad that most so-called C++ programmers still consider template metaprogramming mysterious and confusing.

Designing software is truly a creative process, and it requires some experience. I believe it is useful to know the best practices as well as the worst ones, for which I suggest reading about Anti-patterns.

10 September 2009

The ability to simplify

"The ability to simplify means to eliminate the unnecessary so that the necessary may speak."

Hans Hofmann, as quoted in An Introduction to the Bootstrap, 1993

31 August 2009


I came back last night from my Andalusian trip. It was pleasant and refreshing, the first true vacation after a long quinquennium.
I expected that this time would help me sort out ideas and focus on goals. Instead, once there, I simply stopped thinking and let myself plunge naturally into the waves of vacation. I finally enjoyed not finding myself in conflict with myself for neglecting any responsibility.
But that is already past. Now I'm here scanning job ads and putting up with bills.
After the end of my last contract, this is the very first working day that sees me not working, and thus the first day on which I can fully feel the psychological burden of this unexpected condition of unemployment.
However, this unlucky outcome has put me in a weirdly good mood. I can see clearly now how less and less bearable that routine had become for me. Is this the kind of shake that can lead a person to start a new chapter in life? I really wish it were.
Yet everything appears more difficult now. I won't likely get into the industry I aim for, as only the very experienced seem to be admitted into those very few research labs. Besides, this seems to be a time when hardly anyone here would be willing to pay me for doing the things I like.
But it's funny (and bitter at the same time) when they do drop proposals for pieces of work I have already done as part of my hobbyist activity. I consider this in no way good because, even though employers know this is added value, they want to save the cost of the research process that produces it. They let others develop the key technologies and keep for themselves the task of integrating. And yet they can't understand why, while other businesses are always busy, their company is always slow...
So, at last, for the sake of my mental sanity, I have reasonably accepted to keep working for those who pay me, even if for less interesting, routine things. And secretly I will keep quenching the thirst of my mind with those few drops I can get in my spare time.

24 August 2008

Reinventing wheels

Writing software from scratch is expensive. Each time you try to do something from scratch, it takes ages to get it to work. Conversely, copy-pasting ready-made solutions means taking advantage of all the time spent by others on the same problem, and your work is likely more than half done. So there is no surprise in the fact that companies adopt avoiding reinventing wheels as a standard practice: from a profit standpoint, reinventing is bad.
However, you can find quite a few benefits in making things from scratch. The first benefit is obviously instruction, self-instruction: by making things from scratch you learn far more than by just copy-pasting. A second, less obvious benefit is that, although you seriously risk doing it worse than others, and perhaps taking quite a long time, there is some chance of doing it better, perhaps finding a solution that addresses your specific problem more closely and more efficiently. Indeed, there is also a chance that you may invent something new, unexpectedly, something that may not even be related to the thing you had in mind.
In my humble opinion, in a world striving for better wheels, provided you can keep in your vision the subtle lines connecting issues and solutions, reinventing wheels is the most exciting activity you can hope to be busy with. This hint is where this blog starts.

22 August 2008


It was meant to be, somebody would say. I don't know what to say. It turned out at some point that I had to choose whether to stay or move on. And if I'm here, it's because I moved on.
I have been taught never to give up on a dream. But the big lesson in this circumstance is that giving up can sometimes bring so much freedom. Unexpected freedom.