Java is a neat, easy language. It's a pleasure programming in Java; it is like coming back to BASIC, but with objects. But, truth be told, when it comes to tougher tasks you can feel how confined Java still is to dumb toys. It is very limited in performance, and still limited in abstraction, in a way that is not acceptable.

Look at this:

```cpp
vec3 fBm( vec2 point, int octaves, float persistence, mat2 lacunarity )
{
    vec3 value = 0;
    float magnitude = 0;
    float amplitude = 1;
    mat2 frequency = mat2(1);

    for( int i=0; i<octaves; i++ )
    {
        vec3 signal = amplitude * dnoise( frequency*point );
        signal.xy *= frequency;
        value += signal;
        magnitude += amplitude;
        amplitude *= persistence;
        frequency *= lacunarity;
    }

    return value/magnitude;
}
```

GLSL? Well, it is. But it's not. It is actually a scrap of plain C++ code. You can still see C++ in it, plus some new types and operators, properly forged to mimic GLSL. The right tools...

If you know what I'm talking about, you will see how concise and clean this code is. This fBm speaks GLSL. dnoise returns a vec3 whose z holds the 2D value noise and whose xy (a vec2) holds the derivatives dx and dy. All of them can then be weighted, added, and normalized with one neat, fair expression. Coping with vectors and matrices is now painless, natural, elegant, just as if they were scalars, just like in GLSL or MATLAB. It is also natural to raise the frequency concept to a domain transformation concept, and then apply the transformation to the derivatives as well, according to the fact that

D[ f(g(x)) ] = g'(x) f'(g(x))

Ok, this is strictly correct only when the transformation is a rotation matrix, so that its transpose gives back the inverse. Otherwise we would have to put up with a full Jacobian matrix. But this is not a worry anymore, since this library features all the GLSL functions and a few nice extras. Now I can write code intended for the CPU with the same expressivity that GLSL gives.
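Spelled out for a linear domain transformation M (the frequency matrix above), the multivariate form of that rule is

∇[ f(Mx) ] = Mᵀ ∇f(Mx)

and when M is a rotation, Mᵀ is just M⁻¹. As a sanity check, here is a minimal, self-contained plain-C++ sketch of the same fBm loop. The tiny vec2/vec3/mat2 stand-ins and the dnoise stub are mine for illustration, not gdevice's types:

```cpp
#include <cmath>

struct vec2 { float x, y; };
struct vec3 { float x, y, z; };
struct mat2 { float a, b, c, d; };                 // 2x2, row-major

static vec2 mul(const mat2& m, const vec2& v) {
    return { m.a*v.x + m.b*v.y, m.c*v.x + m.d*v.y };
}
static mat2 mul(const mat2& m, const mat2& n) {
    return { m.a*n.a + m.b*n.c,  m.a*n.b + m.b*n.d,
             m.c*n.a + m.d*n.c,  m.c*n.b + m.d*n.d };
}

// Stand-in for dnoise: z is a value in [0,1], xy fakes its derivatives.
static vec3 dnoise(vec2 p) {
    return { std::cos(p.x), std::cos(p.y),
             0.5f + 0.5f*std::sin(p.x*12.9898f + p.y*78.233f) };
}

static vec3 fBm(vec2 point, int octaves, float persistence, mat2 lacunarity) {
    vec3 value = { 0, 0, 0 };
    float magnitude = 0, amplitude = 1;
    mat2 frequency = { 1, 0, 0, 1 };
    for (int i = 0; i < octaves; i++) {
        vec3 s = dnoise(mul(frequency, point));
        // Transform the derivatives too (using M directly; strictly Mᵀ,
        // which coincides in the symmetric and rotation cases).
        vec2 d = mul(frequency, vec2{ s.x, s.y });
        value.x += amplitude*d.x;
        value.y += amplitude*d.y;
        value.z += amplitude*s.z;
        magnitude += amplitude;
        amplitude *= persistence;
        frequency  = mul(frequency, lacunarity);
    }
    return { value.x/magnitude, value.y/magnitude, value.z/magnitude };
}
```

Since every octave's value sits in [0,1] and the sum is divided by the accumulated amplitude, the normalized value stays in [0,1] no matter how many octaves you stack.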

See this:

```cpp
vec3 forward = vec3( -cos(a.x)*sin(a.z), cos(a.x)*cos(a.z), sin(a.x) );
vec3 strafe  = vec3( -cos(a.z), -sin(a.z), 0 );

camera.position += forward*(key('W')? +speed: key('S')? -speed: 0)
                 +  strafe*(key('A')? +speed: key('D')? -speed: 0);
```

Or this:

```cpp
vec3 sky_color = vec3( 0, 1/3.0f, 1 ) * dot( normalize(sun_dir), upvector );

glFogfv( GL_FOG_COLOR, (sky_color+albedo.rgb).array );
```
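That `.array` member is what lets a vec type feed a C API like glFogfv, which wants a plain pointer to contiguous floats. One common way to get it (my guess at a layout, not necessarily how gdevice does it) is the anonymous-struct-in-union trick — not strict ISO C++, but accepted by all the major compilers:

```cpp
// Hypothetical layout: named components and a flat array share storage,
// so &v.array[0] can be handed straight to OpenGL entry points.
struct vec3 {
    union {
        struct { float x, y, z; };
        float array[3];
    };
};
```

With this, `v.x` and `v.array[0]` alias the same float, and the struct stays a plain bag of three contiguous floats with no padding surprises on common ABIs.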

I couldn't avoid making this library as GLSL-compliant as possible, because I move pieces of code to the GPU and back to the CPU at will. Though it also turns out a bit embarrassing when you wonder which side you are working on, CPU or GPU. I have even set up a mechanism for coding GLSL shaders just like C++ classes. The two worlds are definitely that tight in gdevice, even without CUDA.

Anyway, why write yet another BLAS library? There are so many out there, well suited, well performing... Simply, I wanted to learn by coding my own, possibly GLSL-compliant, performing at its best (with SSE2 optimizations). I must also say I didn't like what I saw when inspecting their code: I could not accept all that wasted code, which made the whole thing confusing and kept it from performing the way it must. Above all, I expected the types (and operators) to descend from these two, just as they are conceived in algebra:

```cpp
template <typename T, int N>          class vec;
template <typename T, int N, int M=N> class mat;
```

But again I got disappointed, so in the end I wrote my own.
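For illustration only, here is a stripped-down sketch of the kind of hierarchy I mean, where every concrete GLSL name is just an alias of the two templates. These few lines are mine, not the actual gdevice code:

```cpp
// Everything descends from vec<T,N> and mat<T,N,M>; operators are written
// once, generically, and work for every size and element type.
template <typename T, int N> struct vec {
    T v[N];
    vec operator+(const vec& o) const {
        vec r; for (int i = 0; i < N; i++) r.v[i] = v[i] + o.v[i]; return r;
    }
    vec operator*(T s) const {
        vec r; for (int i = 0; i < N; i++) r.v[i] = v[i] * s; return r;
    }
};

template <typename T, int N, int M = N> struct mat {
    T m[N][M];                                    // N rows, M columns
    vec<T,N> operator*(const vec<T,M>& x) const { // matrix * column vector
        vec<T,N> r;
        for (int i = 0; i < N; i++) {
            r.v[i] = T(0);
            for (int j = 0; j < M; j++) r.v[i] += m[i][j] * x.v[j];
        }
        return r;
    }
};

// The concrete GLSL spellings fall out as plain aliases.
typedef vec<float,3> vec3;
typedef mat<float,2> mat2;
```

The real thing needs swizzles, the full operator set, and SSE2 specializations, of course, but the shape of the design is all here: two templates at the root, everything else an alias.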

By the way, I still like playing with Java when prototyping new algorithms. It is relaxing :)

Thanks to Iñigo Quilez for sharing the idea of derivatives calculated together with value noise.
