Archive for August, 2004

New rendering features!

I added Gamma Correction, bump/normal mapping, and Depth of Field.

I also fixed a few bugs.

Screenshots of Gamma Correction


No correction


Gamma corrected

It’s brighter where it should be, and still dark where it should be.

The picture was taken from ATI’s sRGB sample.

Screenshots of Normal Mapping


The left sphere is the high poly one (40K faces). The right is the low poly version (960 faces) with the normal map applied.
The normal map was created with our 3D Studio Max Bump-o-matic plugin.


Wire version of the first screenshot.


Rendering of the normals.

Screenshots of Depth of Field


The white AABBs mark the Plane in Focus. Check their intersection with the scene to get a better idea of its position.

More about depth-of-field:

I read many things about Depth of Field (the article in GPU Gems, for instance) and saw many formulas, without really knowing how to implement them in practice.

So I came up with an in-house formula, a really simple one:
 Df = DP * abs(PosZ - PiF) / PosZ
  DP is the Depth of Field Power: 0 disables it, 1 gives the standard result, >1 gives something really blurry.
  PosZ is the camera-space depth of the pixel being computed.
  PiF is the camera-space depth of the Plane in Focus.
  Df is the result; I clamp it to [0,1] and use it as the lerp factor between the accumulation buffer and the blurred one during the tone mapping.
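For reference, here is a minimal C++ sketch of that formula (the names mirror the text above, not the engine's actual code):

```cpp
#include <algorithm>
#include <cmath>

// Df = DP * abs(PosZ - PiF) / PosZ, clamped to [0,1].
// DP, PosZ and PiF are as defined above; the result is the lerp factor
// between the sharp accumulation buffer and the blurred one.
float DofBlurFactor(float DP, float PosZ, float PiF)
{
    float Df = DP * std::fabs(PosZ - PiF) / PosZ;
    return std::min(std::max(Df, 0.0f), 1.0f);
}

// During tone mapping (conceptually):
// finalColor = lerp(accumColor, blurredColor, DofBlurFactor(DP, PosZ, PiF));
```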


Parallax mapping, more ambient occlusion n’ stuff

Parallax mapping is finished.
The whole production pipeline is now ready for that technique. The 3D Studio MAX plugin now computes the correct scale/bias and can also display the result in a custom view.
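To give an idea of what that scale/bias is for, here is a minimal sketch of the classic parallax-mapping offset, in plain C++ math (the exact convention the plugin and engine use may differ; all names here are illustrative):

```cpp
struct Float2 { float x, y; };

// height     : height-map sample in [0,1]
// scale, bias: remap the height into a signed offset (the values the plugin computes)
// viewX/viewY: xy of the normalized tangent-space view direction
Float2 ParallaxUV(Float2 uv, float height, float scale, float bias,
                  float viewX, float viewY)
{
    float h = height * scale + bias;               // remapped height
    return { uv.x + h * viewX, uv.y + h * viewY }; // shifted texture coords
}
```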

Screenshots


As you can see, the specular highlight is not ‘realistic’ for that kind of material (it’s supposed to be rock…)


Wireframe mode!

I added a new parameter to the Ambient Occlusion Map creation: the length of the rays used to perform the occlusion test. This way the occlusion map builder can now produce maps for indoor meshes.

Screenshots

 
Ambient occlusion off


Ambient occlusion on


Ambient occlusion off


Ambient occlusion on


Ambient occlusion map


3DS Max UVW unwrap modifier


The original mesh of the room wasn’t mapped, so I used the flatten mapping of the UVW Unwrap modifier of 3DS MAX to generate mapping coordinates, then used the Bump-o-matic plugin to generate the Ambient Occlusion Map.

The result speaks for itself.

Light volume rendering.
Before, each light was lighting every pixel of the viewport, which was quite slow/wasteful. Now, for point and spot lights, their bounding volume is rendered to perform the lighting; as you can guess, this is much faster for lights covering a small area.
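The idea, sketched below with purely hypothetical types and calls (none of this is the engine's actual API): instead of a full-screen pass per light, draw a mesh bounding the light's area of influence so only the covered pixels run the lighting shader.

```cpp
#include <vector>

struct PointLight { float x, y, z, radius; }; // radius = attenuation range

// Draws a low-poly sphere with the lighting shader bound (stand-in).
void DrawLightSphere(float x, float y, float z, float radius);

void RenderPointLights(const std::vector<PointLight>& lights)
{
    for (const PointLight& l : lights)
        // Pixels outside the bounding sphere never enter the lighting
        // pixel shader, which is the whole win for small lights.
        DrawLightSphere(l.x, l.y, l.z, l.radius);
}
```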

Screenshots


Without


With


Without


With

I added an IML Console right in the viewport.
With more and more rendering parameters I’d like to tweak in real time, I’ve decided to take advantage of the whole IML architecture to interact with the renderer (and the 3D scene) at run-time.

Screenshots

More about the Ambient Occlusion builder:

For each pixel of the map being created, its position on the mesh is located, and a series of rays is cast to perform occlusion tests (intersections) against other parts of the mesh itself. The problem with indoor environments is that an intersection is always found (because the mesh is closed), making it impossible to produce an accurate map. By letting the artist set a length for the rays that are cast, the occlusion can be performed over a limited area, producing the expected result.
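A hedged sketch of that occlusion test with the ray-length parameter (CastRay and every name here are illustrative stand-ins for the builder's real code):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Returns true if the mesh is hit within maxLength along dir (stand-in).
bool CastRay(const Vec3& origin, const Vec3& dir, float maxLength);

float Occlusion(const Vec3& texelPos, const std::vector<Vec3>& rayDirs,
                float maxRayLength)
{
    int hits = 0;
    for (const Vec3& dir : rayDirs)
        if (CastRay(texelPos, dir, maxRayLength)) // hits beyond maxRayLength
            ++hits;                               // no longer count
    return 1.0f - float(hits) / float(rayDirs.size()); // 1 = fully open
}
```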

More about IML:

IML stands for Irion Micro Language. It’s a run-time wrapper around the C++ components: for each Irion component one develops, an IML Class can be created to expose that component to the IML Framework. Using IML via an IML Console, you can create/edit/delete components, new or existing ones. For instance, I developed an IML Class to wrap the SM3Viewport C++ class and exposed a set of properties (rendering modes, rendering attributes, stats display, etc.) that can later be modified via an IML Console or script.
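To give an idea of the pattern, here is a generic illustration of what such a run-time wrapper can look like (none of these names come from the actual IML framework; this is only a sketch of the concept):

```cpp
#include <functional>
#include <map>
#include <string>

// A run-time class exposing named properties of a wrapped C++ component.
struct IMLClassSketch
{
    std::map<std::string, std::function<void(const std::string&)>> setters;

    // Called from a console or script, e.g. "viewport.renderMode = wireframe".
    void Set(const std::string& property, const std::string& value)
    {
        auto it = setters.find(property);
        if (it != setters.end())
            it->second(value); // forwards the value to the wrapped component
    }
};
```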


More improvements…

Added projector textures for point and spot lights.
A cube map is used for the point light and a 2D texture for the spot; both are almost free in terms of rendering time.

Parallax mapping is almost done.
The technique itself is quite simple, but it involves many little things to make it “practical” and to be able to produce graphic content using it.

All the effects/techniques implemented so far are “practical”.
That means you can produce 3D content with them for games or other kinds of real-time applications; it’s not just for demos/screenshots! :)

Improved the compatibility of the renderer with the logical 3D engine.

I’ve made some tests of sub-surface scattering
(light rays going through a given object and lighting it on the other side).

And lastly, I did a bit of performance tuning/optimisation and rearranged the main fx file, which is starting to get big! :)

OK, some random screenshots, not sphere/cube/coder art this time…


If you look closely, the shadows are not accurate in some places. This was a minor bug that has since been fixed, but I was too lazy to redo the screenshots. Maybe later!


50K faces, 4 point lights


50K faces, 4 spotlights


400K faces, 1 directional light


400K faces, 1 directional and 1 point light

More about the renderer architecture:

The 3D Engine is totally logical; it doesn’t have any dependency on a given platform or hardware.
There is an abstract renderer interface which can be used to develop new renderers (Xbox, OpenGL, DX7, DX8 and DX9 SM3 have been tested/implemented so far).
If one wishes to build his own renderer from scratch, no big deal: you don’t have to use this abstract interface if you don’t want to. The main reason is that the rendering pipeline is not processed in a straightforward way, but somewhat in reverse: the 3D Engine won’t feed the renderer with 3D data (meshes, lights, etc.); the renderer pulls the data itself. This provides optimal computation/update of the data: only what the renderer needs is computed.
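A tiny sketch of what this pull-style interface implies (illustrative names only, not the engine's real classes):

```cpp
class IScene; // the logical 3D engine's scene, platform-agnostic

class IRenderer
{
public:
    virtual ~IRenderer() {}

    // The engine never pushes meshes or lights; the renderer asks the
    // scene for exactly the data it needs, so only that data is updated.
    virtual void RenderFrame(const IScene& scene) = 0;
};
```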

More about sub-surface scattering:

The technique can easily be implemented in the renderer and the production pipeline (one global density factor, and a texture for per-texel info), but I’m afraid it isn’t worth it. The main issue is that I have to read the light’s Z-Buffer, and I can’t do that for directional and spot lights when using nVidia’s UltraShadow. Concrete applications of such an effect are rare, I guess, which is why I’m putting it aside for now.


Weird things and improvements

OK, for some mysterious reason, using four MRTs can cause a big slowdown on the 6800.
So I separated the MRT rendering into two passes: the first one renders the Z-Buffer and the Z MRT, the second one renders the three other MRTs (albedo, normal, material settings).
This way the second pass takes advantage of Z culling; pixel shaders can be heavy when funky stuff is done to compute the albedo, and this should be faster in that case.
On the performance side, it’s always faster, regardless of the vertex count of the meshes.
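Sketched as pseudo-D3D9-style C++ (every type and wrapper name below is a stand-in, not the engine's real API), the split looks like this:

```cpp
// All types and methods below are illustrative stand-ins.
enum DepthCmp { CMP_LESSEQUAL, CMP_EQUAL };
struct Target; struct Scene; struct Effect;

struct Device
{
    void SetRenderTarget(int index, Target* rt);
    void SetDepthTest(DepthCmp cmp, bool writeZ);
    void Draw(const Scene& s, Effect& fx);
    Target *zMRT, *albedoMRT, *normalMRT, *materialMRT;
};

void RenderGBuffer(Device& dev, const Scene& scene,
                   Effect& depthFx, Effect& gbufFx)
{
    // Pass 1: Z-Buffer + Z MRT only.
    dev.SetRenderTarget(0, dev.zMRT);
    dev.SetDepthTest(CMP_LESSEQUAL, /*writeZ=*/true);
    dev.Draw(scene, depthFx);

    // Pass 2: albedo / normal / material settings; depth compare EQUAL so
    // Z culling rejects hidden pixels before the heavy shaders run.
    dev.SetRenderTarget(0, dev.albedoMRT);
    dev.SetRenderTarget(1, dev.normalMRT);
    dev.SetRenderTarget(2, dev.materialMRT);
    dev.SetDepthTest(CMP_EQUAL, /*writeZ=*/false);
    dev.Draw(scene, gbufFx);
}
```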

I’ve finalized the soft shadows on point lights.
I’m using only two samples; the vector used to address the cube map is slightly perturbed based on the position of the pixel being rendered. I can’t say it’s perfect or pretty, but well, it’s fast. Four samples instead of one would make the whole lighting pass 50% slower!
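A minimal sketch of the two-sample trick, in plain C++ math (SampleShadowCube and the perturbation vector are illustrative stand-ins):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Distance-to-light stored in the shadow cube map for that direction (stand-in).
float SampleShadowCube(const Vec3& dir);

float SoftPointShadow(const Vec3& lightToPixel, const Vec3& perturb)
{
    float dist = std::sqrt(lightToPixel.x * lightToPixel.x +
                           lightToPixel.y * lightToPixel.y +
                           lightToPixel.z * lightToPixel.z);

    // Second lookup with a direction slightly perturbed from the pixel position.
    Vec3 jittered = { lightToPixel.x + perturb.x,
                      lightToPixel.y + perturb.y,
                      lightToPixel.z + perturb.z };

    float s0 = (SampleShadowCube(lightToPixel) >= dist) ? 1.0f : 0.0f;
    float s1 = (SampleShadowCube(jittered)     >= dist) ? 1.0f : 0.0f;
    return 0.5f * (s0 + s1); // 0 = fully shadowed, 1 = fully lit
}
```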

I also fixed a few bugs.

Screenshots:

 


A 256*256 cube map is used. The render time of the shadow map is not bad: about 10% of the VBL.


Shadow mapping improvements

I implemented point light shadows and soft shadows on spot lights.
Soft shadows on point lights are still in progress; the result is not great so far.

I also implemented tone mapping to a back buffer of a different size than the deferred buffers; not a hard thing to do.
Doing all the deferred work at a 400*400 resolution for a final 600*600 back buffer saves a lot of rendering time (it’s actually almost twice as fast, since 400*400 is less than half the pixels of 600*600), and the final result is not that bad. Maybe we can improve the final quality with a few pixel shader instructions during the tone mapping.

Screenshots time:
600*600 render target, using a deferred/back buffer ratio of 1, with 5 spot lights each lighting the whole buffer.

I know, rendering time is quite awful, but:

· Spot light lighting is currently NOT optimized and does loooots of stuff (constant/linear/quadratic attenuation, penumbra, emissive, ambient, diffuse and specular computation, soft shadows, etc.).

· Each spot light lights the whole screen: 600*600 = 360,000 pixels (times 5 lights = 1,800,000 lit pixels).

· Geometry doesn’t matter here (texturing either): the MRT render time is about 0.15 VBL, and using 500 times more faces would only push it to 0.4. No big deal.

Shadow map rendering is about 0.15 VBL for five renders into a 512*512 D24X8 texture, yummy!

More about point light shadows:

Point light shadows are rendered using an R32F cube map (256*256 pixels per face). The rendering time is bad compared to a spot light, which uses a 512*512 D24X8 shadow map (with nVidia’s UltraShadow). But later I’ll be able to compute shadow casters and receivers efficiently, which will certainly cut 1-3 face renders off the cube map.

As usual the code is not fully optimized, but the design/architecture is. One may be scared by the rendering time, but keep in mind that each light is lighting the whole screen (I don’t compute bounding volumes for optimized lighting yet); after a few major optimizations it should be much better.
