I was working on something else (and took holidays), so I didn’t have time to get back to the renderer until three weeks ago. At first I wasn’t considering these three weeks of work as part of the SM3 Renderer, so I didn’t want to update this page. But even if it’s not about a cool rendering technique, it’s still part of this project, and it’s something I’d like to share too. So here we go, let’s catch up with my new in-viewport GUI. Windowing System and redraw. There were three criteria to pay attention to: fast display of the windows, good use of the alpha channel, and keeping the whole system as flexible as possible. The GUI system is like most others: you have windows organized hierarchically. There are notions of an active window, a focus window, and a "hover" window. You can capture mouse events (and stack the captures). There’s a global alpha constant for the GUI and one for each top-level window, which is used by all the low-level drawing methods (DrawRect, FillRect, DrawLineList, DrawMesh, DrawTexture, etc.) for fading effects. Redraw had to be optimal, so I’m also using clipping regions (via the D3D Scissor Rect). Rendering the windows’ […]
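The two mechanics described above, scissor-based clipping for nested windows and a global alpha modulated by a per-window alpha, can be sketched roughly like this. This is a minimal Python sketch standing in for the D3D calls; `intersect_rect` and `effective_alpha` are hypothetical helper names, not the renderer's actual API:

```python
def intersect_rect(a, b):
    """Intersect two clip rects given as (x0, y0, x1, y1).

    A child window's scissor rect is its own rect intersected with
    its parent's; returns None when the intersection is empty.
    """
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

def effective_alpha(global_alpha, window_alpha):
    """Alpha actually fed to the low-level draw methods:
    the GUI-wide constant modulated by the top-level window's own alpha."""
    return global_alpha * window_alpha
```

The key point is that the drawing primitives themselves never need to know about fading or clipping; they just receive a pre-multiplied alpha and a pre-intersected scissor rect.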
I added Gamma Correction, bump/normal mapping, and Depth of Field. I also fixed a few bugs. ScreenShots of Gamma Correction No correction Gamma corrected It’s brighter where it should be, and still dark where it should be too. The picture was taken from ATI’s sRGB sample. ScreenShots of Normal Mapping The left sphere is the high-poly one (40K faces). The right is the low-poly version (960 faces) with the normal map applied. The normal map was created with our 3D Studio Max Bump-o-matic plugin. Wire version of the first screenshot. Rendering of the normals. ScreenShots of Depth of Field The white AABBs symbolize the Plane in Focus. Check their intersection with the scene to get a better idea of their position. More about depth-of-field: I read many things about Depth of Field, the article in GPU Gems for instance, and saw many formulas without really knowing how to practically implement them. So I came up with an in-house one, really simple: Df = DP * abs(PosZ - PiF) / PosZ. DP is the Depth of Field Power: 0 to disable it, 1 for the standard result, >1 to get something really blurry. PosZ is the position in camera […]
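The in-house formula above is simple enough to sketch directly (a Python stand-in for what would run per pixel in the shader; variable names follow the formula, the actual shader code is not shown in the post):

```python
def blur_factor(pos_z, pif, dp):
    """Df = DP * abs(PosZ - PiF) / PosZ.

    pos_z : pixel depth in camera space (must be > 0)
    pif   : depth of the plane in focus
    dp    : Depth of Field Power (0 disables, 1 standard, >1 very blurry)
    """
    return dp * abs(pos_z - pif) / pos_z
```

A pixel sitting exactly on the plane in focus gets Df = 0 (sharp), and the blur grows with the distance from that plane, normalized by the pixel's own depth so far-away pixels blur more gently than near ones.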
Parallax mapping is finished. The whole production pipeline is now ready for that technique. The 3D Studio MAX plugin now computes the correct scale/bias and can also display the result in a custom view. Screenshots As you can see, the specular highlight is not ‘real’ for that kind of material (it’s supposed to be rock…) Wireframe mode! I added a new parameter to the Ambient Occlusion Map creation: the length of the rays used to perform the occlusion test. This way the occlusion map builder can now produce maps for indoor meshes. Screenshots Ambient occlusion off Ambient occlusion on Ambient occlusion off Ambient occlusion on Ambient occlusion map 3DS Max UVW Unwrap modifier The original mesh of the room wasn’t mapped, so I used the flatten mapping of the UVW Unwrap modifier of 3DS MAX to generate mapping coordinates, then used the Bump-o-matic plugin to generate the Ambient Occlusion Map. The result speaks for itself. Light volume rendering. Before, each light lit every pixel of the viewport, which was quite slow/wasteful. Now for point and spot lights, their bounding volume is rendered to perform the lighting; as you can guess, this is much faster for small-area […]
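Light volume rendering needs a bounding volume size to come from somewhere. A common choice (an assumption here, not necessarily what this renderer does) is to solve the constant/linear/quadratic attenuation equation for the distance where the light's contribution drops below a visible threshold, and use that as the bounding sphere radius:

```python
import math

def point_light_radius(kc, kl, kq, threshold=1 / 256):
    """Distance at which 1/(kc + kl*d + kq*d^2) falls below `threshold`.

    kc, kl, kq : constant/linear/quadratic attenuation coefficients
    threshold  : hypothetical cutoff intensity (1/256 ~ one 8-bit step)

    Solves kq*d^2 + kl*d + (kc - 1/threshold) = 0 for the positive root.
    """
    c = kc - 1.0 / threshold
    if kq == 0:
        return -c / kl  # purely linear falloff
    disc = kl * kl - 4.0 * kq * c
    return (-kl + math.sqrt(disc)) / (2.0 * kq)
```

Any pixel outside that sphere would receive less than one 8-bit intensity step of light, so skipping it (by only rasterizing the bounding volume) is visually free.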
Added projector textures for Point and Spot lights. A cube map is used for the point light and a 2D texture for the spot; both are almost free in terms of rendering time. Parallax mapping is almost done. The technique itself is quite simple, but it involves many little things to make it “practical” and to be able to produce graphic content with it. Every effect/technique implemented so far is “practical”. That means you can produce 3D with them for games or other kinds of real-time applications; it’s not just for demos/screenshots! 🙂 Improved the compatibility of the renderer with the logical 3D engine. I’ve made some tests of sub-surface scattering (light rays going through a given object and lighting it on the other side). And at last I did a bit of performance tuning/optimisation, and rearranged the main fx file, which is starting to get big! 🙂 Ok, some random screenshots, not sphere/cube/coder art this time… If you look closely, the shadows are not accurate in some places; this was a minor bug that has been fixed, but I was too lazy to redo the screenshots. Maybe later! 50K faces, 4 point lights 50K faces, 4 spotlights 400K faces, 1 direct […]
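For the spot light's 2D projector texture, the usual approach (sketched here as an assumption; the post doesn't show the shader) is standard projective texturing: transform the pixel's world position by the light's view-projection matrix, divide by w, and remap from clip space to texture space. The cube-map case for point lights is even cheaper, since the pixel-to-light vector addresses the cube map directly with no divide:

```python
def spot_projector_uv(clip_pos):
    """Map a light-clip-space position (x, y, w) to projector texture UVs.

    Projective texturing: perspective divide by w, then remap the
    [-1, 1] clip range to [0, 1] texture coordinates (y is flipped
    because texture v grows downward).
    """
    x, y, w = clip_pos
    return (0.5 * x / w + 0.5, -0.5 * y / w + 0.5)
```

Since the lighting pass already reconstructs the pixel's position for attenuation, the extra cost is one matrix transform and a texture fetch, which is why the post can call the projectors "almost free".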
Ok, for some mysterious reason, using four MRTs can cause a big slowdown on the 6800. So I separated the rendering of the MRTs into two passes: the first one renders the Z-Buffer and the Z MRT, the second one renders the three other MRTs (albedo, normal, material settings). This way the second pass takes advantage of Z culling; pixel shaders can sometimes be heavy when funky stuff is done to compute the albedo, and this should be faster when that’s the case. On the performance side, it’s always faster, regardless of the vertex count of the meshes. I’ve finalized the soft shadows on point lights. I’m using only two samples; the vector used to address the cube map is slightly perturbed based on the position of the pixel being rendered. I can’t say it’s perfect or nice, but well, it’s fast. Four samples instead of one makes the whole lighting pass 50% slower! I also fixed a few bugs. Screenshots: A 256*256 cube map is used. The render time of the shadow map is not bad, about 10% of the VBL.
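The two-sample soft shadow can be sketched like this (a Python stand-in for the cube-map shadow lookup; `sample_cube` and `jitter` are hypothetical names, and how the perturbation is derived from the pixel position is not detailed in the post):

```python
def soft_point_shadow(sample_cube, to_light, jitter):
    """Average two shadow cube-map lookups.

    sample_cube : function taking a direction vector, returning 0..1 shadow
    to_light    : pixel-to-light vector used to address the cube map
    jitter      : small offset derived from the pixel position (assumed)
    """
    v1 = to_light
    v2 = tuple(a + b for a, b in zip(to_light, jitter))  # perturbed copy
    return 0.5 * (sample_cube(v1) + sample_cube(v2))
```

With only one extra lookup, pixels near a shadow edge average a lit and an unlit sample and land somewhere in between, which is what softens the edge cheaply.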
I implemented Point Light shadows and soft shadows on spot lights. Soft shadows on point lights are still in progress; the result is not great so far. I also implemented tone mapping to a back buffer of a different size than the deferred buffers, not a hard thing to do. Doing all the deferred work at a 400*400 resolution for a final 600*600 back buffer saves a lot of rendering time (it’s actually almost twice as fast), which is not bad for the final result. Maybe the final quality can be improved with a few pixel shader instructions during the tone mapping. Screenshots time: 600*600 render target, using a deferred/back buffer ratio of 1, with 5 spot lights each lighting the whole buffer. I know, the rendering time is quite awful, but: · Spot light lighting is currently NOT optimized and is doing loooots of stuff (const/linear/quad attenuation, penumbra, emissive, ambient, diffuse, specular computation, soft shadows, etc.). · Each spot light lights the whole screen: 600*600 = 360 000 pixels (times 5 = 1 800 000 lit pixels). · Geometry doesn’t matter here (nor does texturing); the MRT render time is about 0.15 VBL, and putting 500 times more faces will push it […]
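The lit-pixel arithmetic in the bullet list above is the whole story of deferred lighting cost, and it's worth making explicit (a trivial Python sketch; `lit_pixels` is a name made up for illustration):

```python
def lit_pixels(width, height, light_count):
    """Pixels shaded per frame when every light covers the whole
    deferred buffer: lighting cost scales with buffer area * lights,
    independent of scene geometry."""
    return width * height * light_count
```

This is also why shrinking the deferred buffers pays off quadratically: at 400*400 instead of 600*600, each light shades (400*400)/(600*600) ≈ 44% of the pixels, which lines up with the "almost twice as fast" observation once the fixed costs are added back in.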
Perspective Shadow Mapping is a real pain… I can’t get it to work correctly, so I’m putting the code on hold, and I’ll get back to it later.
Lost three days to a partition crash! I almost lost the 150 GB of data stored there; it took more than a day to recover everything.
Implemented spot light rendering. I now have the three basic types of light: directional, point and spot. Added a Gaussian filter applied after the creation of the Occlusion Map. The results speak for themselves: that is definitely a must-have! The nightmare has begun: shadows… I knew it would be one of the hardest parts of the rendering (if not the hardest), and it is… As usual, I started by reading many papers and slideshows. I also looked through the archives of the GD-Algorithms mailing list, and revived the topic because people there had apparently been silent about it for a year. Between the two families, since I rely on pixel power, my choice naturally tends towards Shadow Maps. In the land of shadow maps, many people have different opinions about what is best to use, and as time passes, it doesn’t seem to converge on one particular technique. Single buffer, multiple buffers, post-perspective or not, trapezoidal, done in light space, oh my god… Before starting to ask for people’s opinions, I had faith in Perspective Shadow Mapping (aka PSM), after reading Simon Kozlov’s revision of it in GPU Gems. Many people still say it’s not a viable technique, […]
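The Gaussian filter on the occlusion map is a standard separable blur; here is a minimal Python sketch of the 1D pass (run once over rows, once over columns). The function names are made up for illustration, and edge texels are handled by clamping, one of several reasonable choices:

```python
import math

def gaussian_kernel(radius, sigma):
    """Normalized 1D Gaussian weights over [-radius, radius]."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_row(row, kernel):
    """Convolve one row of occlusion values with the kernel,
    clamping sample positions at the edges."""
    r = len(kernel) // 2
    out = []
    for x in range(len(row)):
        acc = 0.0
        for i, w in enumerate(kernel):
            acc += w * row[min(max(x + i - r, 0), len(row) - 1)]
        out.append(acc)
    return out
```

Because the kernel is normalized, flat regions of the occlusion map are untouched and only the noisy speckle left by low ray counts gets smoothed out, which is exactly why the filtered maps look so much better.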
I’ve programmed the routine that computes a texture storing the ambient occlusion of a given mesh. The result is as expected: great! (see the screenshots). The computation can take a while; you have to throw a lot of rays per texel to get an accurate result. 512 is a good number. So for a 256*256 texture, at least 33 million rays are thrown. When I say at least, it’s because if a given texel is shared by two faces of the mesh (across an edge), double that number are thrown. Thanks to the OPCODE library, it doesn’t take forever… I’ll certainly add an option to filter the produced picture (useful when the ray count is low). Screenshots: The mesh (courtesy of Bruno Dosso) is 12,592 triangles and 6,424 vertices. Computed on an Athlon XP 2800+. You can see the render time in the texture window’s caption. Basic rendering, no diffuse texture, no ambient occlusion texture. Diffuse texture, without ambient occlusion. 256*256 occlusion map, 16 bits, 1024 rays per texel. Same as left, with a diffuse texture. The computed Ambient Occlusion Texture, done in 2min 27sec.
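The "at least 33 million" figure above is just the lower bound of texels times rays; this one-liner (name made up for illustration) makes the arithmetic explicit:

```python
def ao_ray_count(tex_size, rays_per_texel):
    """Lower bound on rays cast when baking a square ambient occlusion map.
    Texels straddling a mesh edge are sampled once per face, so the real
    count is somewhat higher."""
    return tex_size * tex_size * rays_per_texel
```

At 256*256 texels and 512 rays each that is 33,554,432 rays, so a fast ray/mesh intersection library like OPCODE is what keeps the bake down to minutes rather than hours.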