The future of O3D
Monday, May 3, 2010 | 6:40 PM
Labels: O3D API Blog
Tuesday, August 11, 2009 | 3:57 PM
Labels: O3D API Blog
Adding shadows to a scene can profoundly improve the illusion of 3D. Shadow mapping is an algorithm which provides the basis for many techniques for hardware-accelerated shadows. It works by rendering the scene in two passes. The first pass renders from the perspective of the light to create an offscreen, grayscale image called the shadow map (see figure below, left). The shade of gray at each pixel represents the distance from the light to the rendered point. In principle, if a point is illuminated by the light, it is the nearest surface to the light in that direction, so it is the point that gets drawn into the shadow map. The pixel shader in the second render pass samples the shadow map to determine whether a point is in shadow. For each point that is rendered, the shader computes the location where that point would appear in the shadow map, samples the shadow map there, and then compares the distance encoded in the shadow map to the point's actual distance from the light. If the point's distance from the light is significantly bigger than the distance encoded in the map, then some other surface must be closer to the light, and the point is assumed to be in shadow (see figure below, right).
Left: the shadow map, which is rendered to a texture and used in lighting calculations to produce the effect of shadows in the scene. This image is rendered from the perspective of the light. (In the example, you can view this rendering by pressing the Spacebar.) Right: the transform graph rendered using the shadow map.
In O3D, the two passes required to perform shadow mapping are brought about using a custom render graph. The render graph needs two subtrees: one to render the shadow map to a texture, and one to render the scene. In the shadow map sample code, the render graph root has two children, each the root of a subtree. The root of the shadow pass subtree is given lower priority so that it is traversed first. Below it is a renderSurfaceSet node, which becomes the root of a standard render graph created using o3djs.rendergraph.createBasicView(). The subtree for the second render pass (referred to as the "color" pass in the code) is created with a second call to o3djs.rendergraph.createBasicView(). Each pass has its own DrawContext object, so the model-view and projection matrices for the shadow pass can be set to render from the perspective of the light. The figure below shows the structure of the render graph in this sample.
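The role priority plays here can be sketched with a minimal, CPU-side mock of priority-ordered traversal (plain JavaScript, not the o3djs API; the RenderNode class and names below are purely illustrative):

```javascript
// Minimal stand-in for a render graph whose children are visited in
// ascending priority order. Giving the shadow pass the lower priority
// guarantees its texture is ready before the color pass samples it.
class RenderNode {
  constructor(name, priority) {
    this.name = name;
    this.priority = priority;
    this.children = [];
  }
  addChild(child) { this.children.push(child); }
  // Collect node names in traversal order.
  traverse(out = []) {
    out.push(this.name);
    this.children
      .slice()
      .sort((a, b) => a.priority - b.priority)
      .forEach((c) => c.traverse(out));
    return out;
  }
}

const root = new RenderNode('root', 0);
const shadowPass = new RenderNode('shadowPass', 0); // lower priority: runs first
const colorPass = new RenderNode('colorPass', 1);
root.addChild(colorPass);
root.addChild(shadowPass);

console.log(root.traverse().join(' -> '));
// root -> shadowPass -> colorPass
```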
In the sample, when the user hits the space bar, the toggleView() function rearranges the render graph to draw the shadow map to the screen. This works by disconnecting the shadow pass subtree from the renderSurfaceSet and reconnecting it to the render graph root, as shown in the figure below. Without the renderSurfaceSet above it, the shadow pass draws to the screen instead of rendering to a texture.
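The reparenting step can be illustrated with a small stand-alone sketch (plain JavaScript, not the o3djs API; in O3D the equivalent is done by assigning a render node's parent):

```javascript
// Illustrative mock of what toggleView() does: move the shadow pass
// subtree from under the renderSurfaceSet to directly under the root.
function makeNode(name) { return { name, parent: null, children: [] }; }

function setParent(node, newParent) {
  if (node.parent) {
    const i = node.parent.children.indexOf(node);
    node.parent.children.splice(i, 1);
  }
  node.parent = newParent;
  if (newParent) newParent.children.push(node);
}

const root = makeNode('root');
const renderSurfaceSet = makeNode('renderSurfaceSet');
const shadowPass = makeNode('shadowPass');
setParent(renderSurfaceSet, root);
setParent(shadowPass, renderSurfaceSet); // normal mode: render to texture

// Toggle: attach the shadow pass directly to the root so it draws on screen.
setParent(shadowPass, root);
console.log(shadowPass.parent.name); // "root"
```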
Each primitive in the scene has two draw elements: one to render with the Phong-shaded, shadowed material in the second pass, and one to render in gray to make the shadow map. The first draw element is added when the utility function in o3djs.primitives creates the shape.
// A red phong-shaded material for the sphere.
var sphereMaterial = createShadowColorMaterial([0.7, 0.2, 0.1, 1]);
// The sphere shape.
var sphere = o3djs.primitives.createSphere(
g_pack, sphereMaterial, 0.5, 50, 50);
As the shapes in the scene are added to the transform graph, each is equipped with the DrawElement for the shadow pass.
transformTable[tt].shape.createDrawElements(g_pack, g_shadowMaterial);
Recall that the material used when rendering the scene for the shadow pass colors each pixel with that point's depth from the perspective of the light. To do this, the shader simply multiplies the position by the view-projection matrix for the view from the light. For efficiency, the multiplication is performed in the vertex program. This works fine, provided that the coordinates that are interpolated to produce the input to the pixel program are homogeneous.
output.position = mul(input.position, worldViewProjection);
output.depth = output.position.zw;
In O3D, the z coordinate of the position in the light's clip space ranges from 0 to 1, so the pixel program divides z by w and puts the result in the red, green, and blue channels to produce a shade of gray.
float t = input.depth.x / input.depth.y;
return float4(t, t, t, 1);
The shader for the color pass is a modified Phong shader. This shader computes a coefficient called light that captures whether the currently rendered point is illuminated or in shadow. To consult the shadow map, the shader needs a texture sampler parameter for the map itself. It also needs the view-projection matrix for the light's point of view so it can compute where to sample.
float4x4 lightViewProjection;
sampler shadowMapSampler;
Again, for efficiency, the vertex program performs the matrix multiplication to convert to the light's clip space.
output.projTextureCoords = mul(input.position, worldLightViewProjection);
The pixel shader converts the position of the currently rendered point from homogeneous coordinates to ordinary coordinates by dividing by w. Then, to sample the texture in the right spot, the clip-space x and y coordinates (which range from -1 to 1) are converted to fit the range from 0 to 1.
projCoords.xy /= projCoords.w;
projCoords.x = 0.5 * projCoords.x + 0.5;
projCoords.y = -0.5 * projCoords.y + 0.5;
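The same remapping can be checked CPU-side (plain JavaScript reference, not part of the sample; the function name is illustrative):

```javascript
// Map homogeneous clip-space coordinates to shadow-map texture coordinates.
// After the perspective divide, x and y lie in [-1, 1] and map to [0, 1];
// y is flipped because texture rows run top-to-bottom.
function clipToTexCoords(x, y, w) {
  const nx = x / w; // perspective divide
  const ny = y / w;
  return {
    u: 0.5 * nx + 0.5,
    v: -0.5 * ny + 0.5,
  };
}

console.log(clipToTexCoords(-1, -1, 1)); // { u: 0, v: 1 }  (bottom-left corner)
console.log(clipToTexCoords(1, 1, 1));   // { u: 1, v: 0 }  (top-right corner)
console.log(clipToTexCoords(0, 0, 2));   // { u: 0.5, v: 0.5 } (center)
```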
Finally, the depth of the current point is compared to the depth in the shadow map to determine if the point is illuminated.
float light = tex2D(shadowMapSampler, projCoords.xy).r + 0.008 > depth;
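The small constant (0.008 here) is a depth bias: it keeps surfaces from incorrectly shadowing themselves due to limited depth precision ("shadow acne"). A CPU-side equivalent of the test (plain JavaScript; names and the sample values are illustrative):

```javascript
// Shadow test: a point is lit when the depth stored in the shadow map
// (the nearest surface seen from the light) is not meaningfully closer
// than the point's own depth from the light. Mirrors the shader line above.
function shadowTest(mapDepth, pointDepth, bias = 0.008) {
  return mapDepth + bias > pointDepth ? 1 : 0; // 1 = lit, 0 = shadowed
}

console.log(shadowTest(0.5, 0.5)); // 1: same surface; bias prevents acne
console.log(shadowTest(0.3, 0.7)); // 0: something closer occludes the point
console.log(shadowTest(0.7, 0.3)); // 1: the point is nearer than the map depth
```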
A number of more advanced variants on the basic shadow map algorithm exist which improve the appearance of the shadows. A simple improvement would be to super-sample the shadow map to antialias the shadows' edges. The render graph can also be restructured to gain a little extra speed. For convenience, the two subtrees of the render graph in the sample code are generated using the utility function o3djs.rendergraph.createBasicView(), but that function generates all the nodes needed to put something on the screen, and not all of those nodes are necessary in both subtrees. In particular, the tree traversal only needs to happen once, since elements using a particular material only get added to the draw lists associated with that material. We intend to add more functions to o3djs to make it convenient to add shadows to a scene, but because the complexity of geometry and desired shadow effects are so varied, it is difficult to provide a shadow solution that works in all situations. The goal of the shadow map sample is to provide a starting point. To add shadows to an existing scene, we recommend extending the render graph to include a shadow pass, mimicking the structure of the render graph in the sample, and then fine-tuning the shaders in the scene to get the effect required.
Monday, August 10, 2009 | 12:17 PM
Labels: O3D API Blog
Today, we're releasing version 0.1.40.1 of the O3D plugin. If you've already installed O3D, you'll receive the updated version automatically sometime later today. If you can't wait to try out the new features, just go over to our main site and download the plugin again. Here's a list of what's changed since our last release:
Bug Fixes
Friday, July 31, 2009 | 12:00 PM
Labels: O3D API Blog
SIGGRAPH 2009 is starting on Monday in New Orleans, LA. Google will have a booth in the exhibition hall showcasing its latest graphics technologies including O3D. I'll be hanging out at the O3D demo station in the booth where I'll be running live demos off the web and be around to answer questions. The exhibition is open from Tuesday through Thursday of the conference. So if you're attending the conference, stop by and say hi -- Google is right in the middle between Halls F and G.
While you're visiting the Google booth, you can also check out our other demos. We'll be showing Click2Go which uses a 3D aware cursor to allow you to more smartly navigate while in StreetView. You can play with SketchUp to make a model and then bring it into O3D. And you can feel like you're flying at the Google Earth demo which we'll be showing on a 56" 8 megapixel flat panel.
I'll also be doing a demo at the Blender booth on Wednesday afternoon from 3-4pm showing how you can bring models from Blender and other programs into O3D.
I'm pretty excited about the conference this year. The lineup of speakers, talks and courses looks fantastic. I'm looking forward to some of the real-time graphics talks and game papers, in particular. And of course, I'm also looking forward to getting some authentic Creole food while I'm in town!
Posted by Rachel Weinstein Petterson, O3D Team
Wednesday, July 1, 2009 | 6:54 PM
Labels: O3D API Blog
We stated before that one of the goals for O3D is to have no caps bits. That means your code never has to check whether the user's machine has feature X: you can write your application and assume it will run on any machine. We also wanted to select a feature set that we thought would be a good trade-off between allowing high-end real-time 3D graphics and running on the majority of machines currently in use without being too slow.
Unfortunately, there are a few extremely popular GPU chipsets out there, such as the Intel 950, that were missing a couple of the base features we felt were important. Without those features, certain common effects, like shadows that rely on floating-point textures, would not be possible. We could have just used the software renderer on machines without those features, but the software renderer is not as fast as GPU hardware, so we came up with what we think is a reasonable solution.
When your application starts up O3D, it can tell O3D what features the application requires. If the user's system supports those features then O3D will use them. If the user's machine does not support those features O3D will fall back to a software renderer. If you do not explicitly request those features then O3D will not let you use them.
We chose this solution because it provides the ability for a much wider range of applications to use GPU hardware than before. For example, only 3 of our samples required additional features to be available, which means most of the samples will run with GPU hardware acceleration even on low-end GPU hardware.
The specific features you can request are:
o3djs.util.makeClients(initStep2, 'FloatingPointTextures');