The future of O3D

Monday, May 3, 2010 | 6:40 PM

We launched the O3D API about a year ago to start a discussion within the web community about establishing a new standard for 3D graphics on the web. Since then, we’ve also helped develop WebGL, a 3D graphics API based on OpenGL ES 2.0 that has gradually emerged as a standard, and is supported by other browser and hardware vendors like Mozilla, Apple and Opera.

At Google, we’re deeply committed to implementing and advancing standards, so as of today, the O3D project is changing direction, evolving from its current plug-in implementation into a JavaScript library that runs on top of WebGL. Users and developers will still be able to download the O3D plug-in and source code for at least one year, but other than a maintenance release, we plan to stop developing O3D as a plug-in and focus on improving WebGL and O3D as a JavaScript library.

We did not take this decision lightly. In our initial discussions about WebGL, we were concerned that JavaScript would be too slow to drive a low-level API like OpenGL, and we were convinced that a higher-level approach like the O3D scene graph would yield better results. We were also aware that many Windows machines lack installed OpenGL drivers, and that this could hamper WebGL's adoption.

Since then, JavaScript has become a lot faster. We've been very impressed by the demos that developers have created with WebGL, and with the ANGLE project, we believe that Chromium will be able to run WebGL content on Windows computers without having to rely on installed OpenGL drivers.

The JavaScript implementation of O3D is still in its infancy, but you can find a copy of it on the O3D project site and see it running some of the O3D samples in a WebGL-enabled browser (alas, no Beach Demo yet). Because browsers lack some requisite functionality, such as compressed asset loading, not all of O3D's features can be implemented purely in JavaScript. We plan to work on giving the browser this functionality, along with all the capabilities necessary for delivering high-quality 3D content.

We’d like to thank the developers who have contributed to O3D by delivering valuable feedback, submitting changes to the plugin and developing applications. To help you convert your application to the new WebGL implementation of O3D, we will keep our discussion group open where our engineering team will answer your questions and provide you with technical advice. For those of you concerned about support for Internet Explorer, we’ll recommend using Google Chrome Frame once it supports WebGL, and hope to see IE implement WebGL natively someday. We hope you will continue working with us and the rest of the WebGL community on moving 3D on the web forward.

In the future, we will not be posting to the O3D blog. For updates on O3D and the 3D web, please subscribe to the Chromium blog.

Posted by Matt Papakipos, Engineering Director, and Vangelis Kokkevis, Software Engineer

Plugin Update

Wednesday, October 7, 2009 | 3:30 PM

A new version of the O3D plug-in (0.1.42) is now on its way! This release contains a few bug fixes along with some exciting new functionality including more flexible ways for manipulating image data via the new Bitmap object and support for Data URLs both for reading back the contents of the render buffer and for creating Raw Data buffers. For more details on what's included with this release, please take a look at our Release Notes. As always, if you are an existing O3D user, your plug-in will be updated automatically soon, but if you just can't wait, you can go to our home page and install it manually.

An interesting bit of trivia here is that this is likely the last release we'll make using our old build system. We now have all the pieces in place to switch our build over from SCons to GYP. As you can imagine, switching build systems mid-flight in a product the size of O3D can be fairly challenging, but it looks like most of the hard work is now behind us. Switching to the new build system was a necessary step for getting O3D integrated into Chrome, but it also provides some additional benefits to developers who work with the O3D source code. GYP will generate native project files (Visual Studio on Windows and Xcode on the Mac), which makes the edit/compile/debug cycle a lot faster than before. In addition, we'll soon be exposing our Buildbot-based continuous build system to everyone so that you can monitor the build progress in real time and even pick up binaries built from the top of our trunk! Stay tuned for detailed instructions.

As always, we appreciate your support and comments. Please continue using the o3d-discuss group for sending your feedback and questions and the o3d issue tracker for filing bugs.

Guest Post by Gavriel State, Founder and CTO of TransGaming Inc.

Friday, August 21, 2009 | 6:02 PM

From time to time we plan to open up our blog to guest posts from leaders in the field of 3D graphics. Today's guest post is written by Gavriel State, CTO at TransGaming.

In June, the O3D team announced O3D's new software rendering feature, which is powered by TransGaming's SwiftShader software rendering system. High performance software rendering is a critical feature to enable 3D applications on the web to have the same level of global reach as the traditional 2D web. While most current PC hardware now ships with at least basic graphics hardware capable of accelerated 3D performance, there are hundreds of millions of PCs that lack such hardware, or which have older GPUs that do not support the shader capabilities needed for O3D. For example, around 50% of Pentium 4 based PCs shipped with on-board integrated graphics chipsets with no shader features. Many such PCs with poor support for shaders are in the developing world, so without software rendering the 3D web could be inaccessible to huge parts of the planet.

Makers of massively multiplayer games have long understood this problem - Blizzard, the developer of the hit MMO World of Warcraft, built its game to be compatible with graphics chips developed as far back as 2001, for example. And yet, World of Warcraft's 10 million+ users pale in comparison to the number of people using the web, currently estimated at 1.6 billion. In order for 3D content to become part of the mainstream web, users of older systems can't just be left behind.

Luckily, SwiftShader provides a solution to this problem. Unlike traditional software rendering, SwiftShader uses highly efficient techniques to analyze and dynamically compile shaders and other parts of the graphics pipeline into optimized CPU-specific code. This code is then cached, so future rendering of similar objects always takes place with pre-built code. SwiftShader also takes advantage of CPUs with multiple cores. Using these techniques on modern CPUs, SwiftShader can in many cases actually outperform integrated graphics hardware.

In fact, in many ways, SwiftShader's software rendering model points the way for some of the developments now occurring on the hardware side. Over time, graphics technologies have been getting ever more programmable - first with ever more powerful vertex and pixel shader instruction sets, and more recently with geometry shaders and tessellation, which actually generate new triangles for a scene. As GPUs become more flexible in this way, they begin to be useful for general-purpose computations, traditionally the domain of the CPU. On the flip side, every year CPU vendors are cramming in more and more cores capable of high-performance vector processing. Chips such as Intel's forthcoming Larrabee are designed around the idea of purely programmable software rendering on hardware with a massive number of cores.

Back in the here-and-now however, O3D will automatically switch to using SwiftShader if the hardware renderer on the end-user's system doesn't have enough oomph. Although content developed for O3D should just work regardless of whether the rendering occurs in hardware or in SwiftShader, it's important for developers to test how their O3D content will run on different configurations. Ideally, this includes testing content on different 3D hardware configurations as well as testing software rendering.

Note that on Mac OS X and Linux, O3D uses OpenGL, so software rendering works differently. On the Mac, the underlying OpenGL implementation will automatically switch to its own built-in software rendering if the feature you are trying to use is not available. On Linux, you can install the Mesa driver to switch your computer to use software rendering.

Shadow Mapping in O3D

Tuesday, August 11, 2009 | 3:57 PM

Adding shadows to a scene can profoundly improve the illusion of 3D. Shadow mapping is an algorithm that provides the basis for many hardware-accelerated shadow techniques. It works by rendering the scene in two passes. The first pass renders from the perspective of the light to create an offscreen, grayscale image called the shadow map (see figure below, left). The shade of gray at each pixel represents the distance from the light to the rendered point. The key observation is that a point is illuminated only if it is the surface closest to the light along its line of sight, which means it is exactly the surface recorded in the shadow map. The pixel shader in the second render pass samples the shadow map to determine whether a point is in shadow: for each rendered point, the shader computes where that point would appear in the shadow map, samples the map there, and compares the distance encoded in the map to the point's actual distance from the light. If the point's distance from the light is greater than the distance encoded in the map (beyond a small bias that absorbs precision error), something closer to the light must be blocking it, so the point is assumed to be in shadow (see figure below, right).
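
To make the comparison concrete before we get to the render graph and shaders, here is a minimal sketch of the test in plain JavaScript; the function and parameter names are ours for illustration only, and the real test runs in the pixel shader shown later in this post.

// A sketch of the shadow test, for illustration only. lightSpacePos is the
// rendered point already projected into the light's clip space (x and y as
// texture coordinates in [0, 1], depth in [0, 1]); shadowMapDepthAt returns
// the depth stored in the shadow map at those coordinates.
function isInShadow(lightSpacePos, shadowMapDepthAt, bias) {
  var storedDepth = shadowMapDepthAt(lightSpacePos.x, lightSpacePos.y);
  // Shadowed if the point lies farther from the light than the surface the
  // map recorded, beyond a small bias that avoids self-shadowing artifacts.
  return lightSpacePos.depth > storedDepth + bias;
}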

[Figure, left: The shadow map, which is rendered to a texture and used in the lighting calculations to produce the effect of shadows in the scene. This view is rendered from the perspective of the light; in the sample, press the spacebar to see it.]
[Figure, right: The transform graph rendered using the shadow map.]


The Render Graph

In O3D, the two passes required to perform shadow mapping are brought about using a custom render graph. The render graph needs two subtrees, one to render the shadow map to a texture, and one to render the scene. In the shadow map sample code, the render graph root has two children, each the root of a subtree. The root of the shadow pass subtree is given lower priority so that it is traversed first. Below that, there is a renderSurfaceSet node. That renderSurfaceSet node becomes the root of a standard render graph created using o3djs.rendergraph.createBasicView(). The subtree for the second render pass (referred to as the "color" pass in the code) is created with a second call to o3djs.rendergraph.createBasicView(). Each pass has its own DrawContext object, so the model-view and projection matrices for the shadow pass can be set to render from the perspective of the light. The figure below shows the structure of the render graph in this sample.
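
In code, that two-pass structure looks roughly like the following sketch. The names g_client, g_pack, g_root (the transform graph root), g_shadowColorSurface and g_shadowDepthSurface stand in for the sample's globals, and the sample inserts an extra parent node above the renderSurfaceSet, so treat this as an outline of the idea rather than the sample's exact code.

// Shadow pass: a RenderSurfaceSet redirects everything below it into the
// shadow map texture. The lower priority makes this subtree render first.
var renderGraphRoot = g_client.renderGraphRoot;
var shadowSurfaceSet = g_pack.createObject('RenderSurfaceSet');
shadowSurfaceSet.renderSurface = g_shadowColorSurface;             // assumed globals created
shadowSurfaceSet.renderDepthStencilSurface = g_shadowDepthSurface; // from the shadow map texture
shadowSurfaceSet.priority = 0;
shadowSurfaceSet.parent = renderGraphRoot;

// A standard render graph below the RenderSurfaceSet; its DrawContext is
// later filled with the light's view and projection matrices.
var shadowViewInfo = o3djs.rendergraph.createBasicView(
    g_pack, g_root, shadowSurfaceSet);

// Color pass: a second basic view, traversed second, renders the scene to
// the screen using the camera's DrawContext.
var colorViewInfo = o3djs.rendergraph.createBasicView(
    g_pack, g_root, renderGraphRoot);
colorViewInfo.root.priority = 1;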

[Figure: the structure of the render graph in the shadow map sample.]

In the sample, when the user hits the space bar, the toggleView() function rearranges the render graph to draw the shadow map to the screen. This works by disconnecting the shadow pass subtree from the renderSurfaceSet and reconnecting it to the render graph root, as shown in the figure below. Without the renderSurfaceSet above it, the shadow pass draws to the screen instead of rendering to texture.
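
Reusing the names from the sketch above, the reparenting amounts to something like this (a rough sketch of the idea, not the sample's exact toggleView() implementation):

// Move the shadow pass between rendering to texture and rendering to screen.
var g_showShadowMap = false;
function toggleView() {
  g_showShadowMap = !g_showShadowMap;
  // Without a RenderSurfaceSet above it, the shadow pass draws to the screen.
  shadowViewInfo.root.parent =
      g_showShadowMap ? g_client.renderGraphRoot : shadowSurfaceSet;
}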

[Figure: the render graph after toggleView() reparents the shadow pass, so the shadow map is drawn directly to the screen.]

Materials

Each primitive in the scene has two draw elements, one to render with the Phong-shaded, shadowed material in the second pass, and one to render in gray to make the shadow map. The first draw element is added when the utility function in o3djs.primitives creates the shape.


// A red phong-shaded material for the sphere.
var sphereMaterial = createShadowColorMaterial([0.7, 0.2, 0.1, 1]);

// The sphere shape.
var sphere = o3djs.primitives.createSphere(
g_pack, sphereMaterial, 0.5, 50, 50);

As the shapes in the scene are added to the transform graph, they are each equipped with the DrawElement for the shadow pass.


transformTable[tt].shape.createDrawElements(g_pack, g_shadowMaterial);


Shaders

Recall that the material used when rendering the scene for the shadow pass colors each pixel with that point's depth from the perspective of the light. To do this, the shader simply multiplies the position by the view-projection matrix for the view from the light. For efficiency, the multiplication is performed in the vertex program. This works fine, provided that the coordinates that are interpolated to produce the input to the pixel program are homogeneous.


output.position = mul(input.position, worldViewProjection);
output.depth = output.position.zw;

In O3D, the z coordinate of the position in the light's clip-space ranges from 0 to 1, so the pixel program puts this depth in the red, green, and blue channels to produce a shade of gray.


float t = input.depth.x / input.depth.y;
return float4(t, t, t, 1);

The shader for the color pass is a modified Phong shader. This shader computes a coefficient called light that captures whether the currently rendered point is illuminated or in shadow. To consult the shadow map, the shader needs a texture sampler parameter for the map itself. It also needs the view-projection matrix for the light's point of view so it can compute where to sample.


float4x4 lightViewProjection;
sampler shadowMapSampler;

Again, for efficiency, the vertex program performs the matrix multiplication to convert to the light's clip space.


output.projTextureCoords = mul(input.position, worldLightViewProjection);

The pixel shader converts the position of the currently rendered point from homogeneous coordinates to ordinary (Cartesian) coordinates by dividing by w. Then, to sample the texture in the right spot, the clip-space x and y coordinates (which range from -1 to 1) are remapped to the range from 0 to 1.


projCoords.xy /= projCoords.w;
projCoords.x = 0.5 * projCoords.x + 0.5;
projCoords.y = -0.5 * projCoords.y + 0.5;

Finally, the depth of the current point is compared to the depth stored in the shadow map (plus a small bias, 0.008, which prevents surfaces from incorrectly shadowing themselves) to determine whether the point is illuminated.


float light = tex2D(shadowMapSampler, projCoords.xy).r + 0.008 > depth;

Further Optimizations

A number of more advanced variants on the basic shadow map algorithm exist which improve the appearance of the shadows. A simple modification that would help would be to super-sample the shadow map to antialias the shadows' edges. The render graph can also be restructured to gain a little extra speed: for convenience, the two subtrees of the render graph in the sample code are generated using the utility function o3djs.rendergraph.createBasicView(), but that function generates all the nodes needed to put something on the screen, and not all of those nodes are necessary in both subtrees. In particular, the tree traversal only needs to happen once, since elements using a particular material only get added to the draw lists associated with that material.

We intend to add more functions to o3djs to make it convenient to add shadows to a scene, but because the complexity of geometry and the desired shadow effects vary so widely, it is difficult to provide a shadow solution that works in all situations. The goal of the shadow map sample is to provide a starting point. To add shadows to an existing scene, we recommend extending the render graph to include a shadow pass, mimicking the structure of the render graph in the sample, and then fine-tuning the shaders in the scene to get the effect you require.



O3D Release 0.1.40.1

Monday, August 10, 2009 | 12:17 PM

Today, we're releasing version 0.1.40.1 of the O3D plugin. If you've already installed O3D, you'll receive the updated version automatically sometime later today. If you can't wait to try out the new features, just go over to our main site and download the plugin again. Here's a list of what's changed since our last release:

Bug Fixes

  • Added support for Windows XP 64-bit and Windows 7.
  • Fixed keys not working in o3dPingPong sample when O3D area had focus.
  • Fixed Tar code to support long filenames.
  • Fixed Mac install issue that caused Firefox to think an old version was installed.
  • Improved performance for dynamic texture setting.
  • Fixed interference between the embedded V8 engine and non-O3D related scripts (like Google Analytics) on a page. o3djs now only pulls scripts marked with id="o3dscript" into V8.
  • Fixed a bug with nested RenderSurfaceSet objects.
  • Fixed beach demo scrollwheel and initialization bugs.

Other plugin changes

Samples changes
  • New Samples:
  • Added a Toon Shader example to shader-test sample.
  • Sample particle library now supports one-shots and trails. Added examples of particle one-shots and trails to particles sample.
  • Picking example now shows the normal of the surface.
  • The Box2D sample now uses the compiled Box2D library.
  • Fixed the beach demo to run in hardware-accelerated mode on additional low-end GPUs by splitting up assets.

Utilities changes

  • Added new methods
    • o3djs.element.getNormalForTriangle
    • o3djs.material.createConstantMaterial
    • o3djs.material.createCheckerMaterial
    • o3djs.effect.createCheckerEffect
    • o3djs.math.pseudoRandom

Tools changes
  • Fixed issue with multiple embedded shaders in the sample o3dConverter.
  • Added --file_paths option to sample o3dConverter to make it easier to convert existing COLLADA files.
  • Sample o3dConverter and sample deserializer now separate skinned streams (POSITION, NORMAL, ...) from non-skinned streams (COLOR, TEXCOORD).
  • Sample o3dConverter will by default mark any primitive with no normals to request a constant shader. This fixes the issue with SketchUp models getting an error about a missing NORMAL stream.

See you at SIGGRAPH 09 in New Orleans!

Friday, July 31, 2009 | 12:00 PM

SIGGRAPH 2009 is starting on Monday in New Orleans, LA. Google will have a booth in the exhibition hall showcasing its latest graphics technologies including O3D. I'll be hanging out at the O3D demo station in the booth where I'll be running live demos off the web and be around to answer questions. The exhibition is open from Tuesday through Thursday of the conference. So if you're attending the conference, stop by and say hi -- Google is right in the middle between Halls F and G.

While you're visiting the Google booth, you can also check out our other demos. We'll be showing Click2Go, which uses a 3D-aware cursor to let you navigate more intelligently in Street View. You can play with SketchUp to make a model and then bring it into O3D. And you can feel like you're flying at the Google Earth demo, which we'll be showing on a 56" 8-megapixel flat panel.

I'll also be doing a demo at the Blender booth on Wednesday afternoon from 3-4pm showing how you can bring models from Blender and other programs into O3D.

I'm pretty excited about the conference this year. The lineup of speakers, talks and courses looks fantastic. I'm looking forward to some of the real-time graphics talks and game papers, in particular. And of course, I'm also looking forward to getting some authentic Creole food while I'm in town!

Improving O3D's hardware compatibility

Wednesday, July 1, 2009 | 6:54 PM

We stated before that one of the goals for O3D is to have no caps bits. That means you don't have to make your code check whether the user's machine has feature X: you can write your application and assume it will run on any machine. We also wanted to select a feature set that we thought would be a good trade-off between allowing high-end real-time 3D graphics and running on the majority of machines currently in use without being too slow.

Unfortunately, there are a few extremely popular GPU chipsets out there, like the Intel GMA 950 for example, that are missing a couple of the base features we felt were important. Without those features, certain common effects, such as shadows based on floating point textures, would not be possible. We could have just used the software renderer on machines without those features, but the software renderer is not as fast as GPU hardware, so we came up with what we think is a reasonable solution.

When your application starts up O3D, it can tell O3D what features the application requires. If the user's system supports those features then O3D will use them. If the user's machine does not support those features O3D will fall back to a software renderer. If you do not explicitly request those features then O3D will not let you use them.

We chose this solution because it provides the ability for a much wider range of applications to use GPU hardware than before. For example, only 3 of our samples required additional features to be available, which means most of the samples will run with GPU hardware acceleration even on low-end GPU hardware.

The specific features you can request are:

  1. Floating Point Textures.
  2. Geometry with more than 65534 vertices.

If you are using our sample utility libraries, the second argument to o3djs.util.makeClients is now a comma-separated list of features you want. For example:
o3djs.util.makeClients(initStep2, 'FloatingPointTextures');

will request floating point texture support for your application.
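
If an application needs more than one of these features, the same argument takes a comma-separated list. Here is a minimal sketch only: we are assuming the large-geometry feature is identified by the string 'LargeGeometry', so check the documentation for the exact name.

// Request both optional features at startup ('LargeGeometry' is our assumed
// identifier for the >65534-vertex feature). If either feature is missing on
// the user's machine, O3D falls back to the software renderer.
o3djs.util.makeClients(initStep2, 'FloatingPointTextures,LargeGeometry');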

If you dig into our samples you'll notice this is only used in 3 of our samples so far.
  1. generate-textures.html uses floating point textures.
  2. vertex-shader.html uses geometry with more than 65534 vertices.
  3. The beach demo uses geometry with more than 65534 vertices.

For those last 2 samples, we could have avoided requesting those features if we wanted to. For example, in the case of vertex-shader.html we could just slightly lower the resolution of the plane that it animates. For the beach demo we could split any models with more than 65534 vertices in half and draw the 2 halves separately. This shows that many applications do not need those features, or can be refactored to not need them, and so a very large percentage of O3D applications can run using hardware-accelerated graphics. Higher-end applications that need those features can request them and they'll still run everywhere; applications that don't will be able to use hardware acceleration on a much larger set of computers.

One question that is likely to come up is, "Could this solution be used to add really high-end features like Shader Model 4.0?" The current answer is unfortunately "no." The reason is that if the user's machine doesn't have those features, O3D falls back to a software renderer, and unfortunately we don't have access to a software renderer that could draw Shader Model 4.0 features at a reasonable speed.

We hope you'll agree that getting hardware acceleration on as many machines as possible is as awesome as we think it is. This change helps O3D run its best on a much larger set of computers.