Intrepid Homoludens

Ha ha! QuakeIII raytraced movie


dm2.jpg

Raytracing is usually considered too inefficient for real-time rendering because of the sheer number of rays that have to be traced at any given moment. Recently, Saarland University, in cooperation with Erlangen University, developed the world's first completely ray-traced 3D shooter, and luckily for id Software, the Quake 3: Arena demo was chosen for the project.

Quake III raytraced movie

Cute, a little eye candy indulgence for us.


Kudos to them, although I don't know what they were trying to prove. To my knowledge realtime raytracing has always been possible (in fact I had several demos a few years ago), but there's very little incentive to use it because you can't take advantage of the 3d accelerator and you can achieve pretty much any effect with regular scanline rendering.


Hmm. I'll have to ask some experts I know, but I think Quake 3 already uses some form of raytracing to generate the lightmaps; that's done at map compile time, though, not in realtime :)


Now they just need to use this for the next realtime Myst game and, hopefully due to some complex logical jibba-jabba and a bit of luck, Jim will suddenly cease to exist.

Now they just need to use this for the next realtime Myst game and, hopefully due to some complex logical jibba-jabba and a bit of luck, Jim will suddenly cease to exist.
Term "jibba-jabba" should be used more often.

Now they just need to use this for the next realtime Myst game and, hopefully due to some complex logical jibba-jabba and a bit of luck, Jim will suddenly cease to exist.

I don't get it.


If I'm not mistaken, the biggest complaint Jim (over on the AG forums) has with the realtime Myst games is that they can't use the beautiful, perfect raytraced graphics of the pre-rendered titles, so in a way the realtime Myst games aren't really Myst games at all.

Term "jibba-jabba" should be used more often.

I pity the fool who doesn't use it on a regular basis!


Jibba-jabba should only be used in conjunction with "fool" and "quit yo'".

If I'm not mistaken, the biggest complaint Jim (over on the AG forums) has with the realtime Myst games is that they can't use the beautiful, perfect raytraced graphics of the pre-rendered titles, so in a way the realtime Myst games aren't really Myst games at all.

Oh, I get it now.

Jibba-jabba should only be used in conjunction with "fool" and "quit yo'".

I disagree. Jibba-jabba has enough merit in and of itself; "fool" and "quit yo'" only box it in and banalize its splendidness.


This movie is awesome. I've got it here somewhere on a DVD.

In fact it's the first game-related effort at using real raytracing in games.

I mean: correct shadows on a bunch of characters and correct mirrored mirror images (the reflection of one mirror in another mirror, which reflects back into the first mirror, ...) without losing any performance! And all this was done by two guys, afaik.

Awesome!

... but there's very little incentive to use it because you can't take advantage of the 3d accelerator and you can achieve pretty much any effect with regular scanline rendering.

Wrong. In particular, it's hard to emulate wet surfaces, mirrors, and refraction. The way to emulate mirroring in a scanline engine is to make a surface transparent and copy the entire environment behind it, so that it is literally a mirror (which consumes twice the CPU usage the scene would have normally, because the scene has a duplicate acting as its mirror). To my knowledge, the only refraction is accomplished by pinching (or the opposite of pinching) a sector of the rendering in 2D. Wet/shiny surfaces are emulated by making them reflect more light (but they still don't mirror anything). All of these methods are inferior to real raytracing.

The reason they can raytrace in realtime, while most other raytracers take many hours or days, is that they raytrace only selectively, combining it with scanline rendering and limiting their raytraced light sources. Blender3d does this, actually, and if it were refined to be faster we could put it in the Blender3d game engine.

You'll notice that the mirrors they have in that video are perfect (they have no fresnel, i.e. no blending of the mirrored image with the object's own surface, like on a shiny car). Fresnel consumes a lot of CPU usage, and therefore they can't do that. The realtime raytracers are still flawed in a way, but better than scanline alone.


shbazjinkens:

The better way to emulate a mirroring effect in a scanline renderer is to mirror the camera about the reflective surface's plane (using the surface's normal), render the environment from that mirrored camera's view, and then project the result onto the surface from the original camera.

So it does not consume extra memory, although it takes an extra pass (and possibly a few clipping operations). But it's still a lot more efficient than raytracing. In fact, with a non-realtime scanline renderer it's more memory efficient, because you don't need to cache the entire scene for trace calls, and the result is indistinguishable from raytracing. An added bonus is that you can preprocess the reflection map before you project it, so blurred reflections, distorted reflections, etc. are all easy.

(With raytracing, however, you need multisampling a la Monte Carlo, so unless you can write a really good biased ray-distribution function, you end up with days of rendering time.)

The only thing scanline can't do is self-reflection.
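
Very roughly, the mirrored-camera math is just the following sketch (made-up vec3 helper names, not anyone's actual engine code): reflect the camera position and its view direction across the mirror plane, render the scene from that reflected camera into a texture, then project that texture onto the mirror surface.

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3  sub3(vec3 a, vec3 b) { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static vec3  scale3(vec3 v, float s) { vec3 r = { v.x*s, v.y*s, v.z*s }; return r; }

/* Reflect the camera position across the mirror plane (plane point p0, unit normal n). */
vec3 reflect_point(vec3 p, vec3 p0, vec3 n)
{
    float d = dot3(sub3(p, p0), n);       /* signed distance from p to the plane */
    return sub3(p, scale3(n, 2.0f * d));  /* move twice that distance through the plane */
}

/* Reflect the camera's view direction about the plane normal. */
vec3 reflect_dir(vec3 v, vec3 n)
{
    return sub3(v, scale3(n, 2.0f * dot3(v, n)));
}

The scene rendered from (reflect_point, reflect_dir) into a texture and projected back onto the mirror is exactly what the original camera would see in the reflection.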

Wet/shiny surfaces are emulated by making them reflect more light?? What are you talking about? Are you familiar with bidirectional reflectance distribution functions? They're used in both raytracing and scanline rendering models. In fact, under the same lighting setup, a scanline renderer should end up with the exact same image as the raytracer. All you need to calculate physically accurate specularity are:

1 surface normal

2 light direction vector

3 (and shadow map, if you want to be as accurate as possible)

Not only is it easy as hell to do in scanline, it's being done in today's games, now that pixel shaders are becoming more and more common.
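
A bare-bones, Blinn-Phong-style sketch of that specular term (the viewer direction is also needed in practice, and the spec_map parameter is just my own illustration of a per-texel specular map; this reuses the vec3 helpers from the sketch above):

#include <math.h>

static vec3 add3(vec3 a, vec3 b) { vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z }; return r; }
static vec3 normalize3(vec3 v)   { return scale3(v, 1.0f / sqrtf(dot3(v, v))); }

/* n = surface normal, l = direction to the light, v = direction to the viewer
   (all normalized); spec_map = per-texel intensity read from a specular map. */
float specular(vec3 n, vec3 l, vec3 v, float shininess, float spec_map)
{
    vec3  h     = normalize3(add3(l, v));      /* half vector between light and view */
    float ndoth = dot3(n, h);
    if (ndoth < 0.0f) ndoth = 0.0f;            /* ignore back-facing contributions */
    return spec_map * powf(ndoth, shininess);  /* higher shininess = tighter highlight */
}

Multiply the result by the shadow-map term and the light colour and you have the specular part of the image; the same few lines run per-pixel in a shader or per-sample in an offline renderer.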

And I think you have the definition of Fresnel wrong. Fresnel refers to how much light gets absorbed into a surface and how much of it gets reflected, based on the SURFACE NORMAL relative to the viewing direction. So, for example, your car seems to reflect more light when viewed at a grazing angle to its surface, but not so much when you're looking at it directly. And no, the fresnel effect does not consume a lot of CPU usage. I've seen it done in real time many times. Head to ATI's or Nvidia's developer corner and take a look at some demos.

As for refraction, well, like you said there's no way to do it in scanline. But there are many good ways to fake it.

The better way to emulate a mirroring effect in a scanline renderer is to mirror the camera about the reflective surface's plane (using the surface's normal), render the environment from that mirrored camera's view, and then project the result onto the surface from the original camera. So it does not consume extra memory, although it takes an extra pass (and possibly a few clipping operations). But it's still a lot more efficient than raytracing. In fact, with a non-realtime scanline renderer it's more memory efficient, because you don't need to cache the entire scene for trace calls, and the result is indistinguishable from raytracing. An added bonus is that you can preprocess the reflection map before you project it, so blurred reflections, distorted reflections, etc. are all easy. (With raytracing, however, you need multisampling a la Monte Carlo, so unless you can write a really good biased ray-distribution function, you end up with days of rendering time.) The only thing scanline can't do is self-reflection.

I didn't refer to it taking extra memory, but extra CPU usage. Taking an extra pass cuts your FPS in half, unless you have it selectively render only the small area that is a mirror. Doing this is also effectively the same as flipping the scene, except it is calculated in two passes instead of one (unless it selectively renders only the mirrored area). Self-reflection is also a nice thing to have IMO, but not necessary. Also, I wasn't claiming that raytracing is faster (even a novice knows that); I think it is better than scanline, and I don't know how you can disagree with that.

Wet/shiny surfaces are emulated by making them reflect more light?? What are you talking about?

Specularity maps; you seem pretty knowledgeable, so I'm sure you know what those are. By making a spec map you can make a liquid shiny and dull in certain portions, to give the appearance of the uneven surface that water has. Best when animated. It works along with a normal map to give a better appearance. The downside is that it only reflects the lighting system, to give the appearance that it is reflective to everything (because it is shinier than everything else), but it doesn't bend light or mirror other objects. You go on to explain what I'm talking about, but it's still lacking mirroring. By that I mean both self-mirroring (for walls and such) and mirroring that works with the bumpmaps and spec maps. I can think of ways around this, but it's always more of a hassle for the game makers than if you just had raytracing.

Are you familiar with bidirectional reflectance distribution functions? They're used in both raytracing and scanline rendering models. In fact, under the same lighting setup, a scanline renderer should end up with the exact same image as the raytracer. All you need to calculate physically accurate specularity are:

1 surface normal

2 light direction vector

3 (and shadow map, if you want to be as accurate as possible)

Not only is it easy as hell to do in scanline, it's being done in today's games, now that pixel shaders are becoming more and more common.

And I think you have the definition of Fresnel wrong. Fresnel refers to how much light gets absorbed into a surface and how much of it gets reflected, based on the SURFACE NORMAL relative to the viewing direction. So, for example, your car seems to reflect more light when viewed at a grazing angle to its surface, but not so much when you're looking at it directly. And no, the fresnel effect does not consume a lot of CPU usage. I've seen it done in real time many times. Head to ATI's or Nvidia's developer corner and take a look at some demos.

I didn't describe it very well, but I know what it is. I also know that it makes my render time shoot up no matter what renderer I use; though all of the ones I use are free, I'm sure at least one of them would have implemented a faster method if it were readily available. Whatever realtime method you're referring to is probably a version of it simplified far enough that it wouldn't matter.

As for refraction, well, like you said there's no way to do it in scanline. But there are many good ways to fake it.

There are many more limitations to the good ways of faking it as well.


*Sigh* It seems clear to me that you have no real 3D programming experience. You probably picked up all you know about 3D from modelling/animation packages like Blender or 3D Studio. Not that there's anything wrong with that.

I hope you don't take my reply as a spiteful comeback - the purpose of this post is to inform, not argue.

Well, here we go.

I didn't refer to it taking extra memory, but extra CPU usage. Taking an extra pass cuts your FPS in half, unless you have it selectively render only the small area that is a mirror. Doing this is also effectively the same as flipping the scene, except it is calculated in two passes instead of one (unless it selectively renders only the mirrored area). Self-reflection is also a nice thing to have IMO, but not necessary. Also, I wasn't claiming that raytracing is faster (even a novice knows that); I think it is better than scanline, and I don't know how you can disagree with that.

An extra pass does not cut your FPS in half. It could, if you're not careful, but it typically doesn't. There are ways to limit what you render in the reflection pass. Plus, the CPU has nothing to do with it, since it's handled by the GPU.

All around, it's a much better alternative to raytracing. You think raytracing is better than scanline probably because you are not aware of what scanline renderers are capable of. Take Pixar's animated movies, for example. They're 99% scanline rendered, and raytracing was only used for a select few scenes that needed accurate refraction and reflection.

Raytracing has its uses, but I would certainly not say that it's better than a scanline renderer.

Specularity maps; you seem pretty knowledgeable, so I'm sure you know what those are. By making a spec map you can make a liquid shiny and dull in certain portions, to give the appearance of the uneven surface that water has. Best when animated. It works along with a normal map to give a better appearance. The downside is that it only reflects the lighting system, to give the appearance that it is reflective to everything (because it is shinier than everything else), but it doesn't bend light or mirror other objects. You go on to explain what I'm talking about, but it's still lacking mirroring. By that I mean both self-mirroring (for walls and such) and mirroring that works with the bumpmaps and spec maps. I can think of ways around this, but it's always more of a hassle for the game makers than if you just had raytracing.

No, specular mapping, combined with normal mapping AND realistically distorted reflections, etc., is all possible in realtime scanline renderers, and it's no more difficult than adding these features to a raytracer (although the raytracer wouldn't run in realtime). It's been possible since the days of pixel shaders. Heck, you can even add fake refraction underneath; look at Half-Life 2's water and stained-glass effects. The only reason you don't see it a lot in games is because most game developers are lazy idiots.

Or too concerned about performance.
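
For what it's worth, that fake refraction usually boils down to something like this sketch (illustrative names only, not Valve's actual shader): offset the screen-space texture coordinate by the surface normal from the normal map before sampling the scene behind the surface.

typedef struct { float u, v; } uv2;

/* screen_uv: where this pixel would normally sample the scene texture;
   normal_x/normal_y: tangent-space x/y of the normal-map sample;
   strength: how far, in texture space, the image gets pushed around. */
uv2 fake_refraction_uv(uv2 screen_uv, float normal_x, float normal_y, float strength)
{
    uv2 r;
    r.u = screen_uv.u + normal_x * strength;  /* small sideways shift */
    r.v = screen_uv.v + normal_y * strength;  /* gives the wobbly "refracted" look */
    return r;
}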

I didn't describe it very well, but I know what it is. I also know that it makes my render time shoot up no matter what renderer I use; though all of the ones I use are free, I'm sure at least one of them would have implemented a faster method if it were readily available. Whatever realtime method you're referring to is probably a version of it simplified far enough that it wouldn't matter.

Well, no, I don't think you know what it is. Your original post implies that any mixture of surface colour and reflected image is Fresnel. It's not. Fresnel is not a function, but rather an idea, a concept that the mixture of reflection and colour is NOT uniform across objects that have multiple layers (to use the car example again, its surface would consist of 3 layers: metal, paint, and wax coating). Actually, even that's not a 100% accurate description... the fact that the word "reflection" is included in my description is merely a side effect. It has more to do with how light rays behave at the boundary between two media with different refractive indices.

It does NOT slow down rendering. If it does, then that renderer has some serious flaws. It uses the same simplified mathematics whether it's realtime or not. In fact, here's a line of code that works:

colour = reflection - abs(dot(n, i)) * (reflection - surface_colour); // blends from pure reflection at grazing angles to the surface colour head-on

where n and i are the normalized normal and incident vectors. See? Only one line of code. If it slows down your renderer, then your renderer needs serious help. Of course, if you want to control the falloff curve, then you need to introduce a new variable into the equation, but that shouldn't slow the renderer down at all.
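
If it helps, here's one rough way that falloff variable could be folded in, per channel (the "falloff" exponent is just an illustrative name, not a standard term):

#include <math.h>

float fresnel_blend(float reflection, float surface_colour, float n_dot_i, float falloff)
{
    float facing = powf(fabsf(n_dot_i), falloff);  /* 1 head-on, 0 at grazing angles */
    return reflection - facing * (reflection - surface_colour);
}

With falloff = 1 it reduces to the one-liner above; larger values confine the surface colour to the parts facing the viewer and let the reflection take over more quickly toward grazing angles.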

I hope you find it all informative.


I'm going basically on what I've done and what I've read, but my arguments against water reflections are unfounded. I did some searching through the articles on this site, and one example (http://www.idlethumbs.net/screenshot.php?id=73&article=23) clearly shows that I'm wrong. EDIT: Previously I'd only seen stuff like this: http://www.idlethumbs.net/screenshot.php?id=79&article=23 which does not have a very good water surface. I think I should be browsing more game sites rather than just the game art forum I frequent (cgtalk.com), so that I'll see the work of more teams rather than individuals.

I don't have an ounce of programming experience; you're correct about that.

The only reason you don't see it a lot in games is because most game developers are lazy idiots.

This is kind of what I'm hinting at: with raytracing, a lot of this complicated stuff turns into pressing a button and adjusting a slider once it's programmed. Maybe scanline reflections are the same, though; I don't know, because I'm no programmer.

Fresnel is not a function, but rather an idea, a concept that the mixture of reflection and colour is NOT uniform across objects that have multiple layers (to use the car example again, its surface would consist of 3 layers: metal, paint, and wax coating).

I was under the impression that this kind of thing fell under sub-surface scattering or plain multi-layering. The fresnel function in Blender and Yafray will cause the reflection or refraction to show up less the higher it is turned up, and another slider causes slight distortions and modifies angular reflection. Fresnel, the man, invented the Fresnel lens, which has nothing to do with multilayered textures or fading reflections, so maybe it's become a general term?

My problem, you see, is that I haven't been spending enough time looking at game art; rather, I've been trying to create it, focusing nearly entirely on characters and animation.

Thanks for not getting too frustrated to type. :grin:


This is kind of what I'm hinting at: with raytracing, a lot of this complicated stuff turns into pressing a button and adjusting a slider once it's programmed. Maybe scanline reflections are the same, though; I don't know, because I'm no programmer.

Well, yeah, it does take more effort. In raytracing it's pretty straightforward. In scanline rendering you need to treat it as a "special case".

I was under the impression that this kind of thing fell under sub-surface scattering or plain multi-layering. The fresnel function in Blender and Yafray will cause the reflection or refraction to show up less the higher it is turned up, and another slider causes slight distortions and modifies angular reflection. Fresnel, the man, invented the Fresnel lens, which has nothing to do with multilayered textures or fading reflections, so maybe it's become a general term?

Well, actually, I don't know who Fresnel was. The term is generally used to describe what I described, but damned if I know its origin.

Subsurface scattering is related, but not the same. Subsurface scattering is more associated with how the luminance of the overall object is affected by internal light scatter (for translucent objects like human skin or milk), and fresnel is more commonly associated with reflection. It's funny that we have all these different theories with different names that describe the behaviour of the same entity (light rays).

:)

It's funny that we have all these different theories with different names that describe the behaviour of the same entity (light rays).

Well, you're describing it from a programming perspective, whereas I just use it and look at it. It's kind of like asking a mechanical engineer how a car works, compared with asking a 5-year-old.
