[RANT] Computer Graphics and Alpha Blending

Right now, as of Thursday 1 AM, I am working with OpenGL, and I am so annoyed, so frustrated, by a basic feature that is unreasonably missing from GPUs: proper alpha testing. And please don't mention glAlphaFunc(), because that's deprecated, practically useless now, and not exactly what I am talking about anyway.


Switching shaders too often is a problem all by itself, and so are uber shaders, which may seem to reduce the problem but probably still carry much the same performance penalty. Solution? Simple! We group together the meshes that use each particular shader, bind each shader only once, and render the groups one after another. This way we reduce shader switches to a minimum! Perfect. But then, if you sort meshes by the shader they use, how will you make sure the ones closer to the screen are rendered last? The painter's algorithm requires you to sort objects by their z-positions before issuing the draw calls, but the objects are already sorted by the shaders they use. The z-buffer is a built-in GPU structure that solves exactly this problem!
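To make the grouping idea concrete, here is a minimal CPU-side sketch of sorting draw calls by shader so each program is bound only once per frame. The `Draw` struct, the integer shader/mesh ids, and `renderSortedByShader` are all hypothetical names for illustration; the real calls (`glUseProgram`, the draw call itself) are only indicated in comments.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical draw item: which shader program a mesh uses.
struct Draw {
    int shader;   // shader program id (illustrative)
    int mesh;     // mesh id (illustrative)
};

// Sort draws by shader so each program is bound only once.
// Returns how many shader "binds" would be issued, for illustration.
int renderSortedByShader(std::vector<Draw>& draws) {
    std::sort(draws.begin(), draws.end(),
              [](const Draw& a, const Draw& b) { return a.shader < b.shader; });
    int binds = 0, current = -1;
    for (const Draw& d : draws) {
        if (d.shader != current) {   // only switch when the program changes
            current = d.shader;      // here you would call glUseProgram(...)
            ++binds;
        }
        // here you would issue the draw call for d.mesh
    }
    return binds;
}
```

With four draws alternating between two shaders, the sorted version issues only two binds instead of four.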


The way the z-buffer works is pretty smart. Every time you render a fragment (a pixel, almost) to the screen, a test referred to as the depth test is performed: the depth of the incoming fragment (how close it is to the camera) is compared against the depth already stored at the same screen coordinates. If the incoming fragment is closer (by default, its depth value is smaller), it takes over the spot as the new nearest pixel; if it isn't, then not your lucky day, fragment :\. But yes, you no longer need to sort your meshes! In fact, the z-buffer even handles the case where two meshes intersect, which no amount of per-object sorting can.
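A toy single-pixel simulation of the default GL_LESS depth test, just to pin down the comparison direction (the `Pixel` struct and `writeFragment` are made-up names, not GL API):

```cpp
// Toy one-pixel z-buffer simulating the default GL_LESS depth test.
struct Pixel {
    float depth = 1.0f;   // cleared to the far plane
    int   color = 0;      // 0 = clear color (illustrative)
};

// Returns true if the incoming fragment passes the depth test and is written.
bool writeFragment(Pixel& px, float fragDepth, int fragColor) {
    if (fragDepth < px.depth) {   // GL_LESS: smaller depth = closer = wins
        px.depth = fragDepth;
        px.color = fragColor;
        return true;
    }
    return false;                 // occluded: fragment is thrown away
}
```

Render order stops mattering: a farther fragment arriving after a closer one simply fails the test.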


And here comes the greatest letdown: the depth buffer does NOT care about the opacity of a fragment. This means that, unless you explicitly discard it, a fragment with an alpha of 0, i.e. completely transparent, will still win the depth test and hide a visible fragment behind it. Now, as I said, you can discard a fragment if its alpha is 0, but that doesn't help when the closer fragment is translucent. You'd expect it to blend according to the blend function you set with glBlendFunc(), but unless your blend function is commutative, the result depends on the order in which the fragments arrive, and the depth test gives you no control over that. That is to say, the depth test never looks at alpha to decide which fragment should be the source and which the destination. BUT WHY!? Whoever is responsible for this? So it's the painter's algorithm all over again. Should I sacrifice proper blending to handle the case where meshes intersect? Should I go for proper blending and screw intersecting meshes? Or do I go with the half-baked 'ideal' method and use a mix: the depth buffer to render opaque objects first, grouped by shader, and then sort the translucent ones and suffer the additional performance penalties?

A better question is, how can the lack of such a basic feature be justified?



