
Technical Academy Award 1996

Ed Catmull, Tom Duff, Tom Porter, and I received a technical Academy Award on March 2, 1996, in Beverly Hills from the Academy of Motion Picture Arts and Sciences (AMPAS) for "pioneering inventions in Digital Image Compositing". This is essentially for the invention of the alpha channel, as explained below. [Ceremony hosted by Richard Dreyfuss.] The text of the award is also below.

[Photos: Academy Award ceremony, 1996 - Tom Duff, Ed Catmull, Tom Porter]
Academy of Motion Picture Arts & Sciences

Scientific and Engineering Award

To Alvy Ray Smith, Ed Catmull, Thomas Porter and Tom Duff for their Pioneering Inventions in Digital Image Compositing.
The Alpha Channel

A Simple Concept with Profound Implications

Alvy Ray Smith

January 2, 1996

The notion of an alpha channel carrying the opacities of a digital image is so ordinary now in computer graphics that it may surprise you to discover that it had to be invented. The idea, however, is much deeper than even its inventors understood: Not only does the alpha channel solve the original problem of image compositing but it frees us from the tyranny of the rectangular image and paves the way for the digital convergence of the two main branches of computer picturing, the sampling theory and geometry branches.

The notion of compositing two images together is an old one. Early digital compositors first simply mimicked the techniques developed in the film industry that preceded them. Thus, a digital matte was created to composite two digital images in direct emulation of old film techniques. This was not the invention of the alpha channel. That event occurred when the separate matte entity was obviated by inclusion of opacity information in the image itself. I shall refer to this first half of the concept as the integral alpha.

Ed Catmull and I invented the integral alpha channel in late 1977 at the New York Institute of Technology. I remember the moment very clearly. Ed was working on his sub-pixel hidden surface algorithm for a SIGGRAPH paper submission [1]. He was rendering images over different backgrounds using the new technique. I was working with him since I knew where all the interesting background images lay in our file system. At the time, NYIT had six of the rare 8-bit framebuffers. I would position a background in three of them (equivalently, one RGB framebuffer), over which he would render a foreground element. A different background required a new rendering, a very slow process then.

Ed mentioned that life would certainly be easier if, instead of re-rendering the same image over different backgrounds, he rendered the opacity information along with the color information, once at each pixel, into a file. Then the image could be composited over different backgrounds, without re-rendering, as it was read pixel-by-pixel from the file. I immediately said that this would be extremely easy to accomplish, confident because I had written our image file saving and restoring programs. Versions for saving and restoring 8-bit and 24-bit images existed, and I knew exactly how to extend the code to 32-bit images. I started right then and by the next morning had the full package, complete with Unix manual pages using "alpha" and "RGBA" terminology, ready for use. All Ed had to do was write the alpha information into a fourth framebuffer. We called the channel alpha because composition uses the classic linear interpolation formula αA + (1 − α)B, where α controls the amount of interpolation between, in this case, two images A and B. I would save the four framebuffers (equivalently, one RGBA framebuffer) into a file with the new code, savpa4. Then Ed or I or anybody could use the newly revised restore routine, getpa, to composite the file image over an arbitrary image already in the framebuffers. getpa could detect a fourth channel in a file image and use it to composite as the image was read from the file. That was it. The integral alpha channel had been born. The "or anybody" is a key phrase: The integral alpha channel severs the image synthesis step from the compositing step.
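The compositing step that getpa performed can be sketched in modern terms: the fourth (alpha) channel of the foreground controls linear interpolation against whatever background is already present. This is an illustrative sketch, not the original code; the names (composite_over, the tuple image layout) are my assumptions.

```python
def composite_over(fg, bg):
    """Composite a non-premultiplied RGBA foreground over an RGB background.

    fg: rows of (r, g, b, a) tuples, each channel in 0..255.
    bg: rows of (r, g, b) tuples.
    Returns a new RGB image computed as alpha*A + (1 - alpha)*B per channel.
    """
    out = []
    for fg_row, bg_row in zip(fg, bg):
        row = []
        for (r, g, b, a), (br, bgr, bb) in zip(fg_row, bg_row):
            alpha = a / 255.0  # opacity of the foreground at this pixel
            row.append((
                round(alpha * r + (1 - alpha) * br),
                round(alpha * g + (1 - alpha) * bgr),
                round(alpha * b + (1 - alpha) * bb),
            ))
        out.append(row)
    return out

# A 1x2 foreground: an opaque red pixel and a half-transparent red pixel,
# composited over a solid blue background.
fg = [[(255, 0, 0, 255), (255, 0, 0, 128)]]
bg = [[(0, 0, 255), (0, 0, 255)]]
print(composite_over(fg, bg))  # opaque pixel stays red; the other blends with blue
```

The point of the anecdote survives in the sketch: the renderer writes alpha once, and any background can be substituted afterward without re-rendering.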

In film terms, the alpha channel is exactly the matte needed to composite one image with another. It inherently supports partial transparencies. Given a matte with subtle transparencies, the integral alpha approach stores this matte in the fourth channel of the foreground image for which it serves to define the shape. There is no "second strip of film" for the matte. In other words, the digital contribution here is not simply a simulation of the classic film techniques; it is something new: a single concept incorporating both color and opacity. The matte ceases to exist conceptually. An image partially exists at a point depending on itself alone, namely its alpha channel.

To emphasize, notice that we could think of an image as four separate entities: red, green, blue, and opacity ones. We have gained great conceptual ease by combining the red, green, and blue entities into a single one called color. The alpha channel carries this one step further, reducing the mental load by including opacity in the colored entity.

The notion of the integral alpha led to the second one, premultiplied alpha. In 1984, our Lucasfilm colleagues Tom Porter and Tom Duff first drew the distinction between premultiplied and non-premultiplied alpha and showed the relative benefits of premultiplication [2]: Math performed on premultiplied RGB channels is identical to that performed on the alpha channel, consolidating the notion of a single entity. My Microsoft Tech Memo 4 shows that not using premultiplied alpha leads to an inconsistency in compositing. For more benefits see [3].
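One way to see the inconsistency of non-premultiplied alpha is to average two adjacent pixels, as any downsampling or filtering operation does. The example below is my illustration of the general point, not taken from Tech Memo 4: a fully transparent pixel can still carry stale color in non-premultiplied form, and that invisible color leaks into the average.

```python
def average(p, q):
    """Channel-wise average of two RGBA pixels (floats in 0..1)."""
    return tuple((a + b) / 2 for a, b in zip(p, q))

def premultiply(p):
    """Multiply the color channels by alpha."""
    r, g, b, a = p
    return (r * a, g * a, b * a, a)

opaque_red  = (1.0, 0.0, 0.0, 1.0)
clear_green = (0.0, 1.0, 0.0, 0.0)   # invisible, but carries stale green

# Non-premultiplied: the invisible green leaks into the result.
print(average(opaque_red, clear_green))          # (0.5, 0.5, 0.0, 0.5)

# Premultiplied: the transparent pixel's color is forced to 0 first,
# so the result is simply half-coverage red.
print(average(premultiply(opaque_red), premultiply(clear_green)))
# (0.5, 0.0, 0.0, 0.5)
```

In premultiplied form the transparent pixel is all zeros, so color and alpha can be run through the same arithmetic with a consistent answer.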

Recall the compositing formula αA + (1 − α)B. Notice that if A is premultiplied by α - that is, if its color channels are multiplied by α in advance - then three multiplies are removed at each pixel (one for each color channel). Porter and Duff observed that a great many multiplies could thus be avoided at compositing time by prefiguring them into the image - as it was rendered, say - to form an image with so-called premultiplied alpha. At the time, multiplies were expensive, so this was a very large time saver.
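The saving can be made concrete at a single pixel. This is a hedged sketch in my own notation (channels as floats in 0..1, illustrative function names), contrasting the two forms of "over":

```python
def over_straight(fg, bg):
    """fg is (r, g, b, a) with straight (non-premultiplied) color."""
    r, g, b, a = fg
    br, bgr, bb = bg
    # Six multiplies: a * channel for the foreground plus
    # (1 - a) * channel for the background.
    return (a * r + (1 - a) * br,
            a * g + (1 - a) * bgr,
            a * b + (1 - a) * bb)

def over_premult(fg, bg):
    """fg is (r, g, b, a) with color already premultiplied by a."""
    r, g, b, a = fg
    br, bgr, bb = bg
    # The a * channel products were prefigured at render time, so only
    # the three background multiplies remain.
    return (r + (1 - a) * br,
            g + (1 - a) * bgr,
            b + (1 - a) * bb)

straight   = (1.0, 0.0, 0.0, 0.5)   # half-transparent red, straight alpha
premult    = (0.5, 0.0, 0.0, 0.5)   # the same pixel, premultiplied
background = (0.0, 0.0, 1.0)        # solid blue

print(over_straight(straight, background))  # (0.5, 0.0, 0.5)
print(over_premult(premult, background))    # (0.5, 0.0, 0.5)
```

Both paths produce the same composite; the premultiplied path simply does three multiplies per pixel instead of six.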

But premultiplied alpha is more than efficient. It is as conceptually fundamental as the integral alpha. To see this, notice that the color channels of a completely transparent pixel must be 0: Premultiplication by 0 (for transparent) must result in 0 colors. Once you have 0 colors and 0 alpha, any information about non-0 color that might have existed previously is lost. Thus, for all practical purposes, a transparent pixel ceases to exist conceptually (and doesn't even have to be stored). This is profound because suddenly images generalize from rectangular opaque entities to shaped ones with partial transparencies. There is no longer the notion of a "traveling" matte - a separate shape descriptor somehow synchronized with the thing being given shape. The shape is integral to the image and, of course, moves with it. In fact, shape can be defined to be the non-0 parts of the alpha channel of an image.
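The claim that shape is simply the non-0 part of the alpha channel can be shown directly. In this small illustration (names and layout are my assumptions), a premultiplied sprite's shape falls out of the data itself, since its transparent pixels are all zeros:

```python
def sprite_shape(image):
    """Return the set of (x, y) positions where a premultiplied RGBA
    image exists at all, i.e. where alpha is non-zero."""
    shape = set()
    for y, row in enumerate(image):
        for x, (r, g, b, a) in enumerate(row):
            if a > 0:
                shape.add((x, y))
    return shape

# A 3x2 "sprite": only two pixels exist; the rest are transparent zeros
# and, in a sparse representation, need not be stored at all.
img = [
    [(0, 0, 0, 0), (128, 0, 0, 128), (0, 0, 0, 0)],
    [(0, 0, 0, 0), (255, 255, 255, 255), (0, 0, 0, 0)],
]
print(sorted(sprite_shape(img)))  # [(1, 0), (1, 1)]
```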

Nearly all modern digital filmmaking utilizes the alpha concept, including all of Disney's blockbusters (e.g., Beauty and the Beast, Aladdin, The Lion King), Pixar's new movie Toy Story, and dozens of special-effects movies such as Terminator 2, Jurassic Park, and Jumanji. This should not be confused with the so-called blue-screen process, which is a technique for creating a matte; the alpha channel is a technique for using a matte that already exists.

The image objects that result from the integral and premultiplied alpha concepts are what we now call sprites, a generalization of the old concept of sprite from the early days of personal computers. The important point is that once you have used alpha to free an image sprite from a rectangular container (as typically implied by terms like "image" or "layer"), you have a programming object that is like any other picture object, in particular like 2D or 3D geometrical objects from traditional computer synthesis. This puts pixels and geometry on the same conceptual footing and gives us a workable model for convergence of the two picturing domains of sampling theory and geometry.

[1] Catmull, Edwin, A Hidden-Surface Algorithm with Anti-Aliasing, Computer Graphics, 12:6-11, July 1978.

[2] Porter, Thomas, and Duff, Tom, Compositing Digital Images, Computer Graphics, 18:253-259, July 1984.

[3] Blinn, James F., Compositing, Part I: Theory, IEEE Computer Graphics & Applications, 83-87, September 1994.