Computer Graphics (CG) Memos

Image Compositing Fundamentals

A Sprite Theory of Image Computing

Alpha and the History of Digital Compositing

Digital Paint Systems: Historical Overview

Should Alpha Be Nonlinear If RGB Is?

Eigenpolygon Decomposition of Polygons

Digital Filtering Tutorial for Computer Graphics

Digital Filtering Tutorial, Part II

Families of Local Matrix Splines

The Making of Andre & Wally B.

Texas (TEXture Applying System)

Incremental Rendering of Textures in Perspective

Image Compositing Fundamentals

Tech Memo 4, Aug 15, 1995

Abstract. This is a short introduction to the efficient calculation of image
compositions. Some of the techniques shown here are not well known, and should
be. In particular, we explain the difference between premultiplied and
non-premultiplied alpha. These two related notions are often confused, or not
even understood. We show that premultiplied alpha is more efficient, yields
more elegant formulas, and occurs commonly in practice. We show that the
non-premultiplied alpha formulation is not closed under over, the fundamental
image compositing operator - as usually defined. Most importantly, the notion
of premultiplied
alpha leads directly to the notion of *image object*, or *sprite* - a
shaped image with partial transparencies.
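The distinction can be made concrete with a small sketch (the notation and function names here are mine, not the memo's). Pixels are (r, g, b, a) tuples in [0, 1]; in the premultiplied case the over operator is one uniform formula across all four channels, while the non-premultiplied case needs a per-pixel divide that fails when the composite alpha is zero:

```python
def over_premul(fg, bg):
    """Porter-Duff 'over' for premultiplied (r, g, b, a) pixels: one
    uniform formula covers the color channels and alpha alike."""
    af = fg[3]
    return tuple(f + (1.0 - af) * b for f, b in zip(fg, bg))

def over_unpremul(fg, bg):
    """'over' for non-premultiplied pixels: the result color requires a
    divide by the composite alpha and is undefined when that alpha is
    zero -- the sense in which this representation is not closed."""
    rf, gf, bf, af = fg
    rb, gb, bb, ab = bg
    ao = af + (1.0 - af) * ab
    if ao == 0.0:
        raise ValueError("color undefined where composite alpha is 0")
    def blend(f, b):
        return (af * f + (1.0 - af) * ab * b) / ao
    return (blend(rf, rb), blend(gf, gb), blend(bf, bb), ao)
```

Here the premultiplied foreground (0.25, 0.125, 0.0, 0.5) is just the non-premultiplied (0.5, 0.25, 0.0, 0.5) with its color scaled by alpha; both routes give the same composite over an opaque background, but the premultiplied route has no divide and no undefined case.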

A Sprite Theory of Image Computing

Tech Memo 5, Jul 17, 1995

Abstract. This paper is an introduction to a theory of image computing. The theory, or model, underlies everything I say or think about images and imaging.

The primary idea behind the model to be presented here, the sprite theory or model of image computing, was that one was needed at all. It came from my realization, in the 1980s, that the reason the “image processing” market had not taken off was that it had no center—it was undefined. Whereas the 2D geometry market—also known as desktop publishing—was founded on the careful definition of 2D geometry embodied in the PostScript language, there was no accepted definition of 2D image computing. Every company or institution in the business had an internal idea, often vague, of a model that, of course, differed from the others. So I spent about a year at Pixar defining a language, called IceMan, to accomplish for images what PostScript did for 2D geometry. Many of these concepts, but not the IceMan language itself, I embodied in a prototype application that eventually became Altamira Composer. The concepts grew and matured in Altamira Composer, and they are those presented here.

*A Pixel is* Not *a Little Square, a Pixel is* Not *a Little Square, a Pixel is* Not *a Little Square! (And a Voxel is* Not *a Little Cube)*

Tech Memo 6, Jul 17, 1995

Abstract. My purpose here is to, once and for all, rid the world of the misconception that a pixel is a little geometric square. This is not a religious issue. This is an issue that strikes right at the root of correct image (sprite) computing and the ability to correctly integrate (converge) the discrete and the continuous. The little square model is simply incorrect. It harms. It gets in the way. If you find yourself thinking that a pixel is a little square, please read this paper. I will have succeeded if you at least understand that you are using the model and why it is permissible in your case to do so (is it?).

Everything I say about little squares and pixels in the 2D case applies equally well to little cubes and voxels in 3D. The generalization is straightforward, so I won’t mention it from here on.

I discuss why the *little square model* continues to dominate our collective
minds. I show why it is wrong in general. I show when it is appropriate to use
a little square in the context of a pixel. I propose a discrete-to-continuous
mapping—because this is where the problem arises—that always works and does
not assume too much.
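A one-dimensional sketch of the point-sample view (the names and details here are my illustration, not the memo's mapping): a discrete image becomes a continuous one by summing kernel copies weighted by the point samples. The little-square model is then just the special - and discontinuous - case of a box kernel:

```python
def box(t):
    """The 'little square' model: nearest-sample replication."""
    return 1.0 if -0.5 <= t < 0.5 else 0.0

def tent(t):
    """Linear-interpolation kernel: continuous reconstruction."""
    return max(0.0, 1.0 - abs(t))

def reconstruct(samples, x, kernel):
    """Map discrete point samples to a continuous function of x:
    a sum of kernel copies weighted by the sample values."""
    return sum(s * kernel(x - i) for i, s in enumerate(samples))
```

With samples [0, 1, 0], the tent kernel gives a continuous ramp through the samples, while the box kernel jumps abruptly from 0 to 1 halfway between them - the jaggy behavior of the little-square mental model.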

I presented some of this argument in Tech Memo 5 but have encountered a serious enough misuse of the little square model since I wrote that paper to make me believe a full frontal attack is necessary.

Alpha and the History of Digital Compositing

Tech Memo 7, Aug 15, 1995

Abstract. The history of digital image compositing - other than simple
digital implementation of known film art - is essentially the history of the
alpha channel. Distinctions are drawn between digital printing and digital
compositing, between matte creation and matte usage, and between (binary)
masking and (subtle) matting. The histories of the *integral alpha* channel
and of *premultiplied alpha* are presented, and their importance in the
development of digital compositing in its modern form is made clear.

Tech Memo 8, Aug 30, 1995

Abstract. The purpose of this memo is to distinguish between the various
meanings that digital painting may have. It is important to have a taxonomy so
that intelligent conversation may proceed on such important issues as
multi-resolution paint programs. Each type of painting will be discussed in its
multi-resolution generalization. The taxonomy here splits painting into
*discrete* and *continuous* categories, and each of these into *maxing* and
*non-maxing* subcategories.

Tech Memo 9, Sep 1, 1995

Abstract. *Gamma* is one of the most misunderstood concepts in computer
imaging applications. Part of the reason is that the term is used to mean
several different things, completely confusing the user. The purpose of this
memo is to draw distinctions between the several uses and to propose a fixed
meaning for the term: Gamma is the term used to describe the nonlinearity of
a display monitor. This is the historical first meaning of the term. A
corollary of this definition is that all other uses of the term gamma should be
dropped, or the imaging industry will continue to stumble. In particular, this
means that gamma should not be used to describe nonlinearity introduced into the
image data itself. Image data, especially images intended for reuse, should
always contain linear data, as commonly assumed by all computer graphics
algorithms.
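A minimal illustration of the proposed usage, assuming a pure power-law monitor with exponent 2.2 (a nominal CRT figure; real displays and encodings such as sRGB differ in detail):

```python
GAMMA = 2.2  # assumed display exponent; actual monitors vary

def to_linear(v):
    """Undo the display nonlinearity: display-referred -> linear light."""
    return v ** GAMMA

def to_display(v):
    """Re-encode linear light for the monitor."""
    return v ** (1.0 / GAMMA)

def average_linear(a, b):
    """Average two display-referred values correctly: convert to
    linear light, average there, then re-encode for display."""
    return to_display((to_linear(a) + to_linear(b)) / 2.0)
```

Averaging the encoded values 0 and 1 directly gives 0.5, which displays darker than the roughly 0.73 produced by averaging in linear light - exactly the kind of error that computing on nonlinear image data introduces.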

Digital Paint Systems: Historical Overview

Tech Memo 14, May 30, 1997

Introduction. The period I will cover is from the late 1960s to the early 1980s, from the beginnings of the technology of digital painting up to the first consumer products that implemented it. I include a little information about major developments in the later 1980s. Two surveys that cover this later period fairly well - when the emergence of the personal computer completely changed the software universe - were both published in the magazine Computer Graphics World. My emphasis, of course, is on those systems I knew firsthand. I begin with a simple timeline of programs and systems. I will attempt a weighting and a "genealogy" of these in a later section, where I will also narrow the field to those painting systems that have directly affected the movie industry.

Should Alpha Be Nonlinear If RGB Is?

Tech Memo 17, Dec 14, 1998

Introduction. The title question provides an excellent case study for the Fundamental Tenets of multimedia authoring. I argue that whenever a difficult problem is posed in the multimedia domain, one should fall back on the Fundamental Tenets to find the solution. They frequently serve as incisive probes into a problem. Having said this, I must admit that I failed to do so for the title question for an embarrassingly long time. It wasn’t until I remembered the Fundamental Tenets that I was able to solve the problem cleanly. The problem itself is interesting too. This is the record of a problem arising in an actual standards battle.

Eigenpolygon Decomposition of Polygons

Tech Memo 19, Oct 24, 1998

Introduction. Eigenvectors and eigenvalues play powerful roles in linear
algebra. A remarkable example of their use in geometry is presented here: Any
*n*-gon can be represented as a complex linear sum of *n* eigenvectors. Since
these eigenvectors are themselves *n*-gons, I call them *eigenpolygons*. For
example, any hexagon can be represented by a linear sum of six eigenhexagons,
or, as the title suggests, it can be decomposed into six eigenhexagons. And any
hexagon can be decomposed into a sum of the same six eigenhexagons—hence the
"eigen". The rightmost column of Figure 1 shows these characteristic, or
fundamental, hexagons. The columns of this figure, in fact, are the
eigenpolygons for triangles, quadrilaterals, pentagons, and hexagons,
respectively. Details of the eigenpolygon decomposition of 2-dimensional
polygons are presented below.
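One concrete realization of the decomposition (my sketch; the memo's own derivation should be consulted for details): treat the vertices as complex numbers, whereupon the coefficients in the eigenpolygon basis are the discrete Fourier transform of the vertex list, and the *k*-th eigen-*n*-gon steps through the *n*-th roots of unity *k* at a time:

```python
import cmath

def eigenpolygon(n, k):
    """The k-th eigen-n-gon: vertices step through the n-th roots of
    unity k at a time (k = 0 is a point, k = 1 the regular n-gon)."""
    return [cmath.exp(2j * cmath.pi * k * j / n) for j in range(n)]

def decompose(poly):
    """Coefficients of a polygon (vertices as complex numbers) in the
    eigenpolygon basis: the discrete Fourier transform of the vertices."""
    n = len(poly)
    return [sum(poly[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n)) / n
            for k in range(n)]

def recompose(coeffs):
    """The coefficient-weighted sum of eigenpolygons recovers the polygon."""
    n = len(coeffs)
    return [sum(coeffs[k] * eigenpolygon(n, k)[j] for k in range(n))
            for j in range(n)]
```

Any hexagon, however irregular, round-trips exactly through its six complex coefficients - the "same six eigenhexagons" of the abstract.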

Digital Filtering Tutorial for Computer Graphics

Tech Memo 27, 20 Nov 1981 (Revised 3 Mar 1983)

Introduction. Digital sampling and filtering in both space and time are intrinsic to computer graphics. The pixels of a framebuffer representation of a picture are regularly placed samples in 2-dimensional screen space. Inappropriate application of sampling theory (or no application at all) results in the artifact called "jaggies". The frames of a film representation of a movement are regularly placed samples in time. The artifact here is called "strobing". Spatial filtering, called "antialiasing", is used to soften the jaggies. Temporal filtering, called "motion blur", removes strobing of edges and backward spinning stagecoach wheels.

The purpose of this memo is to review the principles of digital sampling and filtering theory in the context of computer graphics. In particular, it is a rewording of the classical results in terms with which I am comfortable. Hopefully other computer graphicists will also find the restatement helpful.

We will deal here only with the spatial case. The two examples studied are
scaling a picture up (*magnification*) and scaling a picture down (*minification*).
We shall be especially concerned with what happens as magnification becomes
minification - as the scale factor passes through 1. We begin with well-known
theorems and derive the principal result: four equivalent statements of the
minification process and four equivalent statements of the magnification
process.
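The central point can be seen in one dimension (a sketch in my own terms, not the memo's derivation): minifying by point sampling aliases, while widening the filter to the new sample spacing does not:

```python
def minify_point(signal, factor):
    """Naive minification: keep every factor-th sample. Aliases."""
    return signal[::factor]

def minify_box(signal, factor):
    """Filtered minification: average factor input samples per output
    sample, i.e. a box filter widened to the new sample spacing."""
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal) - factor + 1, factor)]
```

On an alternating black-white signal, point sampling by 2 returns all zeros - pure aliasing - while the filtered version returns the correct average gray.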

Digital Filtering Tutorial, Part II

Tech Memo 44, 10 May 1982 (Revised 11 May 1983)

Introduction. This note is to be considered an extension of *Digital
Filtering Tutorial for Computer Graphics*, TM 27. As mentioned in the
concluding section, the theory outlined in that memo does not treat the case of
variable rate sampling, as occurs, for example, in perspective mappings. The
Sampling Theorem assumes samples are taken at a uniform rate. We continue to
assume a given input is sampled uniformly - at the pixels - and that the output
is sampled uniformly - also at the pixels. So we can still use the Sampling
Theorem, for example, to reconstruct the original input from its samples. In
the case of magnification and minification (scaling the abscissa *x* by a
constant factor), the spacing of the input samples is changed by the scaling but
the spacing remains uniform. In this memo, however, a less restricted class of
mappings is considered which, in general, does not preserve input sample spacing
uniformity. A good illustration of such a mapping is *f*(*x*) = (*ax*+*b*)/(*cx*+*d*)
which occurs in perspective.

We shall show in this memo that the simplifications obtained for magnification and minification do not go through for general mappings, including, unfortunately, perspective.

Then we shall discuss some practical filters which are useful for the case where the Sampling Theorem does hold: for scaling, translation, and rotation. The filters include the Catmull-Rom and B-spline cubic spline basis functions and the sinc function made finite with a variety of window functions: Bartlett, Hamming, Hanning, Blackman, Lanczos, Kaiser, etc.
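Two of the filters named above can be sketched directly (these are the standard textbook formulas, not reproduced from the memo): the Catmull-Rom cubic kernel, and the sinc function made finite by a sinc-shaped Lanczos window of half-width a:

```python
import math

def catmull_rom(x):
    """Catmull-Rom cubic reconstruction kernel, support [-2, 2].
    Interpolating: 1 at x = 0, 0 at all other integers."""
    x = abs(x)
    if x <= 1.0:
        return (1.5 * x - 2.5) * x * x + 1.0
    if x <= 2.0:
        return ((-0.5 * x + 2.5) * x - 4.0) * x + 2.0
    return 0.0

def lanczos(x, a=3):
    """sinc(x) windowed by sinc(x/a): zero outside [-a, a]."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)
```

Both kernels pass through 1 at the origin and 0 at the other integer sample positions, which is what makes them interpolating filters suitable for the scaling, translation, and rotation cases discussed here.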

Tech Memo 64, Revised 24 Jun 1983

Introduction. Our matrix and transformation conventions need to be restated and backed up with code to encourage their use. Since the modeling and rendering systems are currently being redesigned, it is a good time to ask that they be built according to convention. Some packages to ease this task will be presented here.

An example of what I would like to avoid in the future is the state of
affairs currently existing between "med" and "reyes"^{1}.
The field of view for "med" is taken to be one-half the total view
angle in the vertical direction. For "reyes", it is taken to be the
total horizontal viewing angle. In "med" the aspect ratio is the ratio
of the Picture System^{2} width to height in the display (non-menu)
portion of the screen. For "reyes", the aspect ratio is the aspect
ratio of an Ikonas^{3} framebuffer pixel^{4} (which is not the
aspect ratio of the corresponding video). It is not at all obvious to a user how
to get a picture displayed by "med" in the Picture System to appear in
the framebuffer with exactly the same relative shape and orientation when
rendered there by "reyes".

What we want is a default path from model language through calligraphic display to raster display which is natural and requires no input from the user. And all along the path control would be according to convention.

^{1}
The modeling and rendering
programs, respectively, in use at Lucasfilm at the time.

^{2}
A calligraphic line-drawing system by Evans & Sutherland Corp.

^{3}
With a full-color raster scan display device.

^{4}
I would now say "pixel
spacing", pixels themselves being point samples without height and width.

Tech Memo 77, 8 May 1983

Introduction. There are many different types of splines, but there are only a few which need be mastered for computer animation. In preparation for these notes, I looked through all the computer graphics texts on my shelf (six of the well-known ones) and found that none of them present splines as simply as I and my colleagues know them. They burden the reader with splines which I believe to be of only historical interest (natural spline) while failing to present one of the most useful splines for computer animation (cardinal spline). In all but one text, the convenient 4x4 matrix formulation of cubic splines is not mentioned. So the purpose of these notes is to present two very powerful classes of cubic splines - the cardinal and the beta splines - for computer animation and simple 4x4 matrix realizations of them. The Catmull-Rom spline and the B-spline, representing the two classes respectively, will be paid particular attention. The addition of simple parameters to the matrices for these two special cases forms the defining matrices for the two general classes.
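The 4x4 matrix formulation can be sketched for the cardinal class (my function names; s is the tension parameter, and s = 0.5 gives the Catmull-Rom special case). A point on the curve is [t^3 t^2 t 1] M [P0 P1 P2 P3]^T:

```python
def cardinal_matrix(s=0.5):
    """4x4 cardinal-spline basis matrix with tension s.
    s = 0.5 is the Catmull-Rom spline."""
    return [[-s, 2 - s, s - 2, s],
            [2 * s, s - 3, 3 - 2 * s, -s],
            [-s, 0.0, s, 0.0],
            [0.0, 1.0, 0.0, 0.0]]

def spline_point(m, p, t):
    """Evaluate [t^3 t^2 t 1] * M * [p0 p1 p2 p3]^T for scalar knots p,
    using Horner's rule on the resulting cubic coefficients."""
    coeffs = [sum(row[j] * p[j] for j in range(4)) for row in m]
    return ((coeffs[0] * t + coeffs[1]) * t + coeffs[2]) * t + coeffs[3]
```

The spline interpolates its middle two knots - t = 0 gives P1 and t = 1 gives P2 - and on evenly spaced collinear knots it reproduces the straight line exactly.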

Tech Memo 84, Jun 24, 1983 (Revised May 4, 1984)

Introduction. The *viewing transformation* is the operation that maps a
perspective view of an object in world coordinates into a physical device’s
display space. In general, this is a complex operation which is best grasped
intellectually by the typical computer graphics technique of dividing the
operation into a concatenation of simpler operations.

The terms world, model, and eye space will refer to coordinate systems in
object space coordinates. *World space* refers to the 3-D universe in which
objects are embedded. *Model space* refers to a coordinate system embedded
in world space but with its origin attached to a particular model. *Eye space*
is similar but has its origin at the eyepoint from which a view is desired. I
will tend to use world, model, and object space interchangeably.

The terms device, screen, and display space will be used interchangeably for
coordinate systems in image space coordinates. *Screen space* refers to a
coordinate system attached to a display device with the *xy* plane
coincident with the display surface. It can be thought of as a 2-D space with a
third dimension carried along for auxiliary purposes (e.g., depth cueing).

Orthographic projection will be mentioned from time to time, but the memo concentrates on perspective projection. Other projections such as elevation oblique or plan oblique will not be treated.

The approach here will be to present a convention for specifying a view then follow with a convention for transforming a conventional view. The conventional view specification is the more important as far as conventions are concerned. The other part is a justification for and a guide to the use of the convention.
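As a minimal sketch of the perspective part of such a convention (the names, and the choice of a full vertical view angle, are my illustration, not the memo's convention): eye space has the eye at the origin looking down +z, and the perspective divide maps an eye-space point to screen coordinates:

```python
import math

def perspective_project(point_eye, fov_degrees, aspect):
    """Map an eye-space point (x, y, z), z > 0 into the scene, to
    screen space via the perspective divide. fov_degrees is taken as
    the full vertical view angle - one fixed convention of the several
    competing ones the memo complains about."""
    x, y, z = point_eye
    f = 1.0 / math.tan(math.radians(fov_degrees) / 2.0)
    return (f * x / (aspect * z), f * y / z)
```

With a 90-degree field of view, a point one unit off-axis at depth 1 lands at the screen edge (coordinate 1), and doubling its depth halves its projected offset - the basic behavior any agreed convention must pin down.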

Families of Local Matrix Splines

by Tom Duff, Tech Memo 104, 21 Dec 1983

Abstract. Jim Clark's Cardinal splines are a family of local interpolating
splines with an adjustable tension parameter. The family may be described by a
matrix which yields spline coefficients as linear functions of knot values. We
characterize the Cardinal spline matrix in a way which suggests a method of
adding tension to B-splines, and show that this tension corresponds to the *beta*_{2}
parameter of Brian Barsky's *beta*-splines. The similarity between the
tension parameters of these two splines suggests looking for an interpolating
spline family which incorporates a bias parameter analogous to Barsky's *beta*_{1}.
We demonstrate two such families with different continuity properties (one is *G*^{1},
the other *C*^{1}). Finally, we develop a five-parameter
characterization of all *C*^{0}, *G*^{1} translation
invariant cubic matrix splines and indicate that all the families we have
developed are sub-families of it.
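For reference, one standard formulation of the beta-spline basis matrix from the spline literature (not Duff's own parameterization): with bias b1 and tension b2, setting b1 = 1 and b2 = 0 recovers the uniform cubic B-spline matrix:

```python
def beta_spline_matrix(b1, b2):
    """Barsky beta-spline 4x4 basis matrix with bias b1 and tension b2.
    At b1 = 1, b2 = 0 it reduces to the uniform cubic B-spline matrix."""
    d = 2 * b1 ** 3 + 4 * b1 ** 2 + 4 * b1 + b2 + 2
    return [[x / d for x in row] for row in [
        [-2 * b1 ** 3, 2 * (b2 + b1 ** 3 + b1 ** 2 + b1),
         -2 * (b2 + b1 ** 2 + b1 + 1), 2],
        [6 * b1 ** 3, -3 * (b2 + 2 * b1 ** 3 + 2 * b1 ** 2),
         3 * (b2 + 2 * b1 ** 2), 0],
        [-6 * b1 ** 3, 6 * (b1 ** 3 - b1), 6 * b1, 0],
        [2 * b1 ** 3, b2 + 4 * (b1 ** 2 + b1), 2, 0]]]
```

This is the sense in which b2 adds tension to the B-spline and b1 adds bias, the two parameters the abstract relates to Duff's interpolating families.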

The Making of Andre & Wally B.

Lucasfilm, Jul-Aug 1984

A description of the making of our first piece of character animation, for the 1984 Siggraph, featuring our new animator, John Lasseter, conceived and directed by me. There are three items to download here: the making of Andre & Wally B., a one-page summary of the film as presented at the 1984 Siggraph, and the Siggraph film show submission form.


Tech Memo 7, 20 Jul 1978

Introduction. Paint is a menu-driven computer program for handpainting two-dimensional images in full color. It is a highly interactive software package with which a human artist may employ the power of a digital computer to compose paintings which are entirely of his own creation. The “canvas” is actually a large piece of digital computer memory which is displayed for the artist on a conventional color television monitor. His “brush” is an electronic stylus resembling an ordinary pen. Its shape can be any two-dimensional shape he desires, so long as it fits into the canvas memory space. He may choose any color he desires from a “palette” of 256 colors. If this is an inadequate selection, he may mix his own set of colors from a vast set of possibilities. The main purpose of this paper is to describe in detail how an artist accomplishes these acts and what his choices are.

A secondary purpose is a careful description of a successful human engineering design. Paint is designed to have a “natural feel” and to be readily usable by computer-naïve people. There are detailed descriptions of the techniques which make this possible. In fact, this paper is designed to be read as a textbook for Paint users.

Paint includes routines for defining and selecting brushes, automatic filling and clearing of large areas, saving and restoring pictures, magnifying the canvas temporarily for detail work, and recording histories of picture composition. These functions will be described fully. Subsidiary routines available to the artist include such graphic aids as straight-line, ellipse, circle, and spline generators, mirroring, rotation, etc. These programs will not be described here, nor will the large overseeing program Bigpaint, of which Paint is a principal component. Bigpaint permits the artist to work on a canvas so large that it cannot all be displayed simultaneously.

There have been several versions of Paint at NYIT (New York Institute of Technology). Each version exists or existed for a specific configuration of equipment. Appendix A gives the various configurations that have been tried, with appraisals of each.

One version of Paint represents increased sophistication rather than mere equipment reconfiguration. This is called Paint3. It is superior to Paint described here in its use of 24 bits for representation of each point in the canvas memory rather than 8 bits. Appendix B explains the extension of Paint to Paint3.

Appendix C gives a brief history of painting programs, emphasizing those which most directly influenced Paint.

This is the original Paint paper with original illustrations. The two published versions have been edited slightly from it. This paper figured highly in two patent trials.

Texas (TEXture Applying System)

Tech Memo 10, 27 Jul 1979, with addendum of 21 Aug 1979

Introduction. Texas - from TEXture Applying System - is a set of programs for
moving pictures about in 3-space to obtain other pictures. Texas makes pictures
composed of other pictures embedded in 3-space. Here a *picture*, or *texture*,
may be thought of as the contents of a framebuffer, although it may be stored
temporarily in a disk file or disguised as a mipmap. Texas takes as input a
stageset, which is a set of flats, where a *flat* is a picture painted
(mapped) onto the surface enclosed by a planar, convex quadrilateral. Its output
is a 2-dimensional rendering of a stageset as viewed from some vantage point in
3-space.

Texas features incremental rendering of textures in perspective, depth darkening, front and back textures for each flat, z buffering or priority sorting for hidden surface removal, antialiasing of edges or tag buffering for ex post facto dekinking. It uses the Picture System II driver by Jim Clark, the mipmap utilities of Lance Williams and Dick Lundin, and has an interface to Garland Stern’s 3-dimensional animation program Arbor (or Boop) and to Ed Catmull’s polygon rendering program Render.

Texas is intended to be a generalization of the multiplane camera. The current version lacks transparency [but see Appendix], however, so cannot yet do multiplaning. Another inadequacy at present is the priority sort algorithm, which is simple-minded. Intended additions are light sources, shadows, global partial transparencies for individual flats, correct pixel preimage averaging, as well as local transparency and a better sort.
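The z-buffer alternative to priority sorting can be sketched in one scanline (my notation, not the Texas code): each candidate fragment carries a depth, and the nearest one wins at each pixel regardless of submission order, so no sort is needed:

```python
def render_zbuffer(width, fragments, background=0):
    """Minimal one-scanline z-buffer. Each fragment is (x, z, color);
    the smallest z seen at each pixel x wins, in any order."""
    color = [background] * width
    depth = [float("inf")] * width
    for x, z, c in fragments:
        if z < depth[x]:
            depth[x] = z
            color[x] = c
    return color
```

Submitting the far fragment before the near one at the same pixel still leaves the near one visible, which is exactly the order-independence that makes z-buffering attractive over a simple-minded sort.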

This memo has "Preliminary Report" in its original title, but there never was a followup, so I have omitted it.

6 Aug 1979

Introduction. One of the most basic picture-making tools in
computer graphics (specifically, color raster graphics) is *moving* or *copying*
an image from one location to another, from one type of memory device to another
type. What we shall call *painting* is an example of this basic operation.
One image, called the *brush*, is copied from disk memory into main memory and
then repeatedly copied from there into a special type of visible memory called a
*framebuffer* at a changing position determined by the user’s hand on a
tablet. All the variations on simple copying available on a computer are, of
course, available for the special case of painting. We shall detail several of
these.
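One of those variations, copying through a matte, can be sketched in one dimension (the names are mine): the matte selects, per pixel, how much of the brush replaces the canvas:

```python
def stamp(canvas, brush, matte, x0):
    """Copy a 1-D brush onto a canvas at offset x0, blended through a
    matte: where the matte is 1 the brush replaces the canvas, where it
    is 0 the canvas shows through, and fractions blend the two."""
    for i, (b, m) in enumerate(zip(brush, matte)):
        canvas[x0 + i] = m * b + (1.0 - m) * canvas[x0 + i]
    return canvas
```

A matte value of 0.5 leaves a half-strength mark - the subtle blending that distinguishes matting from binary masking.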

This paper figured highly in two patent trials because it contains an early description of matting.

28 Oct 1979

Abstract. Table look-up procedures are particularly useful at computer graphics installations because of the large chunks of cheap random access memory (framebuffers) which tend to be components of these facilities. We explain here how one additional framebuffer and one analog control device in addition to the tablet create a powerful and flexible extension to any typical framebuffer painting program. In particular, this readily available extra equipment gives the artist brushes which change size, shape, orientation, and/or color in realtime and smoothly. Examples are rotating brushes, animated brushes, airbrushes with changing spread and density, oriented brushes which track the direction of motion, and many more. That is, the artist is given a third dimension of stylus control which may be as graceful as the spatial two. He may define this new dimension from a menu of selections. The extra framebuffer is used as table memory. The extra device (e.g., joystick, trackball, pot, footpedal, microphone, strain gauge, keyboard) is used as a controller for the added dimension. Realtime is obtained by using the output of the controller for table look-up.
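The realtime trick can be sketched as follows (all names here are hypothetical illustrations, not the original program's): the brush variants are precomputed into a table, so each controller reading costs only an index calculation and a look-up:

```python
def make_brush_table(sizes):
    """Hypothetical example: precompute one flat 1-D brush per size.
    A real system would fill a spare framebuffer with brush images."""
    return [[1.0] * s for s in sizes]

def select_brush(table, controller_value):
    """Map a controller reading in [0, 1] to a precomputed table entry:
    the per-stroke cost is just this look-up, hence realtime."""
    index = min(int(controller_value * len(table)), len(table) - 1)
    return table[index]
```

Sweeping the controller from 0 to 1 sweeps smoothly through the precomputed brushes - the "third dimension of stylus control" of the abstract.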

Incremental Rendering of Textures in Perspective

28 Oct 1979

Abstract. An incremental, scanline-ordered algorithm for rendering a 2-dimensional texture into a planar convex polygon embedded in 3-space is presented. The textured polygon is projected into a viewing plane where it is seen in perspective. The method is shown to cost a divide and two lerps per pixel more than the well-known incremental algorithm for rendering an orthographic view of a texture or a perspective view of a "texture" consisting of a single color (lerp = linear interpolation). It is argued that the divide is necessary - that, for example, second-order incrementing is not good enough in general.
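The per-pixel cost claim can be sketched as follows (my formulation of the standard technique, not the memo's code): u/z, v/z, and 1/z all interpolate linearly in screen space, so each pixel needs only lerps plus one divide to recover perspective-correct (u, v):

```python
def lerp(a, b, t):
    """Linear interpolation: the 'lerp' of the abstract."""
    return a + t * (b - a)

def scanline_uv(u0, v0, z0, u1, v1, z1, n):
    """Perspective-correct texture coordinates across a scanline of
    n >= 2 pixels. u/z, v/z, and 1/z vary linearly in screen space;
    the one divide per pixel undoes the 1/z to recover true (u, v)."""
    out = []
    for i in range(n):
        t = i / (n - 1)
        inv_z = lerp(1.0 / z0, 1.0 / z1, t)
        z = 1.0 / inv_z                       # the per-pixel divide
        u = lerp(u0 / z0, u1 / z1, t) * z
        v = lerp(v0 / z0, v1 / z1, t) * z
        out.append((u, v))
    return out
```

With depths 1 and 3 across the span, the screen-space midpoint maps to u = 0.25 rather than 0.5 - the foreshortening that a purely linear (divide-free) interpolation of u would get wrong, which is why the divide is necessary.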