Although various three-dimensional display devices exist, most
computer graphics view surfaces are two-dimensional. Thus a
three-dimensional pipeline - the jargon term for the sequence of
processes that converts three-dimensional world coordinate space into
a two-dimensional representation - must contain a projective
transformation and a viewing transformation, the minimum requirements
to convert a three-dimensional scene into a two-dimensional
projection.
I have designed and implemented, in Java, the Viewing System IV that
is carefully explained in "3D Computer Graphics" by Alan Watt [1].
All the objects are created in the world coordinate system, which is
conventionally represented as a right-handed system. We can consider
a transformation, Tview, from the world coordinate system to a
viewing coordinate system (Xv, Yv, Zv). In the viewing coordinate
system, vertices are expressed in a left-handed coordinate system
whose origin is sometimes known as the view point or view reference
point; it represents the position of a viewer's eye or the position
at which a virtual camera is placed. A second transformation, Tpers,
is then applied to project three-dimensional points in view space
onto the three-dimensional screen space (Xs, Ys, Zs). Finally, I
projected the vertices from screen space onto the projection space,
the two-dimensional view plane, to simulate the camera transformation.
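To make the pipeline concrete, here is a minimal Java sketch of how a
single vertex travels from world space to the projected view plane.
The Point4, Matrix4 and Pipeline classes and their method names are
hypothetical placeholders invented for this sketch, not the actual
classes of this project, and the matrices are assumed to act on
homogeneous column vectors.

    // Hypothetical helper classes used only for this sketch.
    class Point4 {
        double x, y, z, w;
        Point4(double x, double y, double z, double w) {
            this.x = x; this.y = y; this.z = z; this.w = w;
        }
    }

    class Matrix4 {
        double[][] m = new double[4][4];

        // Multiply this 4x4 matrix by a homogeneous point (column vector).
        Point4 transform(Point4 p) {
            double[] in = { p.x, p.y, p.z, p.w };
            double[] out = new double[4];
            for (int r = 0; r < 4; r++)
                for (int c = 0; c < 4; c++)
                    out[r] += m[r][c] * in[c];
            return new Point4(out[0], out[1], out[2], out[3]);
        }
    }

    class Pipeline {
        // World -> viewing -> screen space for one vertex, followed by the
        // homogeneous divide that yields the two-dimensional projection.
        static Point4 project(Point4 world, Matrix4 tview, Matrix4 tpers) {
            Point4 view   = tview.transform(world);   // Tview: world to viewing space
            Point4 screen = tpers.transform(view);    // Tpers: perspective projection
            return new Point4(screen.x / screen.w, screen.y / screen.w,
                              screen.z / screen.w, 1.0);
        }
    }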
A full viewing system will also determine a view volume (a truncated
view frustum), which is the subset of world coordinate space that is
to be included in the transformation process. Since the view volume
is part of any general three-dimensional rendering pipeline, I needed
to define one. To define this view volume, I needed to specify a near
plane, a far plane and a view plane window. The specific plane
formulas are presented in the Equations section of this report.
Since it was very hard to create the view frustum directly in the
world coordinate system, I created the view frustum in the viewing
coordinate space and applied the inverse Tview matrix to the frustum
created there. By doing this, I was able to obtain the view frustum
in the world coordinate system. Also, the three-dimensional clipping
should be performed in the viewing coordinate space.
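As an illustration of this step, here is a hedged Java sketch that
builds the eight frustum corners in viewing coordinates and maps them
back to world coordinates with the inverse of Tview. It reuses the
hypothetical Point4 and Matrix4 helpers from the earlier sketch and
assumes a square view plane window of half-size h, which is an
assumption of the sketch rather than something stated here.

    class FrustumBuilder {
        // Eight corners of the view frustum in viewing coordinates, built from
        // d (near/view plane distance), f (far plane distance) and h (half the
        // view plane window size; a square window is assumed).
        static Point4[] cornersInViewSpace(double d, double f, double h) {
            double hf = h * f / d;   // window half-size scaled out to the far plane
            return new Point4[] {
                new Point4(-h,  -h,  d, 1), new Point4( h,  -h,  d, 1),
                new Point4( h,   h,  d, 1), new Point4(-h,   h,  d, 1),
                new Point4(-hf, -hf, f, 1), new Point4( hf, -hf, f, 1),
                new Point4( hf,  hf, f, 1), new Point4(-hf,  hf, f, 1)
            };
        }

        // Map the corners back into world coordinates with the inverse of Tview.
        static Point4[] cornersInWorldSpace(Matrix4 tviewInverse, Point4[] corners) {
            Point4[] world = new Point4[corners.length];
            for (int i = 0; i < corners.length; i++)
                world[i] = tviewInverse.transform(corners[i]);
            return world;
        }
    }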
We need a viewing direction vector N, an up vector V and an optional vector U. The following paragraphs and equations show how to obtain these vectors.
The N vector is the viewing direction vector and is obtained by subtracting the camera position (the "camera from" point) from the look-at point (the "camera to" point). This N vector is used when we calculate the Tview matrix.
The V vector is the up vector. Because V must be perpendicular to N, we need a strategy for the case where the user-entered V is not perpendicular to N. Since the user-entered V vector is only an estimate, we calculate the actual V vector by applying this equation. This V vector is also used when we calculate the Tview matrix.
The U vector is an optional vector obtained as the cross product of the N and V vectors, resulting in a left-handed coordinate system. This U vector is also used when we calculate the Tview matrix.
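A small Java sketch of one way to compute these three vectors follows.
The names cameraFrom, cameraTo and approxUp are illustrative only, and
the orthogonalization of V against N is the usual Gram-Schmidt step,
which I take to be what the equation referred to above expresses.

    class ViewVectors {
        // N: viewing direction = "camera to" point minus "camera from" point.
        static double[] n(double[] cameraFrom, double[] cameraTo) {
            return normalize(sub(cameraTo, cameraFrom));
        }

        // V: the user-entered (approximate) up vector with its component along
        // N removed, so the result is perpendicular to N (n must be unit length).
        static double[] v(double[] approxUp, double[] n) {
            return normalize(sub(approxUp, scale(n, dot(approxUp, n))));
        }

        // U: cross product of N and V, completing the left-handed system.
        static double[] u(double[] n, double[] v) {
            return new double[] {
                n[1] * v[2] - n[2] * v[1],
                n[2] * v[0] - n[0] * v[2],
                n[0] * v[1] - n[1] * v[0]
            };
        }

        static double[] sub(double[] a, double[] b) {
            return new double[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
        }
        static double[] scale(double[] a, double s) {
            return new double[] { a[0] * s, a[1] * s, a[2] * s };
        }
        static double dot(double[] a, double[] b) {
            return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
        }
        static double[] normalize(double[] a) {
            return scale(a, 1.0 / Math.sqrt(dot(a, a)));
        }
    }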
The Tview matrix is obtained simply by calculating this equation. Tview can transform any point in the world coordinate system into the viewing coordinate system. The T matrix translates the object and the B matrix rotates the object.
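The matrix figures themselves are not reproduced in this text. The
following is a hedged reconstruction in a column-vector convention,
writing C = (Cx, Cy, Cz) for the camera position and U, V, N for the
vectors above; the project's own figures may use the row-vector
convention instead.

    T = \begin{pmatrix}
          1 & 0 & 0 & -C_x \\
          0 & 1 & 0 & -C_y \\
          0 & 0 & 1 & -C_z \\
          0 & 0 & 0 & 1
        \end{pmatrix},
    \qquad
    B = \begin{pmatrix}
          U_x & U_y & U_z & 0 \\
          V_x & V_y & V_z & 0 \\
          N_x & N_y & N_z & 0 \\
          0   & 0   & 0   & 1
        \end{pmatrix},
    \qquad
    T_{view} = B \, T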
Now we need some way to transform from the viewing coordinate system back into the world coordinate system. This is the case for the view frustum, which was created in the viewing coordinate system and needed to be transformed into the world coordinate system.
The Tview inverse matrix is obtained simply by calculating this equation. Tview inverse can transform any point in the viewing coordinate system into the world coordinate system. The B transpose matrix undoes the rotation, and the T(-x) matrix (the inverse translation) undoes the translation.
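In the same assumed column-vector convention, B is orthonormal, so its
inverse is its transpose and the inverse viewing transformation factors
as sketched below.

    T_{view}^{-1} = T^{-1} \, B^{T}
      = \begin{pmatrix}
          1 & 0 & 0 & C_x \\
          0 & 1 & 0 & C_y \\
          0 & 0 & 1 & C_z \\
          0 & 0 & 0 & 1
        \end{pmatrix}
        \begin{pmatrix}
          U_x & V_x & N_x & 0 \\
          U_y & V_y & N_y & 0 \\
          U_z & V_z & N_z & 0 \\
          0   & 0   & 0   & 1
        \end{pmatrix}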
After we have created the object in the world coordinate system and transformed its points into the viewing coordinate system, we need to transform the points from the viewing coordinate system into the screen coordinate system. By doing this, we are finally able to simulate the camera transformation from the world coordinate system to the screen coordinate system.
The Tpers matrix is obtained simply by calculating this equation; it consists of the d, f and h values that the user enters from the user interface. d is the distance from the camera to the near plane (incidentally, the view plane is the same as the near plane in this viewing system). f is the distance from the camera to the far plane. Finally, h is half of the height of the view plane window.
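The matrix figure is not reproduced in this text; below is one common
perspective matrix that is consistent with this description, assuming
screen x and y are normalized by h so the view plane window maps to
[-1, 1] and z_s runs from 0 at the near plane to 1 at the far plane.
The exact form used in the project may differ.

    T_{pers} = \begin{pmatrix}
                 d/h & 0   & 0             & 0 \\
                 0   & d/h & 0             & 0 \\
                 0   & 0   & \frac{f}{f-d} & \frac{-d\,f}{f-d} \\
                 0   & 0   & 1             & 0
               \end{pmatrix},
    \qquad
    x_s = \frac{d\,x_v}{h\,z_v}, \quad
    y_s = \frac{d\,y_v}{h\,z_v}, \quad
    z_s = \frac{f\,(z_v - d)}{z_v\,(f - d)}

after the homogeneous divide by w = z_v.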
To create the view frustum, I need to be able to specify its six planes. They are determined by an Xv value, a Yv value and two Zv values. The Xv value specifies the left and right planes of the view frustum, the Yv value specifies the top and bottom planes, and the two Zv values specify the front plane (the near plane, which is also the view plane) and the far plane.
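Assuming a square view plane window of half-size h at distance d,
which is how I read this description (the exact plane formulas are
given in the Equations section), the six planes can be written in
viewing coordinates as:

    z_v = d \;\;\text{(near/view plane)}, \qquad
    z_v = f \;\;\text{(far plane)}, \qquad
    x_v = \pm\frac{h}{d}\,z_v \;\;\text{(left/right planes)}, \qquad
    y_v = \pm\frac{h}{d}\,z_v \;\;\text{(top/bottom planes)}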
In this project, I have implemented 14 classes, each representing an object in this viewing system. I will introduce all the classes and briefly go over each class' definition in this section.
Class definitions
If you would like to simulate the camera transformation and see how the viewing system works, you may click on the viewing system.
You will see four canvases in the viewing system: the world coordinate canvas, the viewing coordinate canvas, the screen coordinate canvas and the projection coordinate canvas. As the user changes the approximated V vector, the camera location vector, the camera-to vector, or the d, f and h values, the program redraws the objects on all four canvases according to the values the user entered. The user needs to press the VIEW button after changing any of these values to simulate the camera transformation.
I was able to simulate the camera transformation for the viewing system in this project. By simulating the Viewing System IV, I had fewer restrictions when viewing objects in world coordinates, since I could change the camera position and the point to look at. Also, by having the view frustum, I did not have to rely on a two-dimensional clipping algorithm to eliminate information in the view plane that falls outside the view plane window.
Even though this project performs just fine with my initial design, I can think of some more features that would make it work even better. First, adding more objects to view would make it easier to understand what is going on while viewing the scene. Second, diminishing the line brightness as the object gets deeper along the Z axis would help the user tell which lines are in front and which are in the back.
It would have been difficult to finish this project without the great guidance of Professor Bruce Land. I would like to thank him from the bottom of my heart.
I would like to thank my mother, Yeon-Ja Shim, for helping me and praying for me to survive the darkest period of my life. Thanks to my brother and sister, Andy Shim and Julie Shim, for supporting and understanding me, both financially and mentally. I would also like to thank my friends and cyber-friends in HANA and KIDS.
Most of all, I would like to dedicate this work to my father, Han-Yong
Shim, who is in heaven, for giving me the strength and courage to
continue to learn and complete my Master's degree at Cornell University.
[1] Alan Watt, "3D Computer Graphics", Addison-Wesley Publishing Company, 1993.