3D projection

Curvilinear perspective. Cutaway. Descriptive geometry. Engineering drawing. Exploded-view drawing. Graphics card. Homogeneous coordinates. Homography. Map projection (including Cylindrical projection). Multiview projection. Perspective (graphical). Plan (drawing). Technical drawing. Texture mapping. Transform and lighting. Viewing frustum. Virtual globe.

3D computer graphics

3D computer graphics, or three-dimensional computer graphics (in contrast to 2D computer graphics), are graphics that use a three-dimensional representation of geometric data (often Cartesian) stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real time. 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display.

Graphics pipeline

Pipeline (computing). Instruction pipelining. Hardware acceleration.

Cube mapping

Another application which found widespread use in video games is projective texture mapping. It relies on cube maps to project images of an environment onto the surrounding scene; for example, a point light source is tied to a cube map which is a panoramic image shot from inside a lantern cage or a window frame through which the light is filtering. This enables a game developer to achieve realistic lighting without having to complicate the scene geometry or resort to expensive real-time shadow volume computations. A cube texture indexes six texture maps from 0 to 5 in order Positive X, Negative X, Positive Y, Negative Y, Positive Z, Negative Z.
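As a rough illustration of that face ordering, here is a minimal Python sketch that picks which cube-map face a direction vector hits; the per-face (s, t) conventions differ between graphics APIs and are left out:

    def cube_face_index(direction):
        """Pick which of the six cube-map faces a direction vector hits.

        Faces are indexed 0..5 in the order +X, -X, +Y, -Y, +Z, -Z,
        matching the ordering described above.
        """
        x, y, z = direction
        ax, ay, az = abs(x), abs(y), abs(z)
        if ax >= ay and ax >= az:           # X axis dominates
            return 0 if x > 0 else 1
        if ay >= az:                        # Y axis dominates
            return 2 if y > 0 else 3
        return 4 if z > 0 else 5            # Z axis dominates

    # Example: a ray pointing mostly downwards lands on the -Y face (index 3).
    print(cube_face_index((0.1, -0.9, 0.2)))   # -> 3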

Rendering (computer graphics)

Some renderers are integrated into larger modeling and animation packages, some are stand-alone, and some are free open-source projects. On the inside, a renderer is a carefully engineered program based on a selective mixture of disciplines related to light physics, visual perception, mathematics, and software development. In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in real time. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for 3D video games, which rely on the use of graphics cards with 3D hardware accelerators.

Computer graphics

Texture mapping. Texture mapping is a method for adding detail, surface texture, or colour to a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Dr Edwin Catmull in 1974. A texture map is applied (mapped) to the surface of a shape, or polygon. This process is akin to applying patterned paper to a plain white box. Multitexturing is the use of more than one texture at a time on a polygon.

Z-buffering

After a perspective transformation, the new value of z, or z', is defined by z' = (f + n)/(f − n) + (1/z)·(−2fn/(f − n)). After an orthographic projection, the new value of z, or z', is defined by z' = 2·(z − n)/(f − n) − 1. Here z is the old value of z in camera space (sometimes called w or w'), n is the distance to the near clipping plane, and f is the distance to the far clipping plane. The resulting values of z' are normalized between −1 and 1, where the near plane is at −1 and the far plane is at 1. Values outside of this range correspond to points which are not in the viewing frustum and should not be rendered. Typically, these values are stored in the z-buffer of the hardware graphics accelerator in fixed-point format.
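A minimal Python sketch of the two mappings above (n and f are the near- and far-plane distances); points on the near plane map to −1 and points on the far plane to +1:

    def z_perspective(z, n, f):
        """Depth after a perspective transformation, mapped to [-1, 1].

        z is the (positive) camera-space depth, n and f are the near and
        far clipping plane distances.
        """
        return (f + n) / (f - n) + (1.0 / z) * (-2.0 * f * n / (f - n))

    def z_orthographic(z, n, f):
        """Depth after an orthographic projection, mapped to [-1, 1]."""
        return 2.0 * (z - n) / (f - n) - 1.0

    n, f = 1.0, 100.0
    print(z_perspective(n, n, f), z_perspective(f, n, f))    # -> -1.0  1.0
    print(z_orthographic(n, n, f), z_orthographic(f, n, f))  # -> -1.0  1.0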

Lightmap

Texture mapping. Baking (computer graphics).

Clipping (computer graphics)

Beyond projection of vertices and 2D clipping, near clipping is required to correctly rasterise 3D primitives; this is because vertices may have been projected behind the eye. Near clipping ensures that all the vertices used have valid 2D coordinates. Together with far clipping, it also helps prevent overflow of depth-buffer values. Some early texture mapping hardware (using forward texture mapping) in video games suffered from complications associated with near clipping and UV coordinates.
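A minimal sketch, in Python, of clipping a single edge against the near plane in camera space; the plane distance and the assumption that +z points into the screen are illustrative choices:

    def clip_to_near_plane(p0, p1, near=0.1):
        """Clip the segment p0->p1 (camera-space points) against z = near.

        Returns the clipped segment, or None if it lies entirely behind
        the near plane and can be discarded.
        """
        def lerp(a, b, t):
            return tuple(a[i] + t * (b[i] - a[i]) for i in range(3))

        in0, in1 = p0[2] >= near, p1[2] >= near
        if in0 and in1:
            return p0, p1                      # fully in front of the near plane
        if not in0 and not in1:
            return None                        # fully behind: reject
        t = (near - p0[2]) / (p1[2] - p0[2])   # intersection with z = near
        crossing = lerp(p0, p1, t)
        return (p0, crossing) if in0 else (crossing, p1)

    # One endpoint behind the eye: the segment is cut at the near plane.
    print(clip_to_near_plane((0.0, 0.0, -1.0), (0.0, 0.0, 5.0)))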

History of computer animation

The computer science department at the University of Utah was founded by David Evans in 1965, and many of the basic techniques of 3D computer graphics were developed there in the early 1970s with funding from ARPA (the Advanced Research Projects Agency). Research results included Gouraud, Phong, and Blinn shading, texture mapping, hidden-surface algorithms, curved surface subdivision, real-time line-drawing and raster image display hardware, and early virtual reality work.

Polygon

If any two simple polygons of equal area are given, then the first can be cut into polygonal pieces which can be reassembled to form the second polygon. This is the Bolyai–Gerwien theorem. The area of a regular polygon is also given in terms of the radius r of its inscribed circle and its perimeter p by A = (1/2)·p·r. This radius is also termed its apothem and is often represented as a.
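A small Python check of that formula; the function name and the choice of side length as input are illustrative:

    import math

    def regular_polygon_area(n, side):
        """Area of a regular n-gon with the given side length, via A = p*r/2,
        where p is the perimeter and r the apothem (inradius)."""
        perimeter = n * side
        apothem = side / (2.0 * math.tan(math.pi / n))
        return 0.5 * perimeter * apothem

    # A regular hexagon with unit sides: 3*sqrt(3)/2, about 2.598.
    print(regular_polygon_area(6, 1.0))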

Distance

Distance is a numerical measurement of how far apart objects are. In physics or everyday usage, distance may refer to a physical length or an estimation based on other criteria (e.g. "two counties over"). In most cases, "distance from A to B" is interchangeable with "distance from B to A". In mathematics, a distance function or metric is a generalization of the concept of physical distance. A metric is a function that behaves according to a specific set of rules (non-negativity, symmetry, and the triangle inequality, among others), and is a way of describing what it means for elements of some space to be "close to" or "far away from" each other.
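For example, the familiar straight-line (Euclidean) distance is one such metric; a minimal Python sketch:

    import math

    def euclidean_distance(a, b):
        """The usual straight-line (Euclidean) metric between two points."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    p, q = (0.0, 0.0), (3.0, 4.0)
    print(euclidean_distance(p, q))                               # -> 5.0
    print(euclidean_distance(p, q) == euclidean_distance(q, p))   # symmetry: True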

Shadow mapping

For a point light source, the view should be a perspective projection as wide as its desired angle of effect (it will be a sort of square spotlight). For directional light (e.g., that from the Sun), an orthographic projection should be used. From this rendering, the depth buffer is extracted and saved. Because only the depth information is relevant, it is common to avoid updating the color buffers and disable all lighting and texture calculations for this rendering, in order to save drawing time. This depth map is often stored as a texture in graphics memory.
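A much-simplified Python sketch of the later lookup step that uses the saved depth map; the matrix layout, depth_map format, and bias value are illustrative assumptions rather than any particular API:

    def in_shadow(world_point, light_view_proj, depth_map, bias=0.005):
        """Shadow-map lookup sketch: transform a world-space point into the
        light's clip space and compare its depth with the stored depth map.

        light_view_proj: 4x4 row-major matrix (light's view * projection)
        depth_map: 2D list of depths in [0, 1], saved from the light's render
        """
        x, y, z = world_point
        v = [x, y, z, 1.0]
        clip = [sum(light_view_proj[r][c] * v[c] for c in range(4)) for r in range(4)]
        ndc = [clip[i] / clip[3] for i in range(3)]        # perspective divide

        # Map from [-1, 1] to texture coordinates / depth in [0, 1].
        u = (ndc[0] + 1.0) * 0.5
        t = (ndc[1] + 1.0) * 0.5
        depth = (ndc[2] + 1.0) * 0.5

        h, w = len(depth_map), len(depth_map[0])
        px = min(max(int(u * w), 0), w - 1)
        py = min(max(int(t * h), 0), h - 1)

        # The point is shadowed if something nearer to the light was recorded.
        return depth - bias > depth_map[py][px]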

Orthographic projection

An orthographic projection map is a map projection of cartography. Like the stereographic projection and gnomonic projection, orthographic projection is a perspective (or azimuthal) projection, in which the sphere is projected onto a tangent plane or secant plane. The point of perspective for the orthographic projection is at infinite distance. It depicts a hemisphere of the globe as it appears from outer space, where the horizon is a great circle. The shapes and areas are distorted, particularly near the edges. The orthographic projection has been known since antiquity, with its cartographic uses being well documented.
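A minimal Python sketch of the orthographic map projection onto the plane tangent at (lat0, lon0), using the standard spherical formulas; the kilometre radius and degree inputs are illustrative choices:

    import math

    def orthographic(lat, lon, lat0=0.0, lon0=0.0, radius=6371.0):
        """Project a point on a sphere onto the plane tangent at (lat0, lon0),
        as seen from infinitely far away (orthographic map projection).

        Angles in degrees, radius in kilometres; returns planar (x, y).
        """
        phi, lam = math.radians(lat), math.radians(lon)
        phi0, lam0 = math.radians(lat0), math.radians(lon0)
        x = radius * math.cos(phi) * math.sin(lam - lam0)
        y = radius * (math.cos(phi0) * math.sin(phi)
                      - math.sin(phi0) * math.cos(phi) * math.cos(lam - lam0))
        return x, y

    # The tangent point itself maps to the origin of the plane.
    print(orthographic(0.0, 0.0))   # -> (0.0, 0.0)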

Parallax occlusion mapping

Parallax occlusion mapping is used to procedurally create 3D definition in textured surfaces, using a displacement map (similar to a topography map) instead of through the generation of new geometry. This allows developers of 3D rendering applications to add 3D complexity in textures, which correctly change relative to perspective and with self occlusion in real time (self-shadowing is additionally possible), without sacrificing the processor cycles required to create the same effect with geometry calculations. Parallax occlusion mapping was first published in 2005 by Zoe Brawley and Natalya Tatarchuk in ShaderX3.
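A heavily simplified Python sketch of the layered ray march that underlies the technique; depth_at, the tangent-space conventions, and the scale parameter are assumptions for illustration, and the refinement and self-shadowing steps of full parallax occlusion mapping are omitted:

    def parallax_occlusion_uv(u, v, view_dir, depth_at, scale=0.05, steps=16):
        """Step along the view ray through a depth map (0 = surface, 1 = deepest)
        and return the texture coordinates where the ray first enters the surface.

        view_dir: tangent-space view direction (vx, vy, vz) with vz > 0;
        depth_at(u, v): sampled depth value in [0, 1].
        """
        vx, vy, vz = view_dir
        step_depth = 1.0 / steps
        # UV shift corresponding to one depth layer along the view ray.
        du = vx / vz * scale * step_depth
        dv = vy / vz * scale * step_depth

        cu, cv, ray_depth = u, v, 0.0
        for _ in range(steps):
            if ray_depth >= depth_at(cu, cv):   # the ray has entered the surface
                break
            cu -= du
            cv -= dv
            ray_depth += step_depth
        return cu, cv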

3D rendering

A simple example of shading is texture mapping, which uses an image to specify the diffuse color at each point on a surface, giving it more apparent detail. Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport. The shaded three-dimensional objects must be flattened so that the display device - namely a monitor - can display them in only two dimensions; this process is called 3D projection. This is done using projection and, for most applications, perspective projection.
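A minimal Python sketch of the perspective-projection step; the focal_length parameter and the camera-space convention (+z away from the viewer) are illustrative assumptions:

    def project_point(x, y, z, focal_length=1.0):
        """Perspective projection of a camera-space point onto the image
        plane at distance focal_length (the '3D projection' step)."""
        if z <= 0:
            raise ValueError("point is behind the camera; clip it first")
        return focal_length * x / z, focal_length * y / z

    # A point twice as far away projects to half the screen-space offset.
    print(project_point(1.0, 1.0, 2.0))   # -> (0.5, 0.5)
    print(project_point(1.0, 1.0, 4.0))   # -> (0.25, 0.25)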

Quadrilateralized spherical cube

In mapmaking, a quadrilateralized spherical cube, or quad sphere for short, is an equal-area mapping and binning scheme for data collected on a spherical surface (either that of the Earth or the celestial sphere). It was first proposed in 1975 by Chan and O'Neill for the Naval Environmental Prediction Research Facility. This scheme is also often called the COBE sky cube, because it was designed to hold data from the Cosmic Background Explorer (COBE) project. The quad sphere has two principal characteristic features. The first is that the mapping consists of projecting the sphere onto the faces of an inscribed cube using a curvilinear projection that preserves area.

List of computer graphics and descriptive geometry topics

Reverse perspective. Scan line rendering. Scrolling. Technical drawing. Texture mapping. Trimetric projection. Vanishing point. Vector graphics. Vector graphics editor. Vertex shaders. Volume rendering. Voxel. List of geometry topics. List of graphical methods.

Polar coordinate system

Curvilinear coordinates. List of canonical coordinate transformations. Log-polar coordinates. Polar decomposition.

Mathematics and art

As early as the 15th century, curvilinear perspective found its way into paintings by artists interested in image distortions. Jan van Eyck's 1434 Arnolfini Portrait contains a convex mirror with reflections of the people in the scene, while Parmigianino's Self-portrait in a Convex Mirror, c. 1523–1524, shows the artist's largely undistorted face at the centre, with a strongly curved background and the artist's hand around the edge. Three-dimensional space can be represented convincingly in art, as in technical drawing, by means other than perspective.

Vanishing point

A vanishing point is a point on the image plane of a perspective drawing where the two-dimensional perspective projections (or drawings) of mutually parallel lines in three-dimensional space appear to converge. When the set of parallel lines is perpendicular to a picture plane, the construction is known as one-point perspective, and their vanishing point corresponds to the oculus, or "eye point", from which the image should be viewed for correct perspective geometry. Traditional linear drawings use objects with one to three sets of parallels, defining one to three vanishing points.
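A small Python sketch of where a family of parallel 3D lines converges in the image under a pinhole camera; the focal_length parameter and the +z viewing convention are illustrative assumptions:

    def vanishing_point(direction, focal_length=1.0):
        """Image of the point at infinity for lines with the given 3D direction,
        under a pinhole camera looking down +z with the given focal length.

        Lines parallel to the image plane (dz == 0) have no finite vanishing point.
        """
        dx, dy, dz = direction
        if dz == 0:
            return None
        return focal_length * dx / dz, focal_length * dy / dz

    # Lines running straight away from the viewer vanish at the principal point:
    print(vanishing_point((0.0, 0.0, 1.0)))   # -> (0.0, 0.0), one-point perspective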

UV mapping

UV mapping is the 3D modelling process of projecting a 2D image to a 3D model's surface for texture mapping. The letters "U" and "V" denote the axes of the 2D texture because "X", "Y" and "Z" are already used to denote the axes of the 3D object in model space. UV texturing permits polygons that make up a 3D object to be painted with color (and other surface attributes) from an ordinary image. The image is called a UV texture map. The UV mapping process involves assigning pixels in the image to surface mappings on the polygon, usually done by "programmatically" copying a triangular piece of the image map and pasting it onto a triangle on the object.
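A minimal Python sketch of the lookup that UV texturing ultimately performs: given (u, v) coordinates on a polygon, fetch the corresponding texel from the image; nearest-neighbour sampling and the v-origin convention are illustrative assumptions:

    def sample_texture(texture, u, v):
        """Nearest-neighbour lookup of a texel for texture coordinates (u, v)
        in [0, 1], with v = 0 at the top row of the image.

        texture: 2D list of texel rows (e.g. RGB tuples)."""
        height, width = len(texture), len(texture[0])
        px = min(int(u * width), width - 1)
        py = min(int(v * height), height - 1)
        return texture[py][px]

    # A 2x2 checkerboard: (u, v) near (1, 1) lands on the bottom-right texel.
    checker = [[(255, 255, 255), (0, 0, 0)],
               [(0, 0, 0), (255, 255, 255)]]
    print(sample_texture(checker, 0.9, 0.9))   # -> (255, 255, 255)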

Analytic geometry

In three dimensions, lines cannot be described by a single linear equation, so they are frequently described by parametric equations: x = x0 + at, y = y0 + bt, z = z0 + ct, where (x0, y0, z0) is a point on the line, (a, b, c) is a direction vector of the line, and t is a real parameter. In the Cartesian coordinate system, the graph of a quadratic equation in two variables is always a conic section – though it may be degenerate, and all conic sections arise in this way.
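A tiny Python sketch evaluating such a parametric line (the names are illustrative):

    def line_point(p0, direction, t):
        """Point on the 3D line through p0 with the given direction, at parameter t:
        (x, y, z) = (x0 + a*t, y0 + b*t, z0 + c*t)."""
        return tuple(p + d * t for p, d in zip(p0, direction))

    # Walking along the line through (1, 2, 3) with direction (1, 0, 2):
    print(line_point((1.0, 2.0, 3.0), (1.0, 0.0, 2.0), 0.5))   # -> (1.5, 2.0, 4.0)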

Edwin Catmull

During his time there, he made two new fundamental computer-graphics discoveries: texture mapping and bicubic patches; and he invented algorithms for spatial anti-aliasing and refining subdivision surfaces. He also independently discovered Z-buffering, even though it had already been described eight months earlier by Wolfgang Straßer in his PhD thesis. In 1972, Catmull made his earliest contribution to the film industry: an animated version of his left hand, which was eventually picked up by a Hollywood producer and incorporated into the 1976 movie Futureworld, the science-fiction sequel to the film Westworld and the first film to use 3D computer graphics.