3D projection

Curvilinear perspective • Cutaway • Descriptive geometry • Engineering drawing • Exploded-view drawing • Graphics card • Homogeneous coordinates • Homography • Map projection (including Cylindrical projection) • Multiview projection • Perspective (graphical) • Plan (drawing) • Technical drawing • Texture mapping • Transform and lighting • Viewing frustum • Virtual globe

Shadow mapping

For a point light source, the view should be a perspective projection as wide as its desired angle of effect (it will be a sort of square spotlight). For directional light (e.g., that from the Sun), an orthographic projection should be used. From this rendering, the depth buffer is extracted and saved. Because only the depth information is relevant, it is common to avoid updating the color buffers and disable all lighting and texture calculations for this rendering, in order to save drawing time. This depth map is often stored as a texture in graphics memory.
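During the second rendering pass, each visible point is transformed into the light's clip space and its depth is compared against the value stored in the map. A minimal Python sketch of that lookup (the function and parameter names are illustrative, not from any particular API; the point is assumed already transformed into the light's clip space, with x, y in [-1, 1] and depth in [0, 1]):

```python
def sample_depth(shadow_map, u, v):
    """Nearest-neighbour fetch from a depth map stored as rows of floats."""
    h = len(shadow_map)
    w = len(shadow_map[0])
    x = min(w - 1, max(0, int(u * w)))
    y = min(h - 1, max(0, int(v * h)))
    return shadow_map[y][x]

def in_shadow(light_space_xyz, shadow_map, bias=1e-3):
    """True if the point lies behind the first occluder seen by the light."""
    x, y, depth = light_space_xyz
    # Map NDC x, y from [-1, 1] to texture coordinates in [0, 1].
    u, v = (x + 1) / 2, (y + 1) / 2
    stored = sample_depth(shadow_map, u, v)
    # The small bias prevents surfaces from shadowing themselves ("acne").
    return depth - bias > stored
```

Real implementations perform this comparison per fragment on the GPU, often with hardware-filtered comparisons rather than a single nearest-neighbour fetch.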

Cube mapping

Another application which found widespread use in video games is projective texture mapping. It relies on cube maps to project images of an environment onto the surrounding scene; for example, a point light source is tied to a cube map which is a panoramic image shot from inside a lantern cage or a window frame through which the light is filtering. This enables a game developer to achieve realistic lighting without having to complicate the scene geometry or resort to expensive real-time shadow volume computations. A cube texture indexes six texture maps from 0 to 5 in order Positive X, Negative X, Positive Y, Negative Y, Positive Z, Negative Z.
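The face-selection rule implied by that ordering can be shown in a few lines of Python (an illustrative sketch, not a real graphics API): the face is chosen by whichever component of the lookup direction has the largest magnitude, and its sign picks the positive or negative face.

```python
def cube_face(direction):
    """Return the cube-map face index (0-5) a direction vector selects.

    Face order follows the text: +X=0, -X=1, +Y=2, -Y=3, +Z=4, -Z=5.
    """
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    # The dominant axis decides the face; the sign decides which of the pair.
    if ax >= ay and ax >= az:
        return 0 if x > 0 else 1
    if ay >= az:
        return 2 if y > 0 else 3
    return 4 if z > 0 else 5
```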

Orthographic projection

An orthographic projection map is a map projection of cartography. Like the stereographic projection and gnomonic projection, orthographic projection is a perspective (or azimuthal) projection, in which the sphere is projected onto a tangent plane or secant plane. The point of perspective for the orthographic projection is at infinite distance. It depicts a hemisphere of the globe as it appears from outer space, where the horizon is a great circle. The shapes and areas are distorted, particularly near the edges. The orthographic projection has been known since antiquity, with its cartographic uses being well documented.
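The projection itself reduces to simple trigonometry. A minimal Python sketch using the standard forward equations for a sphere of radius R, with the tangent point at (lat0, lon0) and angles in radians (clipping of the far hemisphere is left to the caller):

```python
import math

def orthographic(lat, lon, lat0=0.0, lon0=0.0, R=1.0):
    """Forward orthographic map projection:
        x = R cos(lat) sin(lon - lon0)
        y = R (cos(lat0) sin(lat) - sin(lat0) cos(lat) cos(lon - lon0))
    """
    x = R * math.cos(lat) * math.sin(lon - lon0)
    y = R * (math.cos(lat0) * math.sin(lat)
             - math.sin(lat0) * math.cos(lat) * math.cos(lon - lon0))
    return x, y
```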

Parallax occlusion mapping

Parallax occlusion mapping is used to procedurally create 3D definition in textured surfaces, using a displacement map (similar to a topography map) rather than generating new geometry. It lets developers of 3D rendering applications add 3D complexity to textures that changes correctly with perspective and exhibits self-occlusion in real time (self-shadowing is also possible), without the processor cycles required to create the same effect with geometry calculations. Parallax occlusion mapping was first published in 2005 by Zoe Brawley and Natalya Tatarchuk in ShaderX3.
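The core of the technique is a short ray march through the height field in the pixel shader. A much-simplified Python model of that march (real implementations sample a texture and interpolate between the last two steps; all names here are illustrative):

```python
def parallax_march(height_at, u, v, view_dir, scale=0.05, steps=16):
    """March along the view ray through a heightfield to find the
    texture coordinate the eye actually sees (simplified POM).

    height_at(u, v) returns a height in [0, 1]; view_dir is the
    tangent-space view direction (x, y, z) with z > 0.
    """
    vx, vy, vz = view_dir
    # Texture-space offset per step; shallower rays shift uv more.
    du = -vx / vz * scale / steps
    dv = -vy / vz * scale / steps
    layer, step = 1.0, 1.0 / steps
    # Descend layer by layer until the ray dips below the surface.
    while layer > height_at(u, v) and layer > 0.0:
        u += du
        v += dv
        layer -= step
    return u, v
```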

3D rendering

A simple example of shading is texture mapping, which uses an image to specify the diffuse color at each point on a surface, giving it more apparent detail. Transport describes how illumination in a scene gets from one place to another; visibility is a major component of light transport. The shaded three-dimensional objects must be flattened so that the display device, namely a monitor, can display them in only two dimensions; this process is called 3D projection. For most applications it is done with a perspective projection.

Rendering (computer graphics)

Some are integrated into larger modeling and animation packages, some are stand-alone, some are free open-source projects. On the inside, a renderer is a carefully engineered program, based on a selective mixture of disciplines related to: light physics, visual perception, mathematics, and software development. In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in realtime. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for 3D video games which rely on the use of graphics cards with 3D hardware accelerators.

3D computer graphics

Graphics and software: Glossary of computer graphics • Comparison of 3D computer graphics software • Graphics processing unit (GPU) • Graphical output devices • List of 3D computer graphics software • List of 3D modeling software • List of 3D rendering software • Real-time computer graphics • Reflection (computer graphics) • Rendering (computer graphics)
Fields of use: 3D data acquisition and object reconstruction • 3D motion controller • 3D projection on 2D planes • 3D reconstruction • 3D reconstruction from multiple images • Anaglyph 3D • Computer animation • Computer vision • Digital geometry • Digital image processing • Game development tool • Game engine • Geometry pipelines • Geometry

UV mapping

UV mapping is the 3D modelling process of projecting a 2D image to a 3D model's surface for texture mapping. The letters "U" and "V" denote the axes of the 2D texture because "X", "Y" and "Z" are already used to denote the axes of the 3D object in model space. UV texturing permits polygons that make up a 3D object to be painted with color (and other surface attributes) from an ordinary image. The image is called a UV texture map. The UV mapping process involves assigning pixels in the image to surface mappings on the polygon, usually done by "programmatically" copying a triangular piece of the image map and pasting it onto a triangle on the object.
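The pixel lookup behind this can be sketched in a few lines of Python; this toy sampler assumes an image stored as rows of texels with a bottom-left (0, 0) origin, a common but not universal convention:

```python
def sample_uv(texture, u, v):
    """Fetch the texel a (u, v) coordinate in [0, 1] selects from an
    image stored as rows, with v = 0 at the bottom row."""
    h = len(texture)
    w = len(texture[0])
    # Clamp to the valid texel range (nearest-neighbour filtering).
    x = min(w - 1, max(0, int(u * w)))
    y = min(h - 1, max(0, int(v * h)))
    return texture[h - 1 - y][x]
```

Real renderers interpolate UVs across each triangle and usually apply bilinear or trilinear filtering instead of this nearest-neighbour fetch.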

Projective geometry

Properties meaningful for projective geometry are respected by this new idea of transformation, which is more radical in its effects than can be expressed by a transformation matrix and translations (the affine transformations). The first issue for geometers is what kind of geometry is adequate for a novel situation. It is not possible to refer to angles in projective geometry as it is in Euclidean geometry, because angle is an example of a concept not invariant with respect to projective transformations, as is seen in perspective drawing. One source for projective geometry was indeed the theory of perspective.

Sprite (computer graphics)

Modern GPU hardware can mimic sprites with two texture-mapped triangles or specific primitives such as point sprites. Some hardware makers used terms other than sprite. Player/Missile Graphics was a term used by Atari, Inc. for hardware-generated sprites in the company's early coin-op games, the Atari 2600 and 5200 consoles, and the Atari 8-bit computers. The term reflected the usage for both characters ("players") and smaller associated objects ("missiles") that share the same color. Movable Object Block, or MOB, was used in MOS Technology's graphics chip literature (data sheets, etc.).

Graphics processing unit

GPUs were initially used to accelerate the memory-intensive work of texture mapping and rendering polygons, later adding units to accelerate geometric calculations such as the rotation and translation of vertices into different coordinate systems. Recent developments in GPUs include support for programmable shaders which can manipulate vertices and textures with many of the same operations supported by CPUs, oversampling and interpolation techniques to reduce aliasing, and very high-precision color spaces.


Z-buffering

After a perspective transformation, the new value of z, or z', is defined by:

z' = (f + n)/(f − n) + (1/z) · (−2fn/(f − n))

After an orthographic projection, the new value of z' is defined by:

z' = 2 · (z − n)/(f − n) − 1

where z is the old value of z in camera space (sometimes called w or w'), and n and f are the distances to the near and far clipping planes. The resulting values of z' are normalized between the values of -1 and 1, where the near plane is at -1 and the far plane is at 1. Values outside of this range correspond to points which are not in the viewing frustum, and shouldn't be rendered. Typically, these values are stored in the z-buffer of the hardware graphics accelerator in fixed point format.
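Writing n and f for the distances to the near and far clipping planes, the two normalizations can be checked numerically with a short Python sketch (illustrative only; real pipelines perform this inside the projection matrix and perspective divide):

```python
def perspective_depth(z, n, f):
    """Normalized depth after a perspective transform.

    Maps z = n to -1 and z = f to 1; note the nonlinear 1/z term,
    which concentrates depth precision near the camera.
    """
    return (f + n) / (f - n) + (1.0 / z) * (-2.0 * f * n / (f - n))

def orthographic_depth(z, n, f):
    """Normalized depth after an orthographic projection (linear in z)."""
    return 2.0 * (z - n) / (f - n) - 1.0
```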


Shader

Vertex shaders describe the traits (position, texture coordinates, colors, etc.) of a vertex, while pixel shaders describe the traits (color, z-depth and alpha value) of a pixel. A vertex shader is called for each vertex in a primitive (possibly after tessellation); thus one vertex in, one (updated) vertex out. Each vertex is then rendered as a series of pixels onto a surface (block of memory) that will eventually be sent to the screen. Shaders replace a section of the graphics hardware typically called the Fixed Function Pipeline (FFP), so-called because it performs lighting and texture mapping in a hard-coded manner. Shaders provide a programmable alternative to this hard-coded approach.
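The "one vertex in, one (updated) vertex out" contract can be modeled with two toy Python functions; a translation and a tint stand in here for the real matrix transform and lighting math (purely illustrative, not any actual shading language):

```python
def vertex_shader(position, offset):
    """One vertex in, one updated vertex out: a simple translation
    standing in for the usual model-view-projection transform."""
    x, y, z = position
    dx, dy, dz = offset
    return (x + dx, y + dy, z + dz)

def pixel_shader(uv, base_color):
    """One fragment in, one color out: a trivial tint by the u
    coordinate standing in for texture and lighting calculations."""
    u, v = uv
    r, g, b = base_color
    return (r * u, g * u, b * u)
```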

Computer graphics

Texture mapping. Texture mapping is a method for adding detail, surface texture, or colour to a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Dr Edwin Catmull in 1974. A texture map is applied (mapped) to the surface of a shape, or polygon. This process is akin to applying patterned paper to a plain white box. Multitexturing is the use of more than one texture at a time on a polygon.


Lightmap

Texture mapping. Baking (computer graphics).

Computer animation

Developers of computer games and 3D video cards strive to achieve the same visual quality on personal computers in real-time as is possible for CGI films and animation. With the rapid advancement of real-time rendering quality, artists began to use game engines to render non-interactive movies, which led to the art form Machinima. The very first full length computer animated television series was ReBoot, which debuted in September 1994; the series followed the adventures of characters who lived inside a computer. The first feature-length computer animated film was Toy Story (1995), which was made by Pixar. It followed an adventure centered around toys and their owners.


Specularity

It is frequently used in real-time computer graphics and ray tracing, where the mirror-like specular reflection of light from other surfaces is often ignored (due to the more intensive computations required to calculate it), and the specular reflection of light directly from point light sources is modelled as specular highlights. A materials system may allow specularity to vary across a surface, controlled by additional layers of texture maps. Early shaders included a parameter called "Specularity".
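A common way to compute such highlights is the Blinn-Phong specular term. A minimal Python sketch (parameter names are illustrative; in practice ks and the shininess exponent would be read per texel from the specular map described above):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    m = math.sqrt(dot(v, v))
    return tuple(x / m for x in v)

def specular(normal, light_dir, view_dir, shininess=32.0, ks=1.0):
    """Blinn-Phong specular term: ks * max(0, N . H)^shininess,
    where H is the half vector between light and view directions."""
    h = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    return ks * max(0.0, dot(normalize(normal), h)) ** shininess
```

Higher shininess values produce tighter, more mirror-like highlights; lower values spread the highlight over a broader area.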

Normal mapping

normal mapsnormalnormal map
It was later possible to perform normal mapping on high-end SGI workstations using multi-pass rendering and framebuffer operations or on low end PC hardware with some tricks using paletted textures. However, with the advent of shaders in personal computers and game consoles, normal mapping became widely used in commercial video games starting in late 2003. Normal mapping's popularity for real-time rendering is due to its good quality to processing requirements ratio versus other methods of producing similar effects.
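At run time, each texel of the normal map is unpacked from RGB back into a unit vector before it is used in lighting. A short Python sketch of the common 8-bit encoding (illustrative; the mapping explains why flat areas of a tangent-space normal map appear as the familiar light blue, RGB (128, 128, 255)):

```python
def decode_normal(rgb):
    """Unpack a tangent-space normal from an 8-bit RGB texel:
    each channel in [0, 255] maps to a component in [-1, 1]."""
    n = tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)
    # Renormalize to undo quantization error from the 8-bit storage.
    m = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(x / m for x in n)
```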

General Perspective projection

The General Perspective projection is a map projection. When the Earth is photographed from space, the camera records the view as a perspective projection. When the camera is aimed toward the center of the Earth, the resulting projection is called Vertical Perspective. When aimed in other directions, the resulting projection is called a Tilted Perspective. The Vertical Perspective is related to the stereographic projection, gnomonic projection, and orthographic projection. These are all true perspective projections, meaning that they result from viewing the globe from some vantage point.

Geographic information system

This has been enhanced by the availability of low-cost mapping-grade GPS units with decimeter accuracy in real time. This eliminates the need to post process, import, and update the data in the office after fieldwork has been collected. This includes the ability to incorporate positions collected using a laser rangefinder. New technologies also allow users to create maps as well as analysis directly in the field, making projects more efficient and mapping more accurate. Remotely sensed data also play an important role in data collection; such data come from sensors attached to a platform.