
Per-pixel lighting

In computer graphics, per-pixel lighting refers to any technique for lighting an image or scene that calculates illumination for each pixel on a rendered image. This is in contrast to other popular methods of lighting such as vertex lighting, which calculates illumination at each vertex of a 3D model and then interpolates the resulting values over the model's faces to compute the final per-pixel color values.
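To make the contrast concrete, the following is a minimal sketch in C of the two approaches, with all names illustrative rather than taken from any particular engine: Gouraud (per-vertex) shading evaluates the lighting equation at the vertices and interpolates the resulting colors, while per-pixel lighting interpolates the normals and evaluates the lighting equation at every pixel.

#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static vec3 normalize3(vec3 v) {
    float len = sqrtf(dot3(v, v));
    vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* Lambertian diffuse term: clamped cosine between normal and light direction. */
static float lambert(vec3 n, vec3 l) {
    float d = dot3(normalize3(n), normalize3(l));
    return d > 0.0f ? d : 0.0f;
}

/* Per-vertex (Gouraud) path: lighting was already evaluated at the three
 * vertices (i0, i1, i2); the rasterizer merely blends those colors using
 * the pixel's barycentric weights (b0, b1, b2). */
static float gouraud_pixel(float i0, float i1, float i2,
                           float b0, float b1, float b2) {
    return b0 * i0 + b1 * i1 + b2 * i2;
}

/* Per-pixel path: interpolate the *normal* instead, then evaluate the
 * lighting equation once for this pixel. */
static float perpixel_pixel(vec3 n0, vec3 n1, vec3 n2,
                            float b0, float b1, float b2, vec3 l) {
    vec3 n = { b0 * n0.x + b1 * n1.x + b2 * n2.x,
               b0 * n0.y + b1 * n1.y + b2 * n2.y,
               b0 * n0.z + b1 * n1.z + b2 * n2.z };
    return lambert(n, l);
}

Because the lighting equation is non-linear in the normal, the two paths diverge most visibly on coarse meshes and sharp highlights, which is why per-pixel evaluation looks noticeably more detailed.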

Per-pixel lighting is commonly used with techniques such as normal mapping, bump mapping, specularity, and shadow volumes. Each of these techniques provides some additional data about the surface being lit, or about the scene and its light sources, that contributes to the final look and feel of the surface.
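As an example of how one of these terms is evaluated per pixel, here is a minimal sketch of a Blinn-Phong specular term in C; the function and variable names are illustrative assumptions, with n, l, and v standing for the unit surface normal, light direction, and view direction at the pixel being shaded.

#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static vec3 normalize3(vec3 v) {
    float len = sqrtf(dot3(v, v));
    vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* Blinn-Phong specular term: reflectance peaks when the half vector
 * between the light and view directions lines up with the normal.
 * Higher shininess gives a tighter, sharper highlight. */
float blinn_phong_specular(vec3 n, vec3 l, vec3 v, float shininess) {
    vec3 h = normalize3((vec3){ l.x + v.x, l.y + v.y, l.z + v.z });
    float ndoth = dot3(n, h);
    return ndoth > 0.0f ? powf(ndoth, shininess) : 0.0f;
}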

Most modern video game engines implement lighting using per-pixel techniques instead of vertex lighting to achieve increased detail and realism. The id Tech 4 engine, used to develop such games as Brink and Doom 3, was one of the first game engines to implement a completely per-pixel shading engine. All versions of the CryENGINE, Frostbite Engine, and Unreal Engine, among others, also implement per-pixel shading techniques.

Deferred shading is a recent development in per-pixel lighting, notable for its use in the Frostbite Engine and Battlefield 3. Deferred shading techniques are capable of rendering potentially large numbers of small lights inexpensively (other per-pixel lighting approaches require full-screen calculations for each light in a scene, regardless of its size).

History

While personal computers and video hardware have only recently become powerful enough to perform full per-pixel shading in real-time applications such as games, many of the core concepts used in per-pixel lighting models have existed for decades.

Frank Crow published a paper describing the theory of shadow volumes in 1977.[1] This technique uses the stencil buffer to mark the areas of the screen corresponding to surfaces that lie inside a "shadow volume", a shape representing the region of space eclipsed from a light source by some occluding object. These shadowed areas are typically shaded after the scene is rendered to buffers, using the shadowed regions recorded in the stencil buffer.
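A minimal sketch of how the stencil buffer is typically driven for this follows, showing the simple depth-pass variant of the algorithm in C and assuming an OpenGL context is already current; draw_scene() and draw_shadow_volume() are hypothetical placeholders for the application's own rendering code.

#include <GL/gl.h>

void draw_scene(int lit);        /* hypothetical: draws the scene geometry */
void draw_shadow_volume(void);   /* hypothetical: draws the extruded volume */

void render_with_stencil_shadows(void) {
    /* 1. Ambient-only pass fills the depth buffer. */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    draw_scene(0);

    /* 2. Rasterize the shadow volume into the stencil buffer only:
     *    no color or depth writes, stencil test always passes. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    glEnable(GL_CULL_FACE);

    glCullFace(GL_BACK);                      /* front faces: increment */
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    draw_shadow_volume();

    glCullFace(GL_FRONT);                     /* back faces: decrement */
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
    draw_shadow_volume();

    /* 3. Relight only where the count is zero, i.e. outside every volume. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glStencilFunc(GL_EQUAL, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glDepthFunc(GL_LEQUAL);                   /* redraw over existing depth */
    draw_scene(1);

    glDisable(GL_STENCIL_TEST);
}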

Jim Blinn first introduced the idea of normal mapping in a 1978 SIGGRAPH paper.[2] Blinn pointed out that the earlier idea of unlit texture mapping proposed by Edwin Catmull was unrealistic for simulating rough surfaces. Instead of mapping a texture onto an object to simulate roughness, Blinn proposed a method of calculating the degree of lighting a point on a surface should receive based on an established "perturbation" of the normals across the surface.
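The following is a minimal sketch of that perturbation idea in C, assuming a grayscale height map sampled through a hypothetical height(u, v) helper: the gradient of the height field tilts the flat tangent-space normal, and the tilted normal is then fed into the per-pixel lighting equation.

#include <math.h>

typedef struct { float x, y, z; } vec3;

float height(float u, float v);   /* hypothetical height-map lookup in [0, 1] */

/* Approximate the height gradient with central differences, then bend the
 * unperturbed tangent-space normal (0, 0, 1) against that gradient. The
 * strength factor scales how pronounced the bumps appear. */
vec3 perturbed_normal(float u, float v, float du, float dv, float strength) {
    float gu = (height(u + du, v) - height(u - du, v)) / (2.0f * du);
    float gv = (height(u, v + dv) - height(u, v - dv)) / (2.0f * dv);

    vec3 n = { -strength * gu, -strength * gv, 1.0f };
    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    n.x /= len; n.y /= len; n.z /= len;
    return n;   /* use in place of the interpolated surface normal */
}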

Implementations

Hardware Rendering

Real-time applications, such as computer games, usually implement per-pixel lighting through the use of pixel shaders, allowing the GPU hardware to process the effect. The scene to be rendered is first rasterized onto a number of buffers storing different types of data to be used in rendering the scene, such as depth, normal direction, and diffuse color. Then the data is passed into a shader and used to compute the final appearance of the scene, pixel by pixel.

Deferred shading is a per-pixel shading technique that has recently become feasible for games.[3] With deferred shading, a "g-buffer" is used to store all the terms needed to shade the final scene at the pixel level. The format of this data varies from application to application depending on the desired effect, and can include normal data, positional data, specular data, diffuse data, emissive maps, and albedo, among others. Using multiple render targets, all of this data can be rendered to the g-buffer in a single pass, and a shader can calculate the final color of each pixel based on the data from the g-buffer in a final "deferred pass".

Software Rendering

Per-pixel lighting is also performed in software on many high-end commercial rendering applications, which typically do not render at interactive frame rates. This is called offline rendering or software rendering. NVIDIA's mental ray rendering software, which is integrated with suites such as Autodesk's Softimage, is a well-known example.
