
OpenGL Study Notes

January 9, 2014

1、Frame buffer: generally the block of memory that holds the image being rendered. Rendering can target the display screen, a file, a single frame of an AVI, or a texture.

The frame buffer is the memory of the graphics display device, which means the image is displayed on your screen.

OpenGL does not render (draw) these primitives directly on the screen. Instead, rendering is done in a buffer, which is later swapped to the screen. We refer to these two buffers as the front (the screen) and back color buffers. By default, OpenGL commands are rendered into the back buffer, and when you call glutSwapBuffers (or your operating system–specific buffer swap function), the front and back buffers are swapped so that you can see the rendering results. You can, however, render directly into the front buffer if you want (glDrawBuffer).

When you do single-buffered rendering, it is important to call either glFlush or glFinish whenever you want to see the results actually drawn to screen. A buffer swap implicitly performs a flush of the pipeline and waits for rendering to complete before the swap actually occurs.

2、In 3D, the horizontal and vertical scale factors must be kept in proportion, or the image will be distorted. In general, this ratio should match the aspect ratio of the window in which the image is displayed.
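As a minimal sketch of keeping the clipping region in proportion with the window, the helper below (a hypothetical name, not an OpenGL call) fixes the horizontal extent and derives the vertical extent from the window's pixel dimensions:

```c
#include <math.h>

/* Hypothetical helper: given a window size in pixels, compute the
 * vertical half-extent of a clipping region whose horizontal
 * half-extent is fixed at half_width, so the region's aspect ratio
 * matches the window's and the image is not stretched. */
double clip_half_height(int win_w, int win_h, double half_width)
{
    if (win_w == 0) return half_width;   /* avoid division by zero */
    return half_width * (double)win_h / (double)win_w;
}
```

In a GLUT reshape callback, the result would typically be passed to glOrtho along with the fixed horizontal extent.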

3、object space = modeling space = local space  ------------  model transformation

         camera space = eye space

         clip space = canonical view volume space  ------------  clip transformation

4、Two principal tasks are required to create an image of a three-dimensional scene: modeling and rendering. The modeling task generates a model, which is the description of an object that is going to be used by the graphics system. Models must be created
for every object in a scene; they should accurately capture the geometric shape and appearance of the object. Some or all of this task commonly occurs when the application is being developed, by creating and storing model descriptions as part of the application’s
data. 

The second task, rendering, takes models as input and generates pixel values for the final image. OpenGL is principally concerned with object rendering; it does not provide explicit support for creating object models. The model input data is left to the application
to provide. The OpenGL architecture is focused primarily on rendering polygonal models; it doesn’t directly support more complex object descriptions, such as implicit surfaces. 

To provide accurate rendering of a model’s appearance or surface shading, the modeler may also have to determine color values, shading normals, and texture coordinates for the model’s vertices and faces.

5、To smoothly shade an object, a given vertex normal should be used by all polygons that share that vertex. Ideally, this vertex normal is the same as the surface normal at the corresponding point on the original surface. However, if the true surface normal
isn’t available, the simplest way to approximate one is to add all (normalized) normals from the common facets then renormalize the result (Gouraud, 1971). This provides reasonable results for surfaces that are fairly smooth, but does not look good for surfaces
with sharp edges.

6、Since the polygon winding may be used to cull back or front-facing triangles, for performance reasons it is important that models are made consistent; a polygon wound inconsistently with its neighbors should have its vertex order reversed. A good way to
accomplish this is to find all common edges and verify that neighboring polygon edges are drawn in the opposite order.

To ensure that the rewound model is oriented properly (i.e., all polygons are wound so that their front faces are on the outside surface of the object), the algorithm begins by choosing and properly orienting the seed polygon. One way to do this is to find
the geometric center of the object: compute the object’s bounding box, then compute its mid-point. Next, select a vertex that is the maximum distance from the center point and compute a (normalized) out vector from the center point to this vertex. One of the
polygons using that vertex is chosen as the seed. Compute the normal of the seed polygon, then compute the dot product of the normal with the out vector. A positive result indicates that the seed is oriented correctly. A negative result indicates the polygon’s
normal is facing inward. If the seed polygon is backward, reverse its winding before using it to rewind the rest of the model.
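The seed-orientation test reduces to a single dot product; a minimal sketch (helper names are illustrative):

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 sub3(Vec3 a, Vec3 b)
{
    Vec3 r = {a.x - b.x, a.y - b.y, a.z - b.z};
    return r;
}

/* Form the "out" vector from the object's center to a far vertex and
 * compare it with the seed polygon's normal.  Returns 1 if the seed
 * faces outward (positive dot product), 0 if its winding must be
 * reversed before rewinding the rest of the model. */
int seed_faces_outward(Vec3 center, Vec3 far_vertex, Vec3 seed_normal)
{
    Vec3 out = sub3(far_vertex, center);
    return dot3(out, seed_normal) > 0.0;
}
```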

7、vertex buffer objects

In OpenGL 1.5, vertex buffer objects were added to the specification to enable the same server placement optimizations that are used with display lists. Vertex buffer objects allow the application to allocate vertex data storage that is managed by the OpenGL implementation and can be allocated from accelerator memory. The application can store vertex data to the buffer using an explicit transfer command (glBufferData), or by mapping the buffer (glMapBuffer). The vertex buffer data can also be examined by the application, allowing dynamic modification of the data, though it may be slower if the buffer storage is in accelerator memory. Having dynamic read-write access allows geometric data to be modified each frame, without requiring the application to maintain a separate copy of the data or explicitly copy it to and from the accelerator.

Vertex buffer objects are used with the vertex array drawing commands by binding a vertex buffer object to the appropriate array binding point (vertex, color, normal, texture coordinate) using the array pointer commands (for example, glNormalPointer). When an array has a bound buffer object, the array pointer is interpreted relative to the buffer object storage rather than application memory addresses.

8、The OpenGL transformation pipeline can be thought of as a series of cartesian coordinate spaces connected by transformations that can be directly set by the application. Five spaces are used: object space, which starts with the application’s coordinates,
eye space, where the scene is assembled, clip space, which defines the geometry that will be visible in the scene, NDC space, the canonical space resulting from perspective division, and window space, which maps to the framebuffer’s pixel locations.
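The step from clip space to NDC space is just the perspective division; a minimal sketch of that arithmetic (types are illustrative):

```c
typedef struct { double x, y, z, w; } Vec4;
typedef struct { double x, y, z; } Vec3;

/* Perspective division: divide clip-space x, y, z by w to obtain
 * normalized device coordinates.  After a perspective projection,
 * w carries the depth-dependent scale that creates foreshortening. */
Vec3 clip_to_ndc(Vec4 clip)
{
    Vec3 ndc = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
    return ndc;
}
```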

The pipeline begins with texture, vertex, and light position coordinates, along with normal vectors, sent down from the application. These untransformed values are said to be in object space. If the application has enabled the generation of object space
texture coordinates, they are created here from untransformed vertex positions.

The modelview matrix is typically used to assemble a series of objects into a coherent scene viewed from a particular vantage.

An important use of the modelview matrix is modifying the parameters of OpenGL light sources. When a light position is issued using the glLight() command, the position or direction of the light is transformed by the current modelview matrix before being
stored. The transformed position is used in the lighting computations until it’s updated with a new call to glLight().

The eye space coordinate system is where object lighting is applied and eye-space texture coordinate generation occurs. OpenGL makes certain assumptions about eye space. The viewer position is defined to be at the origin of the eye-space coordinate system.
The direction of view is assumed to be the negative z-axis, and the viewer’s up position is the y-axis.

Normals are consumed by the pipeline in eye space. If lighting is enabled, they are used by the lighting equation—along with eye position and light positions—to modify the current vertex color. The projection transform transforms the remaining vertex and
texture coordinates into clip space. If the projection transform has perspective elements in it, the w values of the transformed vertices are modified.

If new vertices are generated as a result of clipping, the new vertices will have texture coordinates and colors interpolated to match the new vertex positions. The exact shape of the view volume depends on the type of projection transform; a perspective transformation results in a frustum (a pyramid with the tip cut off), while an orthographic projection will create a parallelepiped volume.

9. Render: Rendering is the act of taking a geometric description of a three-dimensional object and turning it into an image of that object onscreen. 

10. Perspective: Perspective refers to the angles between lines that lend the illusion of three dimensions.

11. You expect the front of an object to obscure the back of the object from view. For solid surfaces, we call this hidden surface removal.

12. This technique of applying an image to a polygon to supply additional detail is called texture mapping. The image you supply is called a texture, and the individual elements of the texture
are called texels. Finally, the process of stretching or compressing the texels over the surface of an object is called filtering.
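One common filter is bilinear interpolation (what GL_LINEAR does when magnifying); a minimal single-channel sketch, with illustrative names:

```c
/* Sample a w-by-h single-channel texture at fractional texel
 * coordinates (u, v) by weighting the four surrounding texels
 * (bilinear filtering).  Coordinates are clamped at the edges. */
double bilinear_sample(const double *tex, int w, int h, double u, double v)
{
    int x0 = (int)u, y0 = (int)v;
    int x1 = (x0 + 1 < w) ? x0 + 1 : x0;
    int y1 = (y0 + 1 < h) ? y0 + 1 : y0;
    double fx = u - x0, fy = v - y0;

    /* Blend horizontally on the top and bottom rows, then vertically. */
    double top = tex[y0*w + x0] * (1.0 - fx) + tex[y0*w + x1] * fx;
    double bot = tex[y1*w + x0] * (1.0 - fx) + tex[y1*w + x1] * fx;
    return top * (1.0 - fy) + bot * fy;
}
```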

13. Blending is the combination of colors or objects on the screen. By varying the amount each object is blended with the scene, you can make objects look transparent such that you see the object and what is behind it (such as glass or a ghost image).

14. By carefully blending the lines with the background color, you can eliminate the jagged edges and give the lines a smooth appearance. This blending technique is called antialiasing. Put simply, the pixel colors along an edge and on both sides of it are blended, and the blended pixels replace the originals, softening the object's outline and eliminating the jagged look.
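Both antialiasing and transparency rest on the same per-channel "over" blend, which OpenGL exposes as the GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA factor pair; a minimal sketch:

```c
/* Blend one color channel of a source fragment over a destination
 * pixel: result = src * alpha + dst * (1 - alpha).  alpha = 1 is
 * fully opaque, alpha = 0 leaves the destination untouched. */
double blend_channel(double src, double dst, double alpha)
{
    return src * alpha + dst * (1.0 - alpha);
}
```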

15. With both immediate mode and retained mode, new commands have no effect on rendering commands that have already been executed.

16. A window is measured physically in terms of pixels. Before you can start plotting points, lines, and shapes in a window, you must tell OpenGL how to translate specified coordinate pairs into screen coordinates. You do this by specifying the region of Cartesian space that occupies the window; this region is known as the clipping region.

17. Viewports: Mapping Drawing Coordinates to Window Coordinates. Rarely will your clipping area width and height exactly match the width and height of the window in pixels. The coordinate system must therefore be mapped from logical Cartesian coordinates to physical screen pixel coordinates. This mapping is specified by a setting known as the viewport. The viewport is the region within the window’s client area that is used for drawing the clipping area. The viewport simply maps the clipping area to a region of the window. Usually, the viewport is defined as the entire window, but this is not strictly necessary; for instance, you might want to draw only in the lower half of the window.
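The viewport mapping itself is simple arithmetic: normalized device coordinates in [-1, 1] are scaled and offset into the pixel rectangle that glViewport defines. A minimal sketch:

```c
/* Map an NDC x coordinate in [-1, 1] to a window pixel coordinate
 * for a viewport starting at pixel vx with width vw pixels. */
double ndc_to_window_x(double ndc_x, int vx, int vw)
{
    return (ndc_x + 1.0) * 0.5 * (double)vw + (double)vx;
}

/* Same mapping for the y axis, for a viewport at vy with height vh. */
double ndc_to_window_y(double ndc_y, int vy, int vh)
{
    return (ndc_y + 1.0) * 0.5 * (double)vh + (double)vy;
}
```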

18. The Vertex—A Position in Space. A vertex is nothing more than a coordinate in 2D or 3D space. 

19.  Primitives are one- or two-dimensional entities or surfaces such as points, lines, and polygons (a flat, multisided shape) that are assembled in 3D space to create 3D objects. 

20. OpenGL is a procedural rather than a descriptive graphics API. Instead of describing the scene and how it should appear, the programmer actually prescribes the steps necessary to achieve a certain appearance or effect.
These “steps” involve calls to the many OpenGL commands. These commands are used to draw graphics primitives such as points, lines, and polygons in three dimensions. In addition, OpenGL supports lighting and shading, texture mapping, blending, transparency,
animation, and many other special effects and capabilities.

21. OpenGL data types

  • GLenum: An unsigned integer used for GL enumerations, typically to tell OpenGL the type of the data stored in an array passed by pointer (for example, GL_FLOAT indicates an array of GLfloat).
  • GLboolean: Used for single boolean values. OpenGL ES also defines its own true and false values (GL_TRUE and GL_FALSE) to avoid platform and language differences. Use these rather than YES or NO when passing booleans to OpenGL (although, given how they are defined, it makes no practical difference if you accidentally use YES or NO; using the GL-defined values is still a good habit).
  • GLbitfield: A four-byte integer used to pack up to 32 boolean values into a single variable manipulated with bitwise operations.
  • GLbyte: A signed one-byte integer, holding values from -128 to 127.
  • GLshort: A signed two-byte integer, holding values from -32,768 to 32,767.
  • GLint: A signed four-byte integer, holding values from -2,147,483,648 to 2,147,483,647.
  • GLsizei: A signed four-byte integer used to represent the size of data in bytes, similar to size_t in C.
  • GLubyte: An unsigned one-byte integer, holding values from 0 to 255.
  • GLushort: An unsigned two-byte integer, holding values from 0 to 65,535.
  • GLuint: An unsigned four-byte integer, holding values from 0 to 4,294,967,295.
  • GLfloat: A four-byte IEEE 754-1985 floating-point number.
  • GLclampf: Also a four-byte floating-point number, but OpenGL uses GLclampf specifically for values clamped to the range 0.0 to 1.0.
  • GLvoid: Used to indicate that a function returns no value or takes no parameters.
  • GLfixed: A fixed-point type that stores real numbers in an integer. Because most processors handle integer arithmetic much faster than floating point, this is a common optimization in 3D systems.
  • GLclampx: Another fixed-point type, used to represent real numbers between 0.0 and 1.0 with fixed-point arithmetic.
22. OpenGL uses floats internally, and using anything other than the single-precision floating-point functions adds a performance bottleneck because the values are converted to floats anyhow before being processed by OpenGL.

23. GLUT_SINGLE: A single-buffered window means that all drawing commands are performed on the window displayed. An alternative is a double-buffered window, where the drawing commands are actually executed on an offscreen buffer and then quickly swapped into view on the window. This method is often used to produce animation effects.

Double buffering can serve two purposes. The first is that some complex drawings might take a long time to draw, and you might not want each step of the image composition to be visible. Using double buffering, you can compose an image and display it only after it is complete. The user never sees a partial image; only after the entire image is ready is it shown onscreen. A second use for double buffering is animation. Each frame is drawn in the offscreen buffer and then swapped quickly to the screen when ready. The GLUT library supports double-buffered windows.

24. The alpha component is used for blending and special effects such as transparency. Transparency refers to an object’s capability to allow light to pass through it. Suppose you would like to create a piece of red stained glass, and a blue light happens to be shining behind it. The blue light affects the appearance of the red in the glass (blue + red = purple). You can use the alpha component value to generate a red color that is semitransparent so that it works like a sheet of glass; an object behind it shows through.

25. A buffer is a storage area for image information. The red, green, and blue components of a drawing are usually collectively referred to as the color buffer or pixel buffer. More than one kind of buffer (color, depth, stencil, and accumulation) is available in OpenGL. You will also see the term framebuffer, which refers to all these buffers collectively since they work in tandem.

26. The aspect ratio is the ratio of the number of pixels along a unit of length in the horizontal direction to the number of pixels along the same unit of length in the vertical direction. In English, this just means the width of the window divided by the height.

27. Rotation: To rotate an object about one of the three coordinate axes, or indeed any arbitrary vector, you have to devise a rotation matrix. Again, a high-level function comes to the rescue: glRotatef(GLfloat angle, GLfloat x, GLfloat y, GLfloat z); Here, we perform a rotation around the vector specified by the x, y, and z arguments. The angle of rotation is in the counterclockwise direction, measured in degrees, and specified by the argument angle.
