
Week 13: Displaying Data With OpenGL ES 2.0


Original post: http://durchblickapp.wordpress.com/2010/05/


May 30, 2010

  • I used iPhone SDK’s UIKit to create face detection regions.
  • At this week’s meeting we decided to have a look at OpenGL and how it works on the iPhone, to cover this aspect and gain some knowledge in this area.
  • Once again: OpenGL isn’t an easy task.
  • OpenGL ES is a subset of OpenGL, so everything (with a few exceptions) known from the desktop applies to the version for embedded devices.
  • There is no GLU or GLUT so no convenience functions like gluLookAt or glutSolidSphere. Everything has to be done by hand.
  • No immediate mode (glBegin … glEnd), no display lists, no quads, …
  • Current paradigm change from OpenGL ES 1.x to OpenGL ES 2.x
  • Shader effects now possible.
  • OpenGL ES: Defines only rendering and state management commands
  • Platform API: Creates and manages OpenGL ES rendering contexts. Examples: AGL, CGL, EGL, GLX (and, important for us, EAGL).
  • Native window system API: Manages surfaces and displays rendered content. Examples: Quartz, X-Windows.
  • GLUT, which was taught to us in the course, did a lot of this for us on the desktop.
  • With an EAGLContext we render into a CAEAGLLayer, which inherits from CALayer and is encapsulated in a UIView. Every UIView is backed by a CALayer (frame buffer > CALayer > UIView).
  • I wrote software to load vertices from an OFF file and convert them into a vertex array to render an icosahedron. Then I dove a little into animating it. It was a bit tricky to get blending working with the rest of the iPhone (the OpenGL object “floating” on the camera preview). Performance is critical on the device.
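The post doesn’t show its OFF loader, so here is a minimal sketch of that step under my own assumptions (function name, error handling, and the string-based input are illustrative): parse the OFF header and vertex block into the flat float array that `glVertexAttribPointer` expects.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse the header and vertex block of an OFF model from a text buffer
 * into a flat float array (x0,y0,z0, x1,y1,z1, ...), the tightly packed
 * layout used for a vertex-position attribute in OpenGL ES 2.0.
 * Returns the vertex count, or -1 on a malformed input.
 * The caller owns and frees *out. */
static int off_vertices(const char *text, float **out)
{
    int nverts, nfaces, nedges;

    /* Skip the "OFF" magic line, then read the counts line. */
    const char *p = strchr(text, '\n');
    if (!p || sscanf(++p, "%d %d %d", &nverts, &nfaces, &nedges) != 3)
        return -1;
    if (nverts <= 0)
        return -1;
    p = strchr(p, '\n');
    if (!p)
        return -1;
    ++p;

    float *v = malloc(3 * (size_t)nverts * sizeof *v);
    for (int i = 0; i < nverts; i++) {
        int used = 0;
        /* %f skips leading whitespace/newlines; %n records chars consumed. */
        if (sscanf(p, "%f %f %f%n", &v[3*i], &v[3*i+1], &v[3*i+2], &used) != 3) {
            free(v);
            return -1;
        }
        p += used;
    }
    *out = v;
    return nverts;
}
```

The resulting array can be handed straight to `glVertexAttribPointer(attrib, 3, GL_FLOAT, GL_FALSE, 0, v)`.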

You’ll find everything from this week in next week’s article: Week 13: Displaying Data With OpenGL ES 2.0.

  • A difficult task in any case, even more so on mobile.
  • Way more complicated than face detection with OpenCV.
  • A lot of engineering work could go into managing resources to get better results while staying responsive for the user.
  • Preprocess image further to help the detection and recognition (equalize, …).
  • Reduce scaling dynamically in the background, accepting longer processing time in order to detect small faces further away.
  • Once a detection is successful, lower the detection frequency and track the face, which is less computationally intensive. Then use more resources for recognition.
  • Track several faces from frame to frame.
  • Seeing With OpenCV, an excellent series covering OpenCV face detection and recognition.
  • OpenCVWiki: Face Recognition using OpenCV
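The detect-then-track idea above can be reduced to a small per-frame scheduling decision. This is a hypothetical sketch (the type, function names, and the re-detection interval are my assumptions, not from the post): run the expensive Haar detector until a face is found, then fall back to cheap tracking and re-run full detection only every Nth frame.

```c
#include <stdbool.h>

/* State for deciding, per frame, whether to run full (expensive)
 * detection or cheap tracking. */
typedef struct {
    bool face_locked;    /* a face is currently being tracked        */
    int  frame;          /* frames since the last full detection     */
    int  redetect_every; /* e.g. 15 = twice per second at 30 fps     */
} DetectSchedule;

/* Returns true if this frame should run full Haar detection,
 * false if cheap tracking is enough. */
static bool should_detect(DetectSchedule *s)
{
    if (!s->face_locked)
        return true;                      /* nothing locked: detect every frame */
    if (++s->frame >= s->redetect_every) {
        s->frame = 0;
        return true;                      /* periodic re-detection */
    }
    return false;                         /* between re-detections: just track */
}

/* Call after each full detection attempt with its result. */
static void report_detection(DetectSchedule *s, bool found)
{
    s->face_locked = found;
    s->frame = 0;
}
```

The saved cycles between re-detections are exactly what the bullet above proposes to spend on recognition instead.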

You’ll find everything from this week in next week’s article: Week 11: Face Recognition (Continued).

When I started face detection with OpenCV on the live camera preview, I soon ran into performance issues.

Analyzing with Activity Monitor showed that 90% of the main CPU (according to AnandTech, a Samsung S5PC100 ARM Cortex-A8 at 833 MHz, underclocked to 600 MHz) is not enough for face detection. CPU Sampler also revealed that only 20% of the CPU time is spent in cvRunHaarClassifierCascade, the actual face detection; the rest is consumed by everything around it.

Still, face detection took about 3 seconds, while the rest of the processing loop took only a few milliseconds. There were also a lot of false positives.

Optimizing the parameters gave better results: about 2 seconds for face detection, even in complex environments, with fewer false positives.

CvSeq *faces = cvHaarDetectObjects(iplImage,
                                   cascade,
                                   storage,
                                   1.1, // double scale_factor CV_DEFAULT(1.1)
                                   2, // int min_neighbors CV_DEFAULT(3)
                                   CV_HAAR_DO_CANNY_PRUNING, // int flags CV_DEFAULT(0)
                                   cvSize(30, 30)); // CvSize min_size CV_DEFAULT(cvSize(0,0))

I got a big improvement by scaling the image down by a factor of 2 and smoothing it with a 5×5 Gaussian kernel: 0.3 seconds. The downside: a face must be approximately the size of a thumb on the screen to be detected.
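In OpenCV’s C API, cvPyrDown performs this smooth-and-halve step in one call. As a dependency-free illustration of the downscale itself, here is a minimal sketch (the function name and in-memory image layout are my own assumptions) that halves a grayscale image by averaging each 2×2 block:

```c
#include <stdlib.h>

/* Halve a grayscale image (row-major, one byte per pixel) by averaging
 * each 2x2 block. w and h must be even. The caller frees the result.
 * In the actual app this role was played by OpenCV; this is only an
 * illustrative stand-in. */
static unsigned char *downsample2x(const unsigned char *src, int w, int h)
{
    int ow = w / 2, oh = h / 2;
    unsigned char *dst = malloc((size_t)ow * oh);
    for (int y = 0; y < oh; y++)
        for (int x = 0; x < ow; x++) {
            int sum = src[(2*y)   * w + 2*x] + src[(2*y)   * w + 2*x + 1]
                    + src[(2*y+1) * w + 2*x] + src[(2*y+1) * w + 2*x + 1];
            dst[y * ow + x] = (unsigned char)(sum / 4);
        }
    return dst;
}
```

The detector then runs on a quarter of the pixels, which is where most of the 3 s → 0.3 s win comes from; the price is that faces must be larger on screen to survive the min_size threshold.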

If you are like me, you like to draw moustaches on people’s faces; no need for yet another frame around a detected face.

 

Lena with live face detection and display of a moustache

 

The moustache flickered away because it prevented face detection in the following frame.

But here the problem with the way I access the camera image, and with occlusion, manifested itself: the moustache got displayed, then removed because no face was recognized, and then, with the face uncluttered again, detected and displayed once more.

For the final tests I used a dashed rounded rectangle with light and dark grey in it, for good contrast against any background. I placed it around the face so that it does not occlude the face.

 

Detected face with a bounding rectangle.

I can detect multiple faces in one frame, but it’s difficult to smoothly match the rectangles from the previous frame to the new ones. For now I constrain the display to one face and do a nice fade-in and move animation.
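One simple way to approach the matching problem just mentioned, sketched here under my own assumptions (the type and function are hypothetical, not from the post): associate each newly detected rectangle with the previous-frame rectangle whose centre is nearest, so the overlay can animate toward the new position instead of jumping.

```c
/* A detected face region, in pixels. */
typedef struct { int x, y, w, h; } FaceRect;

/* Returns the index of the previous-frame rectangle whose centre is
 * closest to cur's centre and within max_dist pixels, or -1 if none
 * qualifies (treat that as a newly appeared face). Squared distances
 * avoid a sqrt per candidate. */
static int match_prev(FaceRect cur, const FaceRect *prev, int nprev,
                      double max_dist)
{
    double cx = cur.x + cur.w / 2.0, cy = cur.y + cur.h / 2.0;
    int best = -1;
    double best_d2 = max_dist * max_dist;
    for (int i = 0; i < nprev; i++) {
        double dx = cx - (prev[i].x + prev[i].w / 2.0);
        double dy = cy - (prev[i].y + prev[i].h / 2.0);
        double d2 = dx * dx + dy * dy;
        if (d2 < best_d2) { best_d2 = d2; best = i; }
    }
    return best;
}
```

Greedy nearest-centre matching can mis-pair faces that cross, but for one or two faces per frame it is cheap and usually good enough to drive a smooth move animation.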

It’s at the very beginning but quite impressive.

A C example, “Face Detection using OpenCV” in the wiki, helped me use this OpenCV function.
