Android Native Coding in C

When I read questions like "How to do code sharing between Android and iOS" on stackoverflow.com, and the answers are variations on the theme "the NDK is not for creating cross-platform apps", it makes me sad. The NDK is an excellent way to write cross-platform games. Here is a little insight into the approach I've taken with my so-far unreleased Chaos port.


To code a game in C on Android you first have to write a Java Activity with a View. This can be either a regular View or one that is OpenGL-ified; this explanation uses the GLSurfaceView. Then you use the Java Native Interface (JNI) to call from Java into your C code, which you compile using the Android Native Development Kit (NDK). The remaining problem is then: how can I draw pixels?

You have 2 options (3 if you are willing to target Android 2.2+): drawing pixels to a Canvas, drawing to an OpenGL ES texture, or drawing directly to the pixel buffer of a Bitmap. This last option is similar to the first, but is faster and only available on Android 2.2 "Froyo" and later.
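
For completeness, here is roughly what that third option looks like. This is a minimal sketch, not code from the Chaos port: it assumes a Bitmap created in Java with the RGB_565 config and passed to a hypothetical native_draw_bitmap method, and it uses the NDK's jnigraphics library (link with -ljnigraphics).

#include <jni.h>
#include <stdint.h>
#include <android/bitmap.h>

/* Hypothetical native method: fill a Java Bitmap (RGB_565 config) with one colour. */
void JNICALL native_draw_bitmap(JNIEnv *env, jclass clazz, jobject bitmap)
{
       AndroidBitmapInfo info;
       uint16_t *pixels;
       unsigned x, y;
       if (AndroidBitmap_getInfo(env, bitmap, &info) < 0)
               return;
       if (info.format != ANDROID_BITMAP_FORMAT_RGB_565)
               return;
       if (AndroidBitmap_lockPixels(env, bitmap, (void **) &pixels) < 0)
               return;
       for (y = 0; y < info.height; y++) {
               /* stride is in bytes and may be larger than width * 2 */
               uint16_t *line = (uint16_t *) ((char *) pixels + y * info.stride);
               for (x = 0; x < info.width; x++)
                       line[x] = 0x07e0; /* pure green in RGB565 */
       }
       AndroidBitmap_unlockPixels(env, bitmap);
}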

Assuming you want to draw a screen that is smaller than the Android native screen size and scale it up, the OpenGL version is the fastest and most compatible of the 3 choices. Using OpenGL from C code is actually cleaner than from Java, as you do not need to prefix every GL call with gl. (gl.glActiveTexture for example, where gl is an instance of javax.microedition.khronos.opengles.GL10). You also don't have to worry about the arrays you pass to GL functions being on the Java heap rather than being the required native arrays, which means you don't have to deal with all the ByteBuffer calls that clog up the Java OpenGL examples.

You will need at least 3 native functions to draw in OpenGL: the "main loop" that runs your game's code, the "screen resized" callback, and the "screen rendered" callback.

The main loop can be the classic while (1) { update_state(); wait_vsync();}. The screen resized function is called when the Android device is rotated or otherwise needs a new screen setting up. The screen rendered function is called once per frame.

The main loop and the render functions both accept no arguments. The screen resized or set up code accepts a width and height argument. The Java native declarations for these calls will look like this:

private static native void native_start();
private static native void native_gl_resize(int w, int h);
private static native void native_gl_render();

static {
       System.loadLibrary("mybuffer");
}

Now you have to write these in C and somehow register them with the Dalvik VM (or "Dalek VM" as I often misread it. Exterminate!). Dalvik uses the same approach to binding native methods as the Java VM does: it opens a native library with dlopen() and looks for the symbol JNI_OnLoad and for functions with "mangled" names that match the native declarations. The library loaded here will be "libmybuffer.so". You can either implement your functions with the mangled names or register them with a call to RegisterNatives in the JNI_OnLoad function. There are many JNI tutorials on the net, so I won't rewrite how to do that here. Whichever way you choose, you still need to have the native declarations in your Java source code. My examples use RegisterNatives, as it gives cleaner C function names, as in the sketch below.
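
For reference, a JNI_OnLoad using RegisterNatives might look something like this sketch. The class name "com/example/chaos/GlBufferView" is an assumption; it has to match whichever Java class actually declares the native methods.

#include <jni.h>

void JNICALL native_start(JNIEnv *env, jclass clazz);
void JNICALL native_gl_resize(JNIEnv *env, jclass clazz, jint w, jint h);
void JNICALL native_gl_render(JNIEnv *env, jclass clazz);

/* Java method name, Java signature, C function pointer */
static const JNINativeMethod s_methods[] = {
       {"native_start",     "()V",   (void *) native_start},
       {"native_gl_resize", "(II)V", (void *) native_gl_resize},
       {"native_gl_render", "()V",   (void *) native_gl_render},
};

jint JNI_OnLoad(JavaVM *vm, void *reserved)
{
       JNIEnv *env;
       if ((*vm)->GetEnv(vm, (void **) &env, JNI_VERSION_1_4) != JNI_OK)
               return -1;
       /* the class name here is an assumption - use your own view class */
       jclass clazz = (*env)->FindClass(env, "com/example/chaos/GlBufferView");
       if (clazz == NULL)
               return -1;
       if ((*env)->RegisterNatives(env, clazz, s_methods,
                       sizeof(s_methods) / sizeof(s_methods[0])) < 0)
               return -1;
       return JNI_VERSION_1_4;
}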

In the constructor of your Java GLSurfaceView, you should call the main loop (in C) from a separate thread - this way the loop does not block the main thread of your Android application and the Android OS won't kill it for being non-responsive. It is important
that the main C code does not do OpenGL manipulation as that can also crash the application. All GL manipulation is done in the renderer call. The main-loop thread can change whatever C state it likes; the render call later reads this state to create the final
rendered screen.

public GlBufferView(Context context, AttributeSet attrs) {
       super(context, attrs);
       (new Thread() {
               @Override
               public void run() {
                       native_start();
               }
       }).start();
       setRenderer(new MyRenderer());
}

The implementation of your GLSurfaceView.Renderer class simply delegates to the native functions and should look like this:

class MyRenderer implements GLSurfaceView.Renderer {
       @Override
       public void onSurfaceCreated(GL10 gl, EGLConfig c) { /* do nothing */ }

       @Override
       public void onSurfaceChanged(GL10 gl, int w, int h) {
               native_gl_resize(w, h);
       }

       @Override
       public void onDrawFrame(GL10 gl) {
               native_gl_render();
       }
}

The onSurfaceCreated method is not used; onSurfaceChanged is what the OpenGL implementation really uses to indicate that the screen should be set up properly. The onDrawFrame method is called once per frame, at somewhere between 30 and 60 FPS (if you're lucky).

Now you can forget about Java (until you need to handle input, but that's another story) and write the rest of your game in C. The native_gl_resize function should grab a texture and set up the simplest rendering scenario it can. Experimentation has shown that this is not too shabby:

#include <jni.h>
#include <GLES/gl.h>
#include <GLES/glext.h>  /* for GL_TEXTURE_CROP_RECT_OES and glDrawTexiOES */

#define TEXTURE_WIDTH  512
#define TEXTURE_HEIGHT 256
#define MY_SCREEN_WIDTH  272
#define MY_SCREEN_HEIGHT 208

static int s_w;
static int s_h;
static GLuint s_texture;

void JNICALL native_gl_resize(JNIEnv *env, jclass clazz, jint w, jint h)
{
       glEnable(GL_TEXTURE_2D);
       glGenTextures(1, &s_texture);
       glBindTexture(GL_TEXTURE_2D, s_texture);
       glTexParameterf(GL_TEXTURE_2D,
                       GL_TEXTURE_MIN_FILTER, GL_LINEAR);
       glTexParameterf(GL_TEXTURE_2D,
                       GL_TEXTURE_MAG_FILTER, GL_LINEAR);
       glShadeModel(GL_FLAT);
       glColor4x(0x10000, 0x10000, 0x10000, 0x10000);
       GLint rect[4] = {0, MY_SCREEN_HEIGHT, MY_SCREEN_WIDTH, -MY_SCREEN_HEIGHT};
       glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, rect);
       glTexImage2D(GL_TEXTURE_2D,             /* target */
                       0,                      /* level */
                       GL_RGB,                 /* internal format */
                       TEXTURE_WIDTH,          /* width */
                       TEXTURE_HEIGHT,         /* height */
                       0,                      /* border */
                       GL_RGB,                 /* format */
                       GL_UNSIGNED_SHORT_5_6_5,/* type */
                       NULL);                  /* pixels */
       /* store the actual width of the screen */
       s_w = w;
       s_h = h;
}

You can also call glDisable to turn off fog, depth testing, and other 3D features, but it doesn't seem to make too much difference. The glEnable(GL_TEXTURE_2D) call enables texturing; you need this as you'll be drawing your pixels into a texture. glGenTextures and glBindTexture get a handle to a texture and set it as the currently used one. The two glTexParameterf calls are needed to make the texture actually show up on hardware. Cargo cult coding here: without these the texture is just a white square. Similarly, the glShadeModel and glColor4x calls are needed to have any chance of your texture showing up, either on hardware or on the emulator. Presumably if the screen has no colour it is not drawn at all.

The rect[4] array and the associated glTexParameteriv call crop the texture to the rectangle given. The MY_SCREEN_XX values depend on your "emulated" screen size, but should be smaller than the texture. The TEXTURE_XXX sizes should be powers of 2 (256, 512, 1024) to work on hardware; anything else may work on the emulator, but will fail miserably on the real thing. The rectangle is inverted here to get the final texture to show the right way round. The call to glTexImage2D allocates the texture memory in video RAM; passing NULL means nothing is copied there yet. The native Android pixel type is RGB565, which means 5 bits of red, 6 of green and 5 of blue. That is tantalisingly close to the Nintendo DS or GBA pixel format: just 1 bit different! Using this colour type speeds up the frame rate from less than 30 FPS to a more respectable 50-60 FPS.
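
If your game's internal graphics are 8 bits per channel, packing them down to RGB565 is a one-liner. A sketch (the helper name is mine, not from the original source):

#include <stdint.h>

/* Pack 8-bit-per-channel red, green and blue into a 16-bit RGB565 pixel:
   the top 5 bits of red, the top 6 bits of green and the top 5 bits of blue. */
static inline uint16_t rgb565(uint8_t r, uint8_t g, uint8_t b)
{
       return (uint16_t) (((r & 0xf8) << 8) | ((g & 0xfc) << 3) | (b >> 3));
}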

Now for the render code. This uses the glDrawTexiOES function call, an OpenGL ES extension that renders a texture straight to the screen. It is the fastest way to do things, as there is no real 3D going on; it just draws your texture straight to the screen.

void JNICALL native_gl_render(JNIEnv *env UNUSED, jclass clazz UNUSED)
{
       memset(s_pixels, 0, S_PIXELS_SIZE);
       render_pixels(s_pixels);
       glClear(GL_COLOR_BUFFER_BIT);
       glTexSubImage2D(GL_TEXTURE_2D,          /* target */
                       0,                      /* level */
                       0,                      /* xoffset */
                       0,                      /* yoffset */
                       MY_SCREEN_WIDTH,        /* width */
                       MY_SCREEN_HEIGHT,       /* height */
                       GL_RGB,                 /* format */
                       GL_UNSIGNED_SHORT_5_6_5, /* type */
                       s_pixels);              /* pixels */
       glDrawTexiOES(0, 0, 0, s_w, s_h);
       /* tell the other thread to carry on */
       pthread_cond_signal(&s_vsync_cond);
}

The memset clears out old pixel values. If you were careful and kept track of dirty areas, only refreshing those, this could be omitted. I'm keeping things simple here though and clearing the screen each time. The render_pixels routine
does whatever it takes to draw your game's pixels into the s_pixels array in the RGB565 format. The glClear call is not strictly necessary, but it may help to speed up the pipeline as the hardware knows not to worry about keeping
any old values. Experimentation shows that leaving it in doesn't harm the framerate at least. The glTexSubImage2D call will copy the s_pixels data into video memory, only updating the area indicated rather than the whole thing. If
you do update the whole texture, it is actually faster to call glTexImage2D. Finally, glDrawTexiOES will draw the texture to the screen, scaled to the screen size.
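
The render code above refers to s_pixels, S_PIXELS_SIZE and the vsync variables, which are declared elsewhere in the real source. A plausible set of declarations, assuming a static RGB565 buffer sized to the emulated screen, would be:

#include <pthread.h>
#include <stdint.h>

/* RGB565 back buffer for the emulated screen, 2 bytes per pixel */
static uint16_t s_pixels[MY_SCREEN_WIDTH * MY_SCREEN_HEIGHT];
#define S_PIXELS_SIZE (sizeof(s_pixels))

/* used to wake the game's main loop up after each rendered frame */
static pthread_mutex_t s_vsync_mutex;
static pthread_cond_t s_vsync_cond;

The mutex and condition are initialised in native_start, shown below.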

The final pthread_cond_signal is there to tell our vsync call to wake up. I haven't mentioned this yet, but in order to have a GBA- or DS-like coding experience, it is vital to wait on the screen refresh. The implementation of this is simple, as Android lets you play with all the usual pthread calls from the world of Linux. You create a mutex and condition at the start of the main loop, and have the implementation of wait_vsync lock the mutex and wait for a signal on the pthread condition.

#define UNUSED  __attribute__((unused))

static void wait_vsync()
{
       pthread_mutex_lock(&s_vsync_mutex);
       pthread_cond_wait(&s_vsync_cond, &s_vsync_mutex);
       pthread_mutex_unlock(&s_vsync_mutex);
}

void JNICALL native_start(JNIEnv *env UNUSED, jclass clazz UNUSED)
{
       /* init conditions */
       pthread_cond_init(&s_vsync_cond, NULL);
       pthread_mutex_init(&s_vsync_mutex, NULL);

       while (1) {
               /* game code goes here */
               wait_vsync();
       }
}

That ensures the main loop can wait on screen redraws, which avoids tearing.

Obviously if you want to write code from scratch that will only ever run on Android, there is no point jumping through these hoops. Just use Java and forget about the NDK. However, if you want to port existing code to Android, or you don't want to write new
code that is tied to a single platform, this approach is IMO the best way to go about it. It makes Android an almost decent platform for writing old-skool games on :-)

I've made a compilable example of this code available on GitHub here.

Reposted from: http://quirkygba.blogspot.com/2010/10/android-native-coding-in-c.html
