Saturday, October 30, 2010

Porting from iPhone to OSX

If you've followed us at all you know by now that I like porting to new platforms, at least new platforms that support C++. Every port seems to make the base engine a bit better, and it brings in a few extra bucks to support continued development. With the news of the Mac app store coming soon I had to jump on porting the Golden Hammer engine and Big Mountain Snowboarding to OSX.

Our engine started on Windows, moved to OSX (carbon), then to iPhone/iPad, then to Android. The OSX port was never really finished because the iPhone took off. Carbon is outdated technology (see below), so for the port back to OSX I started over using Cocoa.

The port isn't fully ready for release, but within a week I was able to get the game pretty much working. This was definitely the easiest port so far.

Carbon vs Cocoa (use Cocoa)

OSX supports two different platform layers, Carbon and Cocoa. Carbon is a C API, only runs in 32-bit mode, and no longer appears to be fully supported. The Carbon implementation in my engine generates a ton of deprecated-code warnings, and the Mac App Store requires that apps not use deprecated technology. I haven't seen an official, specific announcement from Apple on Carbon's future, but to be safe it's better to go with Cocoa.

Also, Cocoa is almost a direct equivalent of Cocoa Touch, the API used for iPhone development. There are a few little differences that I'll note in the next section, but for the most part you can make copies of your iPhone platform layer with different includes, fix the compiler errors, and be ready to go.

Cocoa vs Cocoa Touch

As I said, Cocoa is almost exactly like Cocoa Touch. For most of my classes I was able to just make a copy of the iPhone version, add some different frameworks, rename a couple classes and be good.

The frameworks I'm currently using are:

  • Cocoa.framework
  • OpenGL.framework
  • ApplicationServices.framework
  • OpenAL.framework
  • AudioToolbox.framework
  • AppKit.framework
  • CoreData.framework
  • Foundation.framework

Most UIKit classes have an AppKit equivalent. Instead of UIView, there's NSView. Instead of CGPoint, there's NSPoint. The first line of defense on a compile error is to stick an NS in front of the class name and see if that works.

32 bit vs 64 bit

This probably won't matter to most developers, but Cocoa compiles for 64-bit systems. We do some behind-the-scenes magic with pointers and such, so I had to go through the codebase, replace a bunch of long data types with int32_t and u_int32_t, and remove some of the more questionable pointer code.

OpenGL vs OpenGL ES

For a first pass implementation, you can think of OpenGL as a near direct equivalent to OpenGL ES1. This is a horrible simplification, and I fear a lot of OpenGL users coming at me with pitchforks for saying it. A better way to think of it is ES1 is almost a direct subset of the full OpenGL, and you can get ES1 code running on OpenGL pretty easily. I have not yet ported our ES2 shader code, so I can't comment much on that aspect.

The includes you want are:

  • #include <OpenGL/gl.h>
  • #include <OpenGL/glu.h>

Make a copy of your ES1 implementation, change the includes, and hit compile. You will get a ton of compile errors, but most of them are easy to fix. For any function or variable that ends in "OES", simply delete the OES part. For any function named something like "glFrustumf", delete the trailing "f" (the desktop versions take doubles, so there are no -f variants). This takes care of 99% of the compile errors.

I'm not quite 100% sure, but I don't think PVR4 (PVRTC) texture support is available. If I'm wrong on this, let me know! It would save me some work. Right now I just use uncompressed textures, but DXT support seems to be available.

OSX Input

Modern Macs support multitouch through the trackpad. The gotcha is that the mouse cursor needs to be over the window for the app to receive any touch events. I'm planning to support keyboard input for those without a trackpad, and to make the game fullscreen so we always get the touch events.

You'll want the following functions in your NSView:

- (void)mouseMoved:(NSEvent *)theEvent
- (void)mouseDragged:(NSEvent *)theEvent
- (void)mouseEntered:(NSEvent *)theEvent
- (void)mouseExited:(NSEvent *)theEvent
- (void)mouseDown:(NSEvent *)theEvent
- (void)mouseUp:(NSEvent *)theEvent
- (void)keyDown:(NSEvent *)theEvent
- (void)keyUp:(NSEvent *)theEvent
- (void)touchesBeganWithEvent:(NSEvent *)event
- (void)touchesMovedWithEvent:(NSEvent *)event
- (void)touchesEndedWithEvent:(NSEvent *)event
- (BOOL)acceptsFirstResponder { return YES; }

You'll also want to call these somewhere:

[window setAcceptsMouseMovedEvents:YES];
[view setAcceptsTouchEvents:YES];

Additional platform considerations

The straight port of snowboarding runs at 2ms/frame on my 13" MacBook, or 500fps. This is without making any use of multithreading or doing any real hardcore optimization anywhere. I think any halfway decent straight port of an iPhone app should run at crazy speeds on a low-end Mac.

So is a straight port good enough? I have no idea, and neither will anyone else until the Mac App Store has been out a while. Is it competing with Steam and games like Half-Life 2, or will the audience want smaller, simpler games? Can't tell yet! I'm exporting higher-res maps and aiming for somewhere in the middle.

Tuesday, October 12, 2010

Converting from OpenGL ES1 to ES2 on the iPhone

I recently got through upgrading our engine to support ES2 and GLSL shaders. It took about a week to get the game looking just the same as it did before, but rendering with shaders instead. I'm sharing some info that might be worthwhile to anyone else trying to update their iPhone renderers. This is not a how-to on using GLSL to achieve different effects; you can find plenty of that elsewhere.

ES2 is not an incremental improvement on ES1; it is a total paradigm shift in how pixels get rendered. You can't just take an ES1 renderer and add a couple of shaders here and there like you can in DirectX. In ES2, you write vertex and pixel/fragment shaders in GLSL, and then pass values to the shaders at runtime.

The vertex shader reads in values from VBOs or vertex arrays and outputs whatever values are useful to the pixel shader. Any values created in the vertex shader are interpolated along the triangle edges and raster lines before being passed to the pixel shader. The pixel/fragment shader has only one job: to output a color value.

I've found this site to be a nice reference for GLSL functions, starting at section 8.1:

3GS/iPhone4/iPad or bust:

ES2 is not supported on the 3G, first gen iPhone, or first gen iPod. It's possible to support both ES1 and ES2 in the same codebase, but you will need two entirely different render paths. There's no mixing and matching allowed, so within a run you are either entirely ES1 or entirely ES2.

EAGLContext* eaglContext = 0;
eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if (eaglContext && [EAGLContext setCurrentContext:eaglContext])
{
    // initialize a renderer that uses ES2 imports
}
else
{
    eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
    if (!eaglContext || ![EAGLContext setCurrentContext:eaglContext])
    {
        // total failure!
    }
    else
    {
        // initialize a renderer that uses ES1 imports
    }
}
Loading and assigning shaders:

The shader compiler deals in character buffers. You will need to either create a GLSL stream in code or, more sanely, load up a file containing a shader and pass its contents to the compiler.

int loadShader(GLenum type, const char* glslSourceBuf)
{
    int ret = glCreateShader(type);
    if (ret == 0) return ret;

    glShaderSource(ret, 1, (const GLchar**)&glslSourceBuf, NULL);
    glCompileShader(ret);

    int success;
    glGetShaderiv(ret, GL_COMPILE_STATUS, &success);
    if (success == 0)
    {
        char errorMsg[2048];
        glGetShaderInfoLog(ret, sizeof(errorMsg), NULL, errorMsg);
        outputDebugString("shader error: %s\n", errorMsg);
        glDeleteShader(ret);
        ret = 0;
    }
    return ret;
}


int loadShaderProgram(const char* vertSource, const char* pixelSource)
{
    // load in the two individual shaders
    int vertShader = loadShader(GL_VERTEX_SHADER, vertSource);
    int pixelShader = loadShader(GL_FRAGMENT_SHADER, pixelSource);
    if (vertShader == 0 || pixelShader == 0) return 0;

    // create a "program", which is a vertex/pixel shader pair.
    int ret = glCreateProgram();
    if (ret == 0) return ret;

    glAttachShader(ret, vertShader);
    glAttachShader(ret, pixelShader);

    // assign vertex attributes to the positions used in later
    // glVertexAttribPointer calls
    glBindAttribLocation(ret, AP_POS, "position");
    glBindAttribLocation(ret, AP_NORMAL, "normal");
    glBindAttribLocation(ret, AP_DIFFUSE, "diffuse");
    glBindAttribLocation(ret, AP_SPECULAR, "specular");
    glBindAttribLocation(ret, AP_UV1, "uv1");

    glLinkProgram(ret);
    int linked;
    glGetProgramiv(ret, GL_LINK_STATUS, &linked);
    if (linked == 0)
    {
        outputDebugString("Failed to link shader program.");
        return 0;
    }
    return ret;
}


void drawSomething(void)
{
    // tell opengl which shaders to use for rendering
    glUseProgram(shaderProgramId);

    // set any values on the shader that you want to use.
    // set up the vertex buffer using glVertexAttribPointer
    // calls and the same positions used during the linking.
    // then draw like usual.  note the count is indices, not triangles.
    glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
}


No matrix stack:

All transformations are done in the shader, so anything using glMatrixMode is automatically out. glFrustumf and glOrthof are also gone, so you will need to write replacements. You can find examples of these two functions in the Android codebase.

For the transforms used by shaders, I have callbacks to grab values like ModelToView and ViewToProj from a structure that I calculate once per render pass.

In C++:

int transformShaderHandle = glGetUniformLocation(shaderId, "ModelToScreen");

glUniformMatrix4fv(transformShaderHandle, 1, GL_FALSE, (GLfloat*)mModelViewProj );

In the vertex shader:

uniform mat4 ModelToScreen;

attribute vec4 position;

void main()
{
    gl_Position = ModelToScreen * position;
}


More textures!

You only get two texture channels to use under ES1. ES2 gives you 8. Setting up a texture in ES2 is similar to ES1, but you don't get the various glTexEnvi functions to define how multiple texture channels blend together. You do that part in GLSL instead.

In C++:

int textureShaderHandle = glGetUniformLocation(shaderId, "Texture0");

// tell the shader that Texture0 will be on texture channel 0

glUniform1i(textureShaderHandle, 0);

// then set up the texture like you would in ES1
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, mTextureId);

In the pixel shader:

uniform lowp sampler2D Texture0;
varying mediump vec2 v_uv1; // interpolated uv from the vertex shader

void main()
{
    gl_FragColor = texture2D(Texture0, v_uv1);
}


No such thing as glEnableClientState(GL_NORMAL_ARRAY)

GL_NORMAL_ARRAY, GL_COLOR_ARRAY, etc. have all gone away. Instead you use the unified glVertexAttribPointer interface, plus glEnableVertexAttribArray, to push vertex buffer info to the shaders. This is a pretty simple change.


glEnableVertexAttribArray(AP_NORMAL);
glVertexAttribPointer(AP_NORMAL, 3, GL_FLOAT, GL_FALSE, vertDef.getVertSize(), (GLvoid*)(vertDef.getNormalOffset()*sizeof(float)));