Resources: Tinkerball implementation
Here you can find out a little about the implementation of Mousepickle’s Tinkerball application for iPhone®, iPad®, and iPod touch®.
The underlying implementation of Mousepickle’s Tinkerball application might be of interest to other developers, so some details are given below.
Generally speaking, the application is implemented using traditional object-based methods, then switches to C for performance reasons as we get closer to the physics modelling and OpenGL rendering.
A separate page is provided in case you’d prefer just to learn how to use Tinkerball.
The sphere model is created from a hardcoded icosahedron model, by the following process:
Each face of the model is ‘stellated’ into four smaller faces, with each edge of the original face being bisected to create a fresh vertex.
All resulting vertices are then projected out onto a unit sphere (to put it another way, the vector for each is normalised). The stellation and projection steps are then repeated until a sufficiently smooth model is obtained.
Duplicate vertices are merged, and each face keeps track of its vertices, with every vertex keeping track of which faces it’s part of. The method used to create the model makes it easy to generate these relationships between elements of the model.
The face / vertex structure created earlier is ‘decanted’ into a densely-packed C-flavoured structure, which facilitates delivery of colour, vertex position, and vertex normal data to OpenGL in an efficient interleaved format.
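A minimal sketch of what such an interleaved structure might look like (field names and layout are illustrative, not Tinkerball’s actual definitions; under OpenGL ES 1.1 the integer positions would most plausibly be consumed as GL_FIXED values, with each attribute pointer given the same stride and an offset to its field):

```c
#include <stddef.h>

/* Illustrative densely-packed, interleaved per-vertex record. */
typedef struct {
    int           position[3];  /* large integer coordinates from the physics model */
    float         normal[3];    /* the one floating-point attribute */
    unsigned char colour[4];    /* RGBA */
} PackedVertex;

/* In the app, each attribute pointer would then be set up with the same
   stride, e.g. (hypothetical call, assuming GL_FIXED positions):
   glVertexPointer(3, GL_FIXED, sizeof(PackedVertex),
                   (char *)verts + offsetof(PackedVertex, position)); */
```

Keeping all three attributes adjacent per vertex means a single tightly-packed array can be handed to OpenGL, rather than three separate ones.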
The physics modelling turns out to be relatively straightforward; we make use of some simple approximations. For example:
Bulk rotation of the model is done entirely via OpenGL calls; the sphere modelling per se consists only of moving vertices in and out from the centre, and that we can do with a mixture of simple harmonic motion, elasticity, and drag calculations. Wherever possible, integer arithmetic has been used.
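The per-vertex radial motion might be sketched in 16.16 fixed point like this (constants and names are illustrative, not Tinkerball’s actual tuning): each step, a spring term pulls the displacement back toward the rest radius while a drag term bleeds off velocity.

```c
#include <stdint.h>

/* Radial oscillator for one vertex, in 16.16 fixed point. */
typedef struct {
    int32_t disp;  /* radial displacement from rest radius */
    int32_t vel;   /* radial velocity */
} RadialSpring;

#define SPRING_SHIFT   6   /* spring constant k = 1/64 per step (illustrative) */
#define DRAG_NUM       63  /* drag: velocity *= 63/64 each step (illustrative) */
#define DRAG_DEN_SHIFT 6

static void spring_step(RadialSpring *s) {
    s->vel -= s->disp >> SPRING_SHIFT;                              /* a = -k*x */
    s->vel = (int32_t)(((int64_t)s->vel * DRAG_NUM) >> DRAG_DEN_SHIFT); /* drag */
    s->disp += s->vel;                                              /* integrate */
}
```

Integer shifts and multiplies like these avoid floating-point work entirely in the inner loop, which is the point of the integer-arithmetic approach described above.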
Tinkerball uses OpenGL ES 1.1, mostly in a fairly conventional way (meaning example code provided with Xcode 4 supplied a lot of the basic framework) but with some tweaks:
The model uses large integer values for the vertex positions, directly from the sphere physics model, with no floating-point values used (except for vertex normals).
A second off-screen buffer is used to render a low-resolution duplicate of the currently-drawn sphere, with each facet painted a unique RGB colour that encodes that face’s index within our densely-packed data structure (lighting effects are disabled for that render).
Calls to glReadPixels then retrieve the unique RGB value seen in that off-screen buffer at the points of interest: where the user is touching the screen, and at the points where we need to detect whether a musical note must be played. From those RGB values we can calculate the face index, and we then have ready access to the face properties, vertices, neighbouring face indices, and so on.
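The face-index-as-colour trick can be sketched independently of the GL calls themselves (the 8-bits-per-channel packing shown here is an assumption about the off-screen buffer’s format, and gives 2²⁴ distinct face indices):

```c
#include <stdint.h>

/* Pack a face index into a unique RGB triple for the picking render. */
static void face_to_rgb(uint32_t face, uint8_t rgb[3]) {
    rgb[0] = (face >> 16) & 0xFF;
    rgb[1] = (face >> 8) & 0xFF;
    rgb[2] = face & 0xFF;
}

/* Recover the face index from the bytes glReadPixels hands back. */
static uint32_t rgb_to_face(const uint8_t rgb[3]) {
    return ((uint32_t)rgb[0] << 16) | ((uint32_t)rgb[1] << 8) | rgb[2];
}
```

Because the picking render is unlit, the colour read back is exactly the colour drawn, so the round trip is lossless.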
The underlying iOS audio playback processes ask Tinkerball to populate a succession of small audio buffers, into which we put audio data that’s generated from a handful of wave tables created during application startup.
In addition to a vanilla sine-wave table we compute tables with distortion, and with pre-applied harmonics (for the ‘chime’ sound that you hear in the app).
Each sample in the output buffer is populated with a value computed by examining the wave tables in the appropriate fashion for each of the notes currently playing. Samples from different tables are used in different proportions depending on the currently-selected ‘theme’ for Tinkerball, and the deflection of the sphere surface in or out at each moment.
Amplitude envelopes are created on the fly, with attack and decay rates depending on the theme, pitch of each note, and the rate at which the sphere is spinning. We also bend the pitch of each note slightly as it plays, on some themes.
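An on-the-fly envelope of that kind can be as simple as the following sketch (linear ramps; the actual attack and decay rates, chosen per theme, pitch, and spin rate, are computed elsewhere, and the shapes here are illustrative):

```c
/* Per-note amplitude envelope, advanced once per sample. */
typedef struct {
    float level;        /* current amplitude, 0..1 */
    float attack_rate;  /* per-sample rise while the note sounds */
    float decay_rate;   /* per-sample fall after release */
    int   released;
} Envelope;

static float env_next(Envelope *e) {
    if (!e->released) {
        e->level += e->attack_rate;
        if (e->level > 1.0f) e->level = 1.0f;
    } else {
        e->level -= e->decay_rate;
        if (e->level < 0.0f) e->level = 0.0f;
    }
    return e->level;
}
```

Multiplying each rendered sample by env_next() shapes the note without any precomputed envelope tables.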
The audio buffers that we populate expect stereo data. Strictly speaking the Tinkerball code produces mono audio, but we delay the signal to one ear by one buffer’s worth, giving the resulting sound a feeling of weight and depth that it would otherwise lack, courtesy of the Haas effect.
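That one-buffer delay amounts to keeping the previous buffer around and interleaving it as the other channel; a sketch (buffer length and channel assignment are illustrative):

```c
#include <string.h>

#define FRAMES 256

static float prev_buf[FRAMES];  /* mono samples from the previous callback */

/* Interleave a mono buffer into stereo output: one ear hears the
   current buffer, the other hears the previous one (the Haas effect). */
static void interleave_haas(const float *mono, float *stereo_out) {
    for (int i = 0; i < FRAMES; i++) {
        stereo_out[2 * i]     = mono[i];      /* left: current buffer */
        stereo_out[2 * i + 1] = prev_buf[i];  /* right: one buffer behind */
    }
    memcpy(prev_buf, mono, sizeof prev_buf);
}
```

At typical buffer sizes the delay is a few milliseconds, short enough that the ear fuses the two channels into a single, wider-sounding source rather than an echo.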
The transition from uploaded to ‘in review’ status took a little over a week; that time might have been reduced if the 4th of July hadn’t come along! The review itself was relatively brief, presumably reflecting the self-contained nature of the application.