{{ :logo_cinder_191x60_1_.png?nolink |}}
======Interactive Bitmaps======
=====Introduction=====
In this tutorial you will build on what you learned in the first two tutorials, creating a pair of on-screen images that can be dragged, rotated, and scaled with multi-finger gestures. You will learn how to register touch objects with GestureWorks, register to receive gesture events, and use those events to manipulate objects on screen. For this tutorial you will need the [[http://gestureworks.com/core|GestureWorks Core]] multitouch framework; a [[http://files.gestureworks.com/downloads/Core/Trial/GestureworksCoreTrialSetup.exe|free trial]] is available.
Download the code for all of the **C++ & Cinder** multitouch tutorials here: {{:tutorials:cpp_cinder:tutorials_cpp_cinder.zip|}}
{{ :tutorials:cpp_cinder:cpp_3.1_386x261.png?nolink |}}
----
=====Requirements=====
Estimated time to complete: **1-2 hours**
All requirements found in [[tutorials:cpp_cinder:getting_started_1_hello_world|C++ & Cinder: Getting Started I (Hello World)]] as well as completion of [[tutorials:cpp_cinder:getting_started_2_hello_multitouch|C++ & Cinder: Getting Started II (Hello Multitouch)]].
----
=====Process Overview=====
  - [[tutorials:cpp_cinder:interactive_bitmaps#setup|Setup]]
    * Create application class and main function
    * TouchObject class and constructors
    * Loading a bitmap
    * Registering touch objects
  - [[tutorials:cpp_cinder:interactive_bitmaps#the_update_function|The update function]]
    * Handling point events
    * Receiving gesture events
    * Performing object transformations
  - [[tutorials:cpp_cinder:interactive_bitmaps#the_draw_function|The draw function]]
    * Transforming coordinates
    * Drawing the bitmap
  - [[tutorials:cpp_cinder:interactive_bitmaps#notes_on_helper_functions|Notes on helper functions]]
----
=====Process Detail=====
====1. Setup====
If you’ve completed [[tutorials:cpp_cinder:getting_started_1_hello_world|C++ & Cinder: Getting Started I (Hello World)]] and [[tutorials:cpp_cinder:getting_started_2_hello_multitouch|C++ & Cinder: Getting Started II (Hello Multitouch)]], you should be familiar with setting up a project to use GestureWorks and Cinder. If you have any difficulty in doing this, please refer to the previous tutorials for details.
Create a new project called InteractiveBitmaps. Be sure to import the GWCUtils and GestureWorks Core source and header files, as well as the Cinder library.
As before, the main class will be named after the project with an App suffix: InteractiveBitmapsApp.cpp.
First we’ll need the following include statements and namespace declarations (note the addition of GestureWorks.h and cinder/ImageIo.h):
#include "cinder/app/AppNative.h"
#include "cinder/gl/gl.h"
#include "cinder/gl/Texture.h"
using namespace ci;
using namespace ci::app;
#include "InteractiveBitmapsApp.h"
#include "GestureWorks.h"
#include "cinder/ImageIo.h"
Next, create a TouchObject class that will be used to store the data required for object transformations. There are many ways this could be done; the TouchObject approach was chosen for simplicity.
// TouchObject class to store object transformations
class TouchObject {
public:
    float x;
    float y;
    float width;
    float height;
    float rotation;
    float scale;

    TouchObject();
    TouchObject(float x, float y, float width, float height, float rotation, float scale);
};

// Default to a scale of 1 so a default-constructed object draws at full size
TouchObject::TouchObject() : x(0), y(0), width(0), height(0), rotation(0), scale(1) {}

TouchObject::TouchObject(float x, float y, float width, float height, float rotation, float scale)
    : x(x), y(y), width(width), height(height), rotation(rotation), scale(scale) {}
The next bit of code you’ll need to write is the InteractiveBitmapsApp class. First, create variables of the TouchObject class (one for each image to be drawn) and a gl::Texture variable that will hold the loaded bitmap image. Declare the usual Cinder functions: prepareSettings, setup, update, and draw.
To assist in the transformations, we’ll create some helper functions that encapsulate a little bit of math. Create some function prototypes that will be defined later.
The first two functions, radsToDegrees and degreesToRads, convert between radians and degrees. These are needed because our internal math works in radians, while the rotation values exchanged with GestureWorks and Cinder are expressed in degrees (a minimal sketch of these appears after the class listing below).
The function test_point serves as a simple hit test function, which will return true if a touch point is in contact with one of our images.
The functions rotateAboutCenterX and rotateAboutCenterY calculate coordinates that represent a rotation about the center of the object.
The last two functions, getDrawingCoordX and getDrawingCoordY, calculate the coordinates at which to draw the object, based on its current scale, rotation, and dimensions.
// Cinder and GestureWorks functions
class InteractiveBitmapsApp : public AppNative {
public:
    TouchObject logo1_dimensions;
    TouchObject logo2_dimensions;
    gl::Texture logo;
    int screen_width;
    int screen_height;
    bool use_pixels;

    std::pair<float, float> normScreenToWindowPx(float screen_x, float screen_y);
    void prepareSettings(Settings *settings);
    void setup();
    void update();
    void draw();

private:
    // helper functions
    float radsToDegrees(float rad);
    float degreesToRads(float deg);
    bool test_point(float point_x, float point_y, float box_x, float box_y, float box_width, float box_height, float box_angle, float box_scale);
    float rotateAboutCenterX(float point_x, float point_y, float center_x, float center_y, float ref_angle);
    float rotateAboutCenterY(float point_x, float point_y, float center_x, float center_y, float ref_angle);
    float getDrawingCoordX(float width, float height, float box_x, float box_y, float box_angle, float box_scale);
    float getDrawingCoordY(float width, float height, float box_x, float box_y, float box_angle, float box_scale);
};
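Two of these helpers, radsToDegrees and degreesToRads, are simple unit conversions. For reference, a minimal sketch might look like the following (the definitions in the tutorial source are authoritative):

// Convert between the radians used internally and the degrees used by Cinder's gl::rotate
const float PI = 3.14159265f;

float InteractiveBitmapsApp::radsToDegrees(float rad) {
    return rad * 180.0f / PI;
}

float InteractiveBitmapsApp::degreesToRads(float deg) {
    return deg * PI / 180.0f;
}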
Place an image named gw_logo.png in the assets folder created by TinderBox.exe and load it with Cinder’s loadAsset and loadImage functions. Constructing a gl::Texture from the image places it into memory on the graphics card. We’ll use the stored texture as our gesture-manipulable object.
We’ll use the logo texture to draw two images on screen without creating a second instance of the texture. We’ll track each logo’s properties and transformations through the TouchObject class, conveniently stored in the logo1_dimensions and logo2_dimensions variables. The image is 200 x 200 px, and these dimensions are hard-coded into the TouchObject constructor calls.
Continue defining the setup function by initializing GestureWorks, registering our objects for touch, and adding gestures to them.
The next four code blocks should be familiar from the last tutorial. We call LoadGestureWorks, then LoadGML, and InitializeGestureWorks to bring GestureWorks into the program and ready it for processing. Finally, we call RegisterWindowForTouchByName. Remember that you’ll need to load the DLL file that corresponds to your target architecture (32- or 64-bit).
The remainder of the function is new GestureWorks code required to receive gesture events. RegisterTouchObject tells the core that there is a touch object to track and provides a string identifier for it. The string is arbitrary and will be used as a reference to apply transformations to whichever on-screen object we have decided to pair with it. Since we have two objects to manipulate, we call it twice:
void InteractiveBitmapsApp::setup(){
    logo = gl::Texture(loadImage(loadAsset("gw_logo.png")));

    int center_x = getWindowWidth() / 2;
    int center_y = getWindowHeight() / 2;
    logo1_dimensions = TouchObject(center_x - 200, center_y, 200, 200, 0, 1);
    logo2_dimensions = TouchObject(center_x + 200, center_y, 200, 200, 0, 1);

    if (GestureWorks::Instance()->LoadGestureWorks(L"GestureworksCore32.dll")) {
        console() << "Error loading gestureworks dll" << std::endl;
    }

    if (!GestureWorks::Instance()->LoadGML(L"basic_manipulation.gml")) {
        console() << "Could not find gml file" << std::endl;
    }

    GestureWorks::Instance()->InitializeGestureWorks(0, 0);

    if (!GestureWorks::Instance()->RegisterWindowForTouchByName(L"Hello Multitouch!")) {
        console() << "Could not register target window for touch." << std::endl;
    }

    use_pixels = true;
    GestureWorks::Instance()->SetUsePixels(use_pixels);

    GestureWorks::Instance()->RegisterTouchObject(L"logo1");
    GestureWorks::Instance()->RegisterTouchObject(L"logo2");

    GestureWorks::Instance()->AddGesture(L"logo1", L"n-drag");
    GestureWorks::Instance()->AddGesture(L"logo1", L"n-rotate");
    GestureWorks::Instance()->AddGesture(L"logo1", L"n-scale");
    GestureWorks::Instance()->AddGesture(L"logo2", L"n-drag");
    GestureWorks::Instance()->AddGesture(L"logo2", L"n-rotate");
    GestureWorks::Instance()->AddGesture(L"logo2", L"n-scale");
}
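Note that prepareSettings is declared but not defined in the code above. As in the previous tutorial, it sets the window title that RegisterWindowForTouchByName looks up. A minimal sketch (the window size here is an arbitrary choice) might look like:

void InteractiveBitmapsApp::prepareSettings(Settings *settings){
    // The title must match the name passed to RegisterWindowForTouchByName
    settings->setTitle("Hello Multitouch!");
    settings->setWindowSize(1280, 720);
}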
We add gestures to these objects by the names logo1 and logo2. AddGesture enables processing of the named gesture for the given string identifier.
The calls above add the n-drag, n-rotate, and n-scale gestures to both logo1 and logo2, so that we receive drag, rotate, and scale events for each object. The n- prefix in their ids denotes that these gestures can be performed with any number of fingers. The gesture ids are resolved directly from the loaded GML file and can be defined however you wish, as long as the id in your code matches the id in the GML.
We are now configured and ready to start receiving events, which we’ll handle in the update function; how the bitmaps get drawn is covered in the section after that.
====2. The update function====
In our previous tutorial we were only concerned with point events. This time around, we not only need to retrieve and deal with point events, but we also need to perform a “hit test” to assign relevant points to our touch objects, and then receive and handle subsequent gesture events as well.
We'll sync the GestureWorks processing rate with Cinder’s update cycle by calling GestureWorks’ ProcessFrame function:
void InteractiveBitmapsApp::update(){
    GestureWorks::Instance()->ProcessFrame();
Point event retrieval is done just as before, with a vector of point events used to store the results from calling ConsumePointEvents.
    std::vector<gwc::PointEvent> point_events = GestureWorks::Instance()->ConsumePointEvents();
Loop through the vector of PointEvents looking for TOUCHADDED events. Use a “hit test” to decide, for each new point, whether it falls within the boundaries of one of our touch objects. To do this, we call the custom function test_point with the new point’s position as well as the box’s dimensions and orientation:
    //Hit-test new points to see if they struck one of the logo objects
    for (std::vector<gwc::PointEvent>::iterator event_it = point_events.begin(); event_it != point_events.end(); event_it++) {
        std::pair<float, float> pos(normScreenToWindowPx(event_it->position.x, event_it->position.y));
        if (event_it->status == gwc::TOUCHADDED) {
            //All new touchpoints must go through hit testing to see if they apply to
            //our bitmap manipulation; since logo1 is always on top, we check it first
            if (test_point(pos.first, pos.second,
                    logo1_dimensions.x, logo1_dimensions.y,
                    logo1_dimensions.width, logo1_dimensions.height,
                    logo1_dimensions.rotation, logo1_dimensions.scale)) {
                GestureWorks::Instance()->AddTouchPoint(L"logo1", event_it->point_id);
            }
            else if (test_point(pos.first, pos.second,
                    logo2_dimensions.x, logo2_dimensions.y,
                    logo2_dimensions.width, logo2_dimensions.height,
                    logo2_dimensions.rotation, logo2_dimensions.scale)) {
                GestureWorks::Instance()->AddTouchPoint(L"logo2", event_it->point_id);
            }
        }
    }
If the hit test shows the point is within logo1’s area, we tell GestureWorks to assign the touch point to that touch object. If that test fails, we move on to test logo2. Because we only assign touch points to our touch objects rather than tracking them ourselves, we don’t need to act on TOUCHUPDATE and TOUCHREMOVED events, so they are simply discarded.
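The test_point function is defined with the other helpers at the end of the source file. Conceptually, it un-rotates the touch point about the box’s center (using the rotateAboutCenter helpers described later) and then performs an ordinary axis-aligned bounds check against the scaled dimensions. A minimal sketch, assuming x and y store the box’s center and <cmath> is included:

bool InteractiveBitmapsApp::test_point(float point_x, float point_y, float box_x, float box_y,
        float box_width, float box_height, float box_angle, float box_scale) {
    // Rotate the point backwards about the box center so the box can be
    // treated as axis-aligned, then compare against the scaled half-dimensions
    float local_x = rotateAboutCenterX(point_x, point_y, box_x, box_y, -box_angle);
    float local_y = rotateAboutCenterY(point_x, point_y, box_x, box_y, -box_angle);
    float half_w = box_width * box_scale / 2.0f;
    float half_h = box_height * box_scale / 2.0f;
    return (fabs(local_x - box_x) <= half_w) && (fabs(local_y - box_y) <= half_h);
}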
Continuing in the update function, we now interpret the GML-defined gesture events. We grab and store the gesture events just as we did the point events, using the ConsumeGestureEvents function.
    std::vector<gwc::GestureEvent> gesture_events = GestureWorks::Instance()->ConsumeGestureEvents();
The largest block in this function follows ConsumeGestureEvents, which gives us back a std::vector of GestureEvent objects. Each of these objects will contain general data, as well as data specific to its event type, which we can use to update the objects based on the event type received.
Looping through the vector of GestureEvent objects, the first order of business for each event is to determine its target object. First, check to see if the current hit target is logo1. Then, parse the gesture events based on gesture ids. Finally, apply the transformations to our TouchObject instance, logo1_dimensions.
For the most part, relevant gesture event data is stored in the values member of the event object. This member is a map of strings to floats. The string used to address any value is based on that attribute’s name in the GML. See [[http://www.gestureml.org|GestureML.org]] for how these are laid out. For our example, these attribute names can be viewed in the basic_manipulation.gml file we loaded previously.
It is also important to note that these values are generally expressed as deltas; that is, they represent differentials between the current state and the previous state; e.g. drag_dx and drag_dy each represent the change in position between this frame and the last.
    for (std::vector<gwc::GestureEvent>::iterator gesture_it = gesture_events.begin(); gesture_it != gesture_events.end(); gesture_it++) {
        if (gesture_it->target == "logo1") {
            if (gesture_it->gesture_id == "n-drag") {
                float dx = gesture_it->values.at("drag_dx");
                float dy = gesture_it->values.at("drag_dy");
                if (!use_pixels) {
                    dx = dx * screen_width;
                    dy = dy * screen_height;
                }
                float new_x = logo1_dimensions.x + dx;
                float new_y = logo1_dimensions.y + dy;
                logo1_dimensions.x = new_x;
                logo1_dimensions.y = new_y;
            }
            else if (gesture_it->gesture_id == "n-rotate") {
                //Rotation is about a specific point, so we need to do a coordinate transform
                //and adjust not only the object's rotation, but its x and y values as well
                float rotation_angle = degreesToRads(gesture_it->values.at("rotate_dtheta"));
                //If we have points down, move the box; if there are no points, this is from
                //gesture inertia and there is no center about which to rotate
                if (gesture_it->n != 0) {
                    float temp_x = rotateAboutCenterX(logo1_dimensions.x, logo1_dimensions.y, gesture_it->x, gesture_it->y, rotation_angle);
                    float temp_y = rotateAboutCenterY(logo1_dimensions.x, logo1_dimensions.y, gesture_it->x, gesture_it->y, rotation_angle);
                    logo1_dimensions.x = temp_x;
                    logo1_dimensions.y = temp_y;
                }
                logo1_dimensions.rotation = logo1_dimensions.rotation + rotation_angle;
            }
            else if (gesture_it->gesture_id == "n-scale") {
                float dsx = gesture_it->values.at("scale_dsx");
                if (!use_pixels) {
                    dsx = dsx * screen_width;
                }
                logo1_dimensions.scale = logo1_dimensions.scale + dsx;
            }
        }
Repeat the algorithm above for logo2. The entire update function should now look like this:
void InteractiveBitmapsApp::update(){
    GestureWorks::Instance()->ProcessFrame();

    std::vector<gwc::PointEvent> point_events = GestureWorks::Instance()->ConsumePointEvents();

    //Hit-test new points to see if they struck one of the logo objects
    for (std::vector<gwc::PointEvent>::iterator event_it = point_events.begin(); event_it != point_events.end(); event_it++) {
        std::pair<float, float> pos(normScreenToWindowPx(event_it->position.x, event_it->position.y));
        if (event_it->status == gwc::TOUCHADDED) {
            //All new touchpoints must go through hit testing to see if they apply to
            //our bitmap manipulation; since logo1 is always on top, we check it first
            if (test_point(pos.first, pos.second,
                    logo1_dimensions.x, logo1_dimensions.y,
                    logo1_dimensions.width, logo1_dimensions.height,
                    logo1_dimensions.rotation, logo1_dimensions.scale)) {
                GestureWorks::Instance()->AddTouchPoint(L"logo1", event_it->point_id);
            }
            else if (test_point(pos.first, pos.second,
                    logo2_dimensions.x, logo2_dimensions.y,
                    logo2_dimensions.width, logo2_dimensions.height,
                    logo2_dimensions.rotation, logo2_dimensions.scale)) {
                GestureWorks::Instance()->AddTouchPoint(L"logo2", event_it->point_id);
            }
        }
    }

    //Interpret gesture events
    std::vector<gwc::GestureEvent> gesture_events = GestureWorks::Instance()->ConsumeGestureEvents();
    for (std::vector<gwc::GestureEvent>::iterator gesture_it = gesture_events.begin(); gesture_it != gesture_events.end(); gesture_it++) {
        if (gesture_it->target == "logo1") {
            if (gesture_it->gesture_id == "n-drag") {
                float dx = gesture_it->values.at("drag_dx");
                float dy = gesture_it->values.at("drag_dy");
                if (!use_pixels) {
                    dx = dx * screen_width;
                    dy = dy * screen_height;
                }
                float new_x = logo1_dimensions.x + dx;
                float new_y = logo1_dimensions.y + dy;
                logo1_dimensions.x = new_x;
                logo1_dimensions.y = new_y;
            }
            else if (gesture_it->gesture_id == "n-rotate") {
                //Rotation is about a specific point, so we need to do a coordinate transform
                //and adjust not only the object's rotation, but its x and y values as well
                float rotation_angle = degreesToRads(gesture_it->values.at("rotate_dtheta"));
                //If we have points down, move the box; if there are no points, this is from
                //gesture inertia and there is no center about which to rotate
                if (gesture_it->n != 0) {
                    float temp_x = rotateAboutCenterX(logo1_dimensions.x, logo1_dimensions.y, gesture_it->x, gesture_it->y, rotation_angle);
                    float temp_y = rotateAboutCenterY(logo1_dimensions.x, logo1_dimensions.y, gesture_it->x, gesture_it->y, rotation_angle);
                    logo1_dimensions.x = temp_x;
                    logo1_dimensions.y = temp_y;
                }
                logo1_dimensions.rotation = logo1_dimensions.rotation + rotation_angle;
            }
            else if (gesture_it->gesture_id == "n-scale") {
                float dsx = gesture_it->values.at("scale_dsx");
                if (!use_pixels) {
                    dsx = dsx * screen_width;
                }
                logo1_dimensions.scale = logo1_dimensions.scale + dsx;
            }
        }
        else if (gesture_it->target == "logo2") {
            if (gesture_it->gesture_id == "n-drag") {
                float dx = gesture_it->values.at("drag_dx");
                float dy = gesture_it->values.at("drag_dy");
                if (!use_pixels) {
                    dx = dx * screen_width;
                    dy = dy * screen_height;
                }
                float new_x = logo2_dimensions.x + dx;
                float new_y = logo2_dimensions.y + dy;
                logo2_dimensions.x = new_x;
                logo2_dimensions.y = new_y;
            }
            else if (gesture_it->gesture_id == "n-rotate") {
                //Rotation is about a specific point, so we need to do a coordinate transform
                //and adjust not only the object's rotation, but its x and y values as well
                float rotation_angle = degreesToRads(gesture_it->values.at("rotate_dtheta"));
                //As with logo1, only move the box if there are points down
                if (gesture_it->n != 0) {
                    float temp_x = rotateAboutCenterX(logo2_dimensions.x, logo2_dimensions.y, gesture_it->x, gesture_it->y, rotation_angle);
                    float temp_y = rotateAboutCenterY(logo2_dimensions.x, logo2_dimensions.y, gesture_it->x, gesture_it->y, rotation_angle);
                    logo2_dimensions.x = temp_x;
                    logo2_dimensions.y = temp_y;
                }
                logo2_dimensions.rotation = logo2_dimensions.rotation + rotation_angle;
            }
            else if (gesture_it->gesture_id == "n-scale") {
                float dsx = gesture_it->values.at("scale_dsx");
                if (!use_pixels) {
                    dsx = dsx * screen_width;
                }
                logo2_dimensions.scale = logo2_dimensions.scale + dsx;
            }
        }
    }
}
====3. The draw function====
Drawing is a bit more complicated than in the last tutorial, because we are now applying 2D transformations. We move the drawing position to each bitmap’s location, orient and scale it, and draw the bitmap to the screen. We then reset the draw position to the origin so that the same code produces the same result on the next frame. This requires a couple of simple calculations. Here’s an example of how it might look when finished:
{{ :tutorials:cpp_cinder:cpp_3.1_386x261.png?nolink |}}
Begin the same way as in the last tutorial: draw a gray background to clear the canvas, and enable alpha blending with gl::enableAlphaBlending():
void InteractiveBitmapsApp::draw(){
    // clear out the window
    gl::clear(Color(0.66f, 0.66f, 0.66f), true);
    gl::enableAlphaBlending();
Next, add all of the variables that we’re going to need to draw the bitmap. Do this for each logo:
    // transformation values
    float height1 = logo1_dimensions.height;
    float width1 = logo1_dimensions.width;
    float rotation1 = logo1_dimensions.rotation;
    float x1 = logo1_dimensions.x;
    float y1 = logo1_dimensions.y;
    float scale1 = logo1_dimensions.scale;
    float draw_x1 = getDrawingCoordX(width1, height1, x1, y1, rotation1, scale1);
    float draw_y1 = getDrawingCoordY(width1, height1, x1, y1, rotation1, scale1);

    float height2 = logo2_dimensions.height;
    float width2 = logo2_dimensions.width;
    float rotation2 = logo2_dimensions.rotation;
    float x2 = logo2_dimensions.x;
    float y2 = logo2_dimensions.y;
    float scale2 = logo2_dimensions.scale;
    float draw_x2 = getDrawingCoordX(width2, height2, x2, y2, rotation2, scale2);
    float draw_y2 = getDrawingCoordY(width2, height2, x2, y2, rotation2, scale2);
While most of the data gathered here comes directly from each logo’s TouchObject, two helper functions appear: getDrawingCoordX and getDrawingCoordY. They calculate the x and y position at which to draw each box. Because we track the boxes by their centers, while bitmaps are drawn from the upper left corner, we need a coordinate transform to find where the upper left corner of a box of a given size and rotation will be, based on its center. These two functions take care of that for us.
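As a sketch of the math involved: the upper left corner sits at an offset of (-width * scale / 2, -height * scale / 2) from the center, and that offset must be rotated by the box’s angle. A hedged example (again, the tutorial source is authoritative):

float InteractiveBitmapsApp::getDrawingCoordX(float width, float height, float box_x, float box_y, float box_angle, float box_scale) {
    // Rotate the scaled upper-left corner offset about the box center, return the new x
    float off_x = -width * box_scale / 2.0f;
    float off_y = -height * box_scale / 2.0f;
    return box_x + off_x * cos(box_angle) - off_y * sin(box_angle);
}

float InteractiveBitmapsApp::getDrawingCoordY(float width, float height, float box_x, float box_y, float box_angle, float box_scale) {
    // As above, but returning the new y coordinate
    float off_x = -width * box_scale / 2.0f;
    float off_y = -height * box_scale / 2.0f;
    return box_y + off_x * sin(box_angle) + off_y * cos(box_angle);
}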
Finally, based on the data gathered above, we move the draw cursor to the required location, set the drawing angle, adjust the scale, and draw the bitmap to the screen. We then apply the inverse of the drawing commands to reset the draw position.
    // apply transformations
    gl::translate(draw_x1, draw_y1);
    gl::rotate(radsToDegrees(rotation1));
    gl::scale(scale1, scale1);
    gl::draw(logo);
    gl::scale(1.0f / scale1, 1.0f / scale1);
    gl::rotate(radsToDegrees(-rotation1));
    gl::translate(-draw_x1, -draw_y1);

    gl::translate(draw_x2, draw_y2);
    gl::rotate(radsToDegrees(rotation2));
    gl::scale(scale2, scale2);
    gl::draw(logo);
    gl::scale(1.0f / scale2, 1.0f / scale2);
    gl::rotate(radsToDegrees(-rotation2));
    gl::translate(-draw_x2, -draw_y2);
}
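As a design note, Cinder’s gl::pushModelView() and gl::popModelView() could achieve the same reset without manually applying the inverse transforms; the explicit inverses are used here to keep each step of the math visible.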
====4. Notes on helper functions====
Since the application window is not fullscreen, the points that come out of GestureWorks Core need to be mapped to the window area. Add the normScreenToWindowPx function outlined in [[tutorials:cpp_cinder:getting_started_2_hello_multitouch|Getting Started II (Hello Multitouch)]].
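For reference, a minimal sketch of that function is reproduced here. It assumes screen_width and screen_height hold the display resolution (as set in the previous tutorial) and uses Cinder’s getWindowPos to offset into the window:

std::pair<float, float> InteractiveBitmapsApp::normScreenToWindowPx(float screen_x, float screen_y) {
    // Positions arrive in screen pixels when use_pixels is set, normalized 0..1 otherwise
    float px = use_pixels ? screen_x : screen_x * screen_width;
    float py = use_pixels ? screen_y : screen_y * screen_height;
    // Make the point relative to the window's upper left corner
    return std::pair<float, float>(px - getWindowPos().x, py - getWindowPos().y);
}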
A set of helper functions at the end of the example code makes drawing and manipulating the objects proceed more naturally. A full explanation of their workings is outside the scope of this tutorial; in short, their purpose is to perform local coordinate transformations so that rotated objects can be moved about using x and y coordinates, keeping all of the math for locating a point with respect to a rotated plane confined to these functions. Refer to the tutorial source code for their definitions.
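For example, the two rotation helpers amount to the standard 2D rotation of a point about an arbitrary center. A minimal sketch, assuming <cmath> is included:

float InteractiveBitmapsApp::rotateAboutCenterX(float point_x, float point_y, float center_x, float center_y, float ref_angle) {
    // Translate so the center is at the origin, rotate, translate back; return the new x
    return center_x + (point_x - center_x) * cos(ref_angle) - (point_y - center_y) * sin(ref_angle);
}

float InteractiveBitmapsApp::rotateAboutCenterY(float point_x, float point_y, float center_x, float center_y, float ref_angle) {
    // As above, but returning the new y coordinate
    return center_y + (point_x - center_x) * sin(ref_angle) + (point_y - center_y) * cos(ref_angle);
}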
----
=====Review=====
In this tutorial we expanded on the knowledge gained in [[tutorials:cpp_cinder:getting_started_2_hello_multitouch|Getting Started II (Hello Multitouch)]] by manipulating on-screen objects using gesture data obtained from GestureWorks Core. This tutorial covers a number of concepts that are central to the usage of GestureWorks:
  * Registering touch objects
  * Adding gestures to the registered touch objects
  * Manipulating TouchObject instances based on data obtained from GestureWorks
A principal concept is that, due to the language- and framework-independent nature of GestureWorks, it is the programmer’s responsibility to associate the local implementation of an object with the object as tracked by GestureWorks. To review: GestureWorks knows nothing about the TouchObject class that we defined; it is our responsibility to associate the data received from GestureWorks with our in-application objects.
----
=====Continuing Education=====
This and the other [[:tutorials|GestureWorks Core Tutorials]] are intended to get the programmer started with the GestureWorks Core framework; they are by no means an exhaustive explanation of all of its features. In fact, we’ve just barely scratched the surface!
For more information on GestureWorks Core, GestureML, and CreativeML, please visit the following sites:
  * [[http://wiki.gestureworks.com/|GestureWorks Wiki]]
  * [[http://www.gestureml.org/|GestureML.org]]
  * [[http://www.creativeml.org/|CreativeML.org]]
----
Previous tutorial: [[tutorials:cpp_cinder:getting_started_2_hello_multitouch|Getting Started II (Hello Multitouch)]]