In this tutorial you will build on what you’ve learned in the first two tutorials, using gesture data to create a pair of images on screen that can be dragged, rotated, and scaled with multi-finger gestures. You will learn how to register touch objects with GestureWorks, register to receive gesture events, and use these events to manipulate objects on screen. For this tutorial you will need the GestureWorks Core multitouch framework; a free trial is available.
Download the code for all of the C++ & Cinder multitouch tutorials here: tutorials_cpp_cinder.zip
Estimated time to complete: 1-2 hours
All requirements found in C++ & openFrameworks: Getting Started I (Hello World), as well as completion of C++ & openFrameworks: Getting Started II (Hello Multitouch).
If you've completed C++ & openFrameworks: Getting Started I (Hello World) and C++ & openFrameworks: Getting Started II (Hello Multitouch), you should be familiar with setting up a project to use GestureWorks and openFrameworks. If you have any difficulty doing this, please refer to the previous tutorials for details.
Create a new project. Be sure to import the GWCUtils and GestureWorks Core source and header files, as well as the openFrameworks lib.
As before, you’ll need to create a separate class, inheriting from ofBaseApp, with setup, draw, and update functions. Our example is named InteractiveBitmaps:
#pragma once

#include <unordered_map>

#include "ofMain.h"
#include "Drawables.h"
#include "GestureWorks.h"

class InteractiveBitmaps : public ofBaseApp{
public:
    InteractiveBitmaps();

    // openFrameworks overrides
    void setup();
    void update();
    void draw();

    const static std::string window_title;

private:
    clock_t last_tick;
    std::unordered_map<long, GWCTutorials::TouchPoint> touch_points;
    std::unordered_map<std::wstring, GWCTutorials::TouchRectangle> rectangles;
    ofImage logo;

    void handlePointEvents(const std::vector<gwc::PointEvent>& point_events);
    void handleGestureEvents(const std::vector<gwc::GestureEvent>& gesture_events);
};
We also need to write a main function, as before, that just sets up the window title, dimensions, and calls our class:
#include "ofMain.h" #include "InteractiveBitmaps.h" int main(){ ofSetupOpenGL(1024, 768, OF_WINDOW); // Set the window title so we can register it with GestureWorks later ofSetWindowTitle(InteractiveBitmaps::window_title); // Start the app ofRunApp(new InteractiveBitmaps()); }
Additionally, we’ve created a class in Drawables that can keep track of all of the attributes of our displayed objects, called TouchRectangle:
// Visual representation of a rectangle that we can manipulate via gestures
class TouchRectangle{
public:
    TouchRectangle();
    TouchRectangle(const ofImage& image, int width);

    void SetPosition(int x, int y);
    void IncrementPosition(int dx, int dy);
    void SetScale(float scale);
    void IncrementScale(float ds);
    void Rotate(float dtheta, float rotation_center_x, float rotation_center_y);
    void Draw();
    bool Hit(int x, int y) const;
    bool AddPoint(long id);
    bool RemovePoint(long id);

private:
    std::vector<long> owned_points;
    ofImage image;
    int width;
    float rotation;
    float scale;
    int x;
    int y;
};
Instances of this class will hold all of the data about where our boxes are, and what their size and orientation are.
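The full implementation lives in Drawables.cpp in the tutorial download. For reference, here is a minimal sketch of what the constructors and SetPosition might look like; the member names come from the declaration above, but the exact bodies are an assumption rather than the verbatim tutorial source:

// Sketch only: plausible constructor bodies for TouchRectangle (assumed, not verbatim).
// New rectangles start unrotated at unit scale, matching the behavior described below.
TouchRectangle::TouchRectangle() : width(0), rotation(0.f), scale(1.f), x(0), y(0) {}

TouchRectangle::TouchRectangle(const ofImage& image, int width) :
    image(image), width(width), rotation(0.f), scale(1.f), x(0), y(0) {}

void TouchRectangle::SetPosition(int new_x, int new_y){
    x = new_x;
    y = new_y;
}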
The setup function is a little longer than in the last tutorial.
As before, we call all of the functions needed to load and initialize GestureWorks Core. We follow this with a new call, however, that loads our image from file and puts it into an ofImage object that can be drawn on screen:
logo.loadImage("gw_logo.png");
The variable logo can now be used to place the image wherever we decide to draw it. Next, we need to create our boxes:
TouchRectangle rect(logo, 200);

int center_x = ofGetWindowWidth() / 2;
int center_y = ofGetWindowHeight() / 2;

rect.SetPosition(center_x - 200, center_y);
rectangles[L"logo1"] = rect;

rect.SetPosition(center_x + 200, center_y);
rectangles[L"logo2"] = rect;
This will place our boxes side by side in the center of the screen, with zero rotation and an initial scale value of 1.
The remainder of the function is new code required to receive gesture events. RegisterTouchObject tells the core that there is a touch object to track, and what its name is. Since we have two objects to manipulate, we call it twice:
// Tell GestureWorks about our touch objects
GestureWorks::Instance()->RegisterTouchObject(L"logo1");
GestureWorks::Instance()->RegisterTouchObject(L"logo2");
We can add gestures to these objects by the names logo1 and logo2.
AddGesture will add gesture processing for the given gesture name to the given object, like so:
// Add an entire gesture set to "logo1"
GestureWorks::Instance()->AddGestureSet(L"logo1", L"basic-gestures");

// We could also add gestures individually like we do for "logo2"
GestureWorks::Instance()->AddGesture(L"logo2", L"n-drag");
GestureWorks::Instance()->AddGesture(L"logo2", L"n-scale");
GestureWorks::Instance()->AddGesture(L"logo2", L"n-rotate");
This adds the entire gesture set defined under the “basic-gestures” section of the basic_manipulation.gml file to logo1, and adds the n-drag, n-rotate, and n-scale gestures individually to logo2, so that we can receive the gesture events delivered for each object. The n- prefix in their names denotes that these gestures can be performed with any number of fingers.
We are now configured and ready to start receiving events. Before we get to that, however, we’re going to talk about how the bitmaps get drawn.
Drawing is a bit more complicated now than it was in the last tutorial because we're working with a bitmap instead of vector graphics. What we will be doing is moving the drawing position to each bitmap's location, orienting and scaling it, and then pasting the bitmap to the screen.
We then reset the draw position to the origin so we can use the same movement code in the next frame and get the same result. This also requires a couple of simple calculations.
All of our actual visualization is triggered from within the draw function.
// openFrameworks calls this function in an infinite loop until the app ends
void InteractiveBitmaps::draw(){
    for (auto& rect : rectangles){
        rect.second.Draw();
    }

    for (const auto& point : touch_points){
        point.second.Draw();
    }
}
All of the translation, rotation, and scaling values are updated by calls made in the InteractiveBitmaps handleGestureEvents function.
void TouchRectangle::Draw(){
    ofSetHexColor(0xffffff);
    ofSetRectMode(OF_RECTMODE_CENTER);

    ofPushMatrix();
    ofTranslate(x, y);
    ofRotateZ(rotation);
    ofScale(scale, scale, 1);
    image.draw(0, 0, width, width);
    ofPopMatrix();
}
In our previous tutorial we were only concerned with point events. This time around, we not only need to retrieve and deal with point events, but we also need to perform a “hit test” to assign relevant points to our touch objects, and then receive and handle subsequent gesture events as well.
As before, our first block checks the current tick count and compares it to the last recorded tick count, proceeding only if at least 16 milliseconds have passed, which limits our processing rate to roughly 60 cycles per second.
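The update function itself is not reproduced in full here; a minimal sketch of the throttling and event retrieval described above might look like the following. The ProcessFrame call and the millisecond arithmetic are assumptions based on the previous tutorial's wrapper, not verbatim source:

void InteractiveBitmaps::update(){
    // Sketch only: throttle processing to roughly 60 cycles per second
    clock_t now = clock();
    double elapsed_ms = 1000.0 * (now - last_tick) / CLOCKS_PER_SEC;
    if (elapsed_ms < 16.0) return;
    last_tick = now;

    // Ask GestureWorks to run a processing cycle, then consume the results.
    // (ProcessFrame is the wrapper call used in the earlier tutorials; treat the
    // exact name as an assumption.)
    GestureWorks::Instance()->ProcessFrame();
    handlePointEvents(GestureWorks::Instance()->ConsumePointEvents());
    handleGestureEvents(GestureWorks::Instance()->ConsumeGestureEvents());
}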
Point event retrieval is done just as before, with a vector of point events used to store the result of calling ConsumePointEvents. However, this time around, we aren’t keeping track of the point objects. Instead, we use a “hit test” in handlePointEvents to decide, for any new point, whether it falls within the boundaries of one of our touch objects. To do this, we call the custom member function Hit with the new point’s position; the box’s dimensions and orientation are already stored in the object:
// Returns true if (px, py) intersects our object
bool TouchRectangle::Hit(int px, int py) const{
    float rads = PI * rotation / 180.f;
    std::pair<int, int> local_coords(rotateAboutPoint(px, py, x, y, rads));

    if (abs(local_coords.first - x) <= width * scale / 2){
        if (abs(local_coords.second - y) <= width * scale / 2){
            return true;
        }
    }

    return false;
}
if (point.status == gwc::TOUCHADDED){
    for (auto& rect : rectangles){
        if (rect.second.Hit(window_coords.first, window_coords.second)){
            // If this is a new point and hits one of our touch objects, we want to let GestureWorks
            // know so that it can perform cluster analysis during its processing cycle.
            GestureWorks::Instance()->AddTouchPoint(rect.first, point.point_id);
            rect.second.AddPoint(point.point_id);
            break; // only get the first one
        }
    }
}
We loop through the touch objects (logo1 and logo2), and if the hit test shows that the point falls within one of them, we tell GestureWorks to assign that touch point to the corresponding touch object.
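handlePointEvents also needs to clean up when a point lifts off. The branch below is a sketch of how that might look, using only the RemovePoint member declared earlier; the touch_points bookkeeping is an assumption about the rest of the function, not verbatim source:

// Sketch only: when a point ends, release it from whichever rectangle owned it
else if (point.status == gwc::TOUCHREMOVED){
    for (auto& rect : rectangles){
        if (rect.second.RemovePoint(point.point_id)){
            break; // the point belonged to at most one rectangle
        }
    }
    touch_points.erase(point.point_id); // stop drawing the point marker
}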
The handleGestureEvents function is called following ConsumeGestureEvents, which gives us back a std::vector of GestureEvent objects. Each of these objects contains general data, as well as data specific to its event type, which we can use to update our on-screen objects based on the type of event received.
We loop through the vector of GestureEvent objects; since the logic is the same for both logos, it's easy to manipulate the appropriate TouchRectangle based on the gestures returned from ConsumeGestureEvents.
We then further break down our response to each event based on its type. Since we are only dealing with the three gestures defined for logo2, any other gesture that comes through will be ignored, and we have three blocks in the if-else chain:
void InteractiveBitmaps::handleGestureEvents(const std::vector<gwc::GestureEvent>& gesture_events){
    for (const auto& gesture : gesture_events){
        std::wstring target(gesture.target.begin(), gesture.target.end());

        // Find the name of the touch object the gesture "targets". This will be the same name
        // we used to register the touch object with GestureWorks
        auto rect = rectangles.find(target);
        if (rect == rectangles.end()) continue;

        if (gesture.gesture_id == "n-drag"){
            int dx = static_cast<int>(gesture.values.at("drag_dx") * ofGetScreenWidth());
            int dy = static_cast<int>(gesture.values.at("drag_dy") * ofGetScreenHeight());
            rect->second.IncrementPosition(dx, dy);
        }
        else if (gesture.gesture_id == "n-scale"){
            float ds = gesture.values.at("scale_dsx") * ofGetScreenWidth();
            rect->second.IncrementScale(ds);
        }
        else if (gesture.gesture_id == "n-rotate"){
            float dtheta = gesture.values.at("rotate_dtheta");
            if (!dtheta) continue;

            std::pair<int, int> center(normScreenToWindowPx(gesture.x, gesture.y));
            rect->second.Rotate(dtheta, center.first, center.second);
        }
    }
}
Each of these events is used to update the object as you might expect based on the data received: rotate events change the object’s rotation, drag events change the object’s x and y coordinates, and scale events change the object’s scale value.
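The corresponding TouchRectangle mutators live in Drawables.cpp. The following is a sketch of plausible implementations; the clamping and the rotation-about-center math are assumptions rather than the verbatim tutorial source:

// Sketch only: minimal implementations of the mutators used above (assumed, not verbatim)
void TouchRectangle::IncrementPosition(int dx, int dy){
    x += dx;
    y += dy;
}

void TouchRectangle::IncrementScale(float ds){
    scale += ds;
    if (scale < 0.1f) scale = 0.1f; // assumed clamp to keep the image visible
}

void TouchRectangle::Rotate(float dtheta, float rotation_center_x, float rotation_center_y){
    rotation += dtheta;

    // Rotating about the gesture's center also moves the rectangle's own center,
    // so spin our (x, y) position around that point by the same amount
    float rads = PI * dtheta / 180.f;
    std::pair<int, int> moved(rotateAboutPoint(x, y,
        static_cast<int>(rotation_center_x), static_cast<int>(rotation_center_y), rads));
    x = moved.first;
    y = moved.second;
}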
For the most part, relevant gesture event data is stored in the values member of the event object. This member is a map of strings to floats. The string used to address any value is based on that attribute's name in the GML (please see the GestureML wiki for more information). For our example, these attribute names can be viewed in the basic_manipulation.gml file we loaded previously.
It is also important to note that these values are generally expressed as deltas; that is, they represent differentials between the current state and the previous state. For example, drag_dx and drag_dy each represent the change in position between this frame and the last.
There is a set of helper functions at the end of Drawables.cpp that are used in this project to make drawing and manipulating the objects proceed more naturally. A full explanation of their workings is outside the scope of this tutorial, but in short, their purpose is to facilitate local coordinate transformations so that rotated objects can be moved about using x and y coordinates, leaving all of the math for determining where a point is with respect to a rotated plane entirely confined to these functions.
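As one example, rotateAboutPoint, which Hit and the rotation code rely on, can be implemented as a standard 2D rotation of a point around an arbitrary center. The version below is a sketch of that idea, not the verbatim helper from Drawables.cpp:

// Sketch only: rotate (px, py) about (cx, cy) by `rads` radians and return the result
std::pair<int, int> rotateAboutPoint(int px, int py, int cx, int cy, float rads){
    float s = sin(rads);
    float c = cos(rads);

    // Translate so the center of rotation is at the origin, rotate, translate back
    float local_x = static_cast<float>(px - cx);
    float local_y = static_cast<float>(py - cy);
    float rotated_x = local_x * c - local_y * s;
    float rotated_y = local_x * s + local_y * c;

    return std::make_pair(static_cast<int>(rotated_x) + cx, static_cast<int>(rotated_y) + cy);
}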
In this tutorial we expanded on the knowledge gained in Tutorial #2 by manipulating on-screen objects using gesture data obtained from GestureWorks Core. This tutorial covers a number of concepts that are central to the usage of GestureWorks:
A principal concept is that, due to the language- and framework-independent nature of GestureWorks, it is the programmer’s responsibility to associate the local implementation of an object with the object as tracked by GestureWorks. To review: GestureWorks doesn’t know anything about the TouchRectangle class that we defined; it is our responsibility to associate the data received from GestureWorks with our in-application objects.
It is the intention of this and the other GestureWorks Core tutorials to get the programmer started using the GestureWorks Core framework; they are by no means an exhaustive explanation of all of the features of GestureWorks Core. In fact, we’ve just barely scratched the surface!
For more information on GestureWorks Core, GestureML, and CreativeML, please visit the following sites:
Previous tutorial: C++ & openFrameworks: Getting Started II (Hello Multitouch)
Next tutorial: C++ & openFrameworks: Hello Sensor