
This binding has been deprecated - please see the most recent release notes for more information.

Interactive Bitmaps


In this tutorial, we further explore Gestureworks Core and the Python bindings by consuming gesture events from Gestureworks Core and using them to manipulate on-screen images represented as Scatter objects in Kivy. This tutorial also introduces Scatter manipulations in Kivy such as rotation and scaling, driven by values obtained from Gestureworks. For this tutorial you will need the Gestureworks Core multitouch framework; a free trial is available.

In this application, the user is able to drag the objects (the yellow Gestureworks logos) around the screen, scale them using the familiar “pinch”-style gesture, and rotate them using the n-finger rotate gesture.

Download the code for all of the Python & Kivy multitouch tutorials here:



Estimated time to complete: 45 minutes

This tutorial assumes that you have completed the steps found in Python & Kivy: Getting Started I (Hello World) and Python & Kivy: Getting Started II (Hello Multitouch). You should have a fully configured Eclipse project ready for use and should be familiar with consuming Gestureworks point event data. If you have not yet done so, please follow Tutorial #1 to prepare an Eclipse project and Tutorial #2 to become familiar with consuming Gestureworks touch event data.

In addition to the above, this tutorial expects the student to have a basic understanding of the Python language and the Kivy framework, and to be familiar with object-oriented programming concepts.

Process Overview

Process Details

1. Create A New Eclipse - Kivy Project

For this tutorial it is recommended that you create a new, empty Kivy project with the gwc_python module using the procedure outlined in Python & Kivy: Getting Started I (Hello World). You may also use the project created in Python & Kivy: Getting Started II (Hello Multitouch) but you will need to remove or comment out some unnecessary code.

2. Add the Gestureworks Logo Images to the Project

For this application, two identical images will be used as the textures for the Scatter objects that will appear on-screen. Using the method described in tutorial #2 for adding resources to the project, add the following folder and all contents to the project:


3. Create a Kivy Language File

Our Kivy file from the previous tutorial needs just a few minor additions for this application. As before, we’ll call this file “multitouch.kv”.

#:kivy 1.5.0
#:import kivy kivy
#:import win kivy.core.window

FloatLayout:
    canvas:
        Color:
            rgb: 1, 1, 1
        Rectangle:
            source: 'media/Logo_gestureworks_core_1920x1080.png'
            size: self.size

<TouchObject>:
    size_hint: None, None

4. Import Required Modules

Again, we'll create a new Python file for our application code and import the following:

from gwc_python.core import GestureWorksCore
from gwc_python.GWCUtils import TOUCHADDED, TOUCHREMOVED, TOUCHUPDATE, rotateAboutCenter
from import App
from kivy.clock import Clock
from kivy.uix.scatter import Scatter
from kivy.uix.image import Image
from kivy.factory import Factory
from math import radians

5. Create TouchObject Class

We’ll need to create a simple class to hold our touch objects. This will inherit from the Kivy Scatter class and simply override the built-in touch event handler.

class TouchObject(Scatter):
    # Ignore Kivy's default Scatter touch handling; Gestureworks
    # will drive all transforms instead.
    def on_touch_move(self, touch):
        return

Factory.register('TouchObject', TouchObject)

6. Create Kivy App

Just as before, we’ll need a class that inherits the Kivy App class for our application to run. We’ll start with the constructor and build method.

class MultitouchApp(App):
    def __init__(self, gw): = gw
        self.touch_points = {}
        self.touch_objects = {}
        self.max_objects = 2
        self.has_received_gesture = False
        # schedule our processing cycle
        Clock.schedule_interval(self.updateGestureWorks, 1./60)
        super(MultitouchApp, self).__init__()

This is very similar to the previous tutorial; we just have a new dict for holding touch objects, a max_objects variable, and a state variable, has_received_gesture, that we’ll use for aesthetic purposes later on. The build method will require more explanation this time.

    def build(self):
        # Register our app window with GestureWorks
        if not'Kivy'):
            print('Unable to register touch window')
        for i in range(0, self.max_objects):
            container = TouchObject()
   = 'object_{}'.format(i)
            container.scale = 2
            self.touch_objects.update({ container})
            self.root.add_widget(container)
            # Tell Gestureworks about our touch object and add gestures to it
  , 'n-drag')
  , 'n-rotate')
  , 'n-scale')

After we register our touch window with GestureWorks, we create each of our touch objects, setting their image to the yellow GestureWorks logo provided with the tutorial code. The important code is at the end of the loop: here we register each touch object with GestureWorks using a unique string identifier, which we also keep as the key in our touch_objects dict. Then we add three gestures to each touch object: n-drag, n-rotate, and n-scale. For a more detailed explanation, please see the GestureML wiki.

7. Initialize Gestureworks And Load GML

Again, we load and initialize Gestureworks the same way as before.

if __name__ == '__main__':
    # Initialize GestureWorksCore with the location of the library
    gw = GestureWorksCore('C:\\path\\to\\GestureworksCore\\GestureworksCore32.dll')
    if not gw.loaded_dll:
        print('Unable to load GestureworksCore')
        exit()
    # Load a basic GML file
        gw.loadGML('basic_manipulation.gml')
    except WindowsError as e:
        print('Unable to load GML')
    gw.initializeGestureWorks(1920, 1080)
    app = MultitouchApp(gw)

Except, this time, we also load GML, since we want GestureWorks to process gestures as well as basic touch input. You can either add the basic_manipulation GML file to your project as an imported resource or use the path to its location in the GestureWorks installation directory directly. Again, please be sure that you are loading the correct DLL for your system.

8. Create updateGestureWorks callback

Just like last time, we’ll need to create a callback for our scheduled timer event to call.

    def updateGestureWorks(self, *args):
        # Ask Gestureworks to process a frame, then consume the resulting events
        point_events =
        gesture_events =
        self.processTouchEvents(point_events)
        self.processGestureEvents(gesture_events)
        if len(gesture_events) != 0:
            self.has_received_gesture = True
        if not self.has_received_gesture:
            # Space the objects evenly along the horizontal midline until
            # the user starts interacting with them
            buf = self.root.width / (len(self.touch_objects) + 1)
            for obj in self.touch_objects.values():
       = (buf, self.root.height / 2)
                buf += self.root.width / (len(self.touch_objects) + 1)

We can consume gesture events just like we consume point events. The GestureEvent structure is also defined in GWCUtils, and consumeGestureEvents returns a list of gesture events. The final bit of code runs only until the first gesture event has been processed; it simply spaces our two logos evenly across the screen so they start in a consistent position.

9. Process Touch Events

Again, we’ll create a method for processing touch events, translate the y-coordinates of our touch points, and check the status of each event. In this tutorial, however, we also need to check whether new touch points collide with any of our touch objects. If they do, we want to let GestureWorks know so that it can add the touch point to the object and check for gesture events involving the new touch point.

    def processTouchEvents(self, touches):
        for touch in touches:
            # Convert the Gestureworks y-coordinate (measured from the top)
            # to a Kivy y-coordinate (measured from the bottom)
            touch_y = self.root.height - touch.position.y
            touch_x = touch.position.x
            if touch.status == TOUCHADDED:
                obj = self.hitTest(touch_x, touch_y)
                if obj:
          , touch.point_id)
            elif touch.status == TOUCHREMOVED:
                pass  # Handle touch removed
            elif touch.status == TOUCHUPDATE:
                pass  # Handle touch updates

We don’t need to worry about any touch event statuses other than TOUCHADDED, but we’ve included the others here for illustration.

10. Hit Test And Associate Touch Points With Touch Objects

Hit testing is pretty straightforward in this example. We iterate through all of our touch objects, rotate the touch point about the object’s center based on the object’s current rotation, and check to see if the point lies within the object’s bounding box. We return a reference to the touch object on the first detection of a collision. Since this is meant to be a tutorial on how to use Gestureworks and not on two-dimensional transformations, we provide a function that rotates a point about another arbitrary point (rotateAboutCenter) and leave it to the reader to investigate these concepts further.

    def hitTest(self, x, y):
        for obj in self.touch_objects.values():
            (local_x, local_y) = rotateAboutCenter(x, y, obj.center_x, obj.center_y, radians(-obj.rotation))
            if (abs(local_x - obj.center_x) <= obj.width*obj.scale/2) and (abs(local_y - obj.center_y) <= obj.height*obj.scale/2):
                return obj

Remember that we need to account for the difference between Kivy coordinates and GestureWorks coordinates.

11. Process Gesture Events

Processing gesture events is just as simple as processing touch events. You saw in step 8 that we are now calling consumeGestureEvents in our update loop after we ask GestureWorks to process a frame. Now we need a method to process those gesture events.

    def processGestureEvents(self, gesture_events):
        for e in gesture_events:
            obj = self.touch_objects[]
            {'n-drag': self.handleDrag,
             'n-rotate': self.handleRotate,
             'n-scale': self.handleScale}[e.gesture_id](obj, e)

We just iterate through each event and check its gesture_id property. These IDs correspond to the gesture names that we registered on each touch object at the end of step 6, and the names themselves are defined in the GML file that we loaded when we initialized GestureWorks. Here we simply call a handler method, chosen by which gesture we received, with the touch object and gesture event as parameters.

Gestureworks lets us know which object the gesture applies to via the target field of the gesture event. This name will correspond to the string we used to register the touch object in step 6.

  • For the most part, relevant gesture event data is stored in the values member of the event object. This member is a map of strings to floats, and the string used to address any value is based on that attribute’s name in the GML (please see the GestureML wiki for more information). For our example, these attribute names can be viewed in the basic_manipulation.gml file we loaded previously.
  • It is also important to note that these values are generally expressed as deltas; that is, they represent differentials between the current state and the previous state. For example, drag_dx and drag_dy each represent the change in position between this frame and the last.

12. Overview Of Object Transformations

Drag and scale gestures are pretty straightforward and are handled in similar ways. GestureWorks reports values for most gestures as per-frame deltas, so we can apply them directly as increments.

    def handleDrag(self, obj, gesture_event): 
        obj.center_x += gesture_event.values['drag_dx']
        obj.center_y -= gesture_event.values['drag_dy']
        # Make sure the objects stay on the screen
        obj.center_x = max(min(obj.center_x, self.root.width), 0)
        obj.center_y = max(min(obj.center_y, self.root.height), 0)

Note that we are subtracting the “drag_dy” value of the gesture to compensate for the difference in coordinate systems. The last thing we do is just check the final position of the object to make sure it stays on our screen. Scale is handled in much the same way.

    def handleScale(self, obj, gesture_event):
        dsx = gesture_event.values['scale_dsx']
        obj.scale += dsx
        obj.scale = max(min(obj.scale, 6), .5)

Rotation is only slightly more complicated because we want to rotate around the center of the cluster of touch points activating the gesture, which feels more realistic than simply rotating the object around its own center. Again, the rotateAboutCenter function provided in GWCUtils serves this purpose.

    def handleRotate(self, obj, gesture_event):
        theta = gesture_event.values['rotate_dtheta']
        obj.rotation -= theta
        if gesture_event.n:
   = rotateAboutCenter(obj.center_x, obj.center_y, gesture_event.x,
                                           self.root.height - gesture_event.y, radians(-theta))

We rotate the object around its center and then rotate the object around the center of our gesture event. Note that we only perform the second rotation if there are actual touch points activating the gesture: when an inertial filter is active, it is possible for the event to be triggered when there are no active touch points, and in that case we only want to rotate the object about its own center. For more information on gesture filters, please see the GestureML wiki. One final important note: GestureWorks gives us rotation values in degrees, but our helper function expects radians.


In this tutorial we expanded on the knowledge gained in Tutorial #2 by manipulating on-screen objects using gesture data obtained from Gestureworks. This tutorial covers a number of concepts that are central to the usage of Gestureworks:

  • Registering touch objects
  • Adding gestures to the registered touch objects
  • Manipulating Kivy object instances based on data obtained from Gestureworks

A principal concept that was briefly touched upon in step #11 above is that, due to the language- and framework-independent nature of Gestureworks, it is the programmer’s responsibility to associate the local implementation of an object with an object as tracked by Gestureworks. To review: Gestureworks doesn’t know anything about the TouchObject class that we defined, but we’ve registered each object with Gestureworks using the object’s name property as an identifier; we then manipulate our TouchObjects based on the data contained in the matching GestureEvent (the GestureEvent whose target field matches our TouchObject’s name property).

Continuing Education

This and the other Tutorials and Legacy Tutorials are intended to get the programmer started with the GestureWorks Core framework; they are by no means an exhaustive explanation of all of the features of GestureWorks Core. In fact, we’ve only barely scratched the surface!

For more information on GestureWorks Core, GestureML, and CreativeML, please visit the following sites:

Previous tutorial: Python: Getting Started II (Hello Multitouch)

tutorials/legacy/python_kivy/interactive_bitmaps.txt · Last modified: 2019/01/21 19:24 (external edit)