[[tutorials:legacy:python_panda3d|{{ :Panda-logo-caption.png }}]]
This binding has been deprecated - please see the most recent [[:release_notes| release notes]] for more information.
====== Interactive Bitmaps ======
===== Introduction =====
In this tutorial, we further explore Gestureworks Core and the Python bindings by consuming gesture events from Gestureworks Core and manipulating on-screen images represented as OnscreenImages within Panda3D. This tutorial also introduces image manipulation in Panda3D, such as rotation and scaling, using values obtained from Gestureworks. For this tutorial you will need the [[http://gestureworks.com/|Gestureworks Core]] multitouch framework; a [[http://files.gestureworks.com/downloads/Core/Trial/GestureworksCoreTrialSetup.exe|free trial]] is available.
In this application, the user is able to drag the objects (the yellow Gestureworks logos) around the screen, scale them using the familiar "pinch"-style gesture, and rotate them using the n-finger rotate gesture.
Download the code for all of the Python & Panda3D multitouch tutorials here: [[http://files.gestureworks.com/tutorials/tutorials_python_panda3d.zip|tutorials_python_panda3d.zip]]
{{:tutorials:legacy:python_panda3d:python_3.1_386x281.png?nolink|Python 3.1 386x281.png}}
----
===== Requirements =====
Estimated time to complete: **45 minutes**
This tutorial assumes that you have completed the steps found in [[tutorials:legacy:python_panda3d:getting_started_1_hello_world|Python & Panda3D: Getting Started I (Hello World)]] and [[tutorials:legacy:python_panda3d:getting_started_2_hello_multitouch|Python & Panda3D: Getting Started II (Hello Multitouch)]]. You should have a fully configured Eclipse project ready for use and should be familiar with consuming GestureWorks point event data. If you have not yet done so, please follow both [[tutorials:legacy:python_panda3d:getting_started_1_hello_world|Tutorial #1]] to prepare an Eclipse project and [[tutorials:legacy:python_panda3d:getting_started_2_hello_multitouch|Tutorial #2]] to become familiar with consuming GestureWorks touch event data.
In addition to the above, this tutorial expects the student to have a basic understanding of the Python language and the Panda3D framework, and to be familiar with object-oriented programming concepts.
----
===== Process Overview =====
- [[tutorials:legacy:python_panda3d:interactive_bitmaps#create_a_new_eclipse_-_panda3d_project|Create a new Eclipse - Panda3D project]]
- [[tutorials:legacy:python_panda3d:interactive_bitmaps#add_the_gestureworks_logo_images_to_the_project|Add the GestureWorks logo images to the project]]
- [[tutorials:legacy:python_panda3d:interactive_bitmaps#import_required_modules|Import required modules]]
- [[tutorials:legacy:python_panda3d:interactive_bitmaps#set_screen_dimensions|Set screen dimensions]]
- [[tutorials:legacy:python_panda3d:interactive_bitmaps#create_touchcontainer_class|Create TouchContainer class]]
- [[tutorials:legacy:python_panda3d:interactive_bitmaps#create_panda3d_app|Create Panda3D app]]
- [[tutorials:legacy:python_panda3d:interactive_bitmaps#initialize_gestureworks_and_load_gml|Initialize GestureWorks and load GML]]
- [[tutorials:legacy:python_panda3d:interactive_bitmaps#create_updateGestureworks_callback|Create updateGestureWorks callback]]
- [[tutorials:legacy:python_panda3d:interactive_bitmaps#process_touch_events|Process Touch Events]]
- [[tutorials:legacy:python_panda3d:interactive_bitmaps#hit_test_and_associate_touch_points_with_touch_objects|Hit Test and associate Touch Points with Touch Objects]]
- [[tutorials:legacy:python_panda3d:interactive_bitmaps#process_gesture_events|Process Gesture Events]]
- [[tutorials:legacy:python_panda3d:interactive_bitmaps#overview_of_object_transformations|Overview of Object Transformations]]
----
===== Process Details =====
==== 1. Create a new Eclipse - Panda3D project ====
For this tutorial it is recommended that you create a new, empty Panda3D project with the gwc_python module using the procedure outlined in [[tutorials:legacy:python_panda3d:getting_started_1_hello_world|Python & Panda3D: Getting Started I (Hello World)]]. You may also use the project created in [[tutorials:legacy:python_panda3d:getting_started_2_hello_multitouch|Python & Panda3D: Getting Started II (Hello Multitouch)]] but you will need to remove or comment out some unnecessary code.
==== 2. Add the GestureWorks Logo Images to the Project ====
For this application, two identical images will be used as the textures that appear on-screen. Using the method described in Tutorial #2 for adding resources, add the following folder and all of its contents to the project:
\GestureWorksCore\bindings\python\Panda3D\tutorials\03_BitmapManipulation\media
==== 3. Import Required Modules ====
Again, we'll create a new file called //main.py// and import the following:
from gwc_python.core import GestureWorksCore
from gwc_python.GWCUtils import TOUCHADDED, TOUCHREMOVED, TOUCHUPDATE, rotateAboutCenter
from direct.gui.OnscreenImage import OnscreenImage
from direct.showbase.ShowBase import ShowBase
from direct.task import Task
from pandac.PandaModules import LineSegs
from pandac.PandaModules import deg2Rad
from pandac.PandaModules import NodePath
from pandac.PandaModules import Vec3
from panda3d.core import TransparencyAttrib
from panda3d.core import TextNode
from math import radians
==== 4. Set screen dimensions ====
We want to set parameters for our screen dimensions (screen width and height). Adjust the values to match your environment.
SCREEN_WIDTH = 1920
SCREEN_HEIGHT = 1080
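If you would rather not hard-code these values, Panda3D can report the actual display size. The following is only a sketch, not part of the tutorial code; it assumes the query is moved into the app's constructor after ShowBase.__init__ has run, since it relies on the default GraphicsPipe being available.

# Sketch only: query the display size from Panda3D instead of hard-coding it.
# This assumes it runs inside the MultitouchApp constructor, after
# ShowBase.__init__(self), where self.pipe (the default GraphicsPipe) exists.
screen_width = self.pipe.getDisplayWidth()
screen_height = self.pipe.getDisplayHeight()
print('Detected display size: %dx%d' % (screen_width, screen_height))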
==== 5. Create TouchContainer Class ====
We’ll need to create a simple container class, //TouchObject//, to hold our touch objects.
class TouchObject():
    pass
==== 6. Create Panda3D App ====
Just as before, we’ll need a class that inherits the Panda3D //ShowBase// class for our application to run. We’ll start with the constructor and build method.
def __init__(self, gw):
    self.gw = gw
    self.touch_points = {}
    self.touch_objects = {}
    ShowBase.__init__(self)
    self.build()
    # Schedule our processing cycle
    self.taskMgr.add(self.updateGestureWorks, "updateGestureWorksTask")
This is very similar to the previous tutorial; we just have new dicts for holding touch objects and touch points. The build method will require more explanation this time.
def build(self):
    if not self.gw.registerWindow('Panda'):
        print('Unable to register touch window')
        exit()

    # Set background image
    OnscreenImage(parent=render2d, image="media/Logo_gestureworks_core_1920x1080_white.png", pos=(0, 0, 0))

    # Create our touch objects
    container_0 = TouchObject()
    container_0.name = 'object_0'
    container_0.picture = OnscreenImage(image='media/logo.png', pos=(-1, 0, 0), scale=0.25)
    container_0.scale = 0.25
    self.touch_objects.update({container_0.name: container_0})

    container_1 = TouchObject()
    container_1.name = 'object_1'
    container_1.picture = OnscreenImage(image='media/logo.png', pos=(1, 0, 0), scale=0.25)
    container_1.scale = 0.25
    self.touch_objects.update({container_1.name: container_1})

    for name in self.touch_objects:
        # Tell GestureWorks about our touch object and add gestures to it
        self.gw.registerTouchObject(name)
        self.gw.addGesture(name, 'n-drag')
        self.gw.addGesture(name, 'n-rotate')
        self.gw.addGesture(name, 'n-scale')
After we register our touch window with GestureWorks, we set the background image of our screen. Then we create our two touch objects, setting their image to the yellow GestureWorks logo provided with the tutorial code. The important code is in the loop at the end of the method. Here, we register each touch object with GestureWorks using a unique string identifier, which we also keep as the key in our //touch_objects// dict. Then we add three gestures to each touch object: //n-drag//, //n-rotate//, and //n-scale//. For a more detailed explanation of this, please see the [[http://www.gestureml.org/|GestureML wiki]].
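If you want to experiment with more than two logos, the create-and-register pattern above can be factored into a small helper method. The sketch below is ours, not part of the tutorial download; the helper name is hypothetical, but it only uses the calls already shown above.

def addTouchObject(self, name, x_pos):
    # Hypothetical helper: create a logo image, remember it locally,
    # then register it with GestureWorks and attach the same three gestures.
    obj = TouchObject()
    obj.name = name
    obj.picture = OnscreenImage(image='media/logo.png', pos=(x_pos, 0, 0), scale=0.25)
    obj.scale = 0.25
    self.touch_objects[name] = obj
    self.gw.registerTouchObject(name)
    for gesture in ('n-drag', 'n-rotate', 'n-scale'):
        self.gw.addGesture(name, gesture)
    return obj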
==== 7. Initialize GestureWorks And Load GML ====
Again, we load and initialize GestureWorks the same way as before.
if __name__ == '__main__':
    # Initialize GestureWorksCore with the location of the library
    gw = GestureWorksCore('C:\\path\\to\\GestureWorksCore\\GestureWorksCore32.dll')
    if not gw.loaded_dll:
        print('Unable to load GestureWorksCore')
        exit()
    try:
        # Load a basic GML file
        gw.loadGML('\\gml_path\\basic_manipulation.gml')
    except WindowsError as e:
        print('Unable to load GML')
        exit()
    gw.initializeGestureWorks(SCREEN_WIDTH, SCREEN_HEIGHT)

    app = MultitouchApp(gw)
    app.run()
Except, this time, we also load GML since we want GestureWorks to process gestures as well as basic touch input. You can either add the **basic_manipulation.gml** file to your project as an imported resource or use the path to its location in the GestureWorks installation directory directly. Again, please be sure that you are loading the correct GestureWorksCore DLL (GestureWorksCore32.dll or GestureWorksCore64.dll) for your system.
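If you plan to run the same script on both 32-bit and 64-bit installations of Python, one option is to pick the DLL based on the bitness of the running interpreter. This is only a suggestion, not part of the tutorial code; the directory below is the placeholder path from the snippet above.

import struct

# Sketch: choose the DLL matching the running Python interpreter's bitness.
dll_dir = 'C:\\path\\to\\GestureWorksCore\\'
if struct.calcsize('P') * 8 == 64:
    gw = GestureWorksCore(dll_dir + 'GestureWorksCore64.dll')
else:
    gw = GestureWorksCore(dll_dir + 'GestureWorksCore32.dll')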
==== 8. Create updateGestureworks callback ====
Just like last time, we'll need to create a callback for our scheduled timer event to call.
def updateGestureWorks(self, task):
    self.gw.processFrame()
    point_events = self.gw.consumePointEvents()
    gesture_events = self.gw.consumeGestureEvents()
    self.processTouchEvents(point_events)
    self.processGestureEvents(gesture_events)
    return Task.cont
We can consume gesture events just like we consume point events. The GestureEvent structure is also defined in GWCUtils and we will receive a list of gesture events when we consume them.
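While developing, it can help to inspect the GestureEvent fields this tutorial relies on. The snippet below is purely illustrative; the comments describe how each field is used later in this tutorial.

for e in gesture_events:
    # e.gesture_id - gesture name from the GML, e.g. 'n-drag'
    # e.target     - identifier of the registered touch object the event applies to
    # e.n          - number of touch points currently driving the gesture
    # e.x, e.y     - gesture center in GestureWorks (screen pixel) coordinates
    # e.values     - map of attribute names to float deltas, e.g. 'drag_dx'
    print(e.gesture_id, e.target, e.n, e.x, e.y, e.values)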
==== 9. Process Touch Events ====
Again, we’ll create a method for processing touch events, translating the coordinates of our touch points and checking the status of each event. In this tutorial, however, we also need to check whether new touch points collide with any of our touch objects. If they do, we want to let GestureWorks know so that it can add the touch point to the object and check for gesture events involving the new touch point.
def processTouchEvents(self, touches):
    for touch in touches:
        # We need to convert Gestureworks coordinates to Panda3D coordinates
        touch_x = float((touch.position.x - SCREEN_WIDTH/2) / SCREEN_WIDTH) * 4
        touch_y = float((SCREEN_HEIGHT/2 - touch.position.y) / SCREEN_HEIGHT) * 2
        if touch.status == TOUCHADDED:
            obj = self.hitTest(touch_x, touch_y)
            if obj:
                self.gw.addTouchPoint(obj.name, touch.point_id)
        elif touch.status == TOUCHREMOVED:
            # Handle touch removed
            pass
        elif touch.status == TOUCHUPDATE:
            # Handle touch updates
            pass
We don’t need to worry about any touch event statuses other than TOUCHADDED, but we’ve included the others here for illustration.
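The //self.touch_points// dict created in the constructor goes unused in this tutorial, but if you want to keep your own record of active points (for example, to draw them as in the previous tutorial), one way to fill in those branches might be the following sketch; it only uses fields already shown above.

if touch.status == TOUCHADDED:
    self.touch_points[touch.point_id] = (touch_x, touch_y)
    obj = self.hitTest(touch_x, touch_y)
    if obj:
        self.gw.addTouchPoint(obj.name, touch.point_id)
elif touch.status == TOUCHUPDATE:
    # Keep our local record in sync with the latest position
    self.touch_points[touch.point_id] = (touch_x, touch_y)
elif touch.status == TOUCHREMOVED:
    # Forget the point once it is lifted
    self.touch_points.pop(touch.point_id, None)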
==== 10. Hit Test And Associate Touch Points With Touch Objects ====
Hit testing is fairly straightforward in this example. We iterate through all of our touch objects, rotate the touch point about each object’s center based on the object’s current rotation, and check whether the point lies within the object’s bounding box. We return a reference to the touch object on the first detection of a collision. Since this is meant to be a tutorial on how to use GestureWorks rather than on two-dimensional transformations, we provide a function that rotates a point about another arbitrary point (//rotateAboutCenter//) and leave it to the reader to investigate these concepts further.
def hitTest(self, x, y):
    for obj in self.touch_objects.values():
        (local_x, local_y) = rotateAboutCenter(x, y, obj.picture.getX(), obj.picture.getZ(), radians(-obj.picture.getR()))
        if (local_x > (obj.picture.getX() - obj.scale) and local_x < (obj.picture.getX() + obj.scale)):
            if (local_y > (obj.picture.getZ() - obj.scale) and local_y < (obj.picture.getZ() + obj.scale)):
                return obj
Remember that we need to account for the difference between Panda3D coordinates and GestureWorks coordinates.
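Since //processTouchEvents// and (later) //handleRotate// perform the same pixel-to-Panda3D conversion, you may prefer to factor it into a small helper. This sketch simply mirrors the arithmetic already used above; the method name is our own.

def toPandaCoords(self, screen_x, screen_y):
    # Convert GestureWorks screen (pixel) coordinates to the Panda3D
    # coordinates used in this tutorial (note the vertical flip).
    panda_x = float((screen_x - SCREEN_WIDTH/2) / SCREEN_WIDTH) * 4
    panda_y = float((SCREEN_HEIGHT/2 - screen_y) / SCREEN_HEIGHT) * 2
    return (panda_x, panda_y)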
==== 11. Process Gesture Events ====
Processing gesture events is just as simple as processing touch events. You saw in step 8 that we are now calling consumeGestureEvents in our update loop after we ask GestureWorks to process a frame. Now we need a method to process those gesture events.
def processGestureEvents(self, gesture_events):
    for e in gesture_events:
        obj = self.touch_objects[e.target]
        {'n-drag': self.handleDrag,
         'n-rotate': self.handleRotate,
         'n-scale': self.handleScale}[e.gesture_id](obj, e)
We just iterate through each event and check its //gesture_id// property. These IDs will correspond to the gesture names that we registered on each touch object at the end of [[tutorials:legacy:python_panda3d:interactive_bitmaps#create_panda3d_app|step 6]]. The names of the gestures are defined in the GML file that we loaded when we initialized GestureWorks. Here we simply call a handler method with the touch object and gesture event as parameters based on which gesture we received. GestureWorks lets us know which object the gesture applies to via the target field of the gesture event. This name will correspond to the string we used to register the touch object in [[tutorials:legacy:python_panda3d:interactive_bitmaps#create_panda3d_app|step 6]].
For the most part, relevant gesture event data is stored in the //values// member of the event object. This member is a map of strings to floats; the string used to address any value is based on that attribute's name in the GML (please see the [[http://www.gestureml.org/|GestureML wiki]] for more information). For our example, these attribute names can be viewed in the **basic_manipulation.gml** file we loaded previously. It is also important to note that these values are generally expressed as deltas; that is, they represent differentials between the current state and the previous state; e.g. //drag_dx// and //drag_dy// each represent the change in position between this frame and the last.
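If you are ever unsure which attribute names a particular gesture carries, it can help to dump the whole map while developing; this is a quick, purely illustrative snippet rather than part of the tutorial code.

# Illustrative only: print every named delta carried by a gesture event.
for name, value in gesture_event.values.items():
    print('%s = %f' % (name, value))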
==== 12. Overview Of Object Transformations ====
Drag and scale gestures are straightforward and handled in similar ways. GestureWorks gives us values for most gestures as per-frame deltas, so we can use the values directly as increments.
def handleDrag(self, obj, gesture_event):
    # We need to convert Gestureworks coordinates to Panda3D coordinates
    dx = gesture_event.values['drag_dx'] / SCREEN_WIDTH * 4
    dy = gesture_event.values['drag_dy'] / SCREEN_HEIGHT * 2
    obj.picture.setX(obj.picture.getX() + dx)
    obj.picture.setZ(obj.picture.getZ() - dy)
    # Make sure the objects stay on the screen
    obj.picture.setX(max(min(obj.picture.getX(), 1.90), -1.90))
    obj.picture.setZ(max(min(obj.picture.getZ(), 1), -1))
Note that we must compensate for the difference in coordinate systems. The last thing we do is just check the final position of the object to make sure it stays on our screen. Scale is handled in much the same way.
def handleScale(self, obj, gesture_event):
    dsx = gesture_event.values['scale_dsx'] / 4
    obj.scale += dsx
    obj.scale = max(min(obj.scale, 1.5), .125)
    obj.picture.setScale(obj.scale)
Rotation is only slightly more complicated because we want to rotate around the center of the cluster of touch points activating the gesture. This gives a more realistic feel than simply rotating the object around its own center. Again, the //rotateAboutCenter// function provided in GWCUtils is used for this purpose.
def handleRotate(self, obj, gesture_event):
    theta = gesture_event.values['rotate_dtheta']
    obj.picture.setR(obj.picture.getR() + theta)
    if gesture_event.n:
        # We need to convert Gestureworks coordinates to Panda3D coordinates
        gesture_x = float((gesture_event.x - SCREEN_WIDTH/2) / SCREEN_WIDTH) * 4
        gesture_y = float((SCREEN_HEIGHT/2 - gesture_event.y) / SCREEN_HEIGHT) * 2
        (new_x, new_y) = rotateAboutCenter(obj.picture.getX(), obj.picture.getZ(), gesture_x, gesture_y, radians(-theta))
        obj.picture.setX(new_x)
        obj.picture.setZ(new_y)
We rotate the object around its center and then rotate the object around the center of our gesture event. Note that we only perform the second rotation if there are actual touch points activating the gesture. When an inertial filter is active, it is possible for the event to be triggered when there are no active touch points; in that case, we only want to rotate the object about its own center. For more information on gesture filters, please see the [[http://www.gestureml.org/|GestureML wiki]]. One final important note: GestureWorks gives us rotation values in degrees, but our helper function expects radians.
----
===== Review =====
In this tutorial we expanded on the knowledge gained in Python & Panda3D: Getting Started II (Hello Multitouch) by manipulating on-screen objects using gesture data obtained from GestureWorks. This tutorial covers a number of concepts that are central to the usage of GestureWorks:
* Registering touch objects
* Adding gestures to the registered touch objects
* Manipulating Panda3D object instances based on data obtained from GestureWorks
A principal concept that was briefly touched upon in step #11 above is that, due to the language- and framework-independent nature of GestureWorks, it is the programmer's responsibility to associate the local implementation of an object with the object as tracked by GestureWorks. To review: GestureWorks doesn't know anything about the TouchObject class that we defined, but we've registered each object with GestureWorks using the object’s name property as an identifier; we then manipulate our TouchObjects based on the data contained in the matching GestureEvent (the GestureEvent whose target field matches our TouchObject's name property).
----
===== Continuing Education =====
This tutorial, like the other GestureWorks Core tutorials, is intended to get the programmer started using the GestureWorks Core framework; it is by no means an exhaustive explanation of all of the features of GestureWorks Core. In fact, we’ve only barely scratched the surface!
----
For more information on GestureWorks Core and GestureML, please visit the following sites:
* [[http://wiki.gestureworks.com/|GestureWorks Wiki]]
* [[http://gestureml.org/|GestureML.org]]
----
Previous tutorial: [[tutorials:legacy:python_panda3d:getting_started_2_hello_multitouch|Python & Panda3D: Getting Started II (Hello Multitouch)]]