
This binding has been deprecated - please see the most recent release notes for more information.

Getting Started III (Interactive Bitmaps)


In this tutorial, we further explore GestureWorks Core and the .NET bindings by consuming gesture events obtained from GestureWorks Core and manipulating on-screen images represented as sprites within XNA. This tutorial also introduces sprite manipulation in XNA, such as dragging, rotation, and scaling, the values for which you will obtain from GestureWorks.

For this tutorial you will need the GestureWorks Core multitouch framework; a free trial is available.

In this application, the user is able to drag the sprites (the green GestureWorks logos) around the screen, scale them using the familiar “pinch”-style gesture, and rotate them using the two-finger rotate gesture.

Download the code for all of the C# & XNA multitouch tutorials here:

You may download the API documentation for the .NET bindings in *.chm format here:


Time to complete: 45 minutes

This tutorial assumes that you have completed the steps found in .NET & XNA: Getting Started I (Hello World) and .NET & XNA: Getting Started II (Hello Multitouch). You should have a fully configured Visual Studio project ready for use and should be familiar with consuming GestureWorks point event data. If you have not yet done so, please follow Tutorial #1 to prepare a Visual Studio project and Tutorial #2 to become familiar with consuming GestureWorks touch event data.

In addition to the above, this tutorial expects the student to have a basic understanding of the C# language and .NET, and to be familiar with object-oriented programming concepts.

Process Overview

Process Detail

1. Create a new Visual Studio XNA Game Project that includes GestureWorksCoreNET

For this tutorial it is recommended that you create a new, empty XNA Game project referencing the GestureWorksCoreNET project using the procedure outlined in .NET & XNA: Getting Started I (Hello World). You may also use the project created in .NET & XNA: Getting Started II (Hello Multitouch), but you will need to delete or comment some of the code added for the previous tutorial if you do so.

2. Add the GestureWorks logo images to the Content project

For this application, two images will be used as textures for the sprites that will appear on-screen. Using the method described in .NET & XNA: Getting Started II (Hello Multitouch) for adding resources to the Content project within your XNA solution, add the following images to the project:


3. Initialize GestureWorks in code

As you did in the last tutorial, add code to initialize GestureWorks and specify the DLL name in GwNative.cs, adjusting the paths and display size values to match your environment.

Add a using statement for referencing GestureWorksCoreNET within your project after adding the GestureWorksCoreNET project as a reference:

    using GestureWorksCoreNET;

Game1.cs, Game1 class - **add a member field**:

    GestureWorks _gestureWorks = new GestureWorks();

Game1.cs, **Initialize()** method (the GML path below is an example; adjust it to the location of the file in your environment):

    _gestureWorks.LoadGML(@"C:\GestureWorks\basic_manipulations.gml");
    _gestureWorks.InitializeGestureWorks(1920, 1080);

GestureWorksCoreNET\GwNative.cs - update the DLL filename to match your environment:

    private const string _dllName = "GestureWorksCoreWin7_x64.dll"; // update to match your DLL name

4. Configure the GraphicsDeviceManager to match your environment

Add the following code to the Game1() constructor, adjusting the BackBuffer property values to match your display device’s screen resolution.

    graphics.PreferredBackBufferWidth  = 1920;
    graphics.PreferredBackBufferHeight = 1080;

Once again it is recommended that you leave the ToggleFullScreen line commented out for easier debugging; once you know you can successfully build and run the application, you may uncomment this line. Note that while this line is commented out, the on-screen sprites may not be centered correctly on the touch points.

5. Create a TouchObject class to encapsulate the sprite data

In this application you will track and process more information about each on-screen sprite than in the previous tutorial, where you merely updated the sprite location; here you will also perform collision detection, rotation, and scaling. To encapsulate this data, create a class for it. In this example we create a TouchObject class in a new file, TouchObject.cs:

    // TouchObject.cs
    public class TouchObject
    {
        public TouchObject(string name, Texture2D texture, Vector2 startPosition)
        {
            Name     = name;
            Texture  = texture;
            Position = startPosition;
            Origin   = new Vector2(Texture.Width / 2, Texture.Height / 2);
            Scale    = 1.0f;
            Rotation = 0.0f;
        }

        public string Name       { get; set; }
        public Texture2D Texture { get; set; }
        public Vector2 Position  { get; set; }
        public Vector2 Origin    { get; set; }
        public float Scale       { get; set; }
        public float Rotation    { get; set; }

        // Axis-aligned bounding box centered on the sprite's position
        public Rectangle BoundingBox
        {
            get
            {
                int width  = (int)((float)Texture.Width  * Scale);
                int height = (int)((float)Texture.Height * Scale);
                return new Rectangle(
                    (int)Position.X - width  / 2,
                    (int)Position.Y - height / 2,
                    width,
                    height);
            }
        }

        public void Draw(SpriteBatch spriteBatch)
        {
            spriteBatch.Draw(Texture, Position, null, Color.White, Rotation,
                             Origin, Scale, SpriteEffects.None, 0f);
        }
    }

You'll also have to add a couple of using statements to the using block in your new TouchObject class file as it references some XNA types:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;
6. Create TouchObject instances and add gestures to each

We will create a TouchObject collection and add a TouchObject instance for each of the sprites we wish to manipulate on-screen.
Add a member field to the Game1 class:
    List<TouchObject> _touchObjects = new List<TouchObject>();
Within Game1's LoadContent() method, add the following under the spriteBatch line (the object names below are illustrative; use the texture names you added in step #2):

    Texture2D textureXna = Content.Load<Texture2D>("logo_gestureworks_xna");

    _touchObjects.Add(new TouchObject("LogoSprite1", textureXna,
                    new Vector2(graphics.GraphicsDevice.Viewport.Width / 1.50f,
                                graphics.GraphicsDevice.Viewport.Height / 2)));

    _touchObjects.Add(new TouchObject("LogoSprite2", textureXna,
                    new Vector2(graphics.GraphicsDevice.Viewport.Width / 3.00f,
                                graphics.GraphicsDevice.Viewport.Height / 2)));
Note that the Vector2 parameters specify the start positions of the logos on the screen.
We now want to register a touch object and three gestures for each of the objects in our TouchObject collection with GestureWorks.
Just after the code block above, add:
    foreach (TouchObject to in _touchObjects)
    {
        _gestureWorks.RegisterTouchObject(to.Name);
        _gestureWorks.AddGesture(to.Name, "n-drag");
        _gestureWorks.AddGesture(to.Name, "n-scale");
        _gestureWorks.AddGesture(to.Name, "n-rotate");
    }
This adds the n-drag, n-rotate, and n-scale gestures to each TouchObject in the collection so that we can receive drag, rotate, and scale events that are delivered for each object. The n- prefix in their names denotes that these gestures can be performed with any number of fingers.
This process of registering a touch object and adding gestures to that touch object is a standard process with regard to the usage of GestureWorks. Note that GestureWorks tracks its objects only by name - it does not know anything about the TouchObject class - we could have named our class "GameObject" or any other name; it is the programmer’s responsibility to associate objects defined in code with data obtained from GestureWorks via the ConsumeGestureEvents() and ConsumePointEvents() methods (discussed below).
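To make the name-based association concrete, here is a minimal language-neutral sketch (in Python, with hypothetical event data - not the real GestureWorks API) of dispatching an event back to a local object by name:

```python
# Sketch of name-based dispatch; the event dict below is hypothetical,
# standing in for a GestureEvent's Target and Values data.
class Sprite:
    def __init__(self, name):
        self.name = name
        self.x = 0.0

# The application keeps its own objects, keyed by the same names
# it registered with the gesture engine.
sprites = {s.name: s for s in (Sprite("logo1"), Sprite("logo2"))}

# A gesture event only carries the target's name plus gesture values...
event = {"target": "logo2", "drag_dx": 12.5}

# ...so the application maps the name back to its own object.
sprites[event["target"]].x += event["drag_dx"]
print(sprites["logo2"].x)  # → 12.5
```

The engine never sees the Sprite class itself; the shared name is the entire contract, exactly as with TouchObject and GestureWorks.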
7. Add some helper functions for collision detection and rotation around a point cluster center

There are two features of this application that require some additional code: we have to detect when a touch point is within one of the sprites' bounding boxes (i.e. collision detection), and we have to rotate the sprite around the center of the touch point cluster as opposed to merely the center of the sprite. While implementation of these common routines varies, this example uses the following three methods to assist in these operations.
You can add these anywhere within the Game1 class (in the example they appear below the Draw() method):
    private bool Collision(TouchPoint touchPoint, TouchObject touchObject)
    {
        // Set up a small bounding box around the touch point
        // (the 20 x 20 size here is an assumption; adjust to taste)
        Rectangle touchPointBox = new Rectangle((int)touchPoint.X - 10,
                                                (int)touchPoint.Y - 10,
                                                20,
                                                20);

        // Test whether the touch point's bounding box intersects the TouchObject's
        if (touchPointBox.Intersects(touchObject.BoundingBox))
        {
            return true;
        }
        else
        {
            return false;
        }
    }
    private void HandleRotate(TouchObject touchObject, GestureEvent gesture)
    {
        // GestureWorks reports degrees; convert to radians for XNA
        float theta = MathHelper.ToRadians(gesture.Values["rotate_dtheta"]);

        // Apply the rotation delta to the TouchObject
        touchObject.Rotation += theta;

        // Only update the position if two or more touch points are associated with the gesture
        if (gesture.N > 1)
        {
            // Rotate the TouchObject around the center of the gesture, not the object
            touchObject.Position = RotateAboutCenter(touchObject.Position.X, touchObject.Position.Y,
                                                     gesture.X, gesture.Y, theta);
        }
    }

    private Vector2 RotateAboutCenter(float currentX, float currentY, float gestureX, float gestureY, float angle)
    {
        Vector2 position = Vector2.Zero;

        double local_x = (double)(currentX - gestureX);
        double local_y = (double)(currentY - gestureY);
        double length  = Math.Sqrt(local_x * local_x + local_y * local_y);

        position.X = (int)(length * Math.Cos((double)angle + Math.Atan2(local_y, local_x)) + (double)gestureX);
        position.Y = (int)(length * Math.Sin((double)angle + Math.Atan2(local_y, local_x)) + (double)gestureY);

        return position;
    }
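The geometry in RotateAboutCenter is an ordinary rotation of a point about an arbitrary center. As a quick sanity check of the math, independent of XNA (a sketch in Python; the function name mirrors the C# helper):

```python
import math

def rotate_about_center(cx, cy, gx, gy, angle):
    """Rotate point (cx, cy) about center (gx, gy) by angle radians."""
    local_x = cx - gx
    local_y = cy - gy
    length = math.hypot(local_x, local_y)
    theta = angle + math.atan2(local_y, local_x)
    return (length * math.cos(theta) + gx,
            length * math.sin(theta) + gy)

# A quarter turn of (2, 1) about (1, 1) should land on (1, 2).
x, y = rotate_about_center(2, 1, 1, 1, math.pi / 2)
print(round(x, 6), round(y, 6))  # → 1.0 2.0
```

Unlike the C# version, this sketch does not truncate the result to int, which also shows where the (int) casts in RotateAboutCenter give up sub-pixel precision.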

8. Consume gesture and point events, manipulate on-screen sprites based on gesture event data

Essential to the usage of the GestureWorks framework are the consumption of the gesture and point events. In the .NET bindings, the methods used for each are ConsumeGestureEvents() and ConsumePointEvents(), respectively. An application should consume these events in an update or draw loop - something that is processed on each frame. We then use the data obtained from these events to affect user-defined objects, in this case the TouchObject instances. In this example we place the event consumption code for both points and gestures in the Update() method.

First we tell GestureWorks to process a frame, then we obtain the gesture and point events:

    _gestureWorks.ProcessFrame();

    GestureEventArray gestureEvents = _gestureWorks.ConsumeGestureEvents();
    PointEventArray   pointEvents   = _gestureWorks.ConsumePointEvents();

Now that we have the event data from GestureWorks, we can use this data to update our TouchObject instances with new position, scale, and rotation data.

First we need to determine whether a new touch point has been added by looping through the PointEvents obtained from GestureWorks and, if so, whether that point lies within one of the sprites' bounding boxes. If it does, we add that touch point (via AddTouchPoint) to GestureWorks so that GestureWorks can process it.

    foreach (PointEvent pointEvent in pointEvents)
    {
        if (pointEvent.Status == TouchStatus.TOUCHADDED)
        {
            foreach (TouchObject to in _touchObjects)
            {
                if (Collision(pointEvent.Position, to))
                {
                    _gestureWorks.AddTouchPoint(to.Name, pointEvent.PointId);
                }
            }
        }
    }

Basically, this lets us know whether to alter the position of a sprite in a drag event (actually n-drag), as shown below.

Now we loop through the GestureEvents obtained from GestureWorks and affect our sprites based on the data obtained. In short, for each of our TouchObject instances we check to see if the gesture target matches the name of the TouchObject, then adjust the TouchObject properties based on the gesture type.

    foreach (GestureEvent gesture in gestureEvents)
    {
        foreach (TouchObject to in _touchObjects)
        {
            if (to.Name == gesture.Target)
            {
                if (gesture.GestureID == "n-drag")
                {
                    to.Position = new Vector2(to.Position.X + gesture.Values["drag_dx"],
                                              to.Position.Y + gesture.Values["drag_dy"]);
                }
                else if (gesture.GestureID == "n-scale")
                {
                    to.Scale += gesture.Values["scale_dsx"];
                }
                else if (gesture.GestureID == "n-rotate")
                {
                    HandleRotate(to, gesture);
                }
            }
        }
    }

As you can see, we are testing for the gestures we registered in step #6 above and manipulating the property values of each TouchObject using the values returned from GestureWorks. In this way, GestureWorks does all of the heavy lifting in terms of transforming the position information for each type of gesture, e.g. the rate of decay of a drag or rotate gesture.

More on Gestures

Typically, each GestureEvent is used to update an object’s properties within an application (in this example, a TouchObject) based on the data received, with rotate events changing an object’s rotation, drag events changing an object’s X and Y coordinates (its position), and scale events being used to change the object’s scale (its size). Of course, rotate, drag and scale are not the only gesture types available, but for the purposes of this tutorial they are used for demonstrating some of the more commonly used gesture events.

For the most part, relevant gesture event data is stored in the Values property of the GestureEvent object. Within the .NET bindings, this property is a Dictionary<string, float>. The string key corresponds to an attribute’s name as it exists in the GML (please see the GestureML documentation for more information on gesture attributes). In this tutorial, these attribute names can be viewed in the basic_manipulations.gml file we loaded in step #3 above.

It is also important to note that these values are generally expressed as deltas; that is, they represent differentials between the current state and the previous state, e.g. drag_dx and drag_dy each represent the change in position between the current and previous frames.
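To illustrate the delta convention with made-up numbers (a Python sketch; the per-frame values below are invented, not real GestureWorks output), the application accumulates each frame's differential into its own absolute state:

```python
# Per-frame drag deltas, as a gesture engine might report them (invented data).
frames = [
    {"drag_dx": 4.0,  "drag_dy": 0.0},
    {"drag_dx": 3.0,  "drag_dy": 2.0},
    {"drag_dx": -1.0, "drag_dy": 5.0},
]

x, y = 100.0, 100.0          # sprite's starting position
for values in frames:
    x += values["drag_dx"]   # apply the change since the previous frame
    y += values["drag_dy"]

print(x, y)  # → 106.0 107.0
```

This is exactly what the n-drag branch in step #8 does with to.Position: each frame's drag_dx/drag_dy is added to the current position rather than replacing it.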

9. Draw the TouchObject sprites on the screen

At this point, we can add code to draw the TouchObject sprites on the screen on each frame update. Add the following code to the Draw() method in Game1.cs and build and run your application to see how interaction with the sprites behaves:

    spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);

    foreach (TouchObject to in _touchObjects)
    {
        to.Draw(spriteBatch);
    }

    spriteBatch.End();

The example solution includes an additional line in the Draw() method just above the foreach loop that draws the GestureWorks logo sprite in the upper-left corner of the screen. To match the example project exactly, you may optionally add the following line above the foreach in the Draw() method:

    spriteBatch.Draw(_textureGwC, Vector2.Zero, Color.White);

Note also that this requires _textureGwC to be defined as a member field of the Game1 class and the texture to be loaded in LoadContent():

Game1 member field:

    Texture2D _textureGwC;

Texture load in the LoadContent() method:

    _textureGwC = Content.Load<Texture2D>("logo_gestureworks_core");

10. Build and Run

Before we run the application, change the exit code in the Update() method so that hitting ESC will exit the app:

    // Allows the game to exit
    if (Keyboard.GetState().IsKeyDown(Keys.Escape))
        this.Exit();

Now, build and run the application (F5) and try dragging the green GestureWorks logos around the screen, as well as scaling and rotating them.

If everything looks OK, uncomment the graphics.ToggleFullScreen(); line in the Game1 constructor and set the background to GraphicsDevice.Clear(Color.DarkGray); in the Draw() method.


Coming soon…


In this tutorial we expanded on the knowledge gained in .NET & XNA: Getting Started II (Hello Multitouch) by manipulating on-screen sprites using gesture data obtained from GestureWorks. This tutorial covers a number of concepts that are central to the usage of GestureWorks:

  • Registering touch objects
  • Adding gestures to the registered touch objects
  • Manipulating .NET object instances based on data obtained from GestureWorks

A principal concept that was briefly touched upon in step #6 above is that due to the language- and framework-independent nature of GestureWorks, it is the programmer’s responsibility to associate the local implementation of an object with an object as tracked by GestureWorks. To review: GestureWorks doesn’t know anything about the TouchObject class that we defined, but we’ve registered each object with GestureWorks using the object’s Name property as an identifier; we then manipulate our TouchObjects based on the data contained in the matching GestureEvent (the GestureEvent whose Target property matches our TouchObject's Name property).

Continuing Education

It is the intention of this and the other Tutorials and Legacy Tutorials to get the programmer started using the GestureWorks Core framework; they are by no means an exhaustive explanation of all of the features of GestureWorks Core. In fact, we’ve only barely scratched the surface!

For more information on GestureWorks Core, GestureML, and CreativeML, please visit the following sites:

Previous tutorial: .NET & XNA: Getting Started II (Hello Multitouch)

tutorials/legacy/net_xna/interactive_bitmaps.txt · Last modified: 2019/01/21 19:24 (external edit)