A gesture library for the Kinect, GestIT

Today's project is an interesting one, as it aims to provide a more generic, compositional, and declarative gesture system and library.

GestIT

A library for managing gestures in a compositional and declarative way.
The library supports the definition of high-level gestures while maintaining the possibility to decompose them into smaller parts and to assign handlers to their sub-components.

The library is abstract with respect to the recognition platform, so it can be used to describe gestures recognized by very different devices, such as touch screens for multi-touch interaction or the Microsoft Kinect for full-body gestures.
Gestures are defined starting from two basic concepts: ground terms and composition operators.

Ground terms represent features that developers can track in order to recognize gestures. For instance, in a multi-touch application they are the events that track finger positions (usually called touch start, move and end), while for full-body gestures they are the events related to joint positions. A ground term can optionally be associated with a predicate that has to hold before the notification of a feature change is delivered. For instance, if the movement of a body joint is the feature, a predicate can compute whether the movement is linear or not.
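
To make the idea concrete, here is a minimal sketch of how a ground term with an optional predicate might be modelled. This is not GestIT's actual API; the types and member names (GroundTerm, Feed, Recognized) are hypothetical and only illustrate the concept.

    using System;

    // Hypothetical sketch (not GestIT's actual API): a ground term pairs a
    // tracked feature with an optional predicate that must hold before
    // listeners are notified of a feature change.
    public class GroundTerm<TEvent>
    {
        private readonly Func<TEvent, bool> _predicate;

        // Fired when the feature changes and the predicate (if any) holds.
        public event Action<TEvent> Recognized;

        public GroundTerm(Func<TEvent, bool> predicate = null)
        {
            _predicate = predicate;
        }

        // Called by the device layer (touch screen, Kinect skeleton stream, ...)
        // whenever the underlying feature changes.
        public void Feed(TEvent e)
        {
            if (_predicate == null || _predicate(e))
                Recognized?.Invoke(e);
        }
    }

A joint-move term, for example, could carry a predicate that accepts only roughly linear displacements, mirroring the linear-movement check mentioned above.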

The composition operators connect ground terms and composed gestures in order to obtain a complex gesture definition. The supported operators are the following:

  • Iterative Operator, represented by the symbol *, expresses that a gesture is recognized repeatedly, an indefinite number of times.
  • Sequence Operator, represented by the symbol >>, expresses that the connected sub-gestures (two or more) have to be performed in sequence, from left to right.
  • Parallel Operator, represented by the symbol ||, expresses that the connected sub-gestures (two or more) can be recognized at the same time.
  • Choice Operator, represented by the symbol [], expresses that it is possible to select one among the connected components in order to recognize the whole gesture.
  • Disabling Operator, represented by the symbol [>, expresses that a gesture stops the recognition of another one, typically used for stopping iteration loops.
  • Order Independence, represented by the symbol |=|, expresses that the connected sub-gestures can be performed in any order.
The result of composition is an expression that defines the temporal relationships among the low-level device events involved in recognizing the gesture. Event handlers can be attached to any expression term (either ground or complex) and are kept separate from the gesture description itself, which can therefore be reused in different graphic controls and for different behaviours.
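
To illustrate how composition and the separate handler attachment could fit together, here is a rough sketch of a sequence operator (the >> from the list above). Again, the names are hypothetical and build on the GroundTerm sketch earlier, not on the library's own classes.

    using System;

    // Hypothetical sequence operator (">>"): feeds device events to the
    // sub-gesture currently expected and fires its own Recognized event once
    // every part has been recognized, from left to right.
    public class Sequence<TEvent>
    {
        private readonly GroundTerm<TEvent>[] _parts;
        private int _current;                      // index of the next part to recognize

        public event Action<TEvent> Recognized;

        public Sequence(params GroundTerm<TEvent>[] parts)
        {
            _parts = parts;
            foreach (var part in _parts)
                part.Recognized += OnPartRecognized;
        }

        private void OnPartRecognized(TEvent e)
        {
            _current++;
            if (_current == _parts.Length)
            {
                _current = 0;                      // reset so the gesture can repeat
                Recognized?.Invoke(e);
            }
        }

        // Route incoming events to the sub-gesture we are currently waiting on.
        public void Feed(TEvent e) => _parts[_current].Feed(e);
    }

A simple "tap" could then be written as new Sequence<TouchEvent>(press, release), where TouchEvent, press, and release are whatever the hosting application defines, and handlers such as tap.Recognized += ... are attached afterwards, which reflects the point above that handlers stay separate from the reusable gesture description.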

Project Information URL: http://gestit.codeplex.com/

Project Source URL: http://gestit.codeplex.com/SourceControl/list/changesets

One thing to note is that BodyGestIT (i.e. the Kinect-based version) uses the Emgu.CV project (noted in the code comments). I used version 2.4 and that seemed to compile fine...
