
Working with Gestures in Windows 7


You may have noticed the recent resurgence in touch screen computing. Windows 7 includes the tools you need to handle gestural input. In this excerpt from Kiriaty, Moroney, Goldshtein, & Fliess' Introducing Windows® 7 for Developers you'll learn how to work with gestures in Windows 7.


Whenever the user touches a touch-sensitive device that is connected to the computer and that touch activity translates to a gesture, the Windows 7 multitouch platform sends gesture messages (WM_GESTURE) to your application by default. This is the free, out-of-the-box behavior. But if you're reading this, it is safe to assume that you want to learn how to work with gestures.

Gestures are one-finger or two-finger input that translates into some kind of action that the user wants to perform. When the gesture is detected (by the operating system, which is doing all the work for you), the operating system sends gesture messages to your application. Windows 7 supports the following gestures:

  • Zoom

  • Single-finger and two-finger pan

  • Rotate

  • Two-finger tap

  • Press and tap

Handling the WM_GESTURE Message

To work with gestures, you'll need to handle the WM_GESTURE messages that are sent to your application. If you are a Win32 programmer, you can check for WM_GESTURE messages in your application's WndProc functions. The following code shows how gesture messages can be handled in Win32 applications:

LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
	int wmId, wmEvent;   // used by the elided message handlers below
	PAINTSTRUCT ps;
	HDC hdc;

	switch (message)
	{
	case WM_GESTURE:
		/* insert handler code here to interpret the gesture */
		break;

	// ... other message handlers (WM_COMMAND, WM_PAINT, and so on) ...

	default:
		return DefWindowProc(hWnd, message, wParam, lParam);
	}
	return 0;
}


WM_GESTURE is the generic message used for all gestures. Therefore, to determine which gesture you need to handle, first you need to decode the gesture message. The information about the gesture is found in the lParam parameter, and you need to use a special function—GetGestureInfo—to decode the gesture message. This function receives a pointer to a GESTUREINFO structure and lParam; if it's successful, the function fills the gesture information structure with all the information about the gesture:

GESTUREINFO gi;
ZeroMemory(&gi, sizeof(GESTUREINFO));
gi.cbSize = sizeof(gi);
BOOL bResult = GetGestureInfo((HGESTUREINFO)lParam, &gi);

Here you can see that you prepare the GESTUREINFO structure, gi, by clearing its contents with zeros and setting its size member, and then you pass its pointer to the function, which fills it with the gesture message information.

After obtaining a GESTUREINFO structure, you can check dwID, one of the structure members, to identify which gesture was performed. However, dwID is just one of the structure members. There are several other important members:

  • dwFlags The state of the gesture, such as begin, inertia, or end.

  • dwID The identifier of the gesture command. This member indicates the gesture type.

  • cbSize The size of the structure, in bytes; to be set before the function call.

  • ptsLocation A POINTS structure containing the coordinates associated with the gesture. These coordinates are always relative to the origin of the screen.

  • dwInstanceID and dwSequenceID These are internally used identifiers for the structure and should be ignored.

  • ullArguments This is a 64-bit unsigned integer that contains the arguments for gestures that fit into 8 bytes. This is the extra information that is unique for each gesture type.

  • cbExtraArgs The size, in bytes, of any extra arguments that accompany this gesture.

Now you can complete the switch-case clause to handle all the different Windows 7 gestures, as shown in the following code snippet:

void CMTTestDlg::DecodeGesture(WPARAM wParam, LPARAM lParam)
{
	// create a structure to populate and retrieve the extra message info
	GESTUREINFO gi;
	ZeroMemory(&gi, sizeof(GESTUREINFO));   // clear first, then set the size
	gi.cbSize = sizeof(GESTUREINFO);
	GetGestureInfo((HGESTUREINFO)lParam, &gi);

	// now interpret the gesture
	switch (gi.dwID)
	{
	case GID_ZOOM:
		// Code for zooming goes here
		break;
	case GID_PAN:
		// Code for panning goes here
		break;
	case GID_ROTATE:
		// Code for rotation goes here
		break;
	case GID_TWOFINGERTAP:
		// Code for two-finger tap goes here
		break;
	case GID_PRESSANDTAP:
		// Code for roll over goes here
		break;
	default:
		// You have encountered an unknown gesture
		break;
	}

	CloseGestureInfoHandle((HGESTUREINFO)lParam);
}

Note

In the preceding code segment, you can see how we set the stage for handling each gesture separately, as the dwID member indicates the type of gesture. Also note that at the end of the function we call the CloseGestureInfoHandle function, which closes the resources associated with the gesture information handle. If you handle the WM_GESTURE message, it is your responsibility to close the handle using this function. Failure to do so can result in memory leaks.

Note

In production code, you are expected to check the return value of functions to make sure the function succeeds. For simplicity, we do not include such checks in our code snippets.
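For example, a production-grade version of the decoding call might look like the following sketch, which uses the standard Win32 GetLastError function to retrieve the failure reason (what you do on failure is up to your application):

GESTUREINFO gi;
ZeroMemory(&gi, sizeof(GESTUREINFO));
gi.cbSize = sizeof(gi);
if (!GetGestureInfo((HGESTUREINFO)lParam, &gi))
{
	DWORD dwErr = GetLastError();  // inspect or log the failure reason
	return 0;                      // skip gesture handling on failure
}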

If you look carefully in the Windows 7 Software Development Kit (SDK), you'll find that the dwID member can also have a value indicating when a gesture is starting (GID_BEGIN), which is essentially when the user places his fingers on the screen. The SDK also defines the GID_END value, which indicates when a gesture ends. Gestures are exclusive, meaning you can't zoom and rotate at the same time using gestures. At any given time, your application can receive either a zoom or a rotate gesture, but not both. Gestures can be compound, however, because the user can perform several gestures during one long touch session, and GID_BEGIN and GID_END are the start and end markers for such gesture sequences. To achieve the effect of zooming and rotating at the same time, you need to handle raw touch events and use the manipulation processor, as described in the next chapter.

Most applications should ignore the GID_BEGIN and GID_END messages and pass them to DefWindowProc. These messages are used by the default gesture handler; the operating system cannot provide any touch support for legacy applications without seeing these messages. The dwFlags member contains the information needed to handle the beginning and ending of a particular gesture.
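For instance, the decoding switch shown earlier could forward the markers like this (a minimal sketch; note that you must not call CloseGestureInfoHandle for a message you pass on to DefWindowProc):

case GID_BEGIN:
case GID_END:
	// Forward the sequence markers so the default gesture handler sees
	// them; the handle is only yours to close for messages you handle.
	return DefWindowProc(hWnd, WM_GESTURE, wParam, lParam);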

By now, you can see that handling gesture messages is not that difficult. It follows a fixed process: configuration (which we describe later in the chapter), decoding the gesture message, and handling each specific gesture according to your application's needs. Next, you'll learn what unique information each gesture includes and how to handle it.

Use the Pan Gesture to Move an Object

With the pan gesture, you can control the scrolling of content in a scrollable area. Or you can apply the pan gesture to a specific object, moving it in any direction by simply touching it with one or two fingers and dragging it. This is also known as translation, because you are moving the object from one location to another. The following figure illustrates how panning works:

[Figure: a pan gesture moving a rectangle, with touch points marked 1 and 2]

In the illustration, you can see two touch points, marked with the numbers 1 and 2. By default, the pan gesture supports both single-finger and two-finger panning. You'll see how to configure pan gestures and other gestures later in this chapter.

Now let's see what code you need to implement in your GID_PAN switch to achieve this panning effect. Our application is simple; it displays a single rectangle. We show only the parts of the code that are required to handle gesture messages. Because this is part of a larger example that is written in C++, we do not describe in detail any of the other elements, such as painting the rectangle.

The gesture info structure includes the dwFlags member, which is used to determine the state of the gesture and can include any of the following values:

  • GF_BEGIN Indicates that the gesture is starting

  • GF_INERTIA Indicates that the gesture has triggered inertia

  • GF_END Indicates that the gesture is ending

We'll use the GF_BEGIN flag to save the initial start coordinates of the touch point in a variable as a reference for the following steps. The gesture information includes the ptsLocation member, which contains the X and Y coordinates of the touch point. Then, for each consecutive pan message, we'll extract the new coordinates of the touch point. By using the initial start position that we saved before and the new coordinates, we can calculate the new position and apply the move operation to the object. Finally, we'll repaint the object in its new position. The following code snippet shows the entire GID_PAN switch:

case GID_PAN:
	switch (gi.dwFlags)
	{
	case GF_BEGIN:
		_ptFirst.x = gi.ptsLocation.x;
		_ptFirst.y = gi.ptsLocation.y;
		ScreenToClient(hWnd, &_ptFirst);
		break;

	default:
		// We read the second point of this gesture. It is a middle point
		// between fingers in this new position.
		_ptSecond.x = gi.ptsLocation.x;
		_ptSecond.y = gi.ptsLocation.y;
		ScreenToClient(hWnd, &_ptSecond);

		// We apply the move operation to the object.
		ProcessMove(_ptSecond.x - _ptFirst.x, _ptSecond.y - _ptFirst.y);
		InvalidateRect(hWnd, NULL, TRUE);

		// We have to copy the second point into the first one to prepare
		// for the next step of this gesture.
		_ptFirst = _ptSecond;
		break;
	}
	break;

Here you can see that in the case of the GF_BEGIN flag, we use _ptFirst, a simple POINT structure, to save the initial touch coordinates found in ptsLocation in the gesture information. We call the ScreenToClient function to convert the screen coordinates to coordinates relative to our application's window, because the coordinates in the gesture information are always relative to the origin of the screen.

The next pan message that arrives is handled by the default case. Now we save the coordinates in the _ptSecond variable and again convert them to coordinates relative to our application window. Then we simply subtract the first touch point from the second touch point to find the new location and call ProcessMove, a helper function, to update the coordinates of the rectangle. We call InvalidateRect to repaint the whole window and show the rectangle at its new coordinates. Finally, we save the latest touch point in _ptFirst as a reference for the next gesture message. When a two-finger pan gesture is used, the coordinates of the touch point in the gesture information structure, ptsLocation, represent the current position of the pan, which is the center point of the gesture, and the ullArguments member indicates the distance between the two touch points.
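ProcessMove itself is not shown in the excerpt. A minimal sketch of what such a helper might look like, assuming the sample tracks the rectangle in a client-space RECT member named _rect (a placeholder name for illustration):

// Hypothetical helper (illustration only): shift the tracked rectangle
// by the pan delta. _rect is assumed to be a RECT member.
void CMTTestDlg::ProcessMove(LONG dx, LONG dy)
{
	OffsetRect(&_rect, dx, dy);  // OffsetRect moves a RECT by (dx, dy)
}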

The Windows 7 pan gesture also includes support for inertia. Inertia is used to create a continuation of the movement that was generated by the gesture. After you take your finger off the touch-sensitive device, the system calculates a trajectory based on the velocity and angle of the motion and continues to send WM_GESTURE messages of type GID_PAN that are flagged with GF_INERTIA, reducing the speed of the movement until the motion comes to a complete stop. The inertia-flagged messages continue to include updated coordinates, but with each message the delta from the previous coordinates decreases until the motion stops. This gives you the option to distinguish between normal pan gestures and inertia, so you can opt for special behavior for inertia if needed.
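For example, the default case in the GID_PAN handler shown earlier could branch on this flag; a minimal sketch (what you do with inertia messages is up to your application):

default:
	if (gi.dwFlags & GF_INERTIA)
	{
		// This pan message was generated by inertia, not by a finger on
		// the screen; you could, for example, apply extra damping here
		// or ignore inertia messages altogether.
	}
	// ... same move/repaint logic as in the GID_PAN handler above ...
	break;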

Use the Zoom Gesture to Scale an Object

In the previous section, you learned how to move an object using the pan gesture. The pan gesture is widely used for document reading as well as for more graphical purposes such as picture viewing and manipulation. For all these cases, the zoom gesture is also widely used by both developers and users.

The zoom gesture is usually performed as a pinch movement involving two touch points: the user moves her fingers closer together to zoom out and moves them farther apart to zoom in. For simplicity, we'll refer to zoom in as zoom and explicitly say zoom out for the opposite gesture. The zoom gesture allows you to scale the size of your objects.

The following figure illustrates how the zoom gesture works, with the circles labeled as 1 and 2 indicating the touch points and the arrows indicating the direction the fingers move in to produce the effect:

[Figure: a zoom gesture, with touch points 1 and 2 and arrows showing the direction the fingers move]

Now let's see what code you need to implement in your GID_ZOOM switch to achieve the desired zooming effect. As with the previous gesture example, we'll focus only on the parts of the code that are required to handle the gesture messages.

As before, we'll use GF_BEGIN to store a few parameters that will come in handy when the next zoom gesture message arrives. Again, we'll save ptsLocation in _ptFirst. For a zoom gesture, ptsLocation indicates the center of the zoom. As we did previously, we call the ScreenToClient function to convert the screen coordinates to coordinates relative to our application's window. In addition to saving the center location of the zoom gesture, we also save the distance between the two touch points, which can be found in the ullArguments member of the GESTUREINFO structure. Later, the distance between the fingers allows us to calculate the zoom ratio.

The next zoom message that arrives is handled by the default case. Again, we save the coordinates in the _ptSecond variable and call the ScreenToClient function. Next, we calculate the zoom center point and the zoom ratio. Finally, we update the window to reflect the zoom center point and zooming ratio of the rectangle. Here is a short snippet showcasing these arguments:

case GID_ZOOM:
	switch (gi.dwFlags)
	{
	case GF_BEGIN:
		_dwArguments = LODWORD(gi.ullArguments);
		_ptFirst.x = gi.ptsLocation.x;
		_ptFirst.y = gi.ptsLocation.y;
		ScreenToClient(hWnd, &_ptFirst);
		break;

	default:
		// We read here the second point of the gesture. This is the
		// middle point between the fingers.
		_ptSecond.x = gi.ptsLocation.x;
		_ptSecond.y = gi.ptsLocation.y;
		ScreenToClient(hWnd, &_ptSecond);

		// We have to calculate the zoom center point.
		ptZoomCenter.x = (_ptFirst.x + _ptSecond.x) / 2;
		ptZoomCenter.y = (_ptFirst.y + _ptSecond.y) / 2;

		// The zoom factor is the ratio between the new and old distances.
		k = (double)(LODWORD(gi.ullArguments)) / (double)(_dwArguments);

		// Now we process zooming in/out of the object.
		ProcessZoom(k, ptZoomCenter.x, ptZoomCenter.y);
		InvalidateRect(hWnd, NULL, TRUE);

		// Now we have to store new information as starting information
		// for the next step.
		_ptFirst = _ptSecond;
		_dwArguments = LODWORD(gi.ullArguments);
		break;
	}
	break;

Here you can see that in addition to saving the location of the zoom gesture in the GF_BEGIN case, we also extract from gi.ullArguments the distance between the two touch points, using the LODWORD macro. The ullArguments member is 8 bytes long, and the extra information about the gesture is stored in its lower 4 bytes.

The next zoom message that arrives is handled by the default case. Again, we save the location of the gesture, and from the two sets of touch points we calculate the zoom center point and store it in ptZoomCenter. Now we need to calculate the zoom factor. This is done by calculating the ratio between the new distance and the old distance. Then we call the ProcessZoom helper function, which updates the new coordinates to reflect the zoom factor and center point. After that, we call InvalidateRect to force the window to repaint the rectangle. Finally, we save the latest touch point in _ptFirst and the latest distance between the two touch points in _dwArguments for reference for the next zoom message.
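ProcessZoom is another helper the excerpt doesn't show. A minimal sketch under the same assumption that the object is a RECT member named _rect (a placeholder name), scaling it about the zoom center:

// Hypothetical helper (illustration only): scale the tracked rectangle
// by factor k around the zoom center (x, y), keeping that point fixed.
void CMTTestDlg::ProcessZoom(double k, LONG x, LONG y)
{
	_rect.left   = x + (LONG)((_rect.left   - x) * k);
	_rect.top    = y + (LONG)((_rect.top    - y) * k);
	_rect.right  = x + (LONG)((_rect.right  - x) * k);
	_rect.bottom = y + (LONG)((_rect.bottom - y) * k);
}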

The main difference between the default legacy zoom gesture support and the zoom support just described is the knowledge about the center of the zoom gesture. In the legacy zoom gesture support, the operating system always zooms to the center of the control or to the current cursor location. However, using the zoom gesture to handle zooming on your own allows you to use knowledge about the center of the zoom gesture to zoom in and zoom out around that point, providing the user with a richer, more accurate zooming experience.

Use the Rotate Gesture to Turn an Object

Zoom and pan gestures are the most commonly used gestures. It is safe to assume that most users will naturally perform the right gestures, panning and pinching, to get the desired effect. This is one of the main reasons for providing the legacy support for these gestures. In the previous sections, you saw how you can customize the handlers for these gestures. In this section, we'll address the rotate gesture, which after the pan and zoom gestures is probably the most widely used gesture in applications that use surfaces to display content, such as photo viewing and editing or mapping applications.

Imagine there is a real picture placed flat on your desk. It is a picture of a landscape, and it's turned upside down. To turn the picture right side up, you need to place your hand (or just two fingers) on it and move your hand in a circular motion to rotate the picture to the desired position. This is the same gesture that you can perform in Windows 7. The rotate gesture is performed by creating two touch points on an object and moving the touch points in a circular motion, as shown in the following illustration, with the numbers 1 and 2 representing the two touch points and the arrows representing the direction of movement from those points. As in the previous examples, we'll examine what kind of coding effort it takes to rotate our rectangle.

[Figure: a rotate gesture, with touch points 1 and 2 and arrows showing the circular movement]

We use the GID_ROTATE value to identify the rotate gesture. As with the previous gesture handlers, we use the GF_BEGIN flag to save the initial state of the gesture. In our case, we store 0 (zero) in _dwArguments, which is used as the variable holding the rotation angle.

In the default case, we again save the location of the gesture. The ptsLocation member represents the center between the two touch points, and it can be considered as the center of rotation. As with all gesture messages, the ullArguments member holds the extra information about the gesture. In the rotate gesture, this is the cumulative rotation angle. This cumulative rotation angle is relative to the initial angle that was formed between the first two touch points, and the initial touch points are considered the "zero" angle for the current rotate gesture. That is the reason we don't need to save the initial angle in the GF_BEGIN case—we consider that angle to be zero degrees. This makes sense because you want to capture the user motion and project that relative to the object position on the screen rather than using fixed angle positions, which will force you to do a lot of calculation in the initial state.

case GID_ROTATE:
	switch (gi.dwFlags)
	{
	case GF_BEGIN:
		_dwArguments = 0;
		break;

	default:
		_ptFirst.x = gi.ptsLocation.x;
		_ptFirst.y = gi.ptsLocation.y;
		ScreenToClient(hWnd, &_ptFirst);

		// The gesture message carries the cumulative rotation angle.
		// However, we have to pass the delta angle to our function
		// responsible for processing the rotation gesture.
		ProcessRotate(
			GID_ROTATE_ANGLE_FROM_ARGUMENT(LODWORD(gi.ullArguments))
			- GID_ROTATE_ANGLE_FROM_ARGUMENT(_dwArguments),
			_ptFirst.x, _ptFirst.y);
		InvalidateRect(hWnd, NULL, TRUE);
		_dwArguments = LODWORD(gi.ullArguments);
		break;
	}
	break;

Here you can see that, as with all gestures, we save the location of the gesture and convert the point to coordinates relative to our window. Next, we extract the rotation angle from ullArguments using the LODWORD macro and the GID_ROTATE_ANGLE_FROM_ARGUMENT macro, which converts the packed value into an angle in radians. As mentioned earlier, the gesture messages carry the cumulative rotation angle, but we need the difference between two consecutive angles to create the rotation motion. So we pass the delta angle and the center of rotation to the ProcessRotate helper function, which updates the X and Y coordinates of the object to reflect the rotation motion. We invalidate the window to repaint the rectangle. Finally, we save the current cumulative angle in _dwArguments so that we can compute the delta when the next rotate gesture message arrives.
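As with the other helpers, ProcessRotate isn't shown in the excerpt. A sketch of one possible implementation, assuming the sample tracks the rectangle's corners in a POINT array member named _corners (a placeholder name), rotating them by the delta angle in radians around the given center:

// Hypothetical helper (illustration only): rotate the object's corner
// points by dAngle radians around the center point (x, y).
// sin and cos come from <math.h>.
void CMTTestDlg::ProcessRotate(double dAngle, LONG x, LONG y)
{
	double s = sin(dAngle);
	double c = cos(dAngle);
	for (int i = 0; i < 4; i++)
	{
		double dx = _corners[i].x - x;
		double dy = _corners[i].y - y;
		_corners[i].x = x + (LONG)(dx * c - dy * s);
		_corners[i].y = y + (LONG)(dx * s + dy * c);
	}
}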

As with the zoom gesture, ptsLocation holds the center of rotation, which allows you to have fine control over how you rotate the object. Instead of rotating the object around the center point of the object, you can do so around a specific rotation point to achieve higher fidelity to the user gesture and to provide a better user experience.

Use a Two-Finger Tap to Mimic a Mouse Click

So far, we have covered the pan, zoom, and rotate gestures, which follow natural finger movements, and we expect these gestures to behave as their names suggest. But there are other forms of input we use in our day-to-day work with computers, such as the mouse click, double-click, and right-click. These mouse events are used by almost every Windows application, so the Windows 7 multitouch platform needs to provide a way to mimic these behaviors. The two-finger tap and press-and-tap gestures can be used as the touch equivalents of clicking and right-clicking, respectively.

A two-finger tap is a simple gesture: just tap once with two fingers on the object you want to manipulate. When you handle the two-finger tap gesture, it's easy to see how you can use a double two-finger tap to mimic a mouse double-click. But because there is already a clear difference between a single-finger touch (usually used to pan) and a two-finger tap, you might consider simply using a single two-finger tap as a mouse double-click. The following figure illustrates a two-finger tap, with the numbers 1 and 2 indicating touch points that are touched simultaneously. In this case, we've used a two-finger tap to add two diagonal lines to the rectangle, as you can see in the image on the right.

[Figure: a two-finger tap, with simultaneous touch points 1 and 2; the image on the right shows the rectangle with two diagonal lines added]

We use the GID_TWOFINGERTAP value to identify the two-finger tap gesture. As with the previous gesture handlers, ullArguments contains extra information about the gesture. In the case of the two-finger tap gesture, ullArguments contains the distance between the two fingers, and ptsLocation indicates the center of the two-finger tap. In the following code snippet, we don't need to save or use the distance between the two touch points or the center of the gesture. Instead, we set a flag and start a timer that expires after 200 milliseconds; when the timer expires, it resets the flag, unless a second two-finger tap gesture message arrives first. If the second gesture message arrives within 200 milliseconds, we call the ProcessTwoFingerTap function, which paints an X on the rectangle. The code looks like this:

case GID_TWOFINGERTAP:
	if (_intFlag == 0)
	{
		// First tap: set the flag and start the 200-millisecond timer
		// that resets _intFlag if no second tap arrives in time.
		_intFlag = 1;
		setTimer();
	}
	else if (_intFlag == 1)
	{
		// Second tap within the time window: handle it as a double-click.
		ProcessTwoFingerTap();
		InvalidateRect(hWnd, NULL, TRUE);
		_intFlag = 0;
	}
	break;

Here you can see that we handle only a basic case of the message and do not inspect the message flags for GF_BEGIN as in the previous gestures; we just execute all the logic in the GID_TWOFINGERTAP case. You might also consider removing the flag and timer from the code just shown and handling only a single two-finger tap, mainly because you shouldn't confuse the user by trying to distinguish between single-finger panning and a two-finger tap, and there is no single-finger tap gesture. Therefore, you can treat a single-finger tap as the equivalent of a mouse click and, if needed, a two-finger tap as the equivalent of a double-click.
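The setTimer helper isn't shown in the excerpt. A minimal sketch of one way to implement it with the Win32 SetTimer API, assuming a hypothetical timer identifier and a WM_TIMER handler in the same window procedure (names are placeholders):

#define IDT_TWOFINGERTAP 1   // hypothetical timer identifier

void CMTTestDlg::setTimer()
{
	// Ask Windows to post a WM_TIMER message after 200 milliseconds.
	SetTimer(m_hWnd, IDT_TWOFINGERTAP, 200, NULL);
}

// In the window procedure:
case WM_TIMER:
	if (wParam == IDT_TWOFINGERTAP)
	{
		_intFlag = 0;  // no second tap arrived in time; reset the flag
		KillTimer(hWnd, IDT_TWOFINGERTAP);
	}
	break;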

Use the Press-and-Tap Gesture to Mimic a Mouse Right-Click

In the previous section, we showed you how to handle the two-finger tap gesture which, among other usages, can mimic a mouse click or double-click. But what about mouse right-clicks? What hand gesture can simulate that? The goal is to simulate a right-click, which is really a click of the second mouse button. This translates into putting down one finger to create the first touch point and then tapping with a second finger, forming a second touch point for a short period of time. The following figure illustrates this operation by showing, left to right, the sequence of touch points.

[Figure: the press-and-tap sequence of touch points, shown left to right]

We use the GID_PRESSANDTAP value to identify this gesture. As with the previous gesture handlers, ullArguments contains extra information about the gesture. In the case of the press-and-tap gesture, ullArguments contains the distance between the two fingers, and ptsLocation indicates the position that the first finger touched. You can use this information to customize the graphical representation of the first touch point or to apply a similar highlighting effect. For our simple demonstration, we will not store any information for this gesture handler, nor will we use the GF_BEGIN flag. We simply call the ProcessPressAndTap helper function to randomly recolor the borders of the rectangle, as the following code snippet shows:

case GID_PRESSANDTAP:
	ProcessPressAndTap();
	InvalidateRect(hWnd, NULL, TRUE);
	break;

Configuring Windows 7 Gestures

As we mentioned before, gesture messages (WM_GESTURE) are sent to your application by default. Well, most of them are: all gesture types except the single-finger pan and rotate are sent to your application by default. That said, you can choose and configure which gesture messages you want your application to receive at any given time. To specify which gesture messages you want to receive, call SetGestureConfig and pass it an array of gesture configuration structures. With the GESTURECONFIG structure, you can set and get the configuration for enabling gesture messages.

The GESTURECONFIG structure has the following members:

  • dwID The identifier for the type of configuration that will have messages enabled or disabled. It can be any one of the "GID_" gesture messages (zoom, pan, rotate, two-finger tap, or press and tap).

  • dwWant Indicates which messages to enable in relation to the gesture defined in dwID.

  • dwBlocks Indicates which messages to disable in relation to the gesture defined in dwID.

You can use the following code snippet to enable all the gestures using the special GC_ALLGESTURES flag:

GESTURECONFIG gc = {0, GC_ALLGESTURES, 0};
SetGestureConfig(hWnd, 0, 1, &gc, sizeof(GESTURECONFIG));

If you want to have finer control over the gestures you want to receive messages for, you can use the gesture configuration structure and specify which gesture message you want to get and which you want to block. For example, the following code shows how you can disable all gestures:

GESTURECONFIG gc[] = {{ GID_ZOOM, 0, GC_ZOOM },
                      { GID_ROTATE, 0, GC_ROTATE },
                      { GID_PAN, 0, GC_PAN },
                      { GID_TWOFINGERTAP, 0, GC_TWOFINGERTAP },
                      { GID_PRESSANDTAP, 0, GC_PRESSANDTAP }};

UINT uiGcs = 5;
BOOL bResult = SetGestureConfig(hWnd, 0, uiGcs, gc, sizeof(GESTURECONFIG));

This code sets five different gesture configuration structures, one for each gesture. As you can see, the first member of each structure is the "GID_" gesture identifier. In each structure, the second member (dwWant) is set to zero, indicating that we don't want to receive any messages for that gesture, and the third member (dwBlocks) is set to the corresponding "GC_" flag, indicating that we want to block that specific gesture message.

The right time to call SetGestureConfig is during the execution of the WM_GESTURENOTIFY handler. WM_GESTURENOTIFY is sent to your application just before the operating system starts sending WM_GESTURE messages to your application's window. Basically, the operating system is telling you, "Hi, I am going to start sending you gesture messages real soon." In fact, this message is sent after the user has already placed his fingers on the touch-sensitive device and started performing a gesture, and it is the right place to define the list of gestures your application supports. By populating GESTURECONFIG structures that reflect the gestures you want to handle and calling the SetGestureConfig function, you choose the gestures your application will receive.
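Put together, a minimal WM_GESTURENOTIFY handler might look like the following sketch, enabling all gestures as in the earlier snippet (note that the message should still be passed on to DefWindowProc):

case WM_GESTURENOTIFY:
	{
		// Declare the set of gestures this window wants to receive just
		// before the WM_GESTURE stream starts.
		GESTURECONFIG gc = {0, GC_ALLGESTURES, 0};
		SetGestureConfig(hWnd, 0, 1, &gc, sizeof(GESTURECONFIG));
	}
	return DefWindowProc(hWnd, message, wParam, lParam);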

Advanced Gesture Configuration

Except for the pan gesture, gestures can only be turned on or off. Because the pan gesture is probably the most popular gesture, the Windows 7 multitouch platform allows you to set extra configuration parameters for it. If you remember, earlier in this chapter we mentioned the horizontal and vertical pan gestures, as well as inertia in regard to panning. It turns out you can be very specific when it comes to pan gesture configuration. You can set the dwWant member of the configuration structure to any of the following flag values:

  • GC_PAN

  • GC_PAN_WITH_SINGLE_FINGER_VERTICALLY

  • GC_PAN_WITH_SINGLE_FINGER_HORIZONTALLY

  • GC_PAN_WITH_GUTTER

  • GC_PAN_WITH_INERTIA

Most values are self-explanatory. GC_PAN is used to enable all pan gesture messages. The next flag, GC_PAN_WITH_GUTTER, requires some explanation. A gutter defines the boundaries of the pannable area within which your pan gesture still works. The gutter boundary limits movement perpendicular to the primary pan direction, either vertical or horizontal; when a certain threshold is reached, the current sequence of pan gestures is stopped. So if you are panning vertically, you don't have to pan in a perfectly straight line. You can perform somewhat diagonal vertical pan gestures, deviating about 30 degrees from the main perpendicular, and still have the touch movement treated as a pan gesture, as shown in Figure 5.3, "Using gutters for panning gestures," where the left and right gutter boundaries define a sort of funnel. Within that area, a pan gesture still works. After putting down the first touch point, shown at number 1, you can perform the pan gesture anywhere in the area defined by the funnel. If during the gesture you pan out of the gutter, shown at number 3, the pan gesture is discontinued and the operating system stops sending you WM_GESTURE pan messages.

Figure 5.3. Using gutters for panning gestures


By turning the gutter off, you allow the pan gesture to enter free-style mode, enabling the operating system to send pan messages for every panning gesture performed on the pannable area with no regard to vertical or horizontal panning. This can make for a good experience when moving objects such as pictures on a particular surface or when panning a map. Imagine a mapping application in which you want to provide a full 2D panning experience. The following code snippet shows how you can configure the Windows 7 Multitouch platform to allow zoom, rotate, and panning gestures vertically and horizontally with gutters turned off. This is a useful configuration for a mapping application in which you want to allow the user to pan, zoom, and rotate the point of view of the map.

DWORD dwPanWant  = GC_PAN | GC_PAN_WITH_SINGLE_FINGER_VERTICALLY |
                   GC_PAN_WITH_SINGLE_FINGER_HORIZONTALLY;
DWORD dwPanBlock = GC_PAN_WITH_GUTTER;

GESTURECONFIG gc[] = {{ GID_ZOOM, GC_ZOOM, 0 },
                      { GID_ROTATE, GC_ROTATE, 0 },
                      { GID_PAN, dwPanWant, dwPanBlock }};

UINT uiGcs = 3;
SetGestureConfig(hWnd, 0, uiGcs, gc, sizeof(GESTURECONFIG));

A good example of this functionality is the Photo Viewer application that ships with Windows 7, which already has support for zoom and rotate functionality specified via mouse input. Now this functionality is also backed by multitouch gesture support with a relatively small amount of effort.


