Joe Honton
Take Control of Mouse and Touch Events to Create Your Own Gestures

Developers looking for an easy way to listen for gestures will find no support from the browser. Gestures must be built from the underlying Pointer Events and Mouse Events APIs. Further complicating matters, those two APIs are not symmetrical.

Handling the raw mouse and touch events is the key to creating a gesture API.
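As a sketch of that idea, the tracker below normalizes raw pointerdown/pointerup events into start/end records and classifies the simplest case. The class name, event-object shape, and 10-pixel threshold are illustrative assumptions, not the article's actual API:

```javascript
// Minimal tracker that turns raw pointerdown/pointerup events into
// start/end records — the first step toward recognizing a gesture.
// The 10px "stationary" threshold is an illustrative assumption.
class PointerTracker {
  constructor() {
    this.active = new Map();          // pointerId → start record
  }
  down(ev) {                          // call from a 'pointerdown' listener
    this.active.set(ev.pointerId, { x: ev.clientX, y: ev.clientY, t: ev.timeStamp });
  }
  up(ev) {                            // call from a 'pointerup' listener
    const start = this.active.get(ev.pointerId);
    this.active.delete(ev.pointerId);
    if (!start) return null;
    const moved = Math.hypot(ev.clientX - start.x, ev.clientY - start.y);
    return moved < 10 ? 'tap' : 'drag';   // a stationary pointer is a tap
  }
}

// In the browser the wiring would look like:
//   el.addEventListener('pointerdown', ev => tracker.down(ev));
//   el.addEventListener('pointerup',   ev => tracker.up(ev));
```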

These are the steps a developer will need to take to recognize gestures:

  1. Capture the starting and ending position of each finger or mouse pointer.
  2. Compute the distance and direction of each pointer’s movement.
  3. Calculate the geometric relationship between multiple pointers.
  4. Determine a pointer’s speed using the system clock.
  5. Check whether any special touch zones should be applied.
  6. Suppress any automatic browser-generated actions.
  7. Discard any unwanted raw events.
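Steps 2 and 4 above reduce to a little vector algebra over each pointer's start and end samples. A minimal sketch, with illustrative function names and units (nothing here comes from the article's actual API):

```javascript
// Distance, direction, and speed between a pointer's start and end
// samples, each of shape { x, y, t } where t is the event timestamp.
function distance(start, end) {
  return Math.hypot(end.x - start.x, end.y - start.y);     // pixels
}

function direction(start, end) {
  // Angle in degrees: 0 = east, 90 = south (screen y grows downward).
  const deg = Math.atan2(end.y - start.y, end.x - start.x) * 180 / Math.PI;
  return (deg + 360) % 360;
}

function speed(start, end) {
  // Pixels per millisecond, from the event timestamps (system clock).
  const dt = end.t - start.t;
  return dt > 0 ? distance(start, end) / dt : 0;
}
```

With these in hand, a flick versus a swipe is just a threshold on `speed`, and left/right/up/down is a bucketing of `direction`.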

The algebra for each of these steps is worked out in the full-length article.

Key points:

  • Simple gestures like tap, press, and doubletap can be recognized from a single stationary pointer.
  • Gestures like horizontalflick and verticalflick can be distinguished from swipeleft/swiperight and scrollup/scrolldown by monitoring the system clock.
  • Two-finger gestures can interpret a change in the fingers' relative distance as a pinch or spread.
  • Two fingers moving in tandem can be recognized as horizontalpan, verticalpan, or a twofingertap.
  • Two fingers with a change in the sweep angle can be recognized as a clockwise or counterclockwise gesture.
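The two-finger cases in the last three bullets come down to comparing the pair's separation and sweep angle before and after the move. A hedged sketch, where the ratio and angle thresholds are illustrative assumptions:

```javascript
// Classify a two-finger move from the two pointers' start and end
// positions (each { x, y }). Thresholds are illustrative only.
function twoFingerGesture(startA, startB, endA, endB) {
  const dist = (p, q) => Math.hypot(q.x - p.x, q.y - p.y);
  const angle = (p, q) => Math.atan2(q.y - p.y, q.x - p.x) * 180 / Math.PI;

  // A change in relative distance is a pinch or spread.
  const ratio = dist(endA, endB) / dist(startA, startB);
  if (ratio < 0.8) return 'pinch';            // fingers moved closer
  if (ratio > 1.25) return 'spread';          // fingers moved apart

  // A change in the sweep angle is a rotation; since screen y grows
  // downward, an increasing angle is visually clockwise.
  let sweep = angle(endA, endB) - angle(startA, startB);
  if (sweep > 180) sweep -= 360;              // normalize to (-180, 180]
  if (sweep < -180) sweep += 360;
  if (sweep > 15) return 'clockwise';
  if (sweep < -15) return 'counterclockwise';

  return 'pan';                               // fingers moved in tandem
}
```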

For demonstration purposes, many of these have been implemented in the gesture API used by the Simply Earth website. When viewed on the desktop, the mouse plus the Ctrl, Alt, and Shift keys are used to initiate gestures. When viewed on mobile devices, two fingers are used to initiate all of the same gestures.
