Audu Ephraim
Enhancing User Experience with Gestures in Jetpack Compose

In the context of an Android application, a “gesture” refers to a specific pattern of user interaction with the device's touch screen.

It can be a hand-drawn shape or a series of timed points on the screen. Each gesture can have one or multiple strokes.

A common gesture starts when the screen is initially touched and concludes when the final finger or pointing device is lifted off the screen.

A swiping motion to move to the next page of an ebook, or a pinching movement that uses two touch points to zoom in and out of an image, are common examples of gestures.

Gestures in Jetpack Compose

Jetpack Compose offers different ways to detect gestures within an application. In this article, we will implement three different gestures:

  • Pinching gestures

  • Rotation gestures

  • Translation gestures

In most cases, Compose provides two ways to detect gestures: gesture modifiers with built-in visual effects, or the PointerInputScope interface, which requires more code but supports more advanced gesture detection.

This article will explore gesture detection by creating an Android project to see how these gestures are implemented.

Creating the Demo Project

  1. Start Android Studio and create a new Empty Activity project named gesturePractice.
  2. Select a minimum API level of API 26: Android 8.0.
  3. Within the main activity, delete the Greeting function and add a new composable named GesturePractice.
  4. Edit onCreate and the Greeting preview to call GesturePractice.

Detecting Pinch Gestures

Pinch gestures are commonly used to alter the scale of content, effectively creating a zoom-in-and-out effect. This gesture is recognized through the use of the transformable() modifier.

This modifier requires a state of the TransformableState type as a parameter, which can be instantiated via the rememberTransformableState() function.

This function accepts a trailing lambda where three parameters are passed:

  • Scale Change: A Float value that is updated when pinch gestures are performed.

  • Offset Change: An Offset instance containing the current x and y offset values. This value is updated when a gesture causes the target component to move (referred to as translations).

  • Rotation Change: A Float value representing the current angle change when detecting rotation gestures.
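Before wiring these values into Compose, it helps to see how the three callback parameters are meant to accumulate: scale is a multiplicative delta, while rotation and offset are additive deltas. The sketch below models this in plain Kotlin with no Compose dependency; `TransformState` and `applyDelta` are hypothetical names used only for illustration.

```kotlin
// Hypothetical holder mirroring the three values a transformable
// gesture updates: absolute scale, rotation angle, and x/y offset.
data class TransformState(
    var scale: Float = 1f,
    var angle: Float = 0f,
    var offsetX: Float = 0f,
    var offsetY: Float = 0f,
) {
    // Mirrors the rememberTransformableState lambda: scale is
    // multiplicative; angle and offset accumulate additively.
    fun applyDelta(scaleChange: Float, rotationChange: Float, dx: Float, dy: Float) {
        scale *= scaleChange
        angle += rotationChange
        offsetX += dx
        offsetY += dy
    }
}

fun main() {
    val state = TransformState()
    state.applyDelta(scaleChange = 1.5f, rotationChange = 10f, dx = 4f, dy = -2f)
    state.applyDelta(scaleChange = 2f, rotationChange = 5f, dx = 1f, dy = 2f)
    println(state.scale)   // 3.0 (1.5 * 2, not 1.5 + 2)
    println(state.angle)   // 15.0
    println(state.offsetX) // 5.0
}
```

Note that multiplying scale deltas means two successive 1.5x pinches yield 2.25x overall, which matches how pinch zoom should feel.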

Let's start by making these changes to GesturePractice in MainActivity.kt to implement the pinch gesture:

```kotlin
import androidx.compose.foundation.background
import androidx.compose.foundation.gestures.rememberTransformableState
import androidx.compose.foundation.gestures.transformable
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.size
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableFloatStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.graphics.graphicsLayer
import androidx.compose.ui.unit.dp

@Composable
fun GesturePractice() {
    var scale by remember { mutableFloatStateOf(1f) }

    val state = rememberTransformableState { scaleChange, offsetChange, rotationChange ->
        scale *= scaleChange
    }

    Box(
        contentAlignment = Alignment.Center,
        modifier = Modifier.fillMaxSize()
    ) {
        Box(
            modifier = Modifier
                .graphicsLayer {
                    scaleX = scale
                    scaleY = scale
                }
                .transformable(state = state)
                .background(Color.Blue)
                .size(100.dp)
        )
    }
}
```
  1. var scale by remember { mutableFloatStateOf(1f) }: declares a mutable state variable scale initialized to 1f. The remember function ensures that the value of scale is preserved across recompositions.

  2. val state = rememberTransformableState { scaleChange, offsetChange, rotationChange -> scale *= scaleChange }: creates a TransformableState object that is remembered across recompositions. The lambda multiplies scale by scaleChange whenever a pinch gesture is detected.

  3. Box(contentAlignment = Alignment.Center, modifier = Modifier.fillMaxSize()): creates a Box composable that fills the maximum size of its parent and centers its children.

  4. Inside this Box, another Box is created with the following modifiers:
     • .graphicsLayer { scaleX = scale; scaleY = scale }: applies a scaling transformation whose factor is the scale state variable.
     • .transformable(state = state): makes the Box respond to transformation gestures (such as pinching) as defined by the state object.
     • .background(Color.Blue): sets the background color of the Box to blue.
     • .size(100.dp): sets the size of the Box to 100 dp (density-independent pixels).

To test the app, run it on a physical device or an emulator, then perform a pinch gesture on the blue box to zoom in and out.

If you use an emulator, hold the Ctrl key (Cmd on macOS) while clicking and dragging to simulate a multi-touch pinch.

Detecting Rotation Gestures

To implement support for rotation gestures, we need to add three lines to the existing code:

```kotlin
@Composable
fun GesturePractice() {
    var scale by remember { mutableFloatStateOf(1f) }

    var angle by remember { mutableFloatStateOf(0f) }

    val state = rememberTransformableState { scaleChange, offsetChange, rotationChange ->
        scale *= scaleChange
        angle += rotationChange
    }

    Box(
        contentAlignment = Alignment.Center,
        modifier = Modifier.fillMaxSize()
    ) {
        Box(
            modifier = Modifier
                .graphicsLayer {
                    scaleX = scale
                    scaleY = scale
                    rotationZ = angle
                }
                .transformable(state = state)
                .background(Color.Blue)
                .size(100.dp)
        )
    }
}
```

  1. var angle by remember { mutableFloatStateOf(0f) }: declares a mutable state variable angle initialized to 0f, so the box starts unrotated. The remember function ensures that the value of angle is preserved across recompositions.

  2. rotationZ = angle: applies a rotation transformation to the composable, with the rotation angle taken from the angle state variable.

When we compile and run the app and perform pinch and rotation gestures, both the size and the angle of the box change.
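Because rotationChange is an additive delta, the accumulated angle can grow well past 360 degrees after repeated gestures. graphicsLayer accepts any Float angle, so the code above needs no change; but if you ever want a normalized angle (say, to display it to the user), a small helper can wrap it. This is an optional refinement under that assumption, not part of the gesture API itself.

```kotlin
// Normalizes an accumulated rotation into the [0, 360) range.
// Purely illustrative; graphicsLayer's rotationZ accepts any Float.
fun normalizeAngle(angle: Float): Float {
    val wrapped = angle % 360f
    return if (wrapped < 0f) wrapped + 360f else wrapped
}

fun main() {
    println(normalizeAngle(725f))  // 5.0
    println(normalizeAngle(-30f))  // 330.0
}
```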

Detecting Translation Gestures

Translation is a change in the position of a component. Again, we only need to add a few lines of code to support translation gestures.

```kotlin
// Additional imports needed:
// import androidx.compose.runtime.mutableStateOf
// import androidx.compose.ui.geometry.Offset

@Composable
fun GesturePractice() {
    var scale by remember { mutableFloatStateOf(1f) }

    var angle by remember { mutableFloatStateOf(0f) }

    var offset by remember { mutableStateOf(Offset.Zero) }

    val state = rememberTransformableState { scaleChange, offsetChange, rotationChange ->
        scale *= scaleChange
        angle += rotationChange
        offset += offsetChange
    }

    Box(
        contentAlignment = Alignment.Center,
        modifier = Modifier.fillMaxSize()
    ) {
        Box(
            modifier = Modifier
                .graphicsLayer {
                    scaleX = scale
                    scaleY = scale
                    rotationZ = angle
                    translationX = offset.x
                    translationY = offset.y
                }
                .transformable(state = state)
                .background(Color.Blue)
                .size(100.dp)
        )
    }
}
```

  1. var offset by remember { mutableStateOf(Offset.Zero) }: declares a mutable state variable offset initialized to Offset.Zero, which represents the origin (0, 0) in a 2D plane. The remember function ensures that the value of offset is preserved across recompositions.

  2. translationX = offset.x; translationY = offset.y: applies a translation transformation to the composable, with the distances along the X and Y axes taken from the x and y properties of the offset state variable.
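The line offset += offsetChange works because Compose's Offset class overloads the plus operator to add componentwise. The behavior can be modeled in plain Kotlin with a minimal stand-in class; `Point` here is a hypothetical illustration, not the real androidx.compose.ui.geometry.Offset.

```kotlin
// Minimal stand-in for Compose's Offset, showing the componentwise
// addition that `offset += offsetChange` relies on.
data class Point(val x: Float, val y: Float) {
    operator fun plus(other: Point) = Point(x + other.x, y + other.y)

    companion object {
        val Zero = Point(0f, 0f)
    }
}

fun main() {
    var offset = Point.Zero
    offset += Point(12f, -4f)  // first drag delta
    offset += Point(3f, 10f)   // second drag delta
    println(offset)            // Point(x=15.0, y=6.0)
}
```

Each gesture frame delivers only the movement since the last frame, so summing the deltas yields the box's total displacement from its starting position.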

Note that the translation gesture can only be tested on a physical device.

Conclusion

We’ve seen how to implement translation, rotation, and pinch (scaling) gestures in Jetpack Compose. These gestures enhance the interactivity of our apps, making them more user-friendly.
The power of state and recomposition in Jetpack Compose allows us to create dynamic and responsive UIs.
This exploration is just the start. There’s a vast array of gestures and touch interactions to discover, each with the potential to enrich the user experience.
This article marks the beginning of a series where we’ll explore more complex gestures, handle multiple simultaneous gestures, and much more. I hope to see you there!
