
Gualtiero Frigerio

Introduction to Canvas in SwiftUI

This article is about another nice addition to SwiftUI at WWDC21: the Canvas view.
Canvas is also the name Apple gave to the live preview of SwiftUI views in Xcode, so you may have found this post by accident while looking for that specific feature.
In a sentence, the new Canvas view is a particular SwiftUI view that lets you draw into a GraphicsContext in a more “traditional” way.
The Canvas is a SwiftUI view, but it doesn’t work with a ViewBuilder like other views. That’s why I said a more “traditional” way: you’re not going to put SwiftUI views inside a Canvas; you can think of it like the draw method of a UIView.

In this article I’ll show the same example with Canvas (which requires iOS 15, as you may guess) and without it, using only SwiftUI views.
The example is quite simple: we want our user to draw on the screen with a finger. I’ll capture the user’s touches the same way in both versions, so you’ll only see a different way of drawing a set of shapes with SwiftUI.
As usual, all the code is available on GitHub. In particular you can find the Canvas part here and the old implementation in this project.

Capture touch

In order to get the user’s finger position on the screen we can use a gesture recogniser. This is the code of the old implementation (see here), but it is similar for Canvas, except that I don’t need the GeometryReader.

GeometryReader { geometry in
    Rectangle()
        .gesture(DragGesture(coordinateSpace:.local).onChanged( { value in
            addPoint(value, geometry:geometry)
        })
        .onEnded( { value in
            endGesture(size:geometry.size)
        }))
}

You can attach a gesture recogniser to a view with the .gesture modifier. In this case I’m using a DragGesture, so I can track the user moving their finger across the screen. The onChanged closure is called every time there is a new point to track, and once the user lifts their finger from the display, onEnded is called.

private func addValue(_ value: DragGesture.Value) {
    let point = value.location
    let size = viewModel.size
    // we skip a point if x or y are negative
    // or if they are bigger than the width/height
    // so we don't draw points outside the view
    // I need to store size in the view model
    // to avoid the warning
    // Modifying state during view update, this will cause undefined behavior.
    if point.y < 0 ||
        point.y > size.height ||
        point.x < 0 ||
        point.x > size.width {
        viewModel.endPath()
        return
    }
    viewModel.addPoint(value.location)
}

private func endGesture() {
    viewModel.endPath()
}

As you can read in the comments, knowing the view size (that’s why I used a GeometryReader) lets me ignore locations outside the view. All the points are needed to build a shape, as shown below.

// this is in the view model
func addPoint(_ point:CGPoint) {
    var shape = getCurrentShape()
    shape.points.append(point)
    shapes.removeLast()
    shapes.append(shape)
}

struct SimpleShape {
    var points:[CGPoint] // points to make a path
    var color:Color // stroke color
    var width:CGFloat // stroke line width
    var id:UUID = UUID() // necessary to use in ForEach

    mutating func addPoint(_ point:CGPoint) {
        points.append(point)
    }
}
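The snippets above call getCurrentShape() and endPath(), which live in the view model. The real implementation is in the linked project; here is a minimal sketch of what such a view model might look like (the class name and internal details are my assumptions, not necessarily what the repository uses):

```swift
import SwiftUI

// Hypothetical minimal view model; the real one is in the linked project
class DrawingViewModel: ObservableObject {
    @Published var shapes: [SimpleShape] = []
    var size: CGSize = .zero // stored so addValue can validate points
    private var isDrawing = false

    // Returns the shape being drawn, starting a new one after endPath()
    func getCurrentShape() -> SimpleShape {
        if !isDrawing {
            shapes.append(SimpleShape(points: [], color: .primary, width: 5.0))
            isDrawing = true
        }
        return shapes[shapes.count - 1]
    }

    func addPoint(_ point: CGPoint) {
        var shape = getCurrentShape()
        shape.points.append(point)
        shapes.removeLast()
        shapes.append(shape)
    }

    // Called when the finger is lifted or leaves the view
    func endPath() {
        isDrawing = false
    }
}
```

Since SimpleShape is a value type, the shape is copied, mutated, and swapped back into the array so SwiftUI sees the change.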

Now that we have a collection of shapes, we need to draw them on screen.

Draw with SwiftUI views

Let’s start with the approach using SwiftUI views with a ViewBuilder. When I pasted the code to show you how to add a gesture recogniser, I cut off the rest of the view, which you can see below.

var body: some View {
    ZStack {
        GeometryReader { geometry in
            Rectangle()
                .gesture(DragGesture(coordinateSpace:.local).onChanged( { value in
                    addPoint(value, geometry:geometry)
                })
                .onEnded( { value in
                    endGesture(size:geometry.size)
                }))
        }
        ForEach(viewModel.shapes, id:\.id) { shape in
            DrawShape(points: shape.points)
                .stroke(lineWidth: shape.width)
                .foregroundColor(shape.color)
        }
    }
}

The ZStack allows us to put the GeometryReader with its Rectangle below the shapes we actually want to show to the user.
As we saw previously, the gesture has been translated into a set of shapes. We can now iterate over them and draw each shape.

struct DrawShape: Shape {
    var points: [CGPoint]

    func path(in rect: CGRect) -> Path {
        var path = Path()
        if points.count == 0 {
            return path
        }

        path.move(to: points[0])
        for index in 1..<points.count {
            path.addLine(to: points[index])
        }
        return path
    }
}

I don’t use a View here, but a Shape. When you implement a Shape in SwiftUI you need to provide a Path; as you can see, I build the path from an array of points by adding a line between each of them. The Path can be drawn on screen with a color and a line width, via the .stroke and .foregroundColor modifiers you saw previously.

Draw with Canvas

All right, this is the part you were waiting for: the new Canvas view. You can find the code here.
As I said before, we collect the user’s drawing with a gesture recogniser and we have a view model to store the shapes.

Canvas { context, size in
    viewModel.size = size
    let frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    for shape in viewModel.shapes {
        context.stroke(shape.path(in: frame),
                       with: .color(.primary),
                       lineWidth: 5.0)
    }
}
.gesture(DragGesture(coordinateSpace:.local).onChanged( { value in
    addValue(value)
})
.onEnded( { value in
    endGesture()
}))

This is all the code necessary to perform the same drawing.
Canvas provides a closure with two parameters: a GraphicsContext we can use to draw on the screen, and the CGSize of the view. This is why I don’t need a GeometryReader this time; the size is provided by Canvas itself.
As I mentioned at the beginning of the article, this view is different from other SwiftUI views. There isn’t a view builder here: inside the closure you don’t return a View, you just interact with the context.
We can use a for loop instead of ForEach, and we call the stroke method on the context to draw a shape.
You can also fill a shape, or draw an image or text in the context. It is even possible to use a CGContext, so you may be able to reuse old code based on CGContext inside SwiftUI without bridging to UIKit.
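As a quick sketch of those other context operations (the shapes, text, and coordinates here are made up for illustration):

```swift
Canvas { context, size in
    // Fill a circle instead of stroking it
    let circle = Path(ellipseIn: CGRect(x: 10, y: 10, width: 60, height: 60))
    context.fill(circle, with: .color(.blue))

    // Draw text directly into the context
    context.draw(Text("Hello Canvas"),
                 at: CGPoint(x: size.width / 2, y: size.height / 2))

    // Drop down to Core Graphics for legacy drawing code
    context.withCGContext { cgContext in
        cgContext.setStrokeColor(CGColor(red: 1, green: 0, blue: 0, alpha: 1))
        cgContext.setLineWidth(2)
        cgContext.stroke(CGRect(x: 10, y: 100, width: 80, height: 40))
    }
}
```

Everything inside withCGContext uses the familiar Core Graphics API, so existing drawing routines can be called there unchanged.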

Draw an image

Let’s end the article with another example: drawing some images on the screen inside a Canvas.

This is a sample effect: I want to draw 20 images of a clock, each with a different opacity. I can add some animation too, by moving them every second via a TimelineView. If you’re not familiar with it, here is my previous article on the subject.
In a sentence, the TimelineView allows us to refresh a SwiftUI view at a given time interval, like every minute, every second, or at specific dates in the future.

TimelineView(.periodic(from: Date(), by: 1.0)) { timeContext in
    Canvas { context, size in
        let date = timeContext.date
        var currentPoint = CGPoint(x: 0, y: 0)
        var image = context.resolve(Image(systemName: "clock"))
        image.shading = .color(.green)

        for i in 0..<20 {
            var innerContext = context
            innerContext.opacity = 0.1 * Double(i)
            currentPoint.x += 20
            currentPoint.y = offsetFromDate(date: date)
            innerContext.draw(image, at: currentPoint)
        }
    }
}

First, we need an image. If we plan to use the same image more than once in our GraphicsContext, we may want to create it with GraphicsContext.resolve to improve performance. Otherwise, we can instantiate a SwiftUI Image and ask the context to draw it.
Another interesting thing is the ability to copy the original GraphicsContext, like I do in the for loop. This way I get a new context and can change its opacity to obtain the effect you saw above. It is possible to translate or rotate a GraphicsContext, so having a copy may be handy to apply certain operations only to a part of the drawing instead of the whole Canvas. The code is available here.
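As a sketch of that idea, a translation and rotation applied to a copy only affect what is drawn through that copy (the coordinates and angle are arbitrary):

```swift
Canvas { context, size in
    let image = context.resolve(Image(systemName: "clock"))

    // The copy gets its own transform and opacity
    var rotated = context
    rotated.translateBy(x: size.width / 2, y: size.height / 2)
    rotated.rotate(by: .degrees(45))
    rotated.opacity = 0.5
    rotated.draw(image, at: .zero)

    // The original context is untouched by the transforms above
    context.draw(image, at: CGPoint(x: 40, y: 40))
}
```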

This was just a quick introduction to Canvas; I wanted to find an example to implement in SwiftUI both with and without the Canvas. I plan to explore it more in the future, so stay tuned and happy coding 🙂

Original post
