I recently took a train ride and had a closer look at the LED display that shows the current station, the destination and the current time. It made me wonder whether it's possible to simulate that in Flutter. Let's find out!
The simulated LED display should render text dynamically. We don't want to create our own LED font in which the "LED combination" of every letter is statically stored. We want an LED display that can show any text in any font.
So let's implement a screen with a TextField in which the user can input text. Above it, we display the very same text on a simulated LED display.
Theory
What we are doing here is classic rasterisation: something you see millions of times every day, whether you know it or not. That's because every display is based on pixels. No matter how high the resolution of an image is, and even if it's a vector image, at the moment it is displayed it has to be mapped onto a discrete grid that is limited by physics. That doesn't only happen to images, of course, but also to text. Today's displays have such a high resolution that the viewer barely notices it, and anti-aliasing makes everything appear so smooth that pixels don't come to mind.
But how can we access the pixels of a widget in Flutter? Well, that's not so easy, because this is a rather low-level part of the rendering engine that the developer is not supposed to touch. So there are no APIs that expose it.
So we need to think of another way. Instead of directly accessing the pixels of a rendered widget, we can make use of the image package. Its Image class has a getPixelSafe() method that lets the caller read a pixel at given x and y coordinates.
Okay, we now know how to access a pixel in the format of a third-party package. How is that going to help? Well, if we manage to convert a text into an image, we can access its pixels from there. Good news: our good old friend, the Canvas class, in combination with a PictureRecorder, can produce a Picture, which in turn offers a toImage() function.
There are a few more steps in between, which makes it a rather complex conversion chain, so let's visualize what's going on there:
You don't need to understand every single step. What's important to remember, though: directly accessing the pixels of widgets is not possible, so we need an intermediate conversion to a rendered image. This requires a canvas and an external package. Let's head over to the implementation; then everything will become clearer.
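As a condensed preview, here is a minimal, hypothetical sketch of the whole chain, drawing a plain rectangle instead of text and assuming the image package's 3.x API (where getPixelSafe() returns an int). The real implementation below splits this across dedicated classes:

import 'dart:typed_data';
import 'dart:ui' as ui;
import 'package:flutter/material.dart';
import 'package:image/image.dart' as imagePackage;

Future<int> readOnePixel() async {
  // 1. Record drawing commands into a Picture.
  final ui.PictureRecorder recorder = ui.PictureRecorder();
  final Canvas canvas = Canvas(recorder);
  canvas.drawRect(Rect.fromLTWH(0, 0, 100, 100), Paint()..color = Colors.red);
  final ui.Picture picture = recorder.endRecording();
  // 2. Rasterize the Picture into an Image and get its PNG-encoded bytes.
  final ui.Image image = await picture.toImage(100, 100);
  final ByteData bytes = await image.toByteData(format: ui.ImageByteFormat.png);
  // 3. Decode the bytes with the image package and read a single pixel.
  final imagePackage.Image decoded = imagePackage.decodeImage(bytes.buffer.asUint8List());
  return decoded.getPixelSafe(10, 10);
}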
The implementation
We are going to have a bunch of classes to make the code more readable. The heart of our app will be the DisplaySimulator: this widget represents the part that actually mimics the LED display. Because we want to have full control and draw the display ourselves, we also need a CustomPainter. This one will be called DisplayPainter.
Additionally, we need a component that takes a string and converts it into a two-dimensional list of pixels. This is what the ToPixelsConverter will be responsible for.
Let's start with the DisplaySimulator:
import 'dart:typed_data';
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
const canvasSize = 100.0;
class DisplaySimulator extends StatefulWidget {
DisplaySimulator({
this.text,
this.border = false,
this.debug = false
});
final String text;
final bool border;
final bool debug;
@override
_DisplaySimulatorState createState() => _DisplaySimulatorState();
}
class _DisplaySimulatorState extends State<DisplaySimulator> {
ByteData imageBytes;
List<List<Color>> pixels;
@override
Widget build(BuildContext context) {
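// Kick off the async conversion; it calls setState() once the pixels are ready and triggers a rebuild.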
_obtainPixelsFromText(widget.text);
return Column(
children: <Widget>[
SizedBox(height: 96,),
_getDebugPreview(),
SizedBox(height: 48,),
_getDisplay(),
],
);
}
Widget _getDebugPreview() {
if (imageBytes == null || widget.debug == false) {
return Container();
}
return Image.memory(
Uint8List.view(imageBytes.buffer),
filterQuality: FilterQuality.none,
width: canvasSize,
height: canvasSize,
);
}
Widget _getDisplay() {
return Container();
}
void _obtainPixelsFromText(String text) async {
// Here we will set imageBytes and pixels
}
}
The DisplaySimulator has three constructor arguments: text is obviously the text that is supposed to be shown on the simulated display. The second argument, border, determines whether a border should be drawn around the display, which can give a nice look. The third one is debug. As we discovered in the theory part, a lot of conversions happen between the initial string and the final effect. The crucial one is probably from the string to the byte data of the image. To make debugging easier, we add the possibility to display this intermediate result.
As a first iteration we only show the debugging part, not yet the actual display. So _getDisplay() only returns an empty Container, whereas _getDebugPreview() returns an Image built from the buffer we expect to get from our ToPixelsConverter.
We need the imageBytes for the debugging part and the pixels for the display.
Okay, we know what the result of the conversion should look like. Let's make a model class for that conversion result:
class ToPixelsConversionResult {
ToPixelsConversionResult({
this.imageBytes,
this.pixels
});
final ByteData imageBytes;
final List<List<Color>> pixels;
}
Now we implement the converter from string to pixels:
import 'dart:typed_data';
import 'dart:ui' as ui;
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'package:flutter_digital_text_display/text_to_picture_converter.dart';
import 'package:image/image.dart' as imagePackage;
class ToPixelsConverter {
ToPixelsConverter.fromString({
@required this.string,
@required this.canvasSize,
this.border = false
});
String string;
Canvas canvas;
bool border;
final double canvasSize;
Future<ToPixelsConversionResult> convert() async {
final ui.Picture picture = TextToPictureConverter.convert(
text: this.string, canvasSize: canvasSize
);
final ByteData imageBytes = await _pictureToBytes(picture);
final List<List<Color>> pixels = _bytesToPixelArray(imageBytes);
return ToPixelsConversionResult(
imageBytes: imageBytes,
pixels: pixels
);
}
Future<ByteData> _pictureToBytes(ui.Picture picture) async {
final ui.Image img = await picture.toImage(canvasSize.toInt(), canvasSize.toInt());
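// Encode the rasterized image as PNG so that decodeImage() from the image package can parse it below.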
return await img.toByteData(format: ui.ImageByteFormat.png);
}
List<List<Color>> _bytesToPixelArray(ByteData imageBytes) {
List<int> values = imageBytes.buffer.asUint8List();
imagePackage.Image decodedImage = imagePackage.decodeImage(values);
List<List<Color>> pixelArray = new List.generate(canvasSize.toInt(), (_) => new List(canvasSize.toInt()));
for (int i = 0; i < canvasSize.toInt(); i++) {
for (int j = 0; j < canvasSize.toInt(); j++) {
int pixel = decodedImage.getPixelSafe(i, j);
int hex = _convertColorSpace(pixel);
pixelArray[i][j] = Color(hex);
}
}
return pixelArray;
}
int _convertColorSpace(int abgrColor) {
int b = (abgrColor >> 16) & 0xFF;
int r = abgrColor & 0xFF;
return (abgrColor & 0xFF00FF00) | (r << 16) | b;
}
}
We have one named constructor that has three arguments:
- string: This is required as it's the text we are going to display. We are just going to pass through the argument from the parent widget
- canvasSize: The size of the canvas the text is rendered on. This is important because, in combination with the font size, it determines the size and number of the pixels
- border: Boolean that determines whether to display a border around the display. Also passed through from the parent
The only public method of this class is convert(). It takes the string and has it converted into a Picture by the TextToPictureConverter, which we are going to implement in a minute. This conversion is necessary to get the ByteData. That piece of information is used to iterate over the pixels, which are then turned into Color objects. If we store the original color, we can simulate multicolored LED displays later on.
It's worth noting that the image library uses a color format (like KML's) which also has a hexadecimal representation, but with the red and blue parts switched. Thus we need to convert #AABBGGRR to #AARRGGBB, which is essentially what _convertColorSpace() does. For example, opaque red arrives as 0xFF0000FF and leaves the conversion as 0xFFFF0000.
We use the Monospace font family to make every character have the same width and height.
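The TextToPictureConverter itself isn't listed in this article, so here is a minimal sketch of how such a class could look, assuming it simply lays out the text with a TextPainter on a recorded canvas. The font size, text color and wrapping behaviour are assumptions; only the static convert() signature and the monospace font follow from the code above:

import 'dart:ui' as ui;
import 'package:flutter/material.dart';

class TextToPictureConverter {
  static ui.Picture convert({@required String text, @required double canvasSize}) {
    final ui.PictureRecorder recorder = ui.PictureRecorder();
    final Canvas canvas = Canvas(recorder, Rect.fromLTWH(0, 0, canvasSize, canvasSize));
    final TextPainter textPainter = TextPainter(
      text: TextSpan(
        text: text,
        style: TextStyle(
          color: Colors.white,
          fontSize: 28,
          fontFamily: 'Monospace',
        ),
      ),
      textDirection: TextDirection.ltr,
    );
    // Wrap the text at the canvas width so longer strings use multiple lines.
    textPainter.layout(maxWidth: canvasSize);
    textPainter.paint(canvas, Offset.zero);
    return recorder.endRecording();
  }
}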
Great, now that we have the conversion ready, we can use it to display our debug widget:
void _obtainPixelsFromText(String text) async {
ToPixelsConversionResult result = await ToPixelsConverter.fromString(
string: text, border: widget.border, canvasSize: canvasSize
).convert();
setState(() {
this.imageBytes = result.imageBytes;
pixels = result.pixels;
});
}
Now, in order to test everything, let's quickly hack a Home widget that embeds the DisplaySimulator and puts a TextField underneath to allow the user to change the text that's being displayed:
import 'package:flutter/material.dart';
import 'display_simulator.dart';
class Home extends StatefulWidget {
@override
_HomeState createState() => _HomeState();
}
class _HomeState extends State<Home> {
String text;
@override
void initState() {
text = '';
super.initState();
}
@override
Widget build(BuildContext context) {
return SingleChildScrollView(
child: Align(
alignment: Alignment.topCenter,
child: Column(
children: [
DisplaySimulator(
text: text,
border: false,
debug: true,
),
SizedBox(height: 48),
_getTextField()
],
)
)
);
}
Container _getTextField() {
BorderSide borderSide = BorderSide(color: Colors.blue, width: 4);
InputDecoration inputDecoration = InputDecoration(
border: UnderlineInputBorder(borderSide: borderSide),
disabledBorder: UnderlineInputBorder(borderSide: borderSide),
enabledBorder: UnderlineInputBorder(borderSide: borderSide),
focusedBorder: UnderlineInputBorder(borderSide: borderSide),
);
return Container(
width: 200,
child: TextField(
maxLines: null,
enableSuggestions: false,
textAlign: TextAlign.center,
style: TextStyle(
color: Colors.yellow,
fontSize: 32,
fontFamily: "Monospace"
),
decoration: inputDecoration,
onChanged: (val) {
setState(() {
text = val;
});
},
)
);
}
}
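For completeness, the Home widget still needs to be mounted in an app. A minimal, hypothetical main.dart could look like this (the file name home.dart and the dark theme are assumptions):

import 'package:flutter/material.dart';
import 'home.dart';

void main() => runApp(LedDisplayApp());

class LedDisplayApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'LED display simulator',
      theme: ThemeData.dark(),
      home: Scaffold(
        backgroundColor: Colors.black,
        body: SafeArea(child: Home()),
      ),
    );
  }
}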
Okay, if we start the app, we see the input field but not the debug widget, although it seems to flicker into view from time to time. That's an issue being discussed on GitHub, and the fix is fairly simple: using the argument gaplessPlayback:
Widget _getDebugPreview() {
if (imageBytes == null || widget.debug == false) {
return Container();
}
return Image.memory(
Uint8List.view(imageBytes.buffer),
gaplessPlayback: true,
filterQuality: FilterQuality.none,
width: canvasSize,
height: canvasSize,
);
}
Awesome. We enter text and are able to see a rendered image containing the very same text. Now we are going to use the pixel array to render the actual LED display.
We need to implement the _getDisplay() method of the DisplaySimulator widget. At first, I tried to do that using the widget tree with something like this:
Widget _getDisplay() {
if (pixels == null) {
return Container();
}
return Container(
color: Colors.black87,
child:
Row(
children: [
for (int i = 0; i < pixels.length; i++)
Column(
children: [
for (int j = 0; j < pixels[i].length; j++)
Container(
decoration: BoxDecoration(
borderRadius: BorderRadius.all(Radius.circular(4)),
color: pixels[i][j],
),
height: 4,
width: 4,
),
]
)
],
)
);
}
Unfortunately, the performance was very poor: the display lagged every time I typed a new character. Also, the text easily overflowed the border. I then decided to go for the CustomPaint approach. This makes our _getDisplay() method rather small:
Widget _getDisplay() {
if (pixels == null) {
return Container();
}
return CustomPaint(
size: Size.square(MediaQuery.of(context).size.width),
painter: DisplayPainter(pixels: pixels, canvasSize: canvasSize)
);
}
The display logic can be found in the CustomPainter called DisplayPainter:
import 'dart:ui';
import 'package:flutter/material.dart';
class DisplayPainter extends CustomPainter {
DisplayPainter({
this.pixels, this.canvasSize
});
List<List<Color>> pixels;
double canvasSize;
@override
void paint(Canvas canvas, Size size) {
if (pixels == null) {
return;
}
canvas.drawRect(Rect.fromLTWH(0, 0, size.width, size.height), Paint()..color = Colors.black);
double widthFactor = canvasSize / size.width;
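// 1 / widthFactor is the on-screen size (in logical pixels) of one cell of the canvas-sized pixel grid.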
Paint rectPaint = Paint()..color = Colors.black;
Paint circlePaint = Paint()..color = Colors.yellow;
for (int i = 0; i < pixels.length; i++) {
for (int j = 0; j < pixels[i].length; j++) {
var rectSize = 1.0 / widthFactor;
var circleSize = 0.3 / widthFactor;
canvas.drawRect(
Rect.fromCenter(
center: Offset(
i.toDouble() * rectSize + rectSize / 2,
j.toDouble() * rectSize + rectSize / 2
),
width: rectSize,
height: rectSize
),
rectPaint
);
if (pixels[i][j].opacity < 0.3) {
continue;
}
canvas.drawCircle(
Offset(
i.toDouble() * rectSize + rectSize / 2 - circleSize / 2,
j.toDouble() * rectSize + rectSize / 2 - circleSize / 2,
),
circleSize,
circlePaint
);
}
}
}
@override
bool shouldRepaint(CustomPainter oldDelegate) {
return true;
}
}
We start with a mono-colored display. For this, we only draw pixels with an opacity of more than 30%. This lets us get rid of the anti-aliased pixels. We then use a static color (yellow in this case) to draw every pixel of the canvas-based image and stretch the painted result across the whole width, which is the screen width (because we call it with MediaQuery.of(context).size). This makes it look like this:
Very cool! Our first working display.
If we remove the condition that only displays pixels with an opacity of more than 30% and instead derive the yellow paint's opacity from the pixel's luminance, we can invert the effect.
canvas.drawCircle(
Offset(
i.toDouble() * rectSize + rectSize / 2 - circleSize / 2,
j.toDouble() * rectSize + rectSize / 2 - circleSize / 2,
),
circleSize,
circlePaint..color = Colors.yellow.withOpacity(1-pixels[i][j].computeLuminance())
);
This looks as follows:
You might wonder if this works for images, too. The good news: we are handling ByteData here, which is the same format that rootBundle.load returns. So if we rewrite the code a little bit, we can also display images.
Future<ByteData> _pictureToBytes(ui.Picture picture) async {
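// The picture argument is ignored here; we load the asset's already PNG-encoded bytes instead (the asset has to be declared in pubspec.yaml).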
return await rootBundle.load('assets/flutter-logo.png');
}
A mono-colored display is cool as long as we only have plain text. But the Flutter logo looks a little bit sad like this. And what about Emojis? If we type them now, we only get a yellow blob. So how about actually using the color information we previously stored in the pixels list?
canvas.drawCircle(
Offset(
i.toDouble() * rectSize + rectSize / 2 - circleSize / 2,
j.toDouble() * rectSize + rectSize / 2 - circleSize / 2,
),
circleSize,
circlePaint..color = pixels[i][j]
);
Final words
Accessing the raw pixels of a widget is not easy; at the high abstraction level Flutter provides, it's not possible at all. However, it can be achieved with a trick: a conversion chain from text to canvas to picture to image lets us access the pixels after all. With this data we can easily create a simulated LED display by arranging the pixels with a little distance between each other and drawing them as circles.
Maybe you have additional ideas for cool effects that can be achieved with this information?