
Let's build... a retro text art generator!

Text art, often called "ASCII art", is a way of displaying images in a text-only medium. You've probably seen it in the terminal output of some of your favorite command line apps.

For this project, we'll be building a fully browser-based text art generator, using React and TypeScript. The output will be highly customizable, with options for increasing brightness and contrast, width in characters, inverting the text and background colors, and even changing the character set we use to generate the images.

All the code is available on GitHub, and there's a live demo you can play around with too!


Here's what we'll be building


The basic algorithm is as follows:

  1. Calculate the relative density of each character in the character set (charset), averaged over all its pixels, when displayed in a monospace font. For example, . is very sparse, whereas # is very dense, and a is somewhere in between.

  2. Normalize the resulting absolute values into relative values in the range 0..1, aligned with the luminance scale: 0 is the densest character in the charset (which will pair with the darkest pixels) and 1 is the sparsest.

    If the "invert" option is selected, subtract the relative values from 1. This way, you get light pixels mapped to dense characters, suitable for light text on a dark background.

  3. Calculate the required aspect ratio (width:height) in "char-pixels", based on the rendered width and height of the characters, where each char-pixel is a character from the charset.

    For example, a charset composed of half-width characters will need to render more char-pixels vertically to have the same resulting aspect ratio as one composed of full-width characters.

  4. Render the target image in the required aspect ratio, then calculate the relative luminance of each pixel.

  5. Apply brightness and contrast modifying functions to each pixel value, based on the configured options.

  6. As before, normalize the absolute values into relative values in the range 0..1 (0 is the darkest and 1 is lightest).

  7. Map the resulting luminance value of each pixel onto the character closest in density value.

  8. Render the resulting 2d matrix of characters in a monospace font.

With the HTML5 Canvas API, we can do all this without leaving the browser! 🚀
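Before diving into the real implementation, here's a toy sketch of steps 6-8 in isolation, using a hypothetical, already-normalized three-character ramp (the real version is more efficient, as we'll see later):

```typescript
// Toy version of steps 6-8: map normalized luminance values (0..1)
// onto a density ramp, then join the rows into lines of text.
// The ramp below is hypothetical and already normalized.
const ramp = [
    { ch: '#', val: 0 }, // densest character, for the darkest pixels
    { ch: 'a', val: 0.5 },
    { ch: '.', val: 1 }, // sparsest character, for the lightest pixels
]

const toChar = (luminance: number): string =>
    // pick the character whose density value is closest to this luminance
    ramp.reduce((best, cur) =>
        Math.abs(cur.val - luminance) < Math.abs(best.val - luminance)
            ? cur
            : best,
    ).ch

const renderMatrix = (pixelMatrix: number[][]): string =>
    pixelMatrix.map((row) => row.map(toChar).join('')).join('\n')

console.log(renderMatrix([[0, 0.4], [0.9, 1]])) // "#a" then ".."
```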

Show me the code!

Without further ado...

Calculating character density

CanvasRenderingContext2D#getImageData returns image data whose data property is a Uint8ClampedArray of channels in the order red, green, blue, alpha. For example, a 2×2 image in these colors (the last pixel is transparent):

Red, green, blue, transparent

Would result in the following data:

    // red    green  blue   alpha
       255,   0,     0,     255, // top-left pixel
       0,     255,   0,     255, // top-right pixel
       0,     0,     255,   255, // bottom-left pixel
       0,     0,     0,     0,   // bottom-right pixel

As we're drawing black on transparent, we check which channel we're in using a modulo operation and ignore all the channels except for alpha (the transparency channel).
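For instance, picking out just the alpha bytes from the 2×2 example data looks like this (a plain array stands in for the Uint8ClampedArray):

```typescript
// Each pixel is 4 bytes (RGBA), so idx % 4 identifies the channel;
// offset 3 is the alpha channel.
const ALPHA = 3
const MODULUS = 4

// the 2x2 example image from above: red, green, blue, transparent
const data = [
    255, 0, 0, 255,
    0, 255, 0, 255,
    0, 0, 255, 255,
    0, 0, 0, 0,
]

const alphaValues = data.filter((_, idx) => idx % MODULUS === ALPHA)

console.log(alphaValues) // [255, 255, 255, 0]: the last pixel is transparent
```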

Here's our logic for calculating character density:

const CANVAS_SIZE = 70
const FONT_SIZE = 50
// draw position, chosen so the glyph sits fully inside the canvas
const LEFT = 10
const BASELINE = 55

type Rect = [x: number, y: number, w: number, h: number]
type CharVal = { ch: string; val: number }

const RECT: Rect = [0, 0, CANVAS_SIZE, CANVAS_SIZE]

// RGBA channel offsets within each 4-byte pixel
export enum Channels {
    Red,
    Green,
    Blue,
    Alpha,
    Modulus, // = 4, the number of channels per pixel
}

export type Channel = Exclude<Channels, Channels.Modulus>

export const getRawCharDensity =
    (ctx: CanvasRenderingContext2D | OffscreenCanvasRenderingContext2D) =>
    (ch: string): CharVal => {
        ctx.clearRect(...RECT)
        ctx.fillText(ch, LEFT, BASELINE)

        // sum the alpha channel (every 4th byte), negated
        const val = ctx
            .getImageData(...RECT)
            .data.reduce(
                (total, val, idx) =>
                    idx % Channels.Modulus === Channels.Alpha
                        ? total - val
                        : total,
                0,
            )

        return { ch, val }
    }

Note that we subtract the alpha values rather than adding them, so that denser characters end up with lower (more negative) values, matching the luminance scale, where darker pixels also have lower values. This means all the raw values will be negative. However, that doesn't matter, as we'll be normalizing them shortly.
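Here's that normalization in miniature, with made-up raw values; the negative sign washes out, and the densest character lands at 0, lining up with the dark end of the luminance scale:

```typescript
// Toy raw densities (sums of negated alpha values, so all negative);
// the exact numbers here are made up for illustration
const raw = [
    { ch: '.', val: -500 },  // sparse: little ink
    { ch: 'a', val: -4000 },
    { ch: '#', val: -9500 }, // dense: lots of ink
]

const vals = raw.map(({ val }) => val)
const min = Math.min(...vals) // -9500
const max = Math.max(...vals) // -500
const range = Math.max(max - min, 1)

const normalized = raw.map(({ ch, val }) => ({
    ch,
    val: (val - min) / range,
}))

// '#' -> 0 (densest), 'a' -> ~0.61, '.' -> 1 (sparsest)
console.log(normalized.map(({ ch, val }) => `${ch}:${val.toFixed(2)}`))
```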

Next, we iterate over the whole charset, keeping track of min and max:

// OffscreenCanvas, where supported, also works in web workers
export const createCanvas = (width: number, height: number) =>
    typeof OffscreenCanvas !== 'undefined'
        ? new OffscreenCanvas(width, height)
        : (Object.assign(document.createElement('canvas'), {
              width,
              height,
          }) as HTMLCanvasElement)

export const getRawCharDensities = (charSet: CharSet): RawCharDensityData => {
    const canvas = createCanvas(CANVAS_SIZE, CANVAS_SIZE)

    const ctx = canvas.getContext('2d')!

    ctx.font = `${FONT_SIZE}px monospace`
    ctx.fillStyle = '#000'

    const charVals = [...charSet].map(getRawCharDensity(ctx))

    let max = -Infinity
    let min = Infinity

    for (const { val } of charVals) {
        max = Math.max(max, val)
        min = Math.min(min, val)
    }

    return { charVals, min, max }
}

Finally, we normalize the values in relation to that min and max:

export const getNormalizedCharDensities =
    ({ invert }: CharValsOptions) =>
    ({ charVals, min, max }: RawCharDensityData) => {
        // minimum of 1, to prevent dividing by 0
        const range = Math.max(max - min, 1)

        return charVals
            .map(({ ch, val }) => {
                const v = (val - min) / range

                return {
                    ch,
                    val: invert ? 1 - v : v,
                }
            })
            .sort((a, b) => a.val - b.val)
    }

Calculating aspect ratio

Here's how we calculate aspect ratio:

// separators and newlines don't play well with the rendering logic
const SEPARATOR_REGEX = /[\n\p{Z}]/u

const REPEAT_COUNT = 100

// `appendInvisible` adds a hidden element to the document,
// so we can measure rendered text without displaying it
const pre = appendInvisible('pre')

const _getCharScalingData =
    (repeatCount: number) =>
    (
        ch: string,
    ): {
        width: number
        height: number
        aspectRatio: AspectRatio
    } => {
        // render a repeatCount × repeatCount grid of the character,
        // to average out sub-pixel rounding in the measurement
        pre.textContent = `${ch.repeat(repeatCount)}\n`.repeat(repeatCount)

        const { width, height } = pre.getBoundingClientRect()

        const min = Math.min(width, height)

        pre.textContent = ''

        return {
            width: width / repeatCount,
            height: height / repeatCount,
            aspectRatio: [min / width, min / height],
        }
    }

For performance reasons, we assume all characters in the charset are equal width and height. If they're not, the output will be garbled anyway.
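To make that concrete, here's the aspect-ratio arithmetic from the function above pulled out on its own, with hypothetical per-character pixel sizes:

```typescript
// aspectRatio = [min / width, min / height], as in _getCharScalingData
const charAspectRatio = (
    charWidth: number,
    charHeight: number,
): [number, number] => {
    const min = Math.min(charWidth, charHeight)
    return [min / charWidth, min / charHeight]
}

// a half-width character, e.g. 10px wide and 20px tall:
// x is unscaled, y is halved, so we render twice as many rows
console.log(charAspectRatio(10, 20)) // [1, 0.5]

// a square (full-width) character renders square char-pixels
console.log(charAspectRatio(20, 20)) // [1, 1]
```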

Calculating image pixel brightness

Here's how we calculate the relative brightness, or technically the relative perceived luminance, of each pixel:

const perceivedLuminance = {
    [Channels.Red]: 0.299,
    [Channels.Green]: 0.587,
    [Channels.Blue]: 0.114,
} as const

export const getMutableImageLuminanceValues = ({
    img,
    resolutionX,
    aspectRatio,
}: ImageLuminanceOptions) => {
    if (!img) {
        return {
            pixelMatrix: [],
            flatPixels: [],
        }
    }

    const { width, height } = img

    const scale = resolutionX / width

    const [w, h] = [width, height].map((x, i) =>
        Math.round(x * scale * aspectRatio[i]),
    )

    const rect: Rect = [0, 0, w, h]

    const canvas = createCanvas(w, h)

    const ctx = canvas.getContext('2d')!

    // flatten any transparency against a white background
    ctx.fillStyle = '#fff'
    ctx.fillRect(...rect)

    ctx.drawImage(img, ...rect)

    const pixelData = ctx.getImageData(...rect).data

    let curPix = 0

    const pixelMatrix: { val: number }[][] = []

    let max = -Infinity
    let min = Infinity

    for (const [idx, d] of pixelData.entries()) {
        const channel = (idx % Channels.Modulus) as Channel

        if (channel !== Channels.Alpha) {
            // rgb channel
            curPix += d * perceivedLuminance[channel]
        } else {
            // append pixel and reset during alpha channel

            // we set `ch` later, on second pass
            const thisPix = { val: curPix, ch: '' }

            max = Math.max(max, curPix)
            min = Math.min(min, curPix)

            if (idx % (w * Channels.Modulus) === Channels.Alpha) {
                // first pixel of line
                pixelMatrix.push([thisPix])
            } else {
                pixelMatrix[pixelMatrix.length - 1].push(thisPix)
            }

            curPix = 0
        }
    }

    // one-dimensional form, for ease of sorting and iterating.
    // changing individual pixels within this also
    // mutates `pixelMatrix`
    const flatPixels = pixelMatrix.flat()

    for (const pix of flatPixels) {
        pix.val = (pix.val - min) / (max - min)
    }

    // sorting allows us to iterate over the pixels
    // and charVals simultaneously, in linear time
    flatPixels.sort((a, b) => a.val - b.val)

    return { pixelMatrix, flatPixels }
}

Why mutable, you ask? Well, we can improve performance by re-using this matrix for the characters to output.

In addition, we return a flattened and sorted version of the matrix. Mutating the objects in this flattened version persists through to the matrix itself. This allows for iterating in O(n) instead of O(nm) time complexity, where n is the number of pixels and m is the number of chars in the charset.
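The aliasing relied on here is easy to demonstrate with toy data: Array#flat copies references, not objects, so writes through the flat array show up in the matrix:

```typescript
// toy 2x1 "pixel matrix"
const pixelMatrix = [
    [{ val: 0.2, ch: '' }],
    [{ val: 0.8, ch: '' }],
]

// flat() produces a new array, but its elements are the
// same objects held by the nested rows
const flatPixels = pixelMatrix.flat()

flatPixels[0].ch = '#'
flatPixels[1].ch = '.'

console.log(pixelMatrix[0][0].ch) // '#'
console.log(pixelMatrix[1][0].ch) // '.'
```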

Map pixels to characters

Here's how we map the pixels onto characters:

export type CharPixelMatrixOptions = {
    charVals: CharVal[]
    brightness: number
    contrast: number
} & ImageLuminanceOptions

let cachedLuminanceInfo = {} as ImageLuminanceOptions &
    ReturnType<typeof getMutableImageLuminanceValues>

export const getCharPixelMatrix = ({
    charVals,
    brightness,
    contrast,
    ...imageLuminanceOptions
}: CharPixelMatrixOptions): CharPixelMatrix => {
    if (!charVals.length) return []

    // re-use the cached luminance data when the image options are unchanged
    const luminanceInfo = Object.entries(imageLuminanceOptions).every(
        ([key, val]) =>
            cachedLuminanceInfo[key as keyof typeof imageLuminanceOptions] ===
            val,
    )
        ? cachedLuminanceInfo
        : getMutableImageLuminanceValues(imageLuminanceOptions)

    cachedLuminanceInfo = { ...imageLuminanceOptions, ...luminanceInfo }

    const charPixelMatrix = luminanceInfo.pixelMatrix as CharVal[][]
    const flatCharPixels = luminanceInfo.flatPixels as CharVal[]

    const multiplier = exponential(brightness)
    const polynomialFn = polynomial(exponential(contrast))

    let charValIdx = 0
    let charVal: CharVal

    // both arrays are sorted ascending by `val`, so a single
    // pass with a moving index maps every pixel to a character
    for (const charPix of flatCharPixels) {
        while (charValIdx < charVals.length) {
            charVal = charVals[charValIdx]

            if (polynomialFn(charPix.val) * multiplier > charVal.val) {
                charValIdx++
            } else {
                break
            }
        }

        charPix.ch = charVal!.ch
    }

    // cloning the array updates the reference to let React know it needs to re-render,
    // even though individual rows and cells are still the same mutated ones
    return [...charPixelMatrix]
}

The polynomial function increases contrast by skewing values toward the extremes. You can see some examples of polynomial functions at easings.net: quad, cubic, quart, and quint are polynomials of degree 2, 3, 4, and 5 respectively.

The exponential function simply converts numbers in the range 0..100 (suitable for user-friendly configuration) into numbers exponentially increasing in the range 0.1..10 (giving better results for the visible output).

Here are those two functions:

export const polynomial = (degree: number) => (x: number) =>
    x < 0.5
        ? Math.pow(2, degree - 1) * Math.pow(x, degree)
        : 1 - Math.pow(-2 * x + 2, degree) / 2

export const exponential = (n: number) => Math.pow(10, n / 50 - 1)
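A quick sanity check of the two helpers (repeated here so the snippet stands alone):

```typescript
const polynomial = (degree: number) => (x: number) =>
    x < 0.5
        ? Math.pow(2, degree - 1) * Math.pow(x, degree)
        : 1 - Math.pow(-2 * x + 2, degree) / 2

const exponential = (n: number) => Math.pow(10, n / 50 - 1)

// exponential maps the user-facing 0..100 scale onto 0.1..10,
// with the midpoint 50 giving a neutral multiplier of 1
console.log(exponential(0))   // 0.1
console.log(exponential(50))  // 1
console.log(exponential(100)) // 10

// polynomial(2) is the "quad" easing: values below 0.5 are pushed
// toward 0 and values above 0.5 toward 1, boosting contrast
console.log(polynomial(2)(0.25)) // 0.125
console.log(polynomial(2)(0.5))  // 0.5
console.log(polynomial(2)(0.75)) // 0.875
```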


Finally, here's how we render the text art to a string:

export const getTextArt = (charPixelMatrix: CharPixelMatrix) =>
    charPixelMatrix.map((row) => row.map(({ ch }) => ch).join('')).join('\n')

The UI for this project is built in React ⚛ and mostly isn't as interesting as the algorithm itself. I might write a future post about that if there's interest in it.

I had a lot of fun and learned a lot creating this project! 🎉 Future additional features, in approximate order of implementation difficulty, could include:

  • Allowing colorized output.
  • Moving at least some of the logic to web workers to prevent blocking of the main thread during expensive computation. Unfortunately, the OffscreenCanvas API currently has poor support outside of Chromium-based browsers, so doing this while remaining cross-browser compatible would add quite a bit of complexity.
  • Adding an option to use dithering, which would improve results for small charsets or charsets with poor contrast characteristics.
  • Taking into account the sub-char-pixel properties of each character to give more accurate rendering. For example, _ is dense at the bottom and empty at the top, rather than uniformly low-density.
  • Adding an option to use an edge detection algorithm to improve results for certain types of images.
  • Allowing for variable-width charsets and fonts. This would require a massive rewrite of the algorithm and isn't something I've ever seen done before, but it would theoretically be possible.

I'm not planning on implementing any of these features in the near future, but for anyone who wants to try forking the project, those are some ideas to get you started.

Thanks for reading! Don't forget to leave your feedback in the comments 😁
