I set out to learn something new and mash up two things I've been playing around with — language parsing and serverless functions. The result?
expr.run!
It's a service that lets you do algebraic math using URLs. It parses the algebraic expression in the URL path and returns a JSON response containing either the computed result or a parsing error, with an appropriate status code.
A GET request like https://expr.run/1+3/2-sqrt(4) will return the result {result:0.5,error:null} with a status code of 200. If you make a typo like https://expr.run/2^3/pi, it will return a status code of 400 and the following body, containing a verbose description of where parsing failed:
{
"result": null,
"error": "Error: Expected \"*\", \"+\", \"-\", \"/\", end of input, or whitespace but \"^\" found.\n --> undefined:1:2\n |\n1 | 2^3/pi\n | ^"
}
The algebraic expressions support the following:
- the operators + - * /
- decimal numbers (in the form of -123.456e-78)
- () expression grouping and proper evaluation order
- function calls from the Math JavaScript namespace
- the constants pi (𝜋) and e (Euler's number)
- whitespace between tokens is ignored
That's it!
So, how does it work and what does it run on?
Part 1: parsing algebraic expressions
For this part, I decided to use Peggy, which compiles parsing expression grammars into JavaScript/TypeScript parsers for use in any JS environment. The grammar I wrote is based on the example provided in the online playground, extended with function calls, floating-point numbers, and constants.
I won't go into too much detail about how the grammar works, but in essence, parsing starts at the first rule defined in the file, which breaks the input down through smaller and smaller rules until a terminal token is matched and returns a parsed value. In this case, the value is just a number, so the parsing function also computes the final result instead of building an AST representation of the expression.
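To make the head/tail pattern used in the rule actions more concrete, here is a plain-TypeScript sketch of the same left fold, simplified to (operator, operand) pairs and ignoring the whitespace captures; for 1 - 2 - 3 the head is 1 and the tail holds the rest:

const head = 1;                                           // the first Term
const tail: ["+" | "-", number][] = [["-", 2], ["-", 3]]; // the remaining (operator, Term) pairs
const value = tail.reduce(
  (acc, [op, n]) => (op === "+" ? acc + n : acc - n),
  head
);
console.log(value); // -4, evaluated left to right as expected

With that in mind, here is the full grammar: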
Expression
  = head:Term tail:(_ ("+" / "-") _ Term)* {
      return tail.reduce(function(result, element): number {
        if (element[1] === "+") { return result + element[3]; }
        if (element[1] === "-") { return result - element[3]; }
      }, head);
    }

Term
  = head:Factor tail:(_ ("*" / "/") _ Factor)* {
      return tail.reduce(function(result, element): number {
        if (element[1] === "*") { return result * element[3]; }
        if (element[1] === "/") { return result / element[3]; }
      }, head);
    }

Factor
  = "(" _ expr:Expression _ ")" { return expr; }
  / Constant
  / FunctionCall
  / Number

Constant "constant"
  = "pi"i { return Math.PI; }
  / "e"i { return Math.E; }

FunctionCall "function call"
  = fn:FunctionName _ "(" params:(Expression|.., _ "," _|) ")" {
      return evaluate(fn, params);
    }

FunctionName "function name"
  = _ fn:([a-z]i [a-z0-9_]i*) { return text().trim(); }

Number "decimal number"
  = _ [+-]? [0-9]+ ("." [0-9]+)? ([eE] [+-]? [0-9]+)? { return parseFloat(text()); }

_ "whitespace"
  = [ \t\n\r]*
evaluate is an extra function defined in scope that calls the matching Math.* functions:
function evaluate(fnName: string, params: unknown[]): number {
  if (fnName in Math) {
    // reject calls that pass fewer arguments than the function declares
    if (Math[fnName].length > params.length)
      throw new TypeError(`Math.${fnName} requires ${Math[fnName].length} arguments`);
    return Math[fnName](...params);
  }
  throw new TypeError(`Unknown function: ${fnName}`);
}
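For instance, with the standard Math object (note that the arity check compares against the function's declared length, so a variadic function like max still needs at least two arguments here):

evaluate("sqrt", [16]); // 4
evaluate("max", [1]);   // throws TypeError: Math.max requires 2 arguments
evaluate("nope", []);   // throws TypeError: Unknown function: nope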
Compiling this file into TS requires using a plugin called ts-pegjs, which produces the final TS parser:
const peggy = require("peggy");
const tspegjs = require("ts-pegjs");
const { readFile, writeFile } = require("node:fs/promises");
const { resolve } = require("node:path");

const srcDir = resolve(__dirname, "../src/");
const source = resolve(srcDir, "calculator/grammar.pegjs");
const target = resolve(srcDir, "calculator/parser.ts");

readFile(source, "utf8")
  .then((file) =>
    peggy.generate(file, {
      plugins: [tspegjs],
      output: "source",
      format: "es",
    })
  )
  .then((parser) => writeFile(target, parser));
This processes the file grammar.pegjs and transforms it into parser.ts, which we can now import from:
import { parse } from './parser.ts';
console.log(parse('1+2+3'));
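The generated module also exports the error class Peggy throws on invalid input, which we'll lean on later to produce readable messages. A quick sketch of catching it and printing the same kind of annotated error shown earlier:

import { parse, PeggySyntaxError } from './parser.ts';

try {
  parse('2^3/pi');
} catch (err) {
  if (err instanceof PeggySyntaxError) {
    // format() needs the original source text to point at the offending character
    console.log(err.format([{ text: '2^3/pi' }]));
  }
}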
Great, one problem down, two to go.
Part 2: Cloudflare workers
Cloudflare Workers let you respond to requests by providing handler functions that receive Request objects and return promises of Response objects. If you've used the fetch API, these should feel quite familiar. The simplest way to create a minimal worker is to export as default an object containing a fetch method:
export default {
  async fetch(request: Request) {
    return new Response('Hello world!');
  }
}
I won't go into detail on how to deploy workers to Cloudflare; you're going to want to RTFM. The free tier lets you use the platform with a generous amount of traffic and CPU time per request.
We'll want our serverless function to do the following:
- show instructions in HTML when hitting the root URL;
- extract, parse and evaluate the expression and return either the result or the parsing error; and
- add CORS headers to enable cross-origin fetch requests.
import { parse, PeggySyntaxError } from "./calculator/parser.js";

// this generates the full HTML page, omitted for brevity
const instructions = (baseUrl: string) => `...`;

const CORS_HEADERS = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS",
};

async function handleOptions(request: Request) {
  if (
    request.headers.get("Origin") !== null &&
    request.headers.get("Access-Control-Request-Method") !== null &&
    request.headers.get("Access-Control-Request-Headers") !== null
  ) {
    // Handle CORS preflight requests.
    return new Response(null, {
      headers: new Headers({
        ...CORS_HEADERS,
        "Access-Control-Allow-Headers": request.headers.get(
          "Access-Control-Request-Headers"
        )!,
      }),
    });
  } else {
    // Handle standard OPTIONS request.
    return new Response(null, {
      headers: {
        Allow: "GET, HEAD, POST, OPTIONS",
      },
    });
  }
}

export default {
  async fetch(request: Request) {
    if (request.method === "OPTIONS") {
      return handleOptions(request);
    }

    const url = new URL(request.url);
    const path = url.pathname;

    if (path === "/") {
      return new Response(instructions(url.origin), {
        headers: { "Content-Type": "text/html" },
      });
    }

    const expr = decodeURIComponent(path.replace(/^\//, ""));

    try {
      return new Response(
        JSON.stringify({ result: parse(expr), error: null }),
        {
          headers: new Headers({
            ...CORS_HEADERS,
            "Content-Type": "application/json",
            "Cache-Control": "public, max-age=604800",
          }),
        }
      );
    } catch (err) {
      let message: string;
      if (typeof (err as any)?.format === "function") {
        message = (err as PeggySyntaxError).format([{ text: expr }]);
      } else {
        message = err?.toString() ?? "Unknown error";
      }
      return new Response(JSON.stringify({ result: null, error: message }), {
        status: 400,
        statusText: "Bad request",
        headers: new Headers({
          ...CORS_HEADERS,
          "Content-Type": "application/json",
          "Cache-Control": "public, max-age=604800",
        }),
      });
    }
  },
};
And that's the entirety of the Cloudflare Worker implementation! All that needs doing now is to bundle everything up and deploy to the cloud.
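Since the default export is just an object with a fetch method, you can also sanity-check the handler before deploying anything. A minimal sketch, assuming the snippet sits next to the worker entry module and runs in an environment that provides the Fetch API globals (Node 18+ or similar):

import worker from "./index";

// call the handler directly, no HTTP server or deployment needed
const response = await worker.fetch(new Request("https://expr.run/sqrt(16)+1"));
console.log(response.status);       // 200
console.log(await response.json()); // { result: 5, error: null }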
Part 3: bundling and deploying
I used vite to generate both CommonJS and ES module bundles for this project using the following configuration (you could get away with just ES if you don't need both):
import { defineConfig } from "vite";
import { resolve } from "node:path";

export default defineConfig({
  build: {
    lib: {
      entry: resolve(__dirname, "src/index.ts"),
      formats: ["cjs", "es"],
      fileName: "index",
    },
    target: "es2020",
  },
});
The tsconfig.json file is quite straightforward:
{
  "compilerOptions": {
    "target": "ESNext",
    "lib": ["ESNext", "DOM"],
    "strict": true,
    "module": "ESNext",
    "moduleResolution": "NodeNext"
  },
  "include": ["src", "tasks"]
}
After building the parser and running vite build, the dist directory contains both versions: the ES module as index.js and the CJS one as index.cjs.
Using wrangler, you can then upload either version to deploy it to the cloud. Before running this command, you'll need to create a worker and supply its name along with an appropriate compatibility date, as described in their documentation:
wrangler deploy dist/index.js --name <name> --compatibility-date <date>
After deploying the roughly four kilobytes of gzipped code, the worker will be up and running in a matter of seconds, ready to respond to any algebraic problem you can throw at it!
You can also use fetch('https://expr.run/...') in your own apps/web pages to compute things on the fly without having to roll your own parser.
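A minimal sketch of what that could look like, URL-encoding the expression so that characters such as spaces survive the path:

const expression = "sqrt(2) * 10";
const response = await fetch(`https://expr.run/${encodeURIComponent(expression)}`);
const { result, error } = await response.json();

if (error) {
  console.error(error); // the annotated parser message on a 400
} else {
  console.log(result);  // ~14.142135623730951
}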
Conclusion
Oftentimes, learning something requires a significant investment of time, but with the right goal in mind, you can make fast progress and be productive without feeling overwhelmed.
Build something silly. Go on, have fun.