Declarative HTML5 Canvas

Bringing an SVG-like API to Canvas rendering through the magic of front-end JavaScript frameworks.

Recently, I got the itch to resurrect a project I had put on the backburner about a year ago: a cross-platform font editor called Glyphy. After running into some difficulties transforming to and from SVG coordinate space while building an interactive Bézier curve editor, I decided to give the Canvas API a try.

The resulting math was much easier to reason about, but the Canvas element uses an imperative API, and I missed the nice, declarative view descriptions you get with SVG markup. This is an overview of how I squared that circle to get the best of both worlds: using UI components to abstract Canvas draw calls.

Background

When I last left off with Glyphy, I had gotten as far as completing most of the tedious setup work — parsing OpenType’s binary format to XML via FontTools, deserializing the XML to an in-memory data structure with a custom setup (inspired by Rust’s Serde framework), parsing the CharString code for each glyph to a bytecode stack, and bootstrapping the front-end using my design system library, Electric — but I hadn’t actually managed to render any glyphs yet. Taking a look at the long list of PostScript operators I would need to implement left me feeling pretty intimidated, and ultimately I got distracted by the next shiny object that entered my field of vision.

But recently I was inspired to pick it back up again. Doing some work in Unreal Engine to implement vector graphics rendering had given me the confidence to take another look at my PostScript interpreter, and selecting the typefaces to use for this blog reminded me what a huge nerd I am for typography — and how disappointing it is that there aren’t many viable options for hobby-level type design software on Windows.

As it turns out, the litany of PostScript operators essentially boils down to the same handful of commands that should be familiar to anyone who’s worked with 2D graphics libraries before:

  • MoveTo(x, y) to set the initial coordinates for a new contour.
  • LineTo(x, y) to draw a straight line.
  • BezierCurveTo(cpx1, cpy1, cpx2, cpy2, x, y) to draw a Bézier curve — with the cp__ parameters representing the “handles” you would manipulate in an application like Adobe Illustrator to adjust the velocity of the curve at a given point.

The rest of the couple dozen or so path construction operators defined by the spec are just different expressions of those same commands, designed to slim down the resulting file size by avoiding repetition or making some operands implicit. After expanding them to the familiar set of MoveTo / LineTo / BezierCurveTo commands, we can trivially apply them to either an HTMLCanvasElement’s 2D rendering context, or an SVGPathElement by stringifying them to its d attribute.
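To make that concrete, here's a rough sketch of what the expanded command set and its two consumers might look like. The PathCommand type and helper functions here are hypothetical illustrations, not code from Glyphy:

type PathCommand =
	| { type: "MoveTo"; x: number; y: number }
	| { type: "LineTo"; x: number; y: number }
	| {
		type: "BezierCurveTo";
		cpx1: number; cpy1: number;
		cpx2: number; cpy2: number;
		x: number; y: number;
	};

// Drive a canvas 2D rendering context with the expanded commands.
function drawPath(ctx: CanvasRenderingContext2D, commands: PathCommand[]): void {
	ctx.beginPath();
	for (const cmd of commands) {
		switch (cmd.type) {
			case "MoveTo":
				ctx.moveTo(cmd.x, cmd.y);
				break;
			case "LineTo":
				ctx.lineTo(cmd.x, cmd.y);
				break;
			case "BezierCurveTo":
				ctx.bezierCurveTo(cmd.cpx1, cmd.cpy1, cmd.cpx2, cmd.cpy2, cmd.x, cmd.y);
				break;
		}
	}
}

// Or stringify the same commands into an SVG path's `d` attribute.
function toSvgPathData(commands: PathCommand[]): string {
	const parts: string[] = [];
	for (const cmd of commands) {
		switch (cmd.type) {
			case "MoveTo":
				parts.push(`M ${cmd.x} ${cmd.y}`);
				break;
			case "LineTo":
				parts.push(`L ${cmd.x} ${cmd.y}`);
				break;
			case "BezierCurveTo":
				parts.push(`C ${cmd.cpx1} ${cmd.cpy1} ${cmd.cpx2} ${cmd.cpy2} ${cmd.x} ${cmd.y}`);
				break;
		}
	}
	return parts.join(" ");
}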

The “problem” with SVG

But there is one minor hiccup with either option, and an additional one when it comes to SVG in particular.

  1. OpenType glyphs are defined in the classical Cartesian coordinate space, with the (0,0) origin point at the bottom left (technically, at the font’s baseline in the Y axis), with Y values ascending as you move up, whereas the DOM places the origin at the top-left, with Y values ascending as you move down.
  2. For SVG, you also need to wrangle the viewBox attribute, which defines the bounding box of the SVG’s viewport with respect to the coordinates specified on the SVG child elements. If you force the parent SVG element to render at a wider or narrower aspect ratio than that specified by the viewBox, the browser seems to implicitly center the specified viewport within the actual viewport, adding an additional layer of complexity.

I am, at the core of my essence, either an artist with an underdeveloped right-brain or an engineer with an underdeveloped left-brain, depending on how you want to look at it. In a fight, my analytical side would probably win out, but only barely. This unique position enables me to fill an occupational niche that I really love, at the intersection of design and engineering. But when it comes to solving exceptionally challenging problems in either domain…

What I’m trying to say is that I kind of suck at math. I’ve managed to brute-force my way to a pretty decent working intuition for linear algebra (mostly by banging my head against 3D game development problems for the better part of the past five years), so I have no problem building a matrix to transform between glyph-space coordinates and browser-space coordinates and position everything exactly where I want it to appear on the screen. But trying to account for the SVG viewBox situation on top of that was enough to collapse my mental model of the problem like a house of cards.

I managed to hack together this solution, which is workable for rendering a single glyph on its own:

/**
 * @param zoomFactor
 *    Represents how large the bounding box is compared to the glyph height as
 *    measured from `upperBound` to `lowerBound`.
 */
export function getViewBox(
	font: Font,
	glyph: Glyph,
	zoomFactor: number,
	upperBound: (number | keyof FontMetrics) = "ascender",
	lowerBound: (number | keyof FontMetrics) = "descender",
): ViewBox {
	const upper = typeof upperBound === "number"
		? upperBound
		: font[upperBound] ?? font.ascender;
	
	const lower = typeof lowerBound === "number"
		? lowerBound
		: font[lowerBound] ?? font.descender;
	
	const width = glyph.advance ?? 0;
	const remainder = zoomFactor - 1;
	const height = (upper - lower) * zoomFactor;
	const y = ((upper - lower) * remainder) / 2 - lower;
	
	return new ViewBox(0, -y, width, height);
}

But when it comes to building an interactive Bézier curve editor, with a rich, dynamic UI and informative data readouts like the exact glyph-space coordinates of the mouse cursor at any given time, I need to have a firm grasp on the math at play to implement features in a robust way and at a reasonable pace. I wrote the code above less than a month ago, and I would have a seriously hard time explaining, for example, why that y expression produces the desired result. It may as well be an arcane incantation that pleases the eldritch spirits in my GPU.

With the Canvas API, we don’t have to worry about that extra viewport layer. We have the canvas element, which uses the familiar DOM coordinate system (with one confounding variable, the devicePixelRatio, which is trivial to deal with), and then we have our glyph-space coordinates, which we can project into the canvas using a relatively straightforward matrix transformation.
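To make “relatively straightforward” concrete, here’s a minimal sketch of such a projection using the browser’s built-in DOMMatrix. The function name and parameters are hypothetical, not Glyphy’s actual transform code: the matrix just scales font units to pixels, flips the Y axis, and translates the origin to wherever the baseline should sit.

// Build a matrix that maps glyph-space (Y up, origin at the baseline) into
// DOM space (Y down, origin at the top-left).
function glyphToDomMatrix(
	scale: number,      // CSS pixels per font unit (hypothetical zoom factor)
	baselineY: number,  // CSS-pixel Y coordinate where the baseline should sit
	originX = 0,        // CSS-pixel X coordinate of the glyph origin
): DOMMatrix {
	return new DOMMatrix()
		.translate(originX, baselineY) // place the glyph origin / baseline
		.scale(scale, -scale);         // scale font units to pixels and flip Y
}

// Projecting a glyph-space point (e.g. an on-curve point at (120, 700)):
// const p = glyphToDomMatrix(0.05, 400).transformPoint(new DOMPoint(120, 700));
// (Multiply by devicePixelRatio when issuing the actual canvas draw calls.)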

A declarative Canvas API

The one bummer about migrating from SVG to Canvas is the API. Canvas is imperative, in contrast to the declarative nature of XML markup. To “draw” a circle with SVG, you would write something like this:

<svg viewBox="0 0 256 256">
	<circle cx="128" cy="128" r="64" fill="#FFF" />
</svg>

To draw the same circle on a canvas, you would write something like this:

function draw(ctx: CanvasRenderingContext2D) {
	const w = ctx.canvas.width;
	const h = ctx.canvas.height;
	ctx.clearRect(0, 0, w, h);
	
	ctx.beginPath();
	ctx.arc(w/2, h/2, Math.min(w, h)/4, 0, 2*Math.PI);
	
	ctx.fillStyle = "#FFF";
	ctx.fill();
}

You could certainly come up with a more declarative JavaScript API for interacting with the canvas via functions, but I really like the XML family of markup languages for describing UI. The labelled closing tags make it possible to define deeply nested hierarchies of UI elements without losing track of your place in the tree, and attributes are named rather than positional like JS function arguments, so it’s always clear what the value assignments actually represent.

What we’re aiming for ends up looking very close to SVG for basic geometric primitives. In Angular (my daily driver of choice), it looks like this:

<my-canvas>
	<my-circle [cx]="128" [cy]="128" [r]="64" fill="#FFF"></my-circle>
</my-canvas>

The equivalent in React would look something like this:

import Canvas, { Circle } from "../components/canvas";

export const App = () => (
	<Canvas>
		<Circle cx={128} cy={128} r={64} fill="#FFF" />
	</Canvas>
);

export default App;

Implementation

The key to making this work is hierarchical dependency injection. This is a seldom-explored feature in user-land Angular, though the Angular team uses it extensively in the Common and Forms modules and throughout Angular Material. If you’re coming from React, Vue, or SolidJS, you may know this pattern as the “Context API.”

Whatever the host framework, the idea is the same: An instance of some interface is provided at a particular node of the view tree, and then any children of that node who depend on that interface can get a reference to the provided instance by requesting it from the runtime.

We’ll use this pattern to make the canvas host aware of the children that want to render to it. When canvas children are added to or removed from the view tree, or when existing canvas children undergo some change that requires them to re-render, they’ll notify the host, which will clear the canvas and invoke a callback for each child, giving it an opportunity to issue draw calls to the canvas.

That’s a pretty abstract explanation and probably more than a little unclear, so we’ll walk through a concrete implementation in Angular (very similar to the one Glyphy is using), and then we’ll do the same for React. Finally, we’ll look at a clever technique to implement hierarchical dependency injection with plain-old web components, so you can get in on the fun even if you’re not operating in the context of a framework runtime.

Angular

In Angular, we’ll take advantage of the fact that dependency injection is actually a two-way street. That is, not only can a child inject dependencies provided by its parents, but a parent can “query” for dependencies in its children. This is a little odd, but it does make sense for our use case. Since the canvas host is the one driving the render loop, it needs to invoke methods on its child elements rather than the other way around.

Canvas Component

We’ll start by defining the interface through which the host will interact with its children, and an “injection token” which gives the host a generic identifier to query for in lieu of a concrete type.

canvas/canvas.types.ts
import { InjectionToken } from "@angular/core";
import { Observable } from "rxjs";

export interface RenderElement {
	readonly changes: Observable<void>;
	onDraw(context: CanvasRenderingContext2D): void;
}

export const RENDER_ELEMENT = new InjectionToken<RenderElement>("RenderElement");

Then we’ll scaffold the canvas host itself, starting with the typical boilerplate for an HTML canvas element.

canvas/canvas.component.ts
import { Component, ElementRef, OnInit, ViewChild } from "@angular/core";

@Component({
	selector: "cv-canvas",
	templateUrl: "./canvas.component.html",
	styleUrls: ["./canvas.component.scss"],
	standalone: true,
})
export class CanvasComponent implements OnInit {
	@ViewChild("canvas", { static: true })
	private _canvasRef!: ElementRef<HTMLCanvasElement>;
	
	get #canvas(): HTMLCanvasElement {
		return this._canvasRef.nativeElement;
	}
	
	#context: CanvasRenderingContext2D | null = null;
	
	ngOnInit(): void {
		const { width, height } = this.#canvas.getBoundingClientRect();
		this.#canvas.width = width * devicePixelRatio;
		this.#canvas.height = height * devicePixelRatio;
		
		this.#context = this.#canvas.getContext("2d");
	}
}
canvas/canvas.component.html
<canvas #canvas class="canvas">
	<ng-content></ng-content>
</canvas>
canvas/canvas.component.scss
:host {
	display: block;
	position: relative;
}

.canvas {
	position: absolute;
	inset: 0;
	width: 100%;
	height: 100%;
}

If you’re not overly familiar with Angular: @ViewChild issues a query for some dependency within our component’s template. In this case, we use a string that matches the #canvas template variable to get a reference to the canvas element. (This is conceptually similar to React’s ref attribute.)

static: true indicates that the element isn’t conditional or dynamic in any way — it should always be present when this component is instantiated — which makes it available in the OnInit hook, before our view has fully initialized.

<ng-content> is Angular’s implementation of the <slot> concept — basically analogous to React’s children prop.

Next, we’ll add our DI query, and a #render method to render each of our child RenderElements.

canvas/canvas.component.ts
import {
	// ...
	ContentChildren,
	// ...
	QueryList,
} from "@angular/core";

import { RenderElement, RENDER_ELEMENT } from "./canvas.types";

// ...
export class CanvasComponent implements OnInit {
	// ...
	
	@ContentChildren(RENDER_ELEMENT)
	private _elements?: QueryList<RenderElement>;
	
	// ...
	
	#render(): void {
		const ctx = this.#context;
		if (!ctx) return;
		
		ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
		
		if (this._elements)
			for (let element of this._elements)
				element.onDraw(ctx);
	}
}

@ContentChildren is similar to @ViewChild, with two major differences:

  • It operates on the nodes passed into our <ng-content> slot, rather than the nodes that we directly declared in our template
  • It maintains a dynamic list of results, rather than a single one

Here we’re querying for the RENDER_ELEMENT token we declared earlier, which means that Angular will populate the QueryList with providers of that token.

We want to call our #render method under a few circumstances:

  • When the QueryList changes (i.e., because RenderElements were added, removed, or changed their location in the view at runtime)
  • When one or more of the RenderElements emits a changes event, indicating that it needs to be re-rendered
  • When our canvas element’s size changes, we’ll want to resize its pixel buffer accordingly and redraw everything

In Angular, the primary mechanism for this type of reactive programming today is the third-party RxJS library. To be honest, RxJS is enormously complex, and its role in Angular looks likely to be largely superseded by Angular’s forthcoming Signals implementation, so I’m going to gloss over a lot of the details here.

All you really need to know is that both Angular’s QueryList and our RenderElement interface have a changes property holding an RxJS observable, which is essentially an event emitter. We can subscribe to those observables to be notified when they emit. We’ll use those subscriptions to call our #render method, but we’ll throttle the event streams with window.requestAnimationFrame first so that we render at most once per frame.

@ContentChildren queries are only available for us to read beginning with the AfterContentInit lifecycle hook, so we’ll implement that and use it to set up our subscriptions. (Annoyingly, we’ll also need to monkey-patch Angular’s QueryList type definition to address a longstanding issue that will trigger an “implicit any” error from TypeScript if we try to use it as-is.)

canvas/canvas.component.ts
import {
	AfterContentInit,
	// ...
} from "@angular/core";
import {
	animationFrameScheduler,
	merge,
	Observable,
	shareReplay,
	startWith,
	switchMap,
	throttleTime,
} from "rxjs";

interface IQueryList<T> extends QueryList<T> {
	changes: Observable<IQueryList<T>>;
}

// ...
export class CanvasComponent implements OnInit, AfterContentInit {
	// ...
	
	@ContentChildren(RENDER_ELEMENT)
	private _elements?: IQueryList<RenderElement>;
	
	// ...
	
	ngAfterContentInit(): void {
		const elements$ = this._elements!.changes.pipe(
			startWith(this._elements!),
			shareReplay({ bufferSize: 1, refCount: true }),
		);
		
		const elementChanges$ = elements$.pipe(
			switchMap(queryList => merge(
				...queryList.map(element => element.changes)
			))
		);
		
		merge(elements$, elementChanges$)
			.pipe(
				throttleTime(0, animationFrameScheduler, {
					leading: true,
					trailing: true,
				})
			)
			.subscribe(() => {
				this.#render();
			});
	}
	// ...
}

RxJS is really great at three things:

  1. Effortlessly managing complex combinations of event streams from multiple sources like we’re doing here
  2. Confusing the hell out of anyone without an encyclopedic knowledge of the operators it provides
  3. Making your application leak like a sieve

We’ll address that third point by making sure none of our subscriptions can outlive our component. To do that, we would historically have used another observable as a notifier for RxJS’s takeUntil operator, which we would emit once in an OnDestroy hook. But Angular recently added its own utility operator, takeUntilDestroyed, that does basically the same thing.

canvas/canvas.component.ts
import {
	// ...
	DestroyRef,
	// ...
	inject,
	// ....
} from "@angular/core";
import { takeUntilDestroyed } from "@angular/core/rxjs-interop";

// ...
export class CanvasComponent implements OnInit, AfterContentInit {
	// ...
	#destroyRef = inject(DestroyRef);
	// ...
	ngAfterContentInit(): void {
		const elements$ = this._elements.changes.pipe(
			// ...
			takeUntilDestroyed(this.#destroyRef),
		);
		
		const elementChanges$ = elements$.pipe(
			// ...
			takeUntilDestroyed(this.#destroyRef),
		);
		
		merge(elements$, elementChanges$)
			.pipe(
				// ...
				takeUntilDestroyed(this.#destroyRef),
			)
			.subscribe(() => {
				this.#render();
			});
	}
	// ...
}

Now, the only thing remaining to finish our CanvasComponent is to handle element resizing.

canvas/canvas.component.ts
import {
	// ...
	OnDestroy,
	// ....
} from "@angular/core";

// ...
export class CanvasComponent implements OnInit, AfterContentInit, OnDestroy {
	// ...
	#resizeObserver!: ResizeObserver;
	// ...
	ngOnInit(): void {
		// ...
		this.#resizeObserver = new ResizeObserver(([entry]) => {
			const { width, height } = entry.contentRect;
			this.#canvas.width = width * devicePixelRatio;
			this.#canvas.height = height * devicePixelRatio;
			
			this.#render();
		});
		
		this.#resizeObserver.observe(this.#canvas);
	}
	// ...
	ngOnDestroy(): void {
		this.#resizeObserver.disconnect();
	}
	// ...
}

To test that everything is working as expected so far, we can paint a solid magenta background in our #render method and add the component to our app.

canvas/canvas.component.ts
// ...
export class CanvasComponent implements OnInit, AfterContentInit, OnDestroy {
	// ...
	#render(): void {
		// ...
		ctx.fillStyle = "#FF00FF";
		// ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
		ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
		// ...
	}
}
src/main.ts
import "zone.js/dist/zone";
import { Component } from "@angular/core";
import { CommonModule } from "@angular/common";
import { bootstrapApplication } from "@angular/platform-browser";

import { CanvasComponent } from "./canvas/canvas.component";

@Component({
	selector: "cv-app",
	standalone: true,
	imports: [
		CommonModule,
		CanvasComponent,
	],
	templateUrl: "./main.html",
	styleUrls: ["./main.scss"],
})
export class App {}

bootstrapApplication(App);
src/main.html
<cv-canvas></cv-canvas>
src/main.scss
:host {
	display: block;
	position: relative;
	width: 100vw;
	height: 100vh;
}

cv-canvas {
	width: 100%;
	height: 100%;
}

If there is now a full-page wall of magenta searing your retinas, we’re good to move on to our first child element!

Circle Component

Let’s keep it simple to start, and implement the circle from our API mockup. Technically, this will be a Directive instead of a Component, because it won’t have any meaningful template or CSS styling. I will, however, give it an element selector — this is atypical for directives, which normally use attribute selectors. But we really don’t care about the host element at all — it’s just a vehicle for the API we’re trying to achieve.

We could hypothetically make these elements useful by using the ARIA API to give screen readers an idea of what our canvas represents — and that is something I actually plan to investigate for Glyphy — but it’s beyond the scope of this toy example.

canvas/circle.directive.ts
import {
	Directive,
	EventEmitter,
	Input,
	OnChanges,
	Output,
} from "@angular/core";

import { PaintStyle, RenderElement, RENDER_ELEMENT } from "./canvas.types";

@Directive({
	selector: "cv-circle",
	standalone: true,
	providers: [{
		provide: RENDER_ELEMENT,
		useExisting: CircleDirective,
	}],
})
export class CircleDirective implements RenderElement, OnChanges {
	@Input() cx = 0;
	@Input() cy = 0;
	@Input() r = 0;
	
	@Input() fill: PaintStyle = "black";
	@Input() stroke?: PaintStyle;
	@Input() strokeWidth = 1;
	
	@Output() changes = new EventEmitter<void>();
	
	ngOnChanges(): void {
		this.changes.emit();
	}
	
	onDraw(ctx: CanvasRenderingContext2D): void {
		if (!this.r || !this.fill && (!this.stroke || !this.strokeWidth))
			return;
		
		const cx = this.cx * devicePixelRatio;
		const cy = this.cy * devicePixelRatio;
		const r = this.r * devicePixelRatio;
		
		ctx.beginPath();
		ctx.arc(cx, cy, r, 0, 2*Math.PI);
		
		if (this.fill) {
			ctx.fillStyle = this.fill;
			ctx.fill();
		}
		
		if (this.stroke && this.strokeWidth) {
			ctx.strokeStyle = this.stroke;
			ctx.lineWidth = this.strokeWidth * devicePixelRatio;
			ctx.stroke();
		}
	}
}

A few things worth noting here:

  • The providers array in the Directive decorator is how this class hooks into the DI framework so that it can be discovered by the Canvas’s @ContentChildren query.

    The array element can take multiple forms, but this is typically the one you would use in cases like this, where you want to provide some abstract token that represents an interface implementation rather than a concrete @Injectable type.

  • We declared changes as an Observable<void> in RenderElement, but we’re defining it as an Angular EventEmitter<void> here. What gives?

    Angular’s EventEmitter class is sneakily an RxJS observable under the hood. Declaring it as an EventEmitter with the @Output decorator means that we could bind a listener to that event in our app template if we really wanted to. We don’t have a use for that, but it’s a free bonus that could theoretically be useful for debugging, so why not?

  • If you’re wondering why we’ve been multiplying everything by devicePixelRatio, that’s because the DOM uses a device-independent “pixel” unit virtually everywhere, which doesn’t necessarily correspond to the physical pixels on the user’s display. But the canvas’s width and height properties configure the size of an actual pixel buffer that will be blitted to the canvas surface as a raster bitmap.

    If we didn’t scale our canvas coordinates to account for the difference between the DOM’s px unit and the display’s physical device pixels, we would end up with blurry rendering on displays with high DPIs, or even for users who use the browser-zoom feature to scale up the document.
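As an aside to that last point: if multiplying every coordinate by devicePixelRatio feels tedious, the same effect can be achieved once per frame by applying a scale transform to the context before invoking the children. This is a variation on the CanvasComponent’s #render method, shown purely as a sketch; with this approach the child elements would issue their draw calls in CSS pixels and drop their own devicePixelRatio multiplications.

	#render(): void {
		const ctx = this.#context;
		if (!ctx) return;
		
		// Reset any previous transform and clear the full buffer in device pixels...
		ctx.setTransform(1, 0, 0, 1, 0, 0);
		ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
		
		// ...then scale once so children can draw in CSS pixel units.
		ctx.scale(devicePixelRatio, devicePixelRatio);
		
		if (this._elements)
			for (let element of this._elements)
				element.onDraw(ctx);
	}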

With that done, we can add a circle to the canvas in our app template and take a look at the results:

src/main.ts
// ...
import { CircleDirective } from "./canvas/circle.directive";
// ...
@Component({
	// ...
	imports: [
		// ...
		CircleDirective,
	],
	// ....
})
// ...
src/main.html
<cv-canvas>
	<cv-circle [cx]="128" [cy]="128" [r]="64"></cv-circle>
</cv-canvas>

That puts a static black circle on the screen at a fixed position relative to the top-left corner of the canvas. Which is… neat, I guess? Let’s make it a little more interesting — and prove that our update logic is working the way we expect — by making it interactive.

Adding some interactivity

First, let’s move the circle inputs to our controller, and add a rudimentary hit test to change the cursor style when we’re “hovering” over it.

src/main.ts
// ...
export class App {
	cx = 128;
	cy = 128;
	r = 64;
	
	@HostBinding("style.cursor")
	get cursorStyle() {
		if (this.#isHovering) return "grab";
		return null;
	}
	
	#isHovering = false;
	
	onPointerMove(event: PointerEvent): void {
		this.#isHovering = this.#hitTest(event);
	}
	
	#hitTest(event: PointerEvent): boolean {
		const dx = event.clientX - this.cx;
		const dy = event.clientY - this.cy;
		const distSquared = (dx*dx) + (dy*dy);
		
		return distSquared <= (this.r * this.r);
	}
}
src/main.html
<cv-canvas (pointermove)="onPointerMove($event)">
	<cv-circle [cx]="cx" [cy]="cy" [r]="r"></cv-circle>
</cv-canvas>

You should now see the cursor turn into a little grabby hand when you mouse over the circle in the canvas. Now let’s try and move the circle around.

src/main.ts
// ...
import { Component, HostBinding, OnDestroy } from "@angular/core";
// ...
import { fromEvent, race, Subject, takeUntil } from "rxjs";
// ...
export class App implements OnDestroy {
	// ...
	@HostBinding("style.cursor")
	get cursorStyle() {
		if (this.#isDragging) return "grabbing";
		if (this.#isHovering) return "grab";
		return null;
	}
	
	#isDragging = false;
	#isHovering = false;
	
	#onDestroy$ = new Subject<void>();
	
	ngOnDestroy(): void {
		this.#onDestroy$.next();
		this.#onDestroy$.complete();
	}
	
	onPointerDown(event: PointerEvent): void {
		if (!this.#hitTest(event))
			return;
		
		this.#isDragging = true;
		
		fromEvent<PointerEvent>(event.target!, "pointermove")
			.pipe(
				takeUntil(race(
					fromEvent(window, "pointerup"),
					fromEvent(window, "pointerleave"),
					this.#onDestroy$,
				)),
			)
			.subscribe({
				next: event => {
					this.cx += event.movementX;
					this.cy += event.movementY;
				},
				complete: () => {
					this.#isDragging = false;
				},
			});
	}
	// ...
}
src/main.html
<cv-canvas
	(pointerdown)="onPointerDown($event)"
	(pointermove)="onPointerMove($event)"
>
	<cv-circle [cx]="cx" [cy]="cy" [r]="r"></cv-circle>
</cv-canvas>

Now you should be able to grab onto the circle and drag it around the viewport! For one last flourish, let’s make the scroll wheel resize the circle.

src/main.ts
// ...
export class App implements OnDestroy {
	// ...
	onWheel(event: WheelEvent): void {
		let factor = -(event.deltaY / devicePixelRatio / 85);
		if (factor < 0) factor = 1 / Math.abs(factor);
		
		this.r = Math.max(1, Math.min(1000, this.r * factor));
	}
	// ...
}
src/main.html
<cv-canvas
	(pointerdown)="onPointerDown($event)"
	(pointermove)="onPointerMove($event)"
	(wheel)="onWheel($event)"
>
	<cv-circle [cx]="cx" [cy]="cy" [r]="r"></cv-circle>
</cv-canvas>

React

For the React implementation, I’m going to assume you’re already pretty familiar with the framework (because, for better or worse, most front-end developers are).

The trickiest part here will be translating our setup to React’s Context API. As mentioned at the start of the Angular implementation, the parent component that hosts the canvas element needs to be the driver of the render loop — it should invoke some callback provided by its children whenever a re-render is called for. But we can’t query our children for a particular interface implementation like we did in Angular.

Canvas Context

I’m admittedly much less familiar with React than I am with Angular, so I would welcome any suggestions for how to do this more idiomatically, but the solution I arrived at was to provide a class instance as a context object to all children of the canvas element. When children who are interested in rendering to the canvas are added to the view tree, they’ll register their onDraw callback with the context object. We’ll then invoke each of those callbacks whenever React renders our Canvas component. Here’s what that looks like:

src/Canvas/CanvasContext.tsx
import { createContext } from "react";

export interface ICanvasContext {
	onDraw(callback: (context: CanvasRenderingContext2D) => void): void;
}

export const CanvasContext = createContext<ICanvasContext | null>(null);
src/Canvas/Canvas.tsx
import {
	forwardRef,
	type HTMLProps,
	type ReactNode,
	type RefObject,
	useMemo,
	useRef,
} from "react";

import { CanvasContext, type ICanvasContext } from "./CanvasContext";

export interface Props extends HTMLProps<HTMLDivElement> {
	cursor?: string;
	children?: ReactNode | ReactNode[];
}

export const Canvas = forwardRef<HTMLDivElement, Props>(
	({ cursor, children, ...props }, ref) => {
		const canvasRef = useRef<HTMLCanvasElement>(null);
		const context = useMemo(() => new CanvasContextImpl(canvasRef), [canvasRef]);
		
		context._reset();
		
		requestAnimationFrame(() => {
			context._render();
		});
		
		return (
			<div {...props}
				ref={ref}
				style={{
					width: "100vw",
					height: "100vh",
					position: "relative",
					cursor,
					...(props.style ?? {})
				}}
			>
				<canvas ref={canvasRef}
					style={{
						position: "absolute",
						inset: 0,
						width: "100%",
						height: "100%",
					}}
				>
					<CanvasContext.Provider value={context}>
						{children}
					</CanvasContext.Provider>
				</canvas>
			</div>
		);
	}
);

export default Canvas;

interface Fn<Args extends any[], R> {
	(...args: Args): R;
}

class CanvasContextImpl implements ICanvasContext {
	#elements: (Fn<[CanvasRenderingContext2D], void> | null)[] = [];
	#ptr = 0;
	
	#canvasRef: RefObject<HTMLCanvasElement>;
	#context: CanvasRenderingContext2D | null = null;
	
	get #canvas() {
		return this.#canvasRef.current;
	}
	
	constructor (canvasRef: RefObject<HTMLCanvasElement>) {
		this.#canvasRef = canvasRef;
		if (this.#canvas) {
			this.#initializeContext();
		}
	}
	
	onDraw(callback: Fn<[CanvasRenderingContext2D], void>): void {
		if (this.#elements.length <= this.#ptr) {
			this.#elements.push(callback);
		} else {
			this.#elements[this.#ptr] = callback;
		}
		
		this.#ptr++;
	}
	
	_reset(): void {
		for (let i = 0; i < this.#elements.length; ++i)
			this.#elements[i] = null;
		
		this.#ptr = 0;
	}
	
	_render(): void {
		if (!this.#canvas) return;
		if (!this.#context) {
			this.#initializeContext();
		}
		if (!this.#context) return;
		
		const ctx = this.#context;
		ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
		
		for (let onDraw of this.#elements) {
			if (!onDraw) break;
			onDraw(ctx);
		}
	}
	
	#initializeContext(): void {
		if (!this.#canvas) return;
		
		const { width, height } = this.#canvas.getBoundingClientRect();
		this.#canvas.width = width * devicePixelRatio;
		this.#canvas.height = height * devicePixelRatio;
		
		this.#context = this.#canvas.getContext("2d");
	}
}

There are a few interesting optimizations we need to do here that we didn’t really have to think about in the Angular implementation:

  • Virtually everything in React is transient by default, but it’s not trivially cheap to initialize a canvas element’s rendering context or to set its width and height properties, so we want to avoid needing to do that every time our component renders. To that end, we instantiate our CanvasContextImpl class with useMemo (dependent only on the canvasRef) to keep it reference-stable between React renders.

  • If we can avoid it, we also don’t really want to thrash the garbage collector by reallocating the array of child callbacks every frame, but each of our children will invoke the context object’s onDraw method anew every time our canvas re-renders. To mitigate the performance impact, we’ve added a pseudo-private _reset method for our host component to invoke, which will “reset” the array without actually resizing or disposing it. We keep a separate “pointer” into the array just past its last populated index, and as long as the array still has some unused capacity remaining, we insert new callbacks at that index instead of pushing them.

    To be honest, this is the type of micro-optimization that’s likely out of place in a JavaScript app — the browser’s JS runtime may already be doing that sort of optimization for us, and we may even be preventing it from doing more effective optimizations by making the array heterogeneous to accommodate the null entries. Profiling would be a good idea here, but since this is a toy example for a blog post and not actual production code, I haven’t gone to the trouble. If you decide to look into it, do let me know what you find!

We’ll round out the Canvas implementation by handling resizes, much like we did in Angular:

src/Canvas/Canvas.tsx
import {
	// ...
	useCallback,
	useEffect,
	// ...
} from "react";
// ...
	({ cursor, children, ...props }, ref) => {
		// ...
		const context = useMemo(() => new CanvasContextImpl(canvasRef), [canvasRef]);
		
		const onResize = useCallback(([entry]: ResizeObserverEntry[]) => {
			const { width, height } = entry.contentRect;
			context._resize(width, height);
		}, [context]);
		
		const resizeObserver = useMemo(() => new ResizeObserver(onResize), [onResize]);
		
		useEffect(() => {
			if (canvasRef.current)
				resizeObserver.observe(canvasRef.current);
			
			return () => {
				if (canvasRef.current)
					resizeObserver.unobserve(canvasRef.current);
			}
		}, [canvasRef, resizeObserver]);
		// ...
	}
// ...
class CanvasContextImpl implements ICanvasContext {
	// ...
	_resize(width: number, height: number): void {
		this.#canvas!.width = width * devicePixelRatio;
		this.#canvas!.height = height * devicePixelRatio;
		this._render();
	}
	// ...
}

Porting from Angular

The remainder of the React implementation is a pretty straightforward port of the Angular code, so I won’t spill too much ink over it. Maybe the one notable quirk is that our child Circle needs to return an empty React.Fragment to avoid TypeScript complaining that it doesn’t qualify as a component (though React itself didn’t seem to mind the omission).

src/Canvas/Circle.tsx
import { useContext } from "react";

import { CanvasContext } from "./CanvasContext";

export interface Props {
	cx: number;
	cy: number;
	r: number;
}

export default ({ cx, cy, r }: Props) => {
	const canvas = useContext(CanvasContext);
	canvas?.onDraw(ctx => {
		ctx.beginPath();
		ctx.arc(
			cx * devicePixelRatio,
			cy * devicePixelRatio,
			r * devicePixelRatio,
			0,
			2 * Math.PI,
		);
		
		ctx.fillStyle = "black";
		ctx.fill();
	});
	
	return <></>;
}
src/Canvas/index.ts
export { default, type Props as CanvasProps } from "./Canvas";
export { default as Circle, type Props as CircleProps } from "./Circle";
src/App.tsx
import { useEffect, useState } from "react";

import Canvas, { Circle } from "./Canvas";
import "./style.css";

export default function App() {
	const [cx, setCx] = useState(128);
	const [cy, setCy] = useState(128);
	const [r, setR] = useState(64);
	
	const [isDragging, setDragging] = useState(false);
	const [isHovering, setHovering] = useState(false);
	const [cursor, setCursor] = useState<string|undefined>();
	
	useEffect(() => {
		if (isDragging) {
			setCursor("grabbing");
		} else if (isHovering) {
			setCursor("grab");
		} else {
			setCursor(undefined);
		}
	}, [isHovering, isDragging]);
	
	function beginDrag(_: React.PointerEvent): void {
		if (isHovering) {
			setDragging(true);
		}
	}
	
	function endDrag(): void {
		setDragging(false);
	}
	
	function onPointerMove(event: React.PointerEvent): void {
		setHovering(hitTest(event));
		if (isDragging) {
			setCx(cx + event.movementX);
			setCy(cy + event.movementY);
		}
	}
	
	function onWheel(event: React.WheelEvent): void {
		let factor = -(event.deltaY / devicePixelRatio / 85);
		if (factor < 0) factor = 1 / Math.abs(factor);
		
		adjustRadius(factor);
	}
	
	function hitTest(event: React.PointerEvent): boolean {
		const { clientX, clientY } = event;
		const dx = clientX - cx;
		const dy = clientY - cy;
		const d2 = (dx*dx) + (dy*dy);
		
		return (d2 <= (r*r));
	}
	
	function adjustRadius(factor: number, min = 1, max = 1000): void {
		setR(Math.max(min, Math.min(max, r * factor)));
	}
	
	return (
		<Canvas
			cursor={cursor}
			onPointerDown={beginDrag}
			onPointerUp={endDrag}
			onPointerMove={onPointerMove}
			onWheel={onWheel}
		>
			<Circle cx={cx} cy={cy} r={r} />
		</Canvas>
	);
}

A pesky issue with React.StrictMode

One mildly infuriating problem, which I don’t have a good answer to at the moment, is that React’s StrictMode forces every component to render twice in dev builds as a debugging aid. Angular does something similar by running an extra change detection cycle in development, but the fact that we’re relying on React’s component renders to collect draw calls from our children means that we’ll end up with two onDraw callbacks for every single RenderElement added to our canvas. Memoization and useCallback are no help either — onDraw gets invoked with two equivalent but referentially-unique callbacks no matter what we try to do about it in user-land.

This sounds like just a performance issue at first, but canvas draw calls are alpha-blended, so it actually leads to a visibly different result in development vs production, which is really not ideal — you don’t want to merge and deploy a PR only to find that it looks significantly different than you expected out in the real world.

I ultimately resorted to removing the StrictMode wrapper in my test project, but if you were going to try something like this in real-world production code, you would definitely want to come up with a more robust solution. Adding an if branch to skip every other callback when not in production is hacky and inelegant, but would likely do the trick.
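For illustration, here’s roughly what that hack could look like inside the CanvasContextImpl from earlier. This is a sketch only: it assumes the duplicate registrations always arrive back-to-back, and it assumes a build-time flag like process.env.NODE_ENV is available to detect development builds.

class CanvasContextImpl implements ICanvasContext {
	// ...
	#registrations = 0;
	
	onDraw(callback: Fn<[CanvasRenderingContext2D], void>): void {
		// StrictMode double-invokes each child's render in development, so every
		// callback arrives twice in a row; keep only the first of each pair.
		// (Assumes `process.env.NODE_ENV` is defined by the bundler.)
		if (process.env.NODE_ENV !== "production" && this.#registrations++ % 2 === 1)
			return;
		
		if (this.#elements.length <= this.#ptr) {
			this.#elements.push(callback);
		} else {
			this.#elements[this.#ptr] = callback;
		}
		
		this.#ptr++;
	}
	
	_reset(): void {
		for (let i = 0; i < this.#elements.length; ++i)
			this.#elements[i] = null;
		
		this.#ptr = 0;
		this.#registrations = 0;
	}
	// ...
}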

Bonus: Web Components

One of the biggest hassles of working on the web today is how fragmented the entire ecosystem is. Multiple bundlers and build systems, multiple TypeScript compilers, multiple mechanisms for module import/export/resolution, and of course a zillion or so mutually incompatible front-end frameworks. I’ve walked through solutions for the two most popular frameworks as of writing, but what if you’re working in Vue, or SolidJS, or even just vanilla JS/HTML/CSS?

The web components spec is widely implemented across modern browsers, which gives us a way to build UI features that are framework-agnostic. The only hiccup is that the crux of our solution is hierarchical dependency injection, which isn’t something the web components spec has any notion of. That’s not surprising: the whole point of dependency injection is to enable dependency inversion — depending on abstract interfaces rather than concrete types — which is a concept that’s hard to separate from static type systems.

Fortunately, the web components API gives us all of the tools we need to bootstrap a really straightforward DI implementation! I wish I could take credit for this idea: I yoinked it from Justin Fagnani, a Google engineer working on Lit. I recommend checking out the talk that inspired this solution on YouTube.

Dependency injection without a framework

The seed of the idea is that custom elements (web components) can dispatch custom events. CustomEvent is a generic derivative of Event, with a detail property that we can populate with arbitrary data. An implementation of the classic bottom-up DI model (which Justin covers in his talk) goes something like this:

  • A custom element with an abstract dependency emits a custom event, populated with some identifier representing the interface it depends on.
  • A custom element which provides a dependency is listening for that event type. If it provides a match for the requested identifier, it mutates the custom event data to populate it with the provided instance, and then stops the event from propagating any further up the tree.
  • Since DOM events bubble synchronously, the dependent can retrieve the provided instance from the event immediately after dispatching it.
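In code, that bottom-up handshake could look something like the following. This is a framework-free sketch with hypothetical helper names, not the typed event classes we’ll define below:

const INJECT_EVENT = "di::inject";

// Dependent: ask ancestors for an instance matching `token`.
function inject<T>(host: HTMLElement, token: string): T | undefined {
	const event = new CustomEvent(INJECT_EVENT, {
		bubbles: true,
		composed: true,
		detail: { token, value: undefined as T | undefined },
	});
	host.dispatchEvent(event);
	// Events bubble synchronously, so a matching provider higher up the tree
	// has already populated `detail.value` by the time dispatchEvent returns.
	return event.detail.value;
}

// Provider: listen for requests from descendants and fulfill matching tokens.
function provide<T>(host: HTMLElement, token: string, value: T): void {
	host.addEventListener(INJECT_EVENT, event => {
		const request = event as CustomEvent<{ token: string; value?: T }>;
		if (request.detail.token !== token) return;
		
		request.detail.value = value;
		request.stopPropagation();
	});
}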

From there, it’s a short walk to get to the top-down “query” model we used in our Angular implementation:

  • In addition to listening for events from below, the dependency provider emits its own event, populated with both the interface ID and the concrete instance.
  • To query for dependencies provided by descendants, the ancestor element listens for the provider event and retrieves any instances they’re interested in.

I’ll use Lit for the WC implementation of our drag-the-circle demo, but you could follow the steps above with (or without) any web components framework, as long as you have some mechanism for managing (and preferably batching) reactive updates. You’ll notice some striking similarities between this and the Angular implementation, which is no surprise since Angular, Lit, and the web components spec itself all have Googley genes in their DNA.

Porting from Angular

First, we’ll knock out the stuff that should be pretty familiar at this point:

src/canvas/canvas.types.ts
export interface RenderElement {
	onDraw(context: CanvasRenderingContext2D): void;
}

export type PaintStyle
	= string
	| CanvasGradient
	| CanvasPattern
	;
src/canvas/canvas.element.ts
import { LitElement, PropertyValues, css, html } from "lit";
import { customElement, state } from "lit/decorators.js";
import { Ref, createRef, ref } from "lit/directives/ref.js";

import { RenderElement } from "./canvas.types";

@customElement("cv-canvas")
export class CanvasElement extends LitElement {
	static override styles = css`
		:host {
			display: block;
			position: relative;
		}
		
		.canvas {
			position: absolute;
			inset: 0;
			width: 100%;
			height: 100%;
		}
	`;
	
	override render() {
		queueMicrotask(() => this.#renderCanvas());
		
		return html`
			<canvas class="canvas" ${ref(this.#canvasRef)}>
				<slot></slot>
			</canvas>
		`;
	}
	
	@state()
	private _elements: RenderElement[] = [];
	
	#canvasRef: Ref<HTMLCanvasElement> = createRef();
	get #canvas(): HTMLCanvasElement | undefined {
		return this.#canvasRef.value;
	}
	
	#context: CanvasRenderingContext2D | null = null;
	#resizeObserver: ResizeObserver;
	
	constructor () {
		super();
		
		this.#resizeObserver = new ResizeObserver(([entry]) => {
			const { width, height } = entry.contentRect;
			this.#canvas!.width = width * devicePixelRatio;
			this.#canvas!.height = height * devicePixelRatio;
			
			this.#renderCanvas();
		})
	}
	
	override firstUpdated(changes: PropertyValues): void {
		super.firstUpdated(changes);
		
		if (!this.#canvas)
			throw new Error("HTMLCanvasElement not found!");
		
		const { width, height } = this.#canvas.getBoundingClientRect();
		this.#canvas.width = width * devicePixelRatio;
		this.#canvas.height = height * devicePixelRatio;
		
		this.#context = this.#canvas.getContext("2d");
		this.#resizeObserver.observe(this.#canvas);
	}
	
	override disconnectedCallback(): void {
		super.disconnectedCallback();
		
		this.#resizeObserver.disconnect();
	}
	
	#renderCanvas(): void {
		const ctx = this.#context;
		if (!ctx) return;
		
		ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
		
		for (let element of this._elements)
			element.onDraw(ctx);
	}
}
src/canvas/circle.element.ts
import { LitElement, PropertyValues, html } from "lit";
import { customElement, property } from "lit/decorators.js";

import { PaintStyle, RenderElement } from "./canvas.types";

@customElement("cv-circle")
export class CircleElement extends LitElement implements RenderElement {
	@property() cx = 0;
	@property() cy = 0;
	@property() r = 0;
	
	@property() fill: PaintStyle = "black";
	@property() stroke?: PaintStyle;
	@property() strokeWidth = 1;
	
	onDraw(ctx: CanvasRenderingContext2D): void {
		if (!this.r || !this.fill && (!this.stroke || !this.strokeWidth))
			return;
		
		const cx = this.cx * devicePixelRatio;
		const cy = this.cy * devicePixelRatio;
		const r = this.r * devicePixelRatio;
		
		ctx.beginPath();
		ctx.arc(cx, cy, r, 0, 2*Math.PI);
		
		if (this.fill) {
			ctx.fillStyle = this.fill;
			ctx.fill();
		}
		
		if (this.stroke && this.strokeWidth) {
			ctx.strokeStyle = this.stroke;
			ctx.lineWidth = this.strokeWidth * devicePixelRatio;
			ctx.stroke();
		}
	}
}
src/app.ts
import { LitElement, css, html } from "lit"
import { customElement, state } from "lit/decorators.js"

import "./canvas/canvas.element";
import "./canvas/circle.element";

@customElement("cv-app")
export class AppRoot extends LitElement {
	static override styles = css`
		:host {
			display: block;
			position: relative;
			width: 100vw;
			height: 100vh;
		}
		:host(.hovering) {
			cursor: grab;
		}
		:host(.dragging) {
			cursor: grabbing;
		}
		
		cv-canvas {
			width: 100%;
			height: 100%;
		}
	`;
	
	override render = () => html`
		<cv-canvas
			@pointermove=${this.#onPointerMove}
			@pointerdown=${this.#onPointerDown}
			@pointerup=${this.#onPointerUp}
			@wheel=${this.#onWheel}
		>
			<cv-circle
				.cx=${this._cx}
				.cy=${this._cy}
				.r=${this._r}
			></cv-circle>
		</cv-canvas>
	`;
	
	@state() private _cx = 128;
	@state() private _cy = 128;
	@state() private _r = 64;
	
	#_isHovering = false;
	get #isHovering() { return this.#_isHovering; }
	set #isHovering(value) {
		this.classList.toggle("hovering", value);
		this.#_isHovering = value;
	}
	
	#_isDragging = false;
	get #isDragging() { return this.#_isDragging; }
	set #isDragging(value) {
		this.classList.toggle("dragging", value);
		this.#_isDragging = value;
	}
	
	#onPointerMove = (event: PointerEvent) => {
		this.#isHovering = this.#hitTest(event);
		
		if (this.#isDragging) {
			this._cx += event.movementX;
			this._cy += event.movementY;
		}
	}
	
	#onPointerDown = () => {
		this.#isDragging = this.#isHovering;
	}
	
	#onPointerUp = () => {
		this.#isDragging = false;
	}
	
	#onWheel = (event: WheelEvent) => {
		let factor = -(event.deltaY / devicePixelRatio / 85);
		if (factor < 0) factor = 1 / Math.abs(factor);
		
		this._r = Math.max(1, Math.min(1000, this._r * factor));
	}
	
	#hitTest(event: PointerEvent): boolean {
		const dx = event.clientX - this._cx;
		const dy = event.clientY - this._cy;
		const d2 = (dx*dx) + (dy*dy);
		
		return d2 <= (this._r * this._r);
	}
}

Implementing dependency injection

Now we can add in the DI magic. First some helper types:

src/di/di.types.ts
interface Ctor<T, Args extends any[] = any[]> {
	new (...args: Args): T;
	prototype: T;
}

type AbstractCtor<T, Args extends any[] = any[]>
	= (abstract new (...args: Args) => T)
	& {
		prototype: T;
	};

type WithOpt<T, K extends keyof T>
	= Omit<T, K>
	& Partial<Pick<T, K>>
	;

export class UniqueToken {
	get id() { return this.#id; }
	readonly #id: symbol;
	
	constructor (name: string) {
		this.#id = Symbol(name);
	}
}

export type Token<T>
	= UniqueToken
	| Ctor<T>
	| AbstractCtor<T>
	;

export interface Provider<T> {
	token: Token<T>;
	value: T;
}

export type PendingProvider<T>
	= WithOpt<Provider<T>, "value">
	;

Then we’ll create our custom event types. We’re only actually using the top-down querying method in our toy example, so InjectionRequest will go unused, but I figured it wouldn’t hurt to round out the example.

src/di/di.events.ts
import { Provider, PendingProvider, Token } from "./di.types";

export enum DIEvent {
	InjectionRequest = "di::inject",
	DependencyProvision = "di::provide",
	ProviderRemoval = "di::remove",
}

/**
 * An event that can be dispatched to inject a dependency provided by an
 * ancestor node.
 * 
 * Providers should listen for this event. If they provide a match for the
 * event's `detail.token`, they should populate its `detail.value` property and
 * call `stopPropagation()` to prevent further bubbling.
 */
export class InjectionRequest<T> extends CustomEvent<PendingProvider<T>> {
	declare readonly type: DIEvent.InjectionRequest;
	
	constructor (token: Token<T>) {
		super(DIEvent.InjectionRequest, {
			bubbles: true,
			cancelable: true,
			composed: true,
			detail: { token },
		});
	}
}

/**
 * An event that can be dispatched to inform ancestors that this node provides a
 * dependency.
 */
export class DependencyProvision<T> extends CustomEvent<Provider<T>> {
	declare readonly type: DIEvent.DependencyProvision;
	
	constructor (token: Token<T>, value: T) {
		super(DIEvent.DependencyProvision, {
			bubbles: true,
			cancelable: true,
			composed: true,
			detail: { token, value },
		});
	}
}

/**
 * An event that can be dispatched to inform ancestors when a node providing a
 * dependency is removed from the tree.
 */
export class ProviderRemoval<T> extends CustomEvent<Provider<T>> {
	declare readonly type: DIEvent.ProviderRemoval;
	
	constructor (token: Token<T>, value: T) {
		super(DIEvent.ProviderRemoval, {
			bubbles: true,
			cancelable: true,
			composed: true,
			detail: { token, value },
		});
	}
}

declare global {
	interface HTMLElementEventMap {
		[DIEvent.InjectionRequest]: InjectionRequest<any>;
		[DIEvent.DependencyProvision]: DependencyProvision<any>;
		[DIEvent.ProviderRemoval]: ProviderRemoval<any>;
	}
}

We also need a way to inform the Canvas host when a RenderElement’s properties change at runtime, so we’ll add another custom event type for that.

src/canvas/canvas.events.ts
export enum CanvasEvent {
	RenderElementChange = "cv::change",
}

export class RenderElementChangeEvent extends CustomEvent<void> {
	declare readonly type: CanvasEvent.RenderElementChange;
	
	constructor () {
		super(CanvasEvent.RenderElementChange, {
			bubbles: true,
			cancelable: true,
			composed: true,
		});
	}
}

declare global {
	interface HTMLElementEventMap {
		[CanvasEvent.RenderElementChange]: RenderElementChangeEvent;
	}
}

We’ll create a UniqueToken as a runtime representation of our RenderElement interface, just like we did with Angular’s InjectionToken.

src/canvas/canvas.types.ts
import { Token, UniqueToken } from "../di/di.types";
// ...
export const RENDER_ELEMENT: Token<RenderElement> = new UniqueToken("RenderElement");

Then we’ll dispatch the appropriate events in our Circle element’s lifecycle callbacks.

src/canvas/circle.element.ts
import { LitElement, PropertyValues } from "lit";
// ...
import { DependencyProvision, ProviderRemoval } from "../di/di.events";
import { RenderElementChangeEvent } from "./canvas.events";
import { PaintStyle, RENDER_ELEMENT, RenderElement } from "./canvas.types";

@customElement("cv-circle")
export class CircleElement extends LitElement implements RenderElement {
	// ...
	override connectedCallback(): void {
		super.connectedCallback();
		this.dispatchEvent(new DependencyProvision(RENDER_ELEMENT, this));
	}
	
	override updated(changes: PropertyValues): void {
		super.updated(changes);
		this.dispatchEvent(new RenderElementChangeEvent());
	}
	
	override disconnectedCallback(): void {
		super.disconnectedCallback();
		this.dispatchEvent(new ProviderRemoval(RENDER_ELEMENT, this));
	}
	// ...
}

And finally, we’ll handle those events in our Canvas host.

src/canvas/canvas.element.ts
// ...
import { DIEvent, DependencyProvision, ProviderRemoval } from "../di/di.events";
import { CanvasEvent, RenderElementChangeEvent } from "./canvas.events";
import { RENDER_ELEMENT, RenderElement } from "./canvas.types";

@customElement("cv-canvas")
export class CanvasElement extends LitElement {
	// ...
	override connectedCallback(): void {
		super.connectedCallback();
		
		this._elements = [];
		this.addEventListener(DIEvent.DependencyProvision, this.#onRenderElementProvided);
		this.addEventListener(DIEvent.ProviderRemoval, this.#onRenderElementRemoved);
		this.addEventListener(CanvasEvent.RenderElementChange, this.#onRenderElementChanged);
	}
	// ...
	override disconnectedCallback(): void {
		// ...
		this.removeEventListener(DIEvent.DependencyProvision, this.#onRenderElementProvided);
		this.removeEventListener(DIEvent.ProviderRemoval, this.#onRenderElementRemoved);
		this.removeEventListener(CanvasEvent.RenderElementChange, this.#onRenderElementChanged);
	}
	
	#onRenderElementProvided = (event: DependencyProvision<RenderElement>): void => {
		if (event.detail.token === RENDER_ELEMENT)
			this._elements = this._elements.concat(event.detail.value);
	}
	
	#onRenderElementRemoved = (event: ProviderRemoval<RenderElement>): void => {
		if (event.detail.token === RENDER_ELEMENT)
			this._elements = this._elements.filter(el => el !== event.detail.value);
	}
	
	#onRenderElementChanged = (event: RenderElementChangeEvent) => {
		event.stopPropagation();
		this.requestUpdate();
	}
	// ...
}

You may notice that we’re not doing any throttling or debouncing here, unlike the Angular implementation. In Lit, updates are batched and processed asynchronously (at microtask timing), so a burst of synchronous property changes still results in a single re-render. Attaching the @state decorator to our _elements array and updating it immutably has the same effect as manually calling requestUpdate: an update is queued, and any subsequent update requests that occur in the meantime are coalesced into the same update cycle.

We’ve already tied our #renderCanvas calls to LitElement’s render method, so we’ll redraw all canvas elements exactly once for every Lit update cycle. To be honest, the logic here is so elegant and easy to follow that this might be my favorite implementation of the bunch, despite the extra boilerplate.

Abstracting our DI solution with decorators

If we wanted to abstract all the event handling and dispatching to get a general-purpose DI solution for Lit, we could whip up a few decorators to do the job. The decorator implementations are honestly pretty ugly, but the whole purpose of metaprogramming constructs like decorators is to front-load the cognitive overhead of performing these kinds of repetitive tasks. As a reward for the effort, we end up with a nice, declarative way to mix in functionality without needing to remember each step of the ritual every time. Here’s what our elements look like using decorators to replace the manual event dispatching/handling:

src/canvas/canvas.element.ts
import { LitElement, PropertyValues, css, html } from "lit";
import { customElement, state } from "lit/decorators.js";
import { Ref, createRef, ref } from "lit/directives/ref.js";

import { queryProviders } from "../di/di.decorators";
import { CanvasEvent, RenderElementChangeEvent } from "./canvas.events";
import { RENDER_ELEMENT, RenderElement } from "./canvas.types";

@customElement("cv-canvas")
export class CanvasElement extends LitElement {
	static override styles = css`
		:host {
			display: block;
			position: relative;
		}
		
		.canvas {
			position: absolute;
			inset: 0;
			width: 100%;
			height: 100%;
		}
	`;
	
	override render() {
		queueMicrotask(() => this.#renderCanvas());
		
		return html`
			<canvas class="canvas" ${ref(this.#canvasRef)}>
				<slot></slot>
			</canvas>
		`;
	}
	
	@state()
	@queryProviders(RENDER_ELEMENT)
	_elements: RenderElement[] = [];
	
	#canvasRef: Ref<HTMLCanvasElement> = createRef();
	get #canvas(): HTMLCanvasElement | undefined {
		return this.#canvasRef.value;
	}
	
	#context: CanvasRenderingContext2D | null = null;
	#resizeObserver: ResizeObserver;
	
	constructor () {
		super();
		
		this.#resizeObserver = new ResizeObserver(([entry]) => {
			const { width, height } = entry.contentRect;
			this.#canvas!.width = width * devicePixelRatio;
			this.#canvas!.height = height * devicePixelRatio;
			
			this.#renderCanvas();
		});
	}
	
	override connectedCallback(): void {
		super.connectedCallback();
		this.addEventListener(
			CanvasEvent.RenderElementChange,
			this.#onRenderElementChanged,
		);
	}
	
	override firstUpdated(changes: PropertyValues): void {
		super.firstUpdated(changes);
		
		if (!this.#canvas)
			throw new Error("HTMLCanvasElement not found!");
		
		const { width, height } = this.#canvas.getBoundingClientRect();
		this.#canvas.width = width * devicePixelRatio;
		this.#canvas.height = height * devicePixelRatio;
		
		this.#context = this.#canvas.getContext("2d");
		this.#resizeObserver.observe(this.#canvas);
	}
	
	override disconnectedCallback(): void {
		super.disconnectedCallback();
		
		this.#resizeObserver.disconnect();
		this.removeEventListener(
			CanvasEvent.RenderElementChange,
			this.#onRenderElementChanged,
		);
	}
	
	#onRenderElementChanged = (event: RenderElementChangeEvent) => {
		event.stopPropagation();
		this.requestUpdate();
	}
	
	#renderCanvas(): void {
		const ctx = this.#context;
		if (!ctx) return;
		
		ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
		
		for (let element of this._elements)
			element.onDraw(ctx);
	}
}
src/canvas/circle.element.ts
import { LitElement, PropertyValues } from "lit";
import { customElement, property } from "lit/decorators.js";

import { provide } from "../di/di.decorators";
import { RenderElementChangeEvent } from "./canvas.events";
import { PaintStyle, RENDER_ELEMENT, RenderElement } from "./canvas.types";

@customElement("cv-circle")
@provide(RENDER_ELEMENT)
export class CircleElement extends LitElement implements RenderElement {
	@property() cx = 0;
	@property() cy = 0;
	@property() r = 0;
	
	@property() fill: PaintStyle = "black";
	@property() stroke?: PaintStyle;
	@property() strokeWidth = 1;
	
	override updated(changes: PropertyValues): void {
		super.updated(changes);
		this.dispatchEvent(new RenderElementChangeEvent());
	}
	
	onDraw(ctx: CanvasRenderingContext2D): void {
		if (!this.r || !this.fill && (!this.stroke || !this.strokeWidth))
			return;
		
		const cx = this.cx * devicePixelRatio;
		const cy = this.cy * devicePixelRatio;
		const r = this.r * devicePixelRatio;
		
		ctx.beginPath();
		ctx.arc(cx, cy, r, 0, 2*Math.PI);
		
		if (this.fill) {
			ctx.fillStyle = this.fill;
			ctx.fill();
		}
		
		if (this.stroke && this.strokeWidth) {
			ctx.strokeStyle = this.stroke;
			ctx.lineWidth = this.strokeWidth * devicePixelRatio;
			ctx.stroke();
		}
	}
}

I won’t copy the decorator implementations here, because it’s a lot of code (and well out of scope for this post), but you can explore them in all their inscrutable glory in the Stackblitz demo here.
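
That said, the general idea is simple enough to sketch. The following is a heavily simplified, hypothetical illustration of how a provide/query pair like this could work (it is not the actual implementation); the PROVIDES symbol and the collectRenderElements helper are stand-ins made up for this example:

import { RENDER_ELEMENT, RenderElement } from "./canvas.types";

// Hypothetical sketch only: tag a component class as a provider of a token,
// then let the canvas collect tagged children from its <slot>.
const PROVIDES = Symbol("provides");

// A stand-in for @provide: stash the token on the class itself.
function provide(token: unknown) {
	return (target: Function) => {
		(target as any)[PROVIDES] = token;
	};
}

// A stand-in for @queryProviders: collect slotted children whose classes
// were tagged with the RENDER_ELEMENT token.
function collectRenderElements(host: HTMLElement): RenderElement[] {
	const slot = host.shadowRoot?.querySelector("slot");
	const assigned = slot?.assignedElements({ flatten: true }) ?? [];
	return assigned.filter(
		(el) => (el.constructor as any)[PROVIDES] === RENDER_ELEMENT,
	) as unknown as RenderElement[];
}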

Evaluating the result

As mentioned at the top of this entry, my motivation for designing a declarative Canvas API was to use it for my font editing application, Glyphy. I’ve now used the framework described here to build out most of the core functionality of Glyphy’s main glyph editor UI, so we’ll close out this entry by taking a closer look at a real-world, nontrivial use case.

Glyphy’s main editor UI has a lot of moving parts — literally and figuratively. Here’s what it currently looks like in action:

Sincere apologies to Andy Clymer for mutilating a perfectly nice glyph for the sake of this demo.

The vast majority of the UI there — everything below and to the right of the tabs along the top and left edges — is rendered on an HTML canvas using the abstractions described here.

A (mostly) unified API for UI elements

It really helps with productivity when I don’t need to context-switch between my go-to Angular component patterns and the imperative patterns that tend to arise out of more traditional canvas abstractions. If it has a visual representation, it’s an Angular component, and you can mostly expect it to behave like one.

That said, the “mostly” caveat deserves some explanation:

  • Unlike SVG (or HTML), Glyphy’s canvas components don’t interact with CSS at all. This isn’t a huge problem — my design system declares all of its tokens in a POJO configuration file and reflects them to CSS variables when the application is bootstrapped, so theme colors can be queried just as easily (and efficiently) from TypeScript as they can from SCSS. Still, I mostly like to keep my templates and my stylesheets separate, so this makes for a slightly awkward discrepancy.
  • A bigger problem is event handling. Much like our toy drag-the-circle demo, Glyphy is doing things like hit-testing for interactive controls manually in the component controllers. This isn’t a huge departure from my typical workflow — I often use RxJS instead of template bindings to handle events, because my library components usually need to support non-trivial interaction patterns for accessibility. But those components don’t burden their consumers with that complexity the way the canvas components currently do. A nice usability upgrade might be to abstract some of the hit-testing/clicking/dragging logic into custom events that emit from the individual canvas components; one possible shape for that idea is sketched below.
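
For illustration, here is one hypothetical shape such an event could take. Nothing like CanvasHitEvent exists in the codebase today; the name, the detail shape, and the usage shown in the trailing comment are all assumptions:

// Hypothetical sketch: a canvas component does its own hit-testing on pointer
// input and re-emits the result as a semantic custom event, so consumers can
// react to "the user grabbed this point" without redoing the math themselves.
export interface CanvasHitDetail {
	/** Pointer position, already transformed into the component's own space. */
	x: number;
	y: number;
	/** Whatever the component considers the hit target (a point, a handle, ...). */
	target: unknown;
}

export class CanvasHitEvent extends CustomEvent<CanvasHitDetail> {
	constructor(detail: CanvasHitDetail) {
		super("cv-hit", { detail, bubbles: true, composed: true });
	}
}

// A consumer could then bind to it like any other DOM event, e.g. in lit-html:
//   html`<cv-circle @cv-hit=${this.onHit}></cv-circle>`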

The reason I haven’t yet taken a serious look at remedying these limitations is that I want to avoid falling into the trap of trying to reinvent the DOM from scratch. The uncanny valley effect of “it’s exactly like the DOM API you already know, except for all the subtle ways it violates your expectations” is (frankly) my biggest pet peeve about working with React, and that’s not an experience I’m eager to replicate.

Simplified math without a viewBox

Without having to worry about the SVG’s confusing viewBox attribute, we can transform glyph coordinates into DOM coordinates and back using a set of relatively straightforward matrix operations.

The GlyphEditorComponent defines a few matrix observables:

  • glyphToCanvas$ takes the raw glyph coordinates and transforms them into the DOM/client’s coordinate space (which is just canvas-space divided by devicePixelRatio)
  • canvasToGlyph$ takes DOM/client coordinates and transforms them back into the glyph’s coordinate space
  • panAndZoom$ captures the translation and scaling produced by the user panning and zooming in the glyph editor

panAndZoom$ is actually an RxJS BehaviorSubject, which we manually update by listening for user input:

src/app/glyph/glyph-editor.component.ts
// ...
export class GlyphEditorComponent implements OnInit, AfterViewInit, OnDestroy {
	// ...
	private _panAndZoom$ = new BehaviorSubject(Matrix.Identity);
	readonly panAndZoom$ = this._panAndZoom$.pipe(replayUntil(this._onDestroy$));
	// ...
	// Slight simplification to spare you some implementation details
	onPan({ movementX, movementY }: PointerEvent): void {
		const matrix = this._panAndZoom$.value;
		
		this._panAndZoom$.next(Matrix.concat(
			matrix,
			Matrix.translate(movementX, movementY),
		));
	}
	// ...
	@HostListener("wheel", ["$event"])
	onWheel({ deltaY, offsetX, offsetY }: WheelEvent): void {
		const delta = deltaY / (175 * 7.5); // TODO: Adjustable sensitivity
		this.adjustZoom(delta, offsetX, offsetY);
	}
	// ...
	private adjustZoom(delta: number, originX: number, originY: number): void {
		const matrix = this._panAndZoom$.value;
		
		// Zoom about the cursor: translate the zoom origin to (0,0), scale,
		// then translate back, on top of the existing pan/zoom matrix.
		this._panAndZoom$.next(Matrix.concat(
			matrix,
			Matrix.translate(-originX, -originY),
			Matrix.scale(1 - delta),
			Matrix.translate(originX, originY),
		));
	}
}

glyphToCanvas$ starts with a nice default framing for the glyph, and then incorporates the pan and zoom:

src/app/glyph/glyph-editor.component.ts
// ...
export class GlyphEditorComponent implements OnInit, AfterViewInit, OnDestroy {
	// ...
	ngOnInit(): void {
		this.glyphToCanvas$ = combineLatest([
			this._familyService.family$.pipe(
				filter(exists),
				distinctUntilChanged(),
			),
			this.contentRect$,
			this.panAndZoom$,
		]).pipe(
			map(([family, rect, panAndZoom]) => {
				const { ascender, descender } = family;
				
				const glyphHeight = ascender - descender;
				const glyphWidth = this.glyph.advance!;
				
				return Matrix.concat(
					// Center the glyph on the canvas origin
					Matrix.translate(-glyphWidth/2, -glyphHeight/2),
					Matrix.translate(0, -descender),
					// Scale to match the canvas height
					Matrix.scale(rect.height / glyphHeight),
					// Flip vertical
					Matrix.scale(1, -1),
					// Zoom out slightly and center in the canvas
					Matrix.scale(0.8),
					Matrix.translate(rect.width/2, rect.height/2),
					// Apply user pan / zoom
					panAndZoom,
				);
			}),
			shareReplay({ bufferSize: 1, refCount: true }),
			distinctUntilChanged(),
			takeUntil(this._onDestroy$),
		);
		// ...
	}
	// ...
}

And canvasToGlyph$, unsurprisingly, just takes the inverse of glyphToCanvas$:

src/app/glyph/glyph-editor.component.ts
// ...
export class GlyphEditorComponent implements OnInit, AfterViewInit, OnDestroy {
	// ...
	ngOnInit(): void {
		// ...
		this.canvasToGlyph$ = this.glyphToCanvas$.pipe(
			map(matrix => matrix.inverse()),
			replayUntil(this._onDestroy$),
		);
	}
	// ...
}

The Matrix class is a straightforward 3x3 matrix implementation representing affine 2D transformations, which you can peruse at your leisure here.
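
If you would rather not click through, here is a minimal sketch of the general shape such a class can take, using the method names that appear in the snippets above. It is not the actual implementation: the storage layout, the transformPoint helper, and the convention that concat applies transforms in the order they are listed (inferred from the comments in glyphToCanvas$) are all assumptions, and a few members used above, like mul, are omitted.

// Minimal sketch of a 2D affine transform with roughly the same surface area
// as the Matrix class used above. Layout and conventions here are assumptions.
export class Matrix {
	static readonly Identity = new Matrix(1, 0, 0, 1, 0, 0);
	
	// Same layout as a 2D DOMMatrix: | a c e |
	//                                | b d f |
	//                                | 0 0 1 |
	constructor(
		readonly a: number, readonly b: number,
		readonly c: number, readonly d: number,
		readonly e: number, readonly f: number,
	) {}
	
	static translate(tx: number, ty: number): Matrix {
		return new Matrix(1, 0, 0, 1, tx, ty);
	}
	
	static scale(sx: number, sy = sx): Matrix {
		return new Matrix(sx, 0, 0, sy, 0, 0);
	}
	
	/** Compose transforms so they apply in the order they are listed. */
	static concat(...matrices: Matrix[]): Matrix {
		return matrices.reduce((acc, m) => m.multiply(acc), Matrix.Identity);
	}
	
	/** Standard matrix product: apply `other` first, then this matrix. */
	multiply(other: Matrix): Matrix {
		return new Matrix(
			this.a * other.a + this.c * other.b,
			this.b * other.a + this.d * other.b,
			this.a * other.c + this.c * other.d,
			this.b * other.c + this.d * other.d,
			this.a * other.e + this.c * other.f + this.e,
			this.b * other.e + this.d * other.f + this.f,
		);
	}
	
	/** Hypothetical helper: map a point through the transform. */
	transformPoint(x: number, y: number): { x: number; y: number } {
		return {
			x: this.a * x + this.c * y + this.e,
			y: this.b * x + this.d * y + this.f,
		};
	}
	
	inverse(): Matrix {
		const det = this.a * this.d - this.b * this.c;
		return new Matrix(
			this.d / det, -this.b / det,
			-this.c / det, this.a / det,
			(this.c * this.f - this.d * this.e) / det,
			(this.b * this.e - this.a * this.f) / det,
		);
	}
	
	toDomMatrix(): DOMMatrix {
		return new DOMMatrix([this.a, this.b, this.c, this.d, this.e, this.f]);
	}
}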

The canvas-rendering components all take a transform matrix input, which can be handed off to the CanvasRenderingContext2D once it has been converted to a DOMMatrix and scaled by devicePixelRatio (since the canvas backing store is sized in physical pixels):

src/app/render/path.renderer.ts
// ...
export class PathRenderer extends BaseRenderer implements RenderElement {
	// ...
	onDraw(ctx: CanvasRenderingContext2D): void {
		// ...
		if (this.transform !== Matrix.Identity)
			ctx.setTransform(this.transform.mul(devicePixelRatio).toDomMatrix());
		
		// <path-drawing commands here>
		
		ctx.resetTransform();
		// ...
	}
}

And the GlyphEditorComponent provides its glyphToCanvas$ observable as that transform:

src/app/glyph/glyph-editor.component.html
<!-- ... -->
<!-- Again, a simplification to spare you some details -->
<g-path
	[outline]="glyph.outline"
	[transform]="glyphToCanvas$ | async"
	[fill]="theme.getHex('foreground', 50)"
></g-path>
<!-- ... -->

The inverse matrix, canvasToGlyph$, is mainly used for hit-testing. Rather than transform every glyph point into client space on every frame just to measure its distance to a PointerEvent’s clientX and clientY, we go the other way: transform the pointer coordinates into glyph space, sort the glyph points by squared distance to those glyph-space pointer coordinates, and then transform only the nearest point back into client space to check whether it falls within a target radius measured in DOM px units.
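
In code, the gist of that lookup is something like the following. This is a simplified sketch rather than the actual editor code; the transformPoint helper (from the Matrix sketch above) and the surrounding names are assumptions, and it takes pointer coordinates that are already relative to the canvas element. It also uses a linear scan for the minimum rather than a full sort, which finds the same nearest point.

// Simplified sketch of the nearest-point hit-test described above.
const HIT_RADIUS = 8; // target radius, in DOM px

function hitTestNearestPoint(
	pointerX: number, // pointer coords in DOM px, relative to the canvas
	pointerY: number,
	points: Array<{ x: number; y: number }>, // glyph-space points
	canvasToGlyph: Matrix,
	glyphToCanvas: Matrix,
): { x: number; y: number } | null {
	// 1. Bring the pointer into glyph space: one transform, not one per point.
	const pointer = canvasToGlyph.transformPoint(pointerX, pointerY);
	
	// 2. Find the nearest glyph point by squared distance (no sqrt needed).
	let nearest: { x: number; y: number } | null = null;
	let best = Infinity;
	for (const p of points) {
		const dx = p.x - pointer.x;
		const dy = p.y - pointer.y;
		const d2 = dx * dx + dy * dy;
		if (d2 < best) {
			best = d2;
			nearest = p;
		}
	}
	if (!nearest) return null;
	
	// 3. Transform only the winner back into client space and check the radius.
	const client = glyphToCanvas.transformPoint(nearest.x, nearest.y);
	const dx = client.x - pointerX;
	const dy = client.y - pointerY;
	return dx * dx + dy * dy <= HIT_RADIUS * HIT_RADIUS ? nearest : null;
}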

This all sounds like a lot — and truthfully, it is. But the majority of this math needed to be done in the SVG implementation as well, so just imagine trying to reliably and consistently account for the SVG viewBox on top of it all, and you can begin to understand why I was desperate for a simpler reference frame.