Unreal Engine deserves a better UI story

An airing of grievances, a problem analysis, and a tentative roadmap to a better future.

As a battle-hardened UI engineer with a wealth of experience to draw from, I expected the onramp to learning Unreal Engine’s UI systems to be smooth. Maybe a little slower and a little steeper than I’d have preferred, with a pothole here and there owing to lack of familiarity, but mostly unsurprising and certainly manageable.

What I found instead defies the “onramp” metaphor entirely. It’s less of a road, less of a drive, and more like playing an exceptionally un-fun platformer, but on a screen that’s too dim and too far away to really see what you’re doing. And you’re pretty sure some of the buttons on your controller are broken, but it’s impossible to know for certain, because you were never told what the buttons are supposed to do in the first place.

Defining the problem

What, concretely, is the issue here? Unreal Engine is one of the most popular game engines in use today. Studios and indie devs alike are using it to build rich user interfaces that range from serviceable to beautiful, so surely I must be overstating the problem?

Depending on your perspective, I probably am. I come from a specific background, which biases me toward a particular set of expectations and preferences that may not match your own. While I believe some of the grievances I’m airing here (like the pervasive scarcity of documentation) are broadly felt pain points and difficult to excuse, others are certainly a product of my particular biases, which leaves plenty of room for reasonable people to disagree.

With that disclaimer out of the way, first I’ll attempt to draw a sharp line around the pain points that I personally feel most acutely. Then, I’ll sketch out a speculative (and very rough) roadmap for what I think could be a viable and practical solution to many of these problems.

A brief introduction to the tools

To begin with, a bit of background is necessary for context, but I’ll try to keep this brief. Unreal Engine’s UI system is actually not just one system, but two. The first component is Slate, which is an engineer-oriented C++ framework. It features a declarative DSL of sorts, built from a labyrinth of C macros and C++ operator overloads. It’s no JSX, but the idea is similar: it allows developers to describe the high-level structure and properties of a widget hierarchy as a single declaration (hence, “declarative”), instead of writing an explicit sequence of instructions to create, configure, and arrange each widget.
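For the uninitiated, here’s roughly what that looks like in practice: a minimal sketch of a custom compound widget, not verbatim engine code (header paths and details vary a bit by engine version).

#include "Widgets/SCompoundWidget.h"
#include "Widgets/SBoxPanel.h"
#include "Widgets/Text/STextBlock.h"
#include "Widgets/Input/SButton.h"

class SHelloPanel : public SCompoundWidget
{
public:
	SLATE_BEGIN_ARGS(SHelloPanel) {}
	SLATE_END_ARGS()

	void Construct(const FArguments& InArgs)
	{
		// One declaration describes the whole hierarchy: a vertical stack
		// containing a text label and a button with its own label inside.
		ChildSlot
		[
			SNew(SVerticalBox)

			+ SVerticalBox::Slot()
			.AutoHeight()
			.Padding(8.0f)
			[
				SNew(STextBlock)
				.Text(FText::FromString(TEXT("Hello, Slate!")))
			]

			+ SVerticalBox::Slot()
			.AutoHeight()
			[
				SNew(SButton)
				.OnClicked(FOnClicked::CreateLambda([] { return FReply::Handled(); }))
				[
					SNew(STextBlock)
					.Text(FText::FromString(TEXT("Click me")))
				]
			]
		];
	}
};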

What might not be immediately obvious (it wasn’t clear to me until very recently) is that Slate is not just a UI framework — it’s actually one of the most core dependencies of Unreal Engine itself, and can even be used to author applications completely outside of Unreal Engine. Among other things, Slate also provides the cross-platform abstraction layer for windowing and the main event loop. (In my view, it’s at least a little problematic that something as high-level as user-land UI is so tightly coupled to something as fundamental as the core application runtime — but I’m getting ahead of myself.)

The second critical component is UMG (Unreal Motion Graphics). UMG is essentially a designer-oriented, editor-integrated frontend for Slate. Each UMG widget type serves as a thin wrapper around a corresponding Slate widget type, exposing properties to the Unreal Editor through Unreal’s reflection system. This allows designers to arrange and style UI widgets through a WYSIWYG interface and author behavior through Unreal’s ubiquitous Blueprint visual scripting language.

Two separate worlds

You can probably already see one massive drawback to this core architecture. Creating a new widget type means authoring and maintaining two separate representations of the same widget: one for Slate, and one for UMG. Add, remove or modify a user-facing property in the Slate widget, and you have to make the corresponding changes to the UMG widget too.

The design makes a tragic sort of sense when you consider the history of these systems: In the beginning, Slate was the only way to author UI features for Unreal Engine. UMG came along later, to empower UI designers to work more independently of engineers.

Unreal’s reflection system, which is how C++ class members are exposed to the Unreal Editor, requires that a class inherit from the base UObject class. But UObject is also responsible for a ton of core engine and Game Framework behavior, like tracing references for the garbage collector and providing access to the UWorld instance that contains a given object instance. Given that Slate precedes the engine itself in the dependency graph, refactoring the base SWidget to inherit from UObject would be a tall order. Thus, the UWidget base class was born, with the sole purpose of holding a shared pointer to a corresponding SWidget instance and passing through its reflection-superpowered property values and method invocations.
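To make the duplication concrete, here’s a condensed sketch of the UMG half of a custom widget. The class and file names are hypothetical placeholders, but the overridden functions are the real UWidget virtuals that every wrapper ends up implementing:

#include "Components/Widget.h"
#include "MyLabel.generated.h" // hypothetical file name

class SMyLabel; // the "real" Slate widget, defined and maintained separately

UCLASS()
class UMyLabel : public UWidget
{
	GENERATED_BODY()

public:
	// Every user-facing property gets declared twice: once on the Slate
	// widget, and again here so the reflection system can expose it.
	UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Appearance")
	FText LabelText;

protected:
	// Construct the Slate widget this UWidget wraps.
	virtual TSharedRef<SWidget> RebuildWidget() override;

	// Push the reflected property values through to the Slate widget.
	virtual void SynchronizeProperties() override;

	// Drop the shared pointer when the underlying Slate widget goes away.
	virtual void ReleaseSlateResources(bool bReleaseChildren) override;

	TSharedPtr<SMyLabel> MyLabel;
};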

An embarrassment of layout strategies

This gripe is very likely born of my particular proclivities as a longtime web developer, but I think any sane UI developer could probably relate to the frustration: Do we really need so many different widget types solely dedicated to arranging the positions of their children?

On the web, our layout options can be boiled down to three main algorithms that serve the vast majority of use cases:

  1. Flexbox for arranging elements along a single dimension, with optional wrapping
  2. Grid for arranging elements on two dimensions, like a table
  3. Absolute positioning, for when an element needs to be fixed to a particular position relative to some parent, regardless of any other nodes in the parent’s layout context

By contrast, in UMG we have:

  • Vertical Box and Horizontal Box, which (as far as I can tell) are identical but for the orientation
  • Wrap Box, which (I think?) is like Vertical/Horizontal Box, but allows children to wrap
  • Scroll Box, which is basically a Vertical Box with a height constraint that can scroll on overflow
  • Overlay, which is basically a zero-dimensional version of Vertical/Horizontal Box, with children sharing the same XY space and stacking along the Z-axis
  • Canvas Panel, which is basically the equivalent of CSS absolute positioning (and which, incidentally, the general consensus says you Should Not Use)
  • Grid Panel for 2D layouts
  • Size Box for overriding the “desired size” of a single child
  • Border for adding a border and/or padding to a single child
  • Scale Box, which can constrain a child to a particular aspect ratio

It’s a lot, and it’s not at all clear to me why it’s preferable to model each of these layout strategies as an entirely separate widget type, instead of having a set of configurable “layout” properties for some general-purpose container type — or even dispensing with the idea of dedicated “container widgets” entirely, and allowing the user to select and configure the layout strategy for any non-leaf widget.
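To illustrate what I mean, here’s a purely hypothetical sketch (nothing like this exists in UMG today) of layout as configuration on a container, rather than as a zoo of dedicated widget types:

#include "Layout/Margin.h"

// Hypothetical: in practice this would be a reflected USTRUCT so the editor
// could display and serialize it, but the idea is that layout becomes data.
enum class ELayoutMode : uint8
{
	Stack,    // replaces Vertical Box / Horizontal Box / Wrap Box / Scroll Box
	Grid,     // replaces Grid Panel
	Absolute, // replaces Canvas Panel
};

enum class EStackDirection : uint8 { Horizontal, Vertical };

struct FLayoutSettings
{
	ELayoutMode     Mode      = ELayoutMode::Stack;
	EStackDirection Direction = EStackDirection::Vertical;
	bool            bWrap     = false;          // Wrap Box becomes a flag
	bool            bScroll   = false;          // Scroll Box becomes a flag
	FMargin         Padding   = FMargin(0.0f);
};

Every container (or, in the more radical version, every non-leaf widget) would carry one of these instead of being a distinct class.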

Where’s the documentation?

Compounding the difficulty of working with all these different layout mechanisms is what I believe to be the single biggest problem with Slate and UMG: the near-complete absence of authoritative information about how most of this stuff is actually supposed to work. I frequently run into surprising results when trying to combine the various container widgets to achieve a particular layout, but it’s never clear to me whether the result I’m seeing is as-designed behavior, a consequence of doing something unexpected or inadvisable, or simply a bug in one or more of the underlying widgets.

Here’s the description of UHorizontalBox from the C++ API reference:

Allows widgets to be laid out in a flow horizontally.

Many Children Flow Horizontal

The Blueprint API reference (which as of writing is the top Google search result for “Unreal Engine Horizontal Box”) is somehow even less helpful.

Dig deep enough, and you might eventually stumble onto Programming and Scripting / Slate UI Programming / General Slate Programming / Slate Widget Examples, which includes some general information about how some of the more common “Box Panel” widgets work, along with some C++ code samples. But the descriptions there still leave quite a bit to the imagination, and many of the container types mentioned above are not covered at all. And anyway, how is a designer working with UMG expected to even discover that page, nested three levels deep under the “Programming and Scripting” category?

Compare that to the official specification for CSS flexbox, a ~14,000-word document covering every nook and cranny of possible behavior in excruciating detail, lest there be a single shred of ambiguity about what should happen in the event of some particular edge case. Of course, that document is intended for implementers, not users — but for that, MDN has a detailed, multi-page guide to flexbox with explanations, diagrams, examples, and comparisons to other layout methods.

Obviously, it would be unreasonable to expect Epic to maintain a document of that scope for every single layout strategy they implement. But UI layout is an inherently complex task. Developer productivity hinges on being able to predict the outcome, with reasonable accuracy, of nesting and combining potentially many different interlocking parts, each working in one of many different modes of operation. The reasonable thing to do, in my opinion, if Epic couldn’t commit the resources to comprehensively specifying and/or documenting a bespoke layout system, would have been to implement some pre-existing set of open specifications, like those defined by the W3C.

I’ve spent the entire length of this section so far talking about layout, because it’s an area that’s particularly vulnerable to ambiguity and misunderstanding, but it’s far from the only aspect of Slate and UMG that suffers from inadequate documentation. Slate is almost entirely undocumented, save for a high-level design document and the widget examples page mentioned earlier. Both Slate and UMG have their own landing pages in the documentation, but the topics covered seem to have been selected arbitrarily, and they’re presented without much coherent organization. Your best bet for wrapping your head around these systems is to spend years digging through the engine source code, official content examples, and miscellaneous third-party resources, experimenting by trial and error until you’ve organically evolved some sort of working intuition for them.

Performance footguns

If you read through the Slate Architecture design document I linked above, you may have noticed this ominous line under the “Core Tenets” heading, which immediately set my Spidey senses tingling the first time I read it:

Design for developer efficiency where possible. A programmer’s time is expensive; CPUs are fast and cheap.

The second sentence there is an oft-repeated truism of modern software engineering, but I would argue that UI rendering is a questionable problem to apply it to. The whole point of a cost/benefit analysis like that is to optimize the output for some limited quantity of resources at the input. So if, say, you need to bootstrap some non-trivial server-side API, and you’re weighing what tech stack to use for the software implementation, that idea holds weight: you can write the software in a less efficient but simpler programming language, offset the slower software by provisioning your servers with faster CPUs, and still save money on balance.

But here, we’re talking about software that runs on a client, not on some server we control ourselves. In other words, we’re comparing the producer’s cost on one hand, to the consumer’s cost on the other. And for many products (e.g., games targeting console platforms), the latter is a fixed target — we couldn’t ask our consumers to foot the bill for our development shortcuts even if we wanted to.

Unfortunately, this isn’t just abstract, pedantic philosophizing about some language in a design document — some of Slate and UMG’s most useful core abstractions (like property binding and the aforementioned Canvas Panel widget) are so costly in practice that Epic themselves often advise developers to avoid using them. This makes for an onerous trade-off between building widgets that are easy to read, write and maintain, and widgets that don’t choke in performance-critical environments.
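To make that trade-off concrete, here’s a simplified sketch (names hypothetical): the bound version is terse and always current, but Slate has to poll the getter; the explicit version only does work when the value actually changes.

#include "Widgets/SCompoundWidget.h"
#include "Widgets/Text/STextBlock.h"

class SScoreDisplay : public SCompoundWidget
{
public:
	SLATE_BEGIN_ARGS(SScoreDisplay) {}
	SLATE_END_ARGS()

	void Construct(const FArguments& InArgs)
	{
		ChildSlot
		[
			// Option A: bind a getter. Concise and always up to date, but the
			// delegate is evaluated every time the widget updates, whether or
			// not the value has changed.
			SNew(STextBlock)
			.Text(this, &SScoreDisplay::GetScoreText)
		];

		// Option B (cheaper): keep a reference via SAssignNew and push updates
		// only when the score actually changes, e.g.:
		//   ScoreText->SetText(FText::AsNumber(Score));
	}

	void SetScore(int32 NewScore) { Score = NewScore; }

private:
	FText GetScoreText() const { return FText::AsNumber(Score); }

	int32 Score = 0;
	TSharedPtr<STextBlock> ScoreText;
};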

Where are the 2D graphics primitives?

This is another one that really irks me as a web developer, though I suspect it’s basically par for the course in gamedev.

On the web, I’ve been spoiled by the ability to very quickly iterate on UI designs — even completely skipping the process of mocking up a design in something like Figma or Adobe CC — because modern browsers are powered by very sophisticated (and highly optimized) 2D rendering engines that allow developers to specify virtually all aspects of UI styling directly through CSS properties. Borders, rounded corners, gradients, drop-shadows, and even completely arbitrary vector graphics are all rendered procedurally on the web.

Now, I know what you’re probably thinking. “Dude, weren’t you just complaining about performance issues, like, two paragraphs ago? Now you want to render literally everything procedurally?” It’s potentially a fair criticism, and to be honest, I don’t currently know enough about the performance implications of this approach versus texture sampling to say much more about it than that. What I do know is that the implications for developer productivity are profound.

If I want to design some custom button style for the web, I can spend all of five minutes writing a few lines of code and manipulating the properties directly in the browser dev tools for real-time feedback:


	.my-button {
		display: inline-block;
		padding: 0.5em 1em;
		border-radius: 0.25em;
		background: #0066CC;
		color: #FFFFFF;
		font-weight: bold;
	}

<button class="my-button">
	Click me!
</button>

Here’s the result of that, which I literally authored just now, directly in the markdown file where I’m writing this blog post:

Click me!

(Admittedly, it doesn’t hurt that I’ve been doing this long enough that I can “see the Matrix” when it comes to CSS, but still — that’s an awfully convenient workflow even if you’re not a CSS expert.)

If I wanted to get the same result in Unreal Engine, I would need to:

  • Open up my design software of choice
  • Design the button with some example text
  • Once I’m happy with it, create some separate document/element/artboard/whatever with a square aspect ratio, and duplicate just the background styling
  • Export that second document as a raster image with a large enough (power-of-two) resolution to account for the highest-DPI screens I intend to support
  • Import that raster image into Unreal Engine and configure it as a UI texture asset
  • Assign the asset to the “Image” property of my Button widget
  • Do some Googling to figure out how to configure it for 9-slice scaling (courtesy of some random blog, since, of course, the Unreal docs don’t explain it)

To be fair: if you’re a math wizard, for something as simple as the above, you could theoretically skip the design software and instead create a UI Material with a signed distance function to round the corners. I don’t think I would qualify that as easier or more convenient than the asset-pipeline approach, but it would at least be much faster to iterate on.

Mapping out a potential solution

I’ve been quietly suffering under the status quo for a long time, because it didn’t seem to me that there was much I could really do about it besides learning to use it. But now that I’ve had enough time with the engine to get nice and cozy with it, I’ve come to a couple of conclusions that have inspired me to explore these issues more deeply, with an eye toward actually addressing them.

First, Unreal’s plugin system and the flexibility of its public API surface are likely sufficient to almost fully replace the entire UI ecosystem without needing to modify the engine source code at all. That’s an important constraint, because I like to stay up-to-date with the latest engine features, and I do not have the resources (or the mental fortitude) to deal with merge conflicts in a codebase of 8 million+ lines of C++. (Nor do I particularly like the way my PC fans scream at me when I try to open the engine solution in Visual Studio.)

Second: it’s not just my lack of familiarity — I don’t think I’ll ever get to a place where I’m really happy with the developer experience on offer here. No disrespect to anyone at Epic — everyone there is clearly very capable, intelligent, and dedicated, and I wouldn’t be giving myself carpal tunnel whining about the UI if the engine on the whole weren’t so extraordinary. Plus, like I said at the top: my expectations and preferences with regard to UI tooling are subjective. Maybe I will never learn to love Slate and UMG, but lots of other people probably already do!

With that said, here’s a rough sketch of a workable (I hope) system that I think I could love, and how I’m planning to maybe get there.

A single Slate widget to rule them all

We can’t directly expose a Slate widget’s properties to the Unreal Editor, but what I think could likely be done is to create a single root SWidget that serves as an unopinionated container for a whole tree of UObject-derived classes that each do the actual heavy-lifting of implementing a UI widget. This is basically a role reversal compared to the Slate/UMG setup: instead of a UWidget being just a thin wrapper around the “real” SWidget, the root SWidget is just a thin wrapper (conceptually, at least) around the “real” UElements.

(Side note: I’m using “UElement” here as a placeholder name for my custom UObject-derived widget class, mainly to disambiguate it from UMG’s UWidget base class. I’m bad at naming things, and Element is the name of the web’s core UI primitive, so I’m rolling with it.)

Basically, I’m imagining a system where a Slate Viewport widget (or something like it) just exposes a low-level rendering API (like a surface handle) and an event stream and calls it a day, letting the UElements (or even some third, independent system) own the entire layout, event-handling and painting workload.
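In code, I picture the split looking something like this. To be clear, this is entirely hypothetical (none of these types exist); it’s only meant to show the shape of the role reversal:

#include "UObject/Object.h"
#include "Input/Events.h"
#include "Element.generated.h" // hypothetical file name

// The "real" widget: a plain UObject, so reflection, the details panel, and
// serialization all work on it for free.
UCLASS(Abstract, EditInlineNew)
class UElement : public UObject
{
	GENERATED_BODY()

public:
	UPROPERTY(EditAnywhere, Instanced, Category = "Children")
	TArray<TObjectPtr<UElement>> Children;

	// The element tree owns the whole lifecycle instead of Slate:
	virtual FVector2D ComputeSize(const FVector2D& AvailableSize) { return AvailableSize; }
	virtual void ArrangeChildren(const FVector2D& FinalSize) {}
	virtual void Paint(/* some draw-list or surface handle */) {}
	virtual bool HandleEvent(const FPointerEvent& Event) { return false; }
};

// The single root SWidget would override OnPaint, ComputeDesiredSize, and the
// input events, and do nothing but delegate to the root UElement (taking care
// to keep the UObject tree alive, e.g. via FGCObject).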

This wouldn’t just eliminate the problem of needing two classes for every widget type — it would essentially give me the freedom to roll my own UI framework completely from scratch, without losing any of the things I do love about Unreal.

Of course, this does mean reinventing some wheels, but I’m not as worried about that as perhaps a more sane developer might be. I intend to lean heavily on third-party libraries and well-known algorithms to do most of the heavy lifting, so it won’t be reinventing wheels so much as scavenging some existing ones and bolting them onto Unreal Engine’s axles.

Well-known layout algorithms

There are actually two options I’m considering here (though they aren’t necessarily mutually exclusive). The first is to simply yoink the three layout strategies I mentioned earlier from the W3C specs and bring them into my UI framework. The principal advantage of this option is that the web is by far the most widely used platform for UI development, so anyone who feels like hopping aboard the crazy-train with me will be able to take advantage of the veritable tomes of knowledge (and armies of developers) that already exist in that space instead of having to learn a completely novel system from scratch.

The second option, which is a bit novel but promising for a different reason, is to implement Subform’s layout system. It’s not nearly as widely known, but its authors make a compelling argument for ditching some of the web’s more confusing idiosyncrasies in favor of something that, at least theoretically, is much more intuitive for complete newcomers to wrap their heads around. I never actually used Subform when it was around, so I can’t say for sure whether this is actually a good idea, but there are some open-source implementations around, so it shouldn’t be too much effort to give it a shot.

Modern, high-performance reactivity

Reactive UI is a fast-moving space right now, with an explosion of growth in the past ten years or so. To be clear about terminology, what I mean by “reactive UI” is essentially a system that’s fully (and ideally invisibly) event-driven by default. When you change an input to some UI element, that element re-renders to reflect the change — not because it’s polling its inputs every frame, but because the input itself is reactive to changes (e.g., via some observer pattern implementation).
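As a bare-bones illustration of the idea (hypothetical, not any particular library’s API), the core is just a value that notifies subscribers when it changes:

#include <functional>
#include <vector>

// A value that notifies subscribers when it changes, so dependents re-render
// exactly when their input changes instead of polling it every frame.
template <typename T>
class TObservable
{
public:
	using FListener = std::function<void(const T&)>;

	const T& Get() const { return Value; }

	void Set(const T& NewValue)
	{
		if (NewValue == Value) { return; } // no change, no notification
		Value = NewValue;
		for (const FListener& Listener : Listeners) { Listener(Value); }
	}

	void Subscribe(FListener Listener) { Listeners.push_back(std::move(Listener)); }

private:
	T Value{};
	std::vector<FListener> Listeners;
};

// Usage sketch: a health bar subscribes once, then repaints only on change.
//   TObservable<float> Health;
//   Health.Subscribe([&](const float& NewHealth) { HealthBar->SetPercent(NewHealth); });
//   Health.Set(0.5f); // triggers exactly one update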

I haven’t deeply investigated this problem space yet, so that’s all I’ll say about it for now. And it’s probably worth mentioning that Epic themselves seem to be doing some work in this space with the UMG Viewmodel plugin. I haven’t had the chance to play with it yet, but it looks promising from a cursory glance — I would love to be able to check this one off my list just by hitting an “Install” button.

A powerful and flexible 2D graphics API

This piece of the puzzle is actually the biggest motivator for me, despite its relative lack of importance to the bigger picture. If I’m being objective, the existing workflow of using the Image widget with baked raster assets and UI Materials is perfectly serviceable. But I really don’t like it.

I started my career as a graphic designer before making a gradual segue to doing full-time engineering work. I still enjoy doing some design work from time to time, both on personal projects and in the scope of my job when it’s appropriate. But despite keeping active subscriptions to Figma and Adobe CC, I very rarely touch design software. It’s just faster for me to iterate on a design directly in the code, because I can immediately see my changes in the context of their natural environment, and I don’t have to worry about keeping two separate implementations of the design in sync whenever I make a change in one place or the other.

In a way, “designing in code” is already something that’s possible in Unreal with UI Materials, but drawing 2D graphics in a vertex + fragment shader is mind-meltingly unintuitive. I actually went to the trouble of implementing an entire military-style aircraft HUD in an Unreal Post-Process Material some time ago. But it was not fun, and it took some serious effort to go back and regain an understanding of those material graphs a year later when I refactored it to UMG.

The thing that makes drawing UI on the GPU so weird is the fact that each shader invocation is operating on just a single pixel, with very little access to the overall context you’re working in. It’s basically a pure mathematical function, taking the pixel coordinates as input and returning a color. Here’s the expression for a circle with radius r, which is by far the simplest example of a 2D signed distance function, since you’re literally just measuring the distance from center:

$$\mathrm{circle}(r, x, y) = \frac{\mathrm{sign}\!\left(r - \mathrm{len}(x - 0.5,\, y - 0.5)\right) + 1}{2}$$

$$\mathrm{len}(x, y) = \sqrt{x^2 + y^2}$$

$$\mathrm{sign}(x) = \frac{x}{|x|}$$

Unless you’re really familiar with this type of math (or have a much better intuition for math than I do), that’s not all that easy to “grok.” And it only gets (much) more complicated from there if you want to draw other kinds of shapes.

What I really want is a canvas-style drawing API to abstract over the raw shader code. For example, here’s a function that renders the same circle using Skia, the C++ library that powers Chrome’s rendering engine:

#include "include/core/SkCanvas.h"
#include "include/core/SkPaint.h"
#include "include/core/SkSurface.h"

// Draws a white, filled circle centered on the canvas's backing surface.
void DrawCircle(SkCanvas& canvas, float radius)
{
	SkSurface& surf = *canvas.getSurface(); // assumes a surface-backed canvas
	float cx = surf.width() * 0.5f;
	float cy = surf.height() * 0.5f;

	SkPaint paint;
	paint.setAntiAlias(true);
	paint.setStyle(SkPaint::kFill_Style);
	paint.setColor(SK_ColorWHITE);

	canvas.clear(SK_ColorBLACK);
	canvas.drawCircle(cx, cy, radius, paint);
}

Unreal Engine actually has a built-in Canvas API, but it can’t handle any sort of curved path besides text, which makes its utility pretty limited by comparison.

A plan for incremental implementation

Even leaning heavily on existing knowledge and third-party libraries, this is a pretty ambitious project with a lot of moving parts. A phased implementation will be critical to its success.

The ideal roadmap will allow each incremental milestone on the way to the final destination to yield a fully working product, layering improvements onto the existing Slate/UMG infrastructure one by one, until there’s very little of that original infrastructure visibly remaining by the time the finish line is crossed.

This will enable each phase of the project to be fully testable, and keep the project itself fairly adaptable to unforeseen challenges, since each new phase of development will only depend on the already-completed work that came before. What I’m trying to avoid is the type of situation where you pull the wrong Jenga brick first, everything falls over, and suddenly you can’t get anything working until you’ve rewritten everything.

Phase 1: Vector graphics rendering

This might seem like an odd place to start, since I already mentioned that this is likely the least important “problem” to tackle. However, being able to handle all (or nearly all) 2D rendering through a single versatile, high-level API that’s decoupled from Slate makes it much easier to address one of the bigger pain points, which is the need to create two separate classes for every custom widget type.

The biggest challenge here will be finding a solution that can play nicely with Unreal’s GPU abstraction layer. Skia is powerful and venerated, but it’s not clear to me whether it would be possible to hook its GPU-targeting backends into Unreal’s render pipeline (or how difficult it might be to pull that off if it is possible). Alternatively, Vello is a promising option since it’s designed for extreme flexibility in this area — but it’s experimental, and using it here would require writing or generating some Rust / C++ FFI bindings, which I’ve never done before.
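For what it’s worth, even if the GPU-backend route turns out to be a dead end, the naive fallback is at least straightforward: render with Skia’s CPU raster backend and copy the pixels into a transient texture. Here’s a rough, untested sketch of that fallback (UE5-style texture API; Skia’s factory functions vary by version):

#include "CoreMinimal.h"
#include "Engine/Texture2D.h"
#include "include/core/SkCanvas.h"
#include "include/core/SkPaint.h"
#include "include/core/SkPixmap.h"
#include "include/core/SkSurface.h"

UTexture2D* RenderSkiaToTexture(int32 Width, int32 Height)
{
	// Draw something with Skia's CPU raster backend.
	sk_sp<SkSurface> Surface = SkSurface::MakeRasterN32Premul(Width, Height);
	SkCanvas* Canvas = Surface->getCanvas();
	Canvas->clear(SK_ColorBLACK);

	SkPaint Paint;
	Paint.setAntiAlias(true);
	Paint.setColor(SK_ColorWHITE);
	Canvas->drawCircle(Width * 0.5f, Height * 0.5f, Width * 0.25f, Paint);

	// Copy the pixels into a transient texture that a UMG Image widget (or a
	// Slate brush) can display. Skia's N32 format is BGRA on most desktop
	// platforms, which is why PF_B8G8R8A8 is assumed here.
	SkPixmap Pixels;
	Surface->peekPixels(&Pixels);

	UTexture2D* Texture = UTexture2D::CreateTransient(Width, Height, PF_B8G8R8A8);
	void* TextureData = Texture->GetPlatformData()->Mips[0].BulkData.Lock(LOCK_READ_WRITE);
	FMemory::Memcpy(TextureData, Pixels.addr(), Pixels.computeByteSize());
	Texture->GetPlatformData()->Mips[0].BulkData.Unlock();
	Texture->UpdateResource();

	return Texture;
}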

Phase 2: Single-root SWidget architecture and layout overhaul

I don’t love using the word “and” in a task heading like this, because it feels like trying to do too much at once, which complicates the goals laid out at the top of this section. But I’m having some difficulty imagining a way to tackle one or the other of these without doing both at the same time.

Under the existing regime, the Slate runtime constrains our options for layout by dictating a two-pass ComputeDesiredSize > ArrangeChildren workflow (sketched just after the list below). That means (as I’m seeing it currently):

  • We’re stuck with that lifecycle as long as we keep the 1:1 SWidget to UWidget mapping.
  • We lose any ability to do layout as soon as we migrate away from it, unless we simultaneously provide a replacement — exactly the type of situation I was trying to avoid.
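For reference, here’s roughly what those two passes look like from the perspective of a custom Slate panel. The signatures below are paraphrased from SWidget/SPanel and simplified (definitions omitted):

#include "Widgets/SPanel.h"

class SMyCustomPanel : public SPanel
{
public:
	// Pass 1: report how much space the widget wants, computed bottom-up from
	// its children's desired sizes.
	virtual FVector2D ComputeDesiredSize(float LayoutScaleMultiplier) const override;

	// Pass 2: given the space actually granted, position each child, top-down.
	virtual void OnArrangeChildren(const FGeometry& AllottedGeometry,
	                               FArrangedChildren& ArrangedChildren) const override;

	// Panels must also expose their children to the rest of the Slate runtime.
	virtual FChildren* GetChildren() override;
};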

But maybe a better path will reveal itself in the course of working through Phase 1 or scoping out the work for Phase 2.

Phase 3: Reactivity / data-binding overhaul

The bulk of the work here will be on the editor integration. Implementing some variant of the observer pattern is not overly difficult, but it will almost certainly require template types, which don’t play nicely with Unreal’s reflection system. Working around that limitation may be as simple as providing custom DetailsPanelCustomization and serialization implementations, but it’s hard to predict exactly what it will take without a deeper investigation.

Phase 4+: Dependency injection, composable style assets, and more…

Over the long term, there are many more things I’d love to do in order to make this a truly comprehensive UI framework that’s productive and pleasant to work with, but it’s hard to extrapolate much further into the future while keeping a sharp focus on the road immediately ahead. Suffice to say, there’s a lot of work to do, but this is only the beginning.