“Design intelligence” in AEC is often framed as something you bolt onto a 3D model — an AI layer that magically spots issues and proposes better options. In practice, most of the “thinking” comes from a quieter source: how the model is represented. A mesh can look perfect and still fail basic questions like Which rooms does this door connect? or Which walls form the thermal envelope? The difference is whether geometry is paired with semantics (what things are) and topology (how they relate).
This article looks at a realistic pipeline — Revit/IFC → processing → optimized glTF — and argues for a pragmatic target: not autonomous design, but fast, trustworthy feedback loops that make exploration and validation feel immediate.
A good mental model is to separate the stack into three layers. Each layer enables different questions — and fails differently.
| Layer | Typical representation | What it enables | Common failure mode |
|---|---|---|---|
| Geometry | Mesh / triangles (often GLB/glTF) | Real-time viewing, selection, measurement | Tessellation artifacts; heavy payloads |
| Semantics | IFC classes + property sets | Filtering, rule checks, schedule logic | Missing/misaligned properties; mapping drift |
| Topology | Graph of containment/adjacency/connectivity | Space reasoning, circulation, constraint logic | Dirty geometry breaks adjacency; tolerance issues |
The trap is assuming geometry implies meaning. It doesn’t. A façade panel rendered as triangles doesn’t inherently carry “this is part of the envelope,” and a room drawn as boundaries doesn’t necessarily encode adjacency in a way you can query reliably. That extra structure has to be preserved — or rebuilt — on purpose.
IFC remains the most widely recognized open exchange backbone. buildingSMART lists IFC 4.3.2.0 as the latest official version and notes its publication as a final ISO standard (ISO 16739-1). This matters because “design intelligence” workflows often depend on consistent entity types, property sets, and stable identifiers (like IFC GlobalId) across tool boundaries.
But “Revit → IFC” is not a neutral button press. Exporters encode assumptions about mapping, coordinate handling, and exchange requirements. As of 2026, Autodesk’s IFC exporter documentation shows ongoing versioned releases for Revit 2026, including changes tied to IFC4.3 Reference View exchange requirements — a reminder that which exchange view you target changes what you get. Exporter release notes for the open-source Revit-IFC project also show continual fixes (for example, adjustments to space elevation and area element positioning), which can ripple into downstream spatial reasoning.
One practical implication: treat exporter version and settings as part of the model’s “data contract.” If the same project exported on two machines yields meaningfully different IFC, topology and analytics built downstream can become untrustworthy.
A small but telling example: Autodesk even documents confusion around inconsistent IFC exporter version numbering across download sources and release notes. That’s not academic — it affects reproducibility and QA.
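One way to make that data contract concrete is to fingerprint the exporter version and settings together, so two exports can be compared at a glance. A minimal sketch in Python; the field names are illustrative, not any standard schema:

```python
import hashlib
import json

def export_contract(exporter_version: str, settings: dict) -> dict:
    """Build a reproducible 'data contract' record for an IFC export.

    Settings are serialized canonically (sorted keys) so the same
    exporter version plus the same settings always hash to the same
    fingerprint, regardless of dict ordering or which machine ran it.
    """
    canonical = json.dumps(settings, sort_keys=True, separators=(",", ":"))
    fingerprint = hashlib.sha256(
        (exporter_version + "\n" + canonical).encode("utf-8")
    ).hexdigest()
    return {
        "exporter_version": exporter_version,
        "settings": settings,
        "fingerprint": fingerprint,
    }

# Identical exporter + settings yield identical fingerprints, even if the
# settings dicts were assembled in a different key order.
a = export_contract("24.1.0", {"exchange_view": "IFC4.3 RV", "split_walls": False})
b = export_contract("24.1.0", {"split_walls": False, "exchange_view": "IFC4.3 RV"})
assert a["fingerprint"] == b["fingerprint"]
```

Storing this record alongside the exported IFC makes drift between machines or exporter releases immediately visible in QA.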
Once IFC is in play, many teams target web-friendly real-time formats. glTF is explicitly positioned as “runtime 3D asset delivery”, optimized for efficient transmission and rendering. For interactive 3D, that’s exactly what you want: predictable loading behavior, GPU-friendly meshes, and a packaging option (GLB) that’s easy to distribute.
The catch is that glTF’s core promise is not “carry all BIM meaning.” It can carry metadata, and it can be extended, but it’s not inherently a semantic/topological model of buildings. The Khronos ecosystem has also kept evolving material capabilities via extensions — useful for visual fidelity, but orthogonal to semantic correctness.
This is where real disagreements show up in practice: should semantics be embedded directly in the glTF itself (node extras, custom extensions), or kept in sidecar files keyed by stable identifiers?
Neither is universally “right.” Embedding is simpler to ship; sidecars are often more robust when meshes are merged, instanced, or re-tessellated for performance. The right choice depends on the degree of optimization and the kinds of queries you need to support.
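The sidecar approach might look like the following sketch: node names in the optimized glTF key back to IFC GlobalIds, which in turn key into a property table. All identifiers and property names here are made up for illustration:

```python
# Sidecar mapping shipped next to the optimized glTF. The glTF stays lean;
# meaning lives in this lookup, keyed by stable IDs. Illustrative data only.
SIDECAR = {
    "nodes": {"mesh_0017": "2O2Fr$t4X7Zf8NOew3FLKr"},  # glTF node -> GlobalId
    "properties": {
        "2O2Fr$t4X7Zf8NOew3FLKr": {
            "ifc_class": "IfcDoor",
            "psets": {"Pset_DoorCommon": {"FireRating": "EI30"}},
        }
    },
}

def resolve_click(node_name: str):
    """Map a clicked glTF node back to its source element's properties.

    Returns None when the node has no recorded identity, which is exactly
    the 'identity gap' failure mode worth surfacing in QA.
    """
    guid = SIDECAR["nodes"].get(node_name)
    return SIDECAR["properties"].get(guid) if guid else None
```

The sidecar survives aggressive mesh optimization as long as the node-to-GlobalId table is regenerated deterministically at each pipeline stage.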
The most common surprise in “BIM → real-time” pipelines is an identity gap: you can render everything, but you can’t reliably map what a user clicks back to the source element and its properties.
Conversion tools can handle geometry well. IfcOpenShell’s IfcConvert explicitly supports exporting IFC into formats including GLB, making it a common bridge from BIM exchange to real-time delivery. But conversion steps frequently reshape hierarchies, split or merge meshes, and rename nodes. If identity is not deliberately preserved, selection becomes ambiguous, and “what changed between option A and B” becomes fragile.
A practical guideline: design the pipeline so that stable IDs survive every stage (or can be deterministically reattached). That might mean embedding IFC GlobalId into node extras, maintaining an external lookup table, or producing a deterministic packing/indexing step before aggressive mesh optimization.
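The embedding route can be sketched directly against the glTF 2.0 JSON structure, which sanctions an `extras` field on nodes for application-specific data. Geometry is omitted for brevity and the names and IDs are hypothetical:

```python
import json

def gltf_with_ids(nodes: list) -> str:
    """Emit a minimal glTF 2.0 JSON document in which each node carries
    its source IFC GlobalId in the spec-sanctioned `extras` field.

    `nodes` is a list of (node_name, ifc_global_id) pairs. Meshes,
    buffers, and materials are left out to keep the sketch readable.
    """
    doc = {
        "asset": {"version": "2.0"},
        "scene": 0,
        "scenes": [{"nodes": list(range(len(nodes)))}],
        "nodes": [
            {"name": name, "extras": {"ifc_global_id": guid}}
            for name, guid in nodes
        ],
    }
    return json.dumps(doc, indent=2)

doc = json.loads(gltf_with_ids([("Wall_001", "1kTvXnbbzCWw8lcMd1dR4o")]))
assert doc["nodes"][0]["extras"]["ifc_global_id"] == "1kTvXnbbzCWw8lcMd1dR4o"
```

Because `extras` travels inside the asset, identity survives distribution by default; the risk shifts to optimization steps that merge or re-emit nodes without copying `extras` forward.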
In narrow, revision-heavy scenarios — large federated models, strict diffs, or high-frequency updates — a tailored packaging + indexing step can reduce overhead compared to general-purpose conversions, while still keeping standard formats as boundaries. (This isn’t “more AI”; it’s more discipline.)
Semantic classification is where “intelligence” becomes reliable. It is rarely glamorous work, but it is foundational.
Because exporter capabilities and exchange requirements evolve (for example, IFC4.3 Reference View requirements appearing in Revit IFC exporter version history), semantic logic must be treated as living infrastructure, not a one-time script.
Once semantics are stable, interaction becomes meaningfully helpful: filter elements by classification, check for missing required properties, or highlight contradictions early — while decisions are still cheap.
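A rule check of this kind can be very small. The sketch below flags required properties that are absent for an element’s class; the rule table, class names, and property names are illustrative, not a normative schema:

```python
# Which property sets and fields each class must carry. Illustrative rules;
# real projects derive these from exchange requirements or BEP documents.
REQUIRED = {
    "IfcWall": {"Pset_WallCommon": ["IsExternal", "FireRating"]},
    "IfcDoor": {"Pset_DoorCommon": ["FireRating"]},
}

def missing_properties(element: dict) -> list:
    """Return 'Pset.Property' paths that are required for the element's
    class but absent from its property sets."""
    gaps = []
    for pset, props in REQUIRED.get(element["ifc_class"], {}).items():
        present = element.get("psets", {}).get(pset, {})
        gaps.extend(f"{pset}.{p}" for p in props if p not in present)
    return gaps

wall = {"ifc_class": "IfcWall", "psets": {"Pset_WallCommon": {"IsExternal": True}}}
assert missing_properties(wall) == ["Pset_WallCommon.FireRating"]
```

Running a check like this on every import makes missing-property drift a visible, early signal rather than a late surprise.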
Topology is what enables questions designers actually ask: adjacency, containment, circulation, and connectivity.
A minimal abstraction is a graph, G = (V, E): the vertices V are spaces and elements, and the edges E encode relationships such as containment, adjacency, and connectivity.
This underlay lets you implement interactions that feel like intelligence without pretending the machine is the author: adjacency and containment queries, circulation checks between spaces, and constraint validation over the graph.
The hard part is robustness. BIM geometry contains tiny gaps, overlaps, and non-manifold artifacts that can break adjacency inference. Topology extraction needs tolerance rules, fallbacks, and QA checks — otherwise the graph will be brittle and users will stop trusting it.
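To make the tolerance point concrete, here is a deliberately crude adjacency sketch over axis-aligned room footprints: two rooms count as adjacent if their boxes touch within a tolerance, which absorbs small modeling gaps that exact comparison would miss. Real pipelines need far more robust geometry handling; this only illustrates the shape of the problem:

```python
from itertools import combinations

def adjacent(a, b, tol=0.01):
    """Axis-aligned boxes (xmin, ymin, xmax, ymax) are 'adjacent' when they
    touch within `tol` on one axis and overlap on the other. The tolerance
    absorbs small modeling gaps that would defeat exact comparison."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    gap_x = max(ax0, bx0) - min(ax1, bx1)  # positive = separated along x
    gap_y = max(ay0, by0) - min(ay1, by1)
    # Shared (near-)edge along x with y-overlap, or the symmetric case.
    return (abs(gap_x) <= tol and gap_y < 0) or (abs(gap_y) <= tol and gap_x < 0)

def adjacency_graph(rooms: dict) -> set:
    """Edges E of the room graph, as unordered name pairs."""
    return {
        frozenset((n1, n2))
        for (n1, b1), (n2, b2) in combinations(rooms.items(), 2)
        if adjacent(b1, b2)
    }

rooms = {
    "office": (0, 0, 5, 4),
    "corridor": (5.005, 0, 7, 10),  # 5 mm modeling gap: still adjacent
    "lobby": (20, 0, 25, 5),        # far away: not adjacent
}
graph = adjacency_graph(rooms)
assert frozenset(("office", "corridor")) in graph
assert frozenset(("office", "lobby")) not in graph
```

Even in this toy, the tolerance is a policy decision: too tight and real adjacencies vanish, too loose and unrelated rooms connect, which is exactly why the graph needs QA checks before anyone trusts queries against it.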
At some size threshold, a single-file strategy becomes painful: load times spike, memory pressure rises, and interaction latency climbs. That’s where streaming approaches matter.
OGC’s 3D Tiles specification is explicitly designed for streaming massive, heterogeneous 3D geospatial datasets and is often positioned for large-scale building and city content delivery. Cesium, a major ecosystem around the spec, likewise emphasizes it as an open standard for streamable, optimized 3D geospatial data, including buildings and BIM/CAD content.
Important nuance: 3D Tiles changes delivery, not meaning. Semantics and topology still need explicit handling — either embedded, sidecar, or derived — regardless of how content streams.
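For orientation, the shape of a minimal tileset manifest looks like the fragment below (a sketch: the bounding volume, error values, and content URI are placeholders, using 3D Tiles 1.1’s support for glTF content). Note that nothing in it carries semantics; that remains the pipeline’s job.

```json
{
  "asset": { "version": "1.1" },
  "geometricError": 500,
  "root": {
    "boundingVolume": { "box": [0, 0, 0, 50, 0, 0, 0, 50, 0, 0, 0, 20] },
    "geometricError": 100,
    "refine": "REPLACE",
    "content": { "uri": "building.glb" }
  }
}
```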
A defensible “design intelligence” pipeline is less about choosing one file format and more about choosing what to preserve (stable identifiers and semantics), what to compress (geometry), and what to re-derive (topology, with explicit tolerance rules and QA).
The payoff is not a building that “thinks” on its own. It’s a workflow where models become queryable, comparable, and responsive — so teams can explore options with confidence, catch issues early, and keep design intent legible across tools.