Facility digital twins usually serve two very different aims: seeing (convincing walkthroughs for stakeholders) and changing (measurement, coordination, and routine updates). The representation you choose — neural radiance fields, explicit meshes/BIM, or a hybrid — determines how far you can push each aim without fighting the toolchain.
Neural radiance fields (NeRFs) model a scene as a continuous 5D function of position and view direction and render images by differentiable volume rendering. This is why they produce striking novel views from posed photos. Explicit meshes/BIM, by contrast, store geometry, topology, and semantics directly — ideal for dimensionally trustworthy edits, object identity, and integration with asset systems. The “right” choice depends on whether the deliverable is closer to seeing or changing.
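The "continuous 5D function" above is turned into pixels by numerically integrating density and color along each camera ray. A minimal NumPy sketch of that discrete volume-rendering quadrature (the standard NeRF compositing equation, with sample densities, colors, and inter-sample distances supplied by the caller):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Discrete NeRF volume rendering along one ray.

    sigmas: (N,) densities at N samples along the ray
    colors: (N, 3) RGB predicted at those samples
    deltas: (N,) distances between consecutive samples
    Returns the composited RGB for the ray's pixel.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)        # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)       # survival after each sample
    trans = np.concatenate([[1.0], trans[:-1]])    # T_i: light reaching sample i
    weights = trans * alphas                       # each sample's contribution
    return (weights[:, None] * colors).sum(axis=0)
```

A fully opaque first sample dominates the pixel, which is exactly the behavior that lets solid surfaces emerge from a purely volumetric model.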
Three advances have pulled neural methods from “cool demo” into practical preview for buildings: fast training via multiresolution hash encodings (Instant-NGP), real-time rendering via 3D Gaussian Splatting (3DGS), and neural surface reconstruction (NeuS, Neuralangelo) that bridges toward editable geometry.
Implication: neural fields preview quickly and look great, but they don’t intrinsically give you clean, editable geometry. When you must edit or measure, neural-to-surface methods such as NeuS or Neuralangelo can extract meshes from images or neural fields — at the cost of QA on scale, smoothness, and topology before BIM hand-off.
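That QA step can be partly automated. A sketch of basic acceptance gates for an extracted mesh before BIM hand-off, using plain NumPy rather than any particular mesh library; the function name `qa_extracted_mesh` and the extent thresholds are illustrative assumptions, not a standard API:

```python
import numpy as np
from collections import Counter

def qa_extracted_mesh(vertices, faces, expected_extent_m=(0.5, 50.0)):
    """Cheap QA gates for a mesh extracted from a neural field.

    vertices: (V, 3) float array, assumed metres
    faces:    (F, 3) int array of vertex indices
    expected_extent_m: plausible (min, max) bounding-box diagonal
    Returns a dict of check name -> pass/fail.
    """
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    diag = float(np.linalg.norm(v.max(0) - v.min(0)))

    # Watertightness proxy: every undirected edge used by exactly two faces.
    edges = Counter()
    for a, b, c in f:
        for e in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted(e))] += 1
    watertight = all(n == 2 for n in edges.values())

    # Degenerate triangles: repeated vertex indices.
    degenerate = any(len({a, b, c}) < 3 for a, b, c in f)

    return {
        "scale_plausible": expected_extent_m[0] <= diag <= expected_extent_m[1],
        "watertight": watertight,
        "no_degenerate_faces": not degenerate,
    }
```

Production pipelines would add normal consistency and self-intersection checks, but even these three gates catch the most common failures (wrong scale from monocular capture, holes from under-observed regions).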
1) Photorealism vs. semantic editability
If immersion and appearance (glossy/transparent materials, clutter) dominate, NeRF/3DGS excel. If you need parametric edits, object identity, and reliable dimensions, meshes/BIM win. Think: render what you’ll mostly see; model what you’ll routinely change.
2) Runtime performance & latency
3DGS demonstrates interactive rates for neural content; mesh pipelines remain broadly optimized in engines and the web (with LODs and prudent materials). Choose by target hardware and delivery channel rather than by ideology.
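“Choose by target hardware” is concrete in mesh pipelines via LOD selection under a screen-space error budget. A hedged sketch of the usual heuristic, for a pinhole camera; the function name and the 2-pixel default budget are illustrative:

```python
import math

def select_lod(geometric_errors_m, distance_m, fov_y_rad,
               screen_height_px, max_error_px=2.0):
    """Pick the coarsest LOD whose projected geometric error stays
    under a pixel budget.

    geometric_errors_m: per-LOD error in metres, ordered fine -> coarse
    Returns the index of the chosen LOD.
    """
    # Metres spanned by one pixel at the given viewing distance.
    metres_per_px = 2.0 * distance_m * math.tan(fov_y_rad / 2.0) / screen_height_px
    budget_m = max_error_px * metres_per_px
    chosen = 0
    for i, err in enumerate(geometric_errors_m):
        if err <= budget_m:
            chosen = i          # a coarser LOD still fits the budget
        else:
            break
    return chosen
```

The same budgeting mindset applies to neural content: a splat count or sample count per ray is tuned against frame-time targets on the delivery device.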
3) Update cadence & maintenance
Swapping a door leaf or re-routing a cable is local, diff-friendly work in BIM. In a radiance-field pipeline, the corresponding change typically implies partial re-capture and retraining — feasible for showcase areas, cumbersome for day-to-day ops.
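The “local, diff-friendly” claim is easy to see with a property-level diff between two versions of a BIM element. The dict representation below is illustrative (real pipelines would go through an IFC toolkit), but the point survives: only the changed attribute is touched, so version control and sign-off stay tractable:

```python
def diff_element(before, after):
    """Minimal attribute-level diff between two versions of a BIM
    element, each modelled as a dict of property -> value."""
    return {k: (before.get(k), after.get(k))
            for k in set(before) | set(after)
            if before.get(k) != after.get(k)}

# Hypothetical door records: swapping the leaf changes one property.
door_v1 = {"type": "IfcDoor", "leaf": "LD-30", "fire_rating": "EI30"}
door_v2 = {"type": "IfcDoor", "leaf": "LD-45", "fire_rating": "EI30"}
# diff_element(door_v1, door_v2) -> {"leaf": ("LD-30", "LD-45")}
```

In a radiance-field pipeline there is no analogous “leaf” attribute to diff; the change is entangled across the trained weights, which is why re-capture and retraining are typically required.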
4) Interoperability & standards (as of November 2025)
AEC semantics continue to hinge on IFC 4.3, which is formally published as ISO 16739-1:2024. For runtime delivery, glTF 2.0 remains the “JPEG of 3D” with PBR materials and broad engine/web support. OpenUSD is progressing via the Alliance for OpenUSD (AOUSD) working groups, with public updates in 2024–2025 on governance and new groups. These roles matter: IFC for data exchange and asset semantics; glTF for lightweight runtime delivery; USD for non-destructive scene composition and variants.
5) Storage & delivery footprint
Neural models can be compact compared to dense textures; meshes benefit from mature instancing, streaming, and compression. In practice, distribution constraints (web vs. native app) often decide this axis for you.
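A back-of-envelope footprint estimator makes this axis concrete. Both functions below are rough uncompressed sketches: the mesh layout assumes float32 positions/normals/UVs plus uint32 indices, and the 59-floats-per-Gaussian figure matches the standard 3DGS point layout (position 3, scale 3, rotation 4, opacity 1, spherical-harmonic color 48) before any compression:

```python
def mesh_bytes(n_vertices, n_triangles, texture_px=0, bytes_per_texel=4):
    """Uncompressed mesh footprint: 32 B/vertex (pos + normal + UV),
    12 B/triangle (3x uint32), plus RGBA8 texture texels."""
    return n_vertices * 32 + n_triangles * 12 + texture_px * bytes_per_texel

def splat_bytes(n_gaussians, floats_per_gaussian=59):
    """Uncompressed 3DGS footprint: ~59 float32 attributes per Gaussian."""
    return n_gaussians * floats_per_gaussian * 4
```

Run against realistic counts (a few million Gaussians vs. a textured mesh of similar visual coverage), the comparison usually lands wherever your texture budget lands, which is why distribution constraints tend to decide this axis.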
Customization note: In long, repetitive interiors (plant rooms, corridors), tiling the capture/training domain with strict pose QA can reduce drift and aliasing versus generic defaults; for small, self-contained rooms, off-the-shelf pipelines are usually sufficient.
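Tiling along the dominant axis of a corridor can be sketched in a few lines. The overlap gives each tile shared context so per-tile reconstructions can be aligned and blended at the seams; the function name and parameters are illustrative, not from any specific pipeline:

```python
def tile_1d(lo, hi, tile, overlap):
    """Split the interval [lo, hi] (e.g. metres along a corridor) into
    tiles of length `tile` overlapping by `overlap`; the last tile is
    placed flush with the end so the domain is fully covered.
    Returns a list of (start, end) pairs."""
    step = tile - overlap
    starts, s = [], lo
    while s + tile < hi:
        starts.append(s)
        s += step
    starts.append(max(lo, hi - tile))   # final tile flush with the end
    return [(s, s + tile) for s in starts]

# A 30 m corridor in 10 m tiles with 2 m overlap:
# tile_1d(0.0, 30.0, 10.0, 2.0)
# -> [(0.0, 10.0), (8.0, 18.0), (16.0, 26.0), (20.0, 30.0)]
```

Per-tile training then runs on the images whose poses fall inside each tile (plus the overlap margin), which keeps any single model's domain small enough to avoid the drift that plagues long, repetitive geometry.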
Table A — Format-purpose map
| Format | Primary role | Strengths | Considerations |
|---|---|---|---|
| IFC 4.3 | BIM semantics & exchange | Lifecycle-scale object identity & properties | Not a rendering format; published as ISO 16739-1:2024. |
| glTF 2.0 | Runtime delivery (PBR) | Efficient loading; web/engine ubiquity | Geometry-first; author LODs & textures carefully. |
| OpenUSD | Scene description | Layering, non-destructive variants, composition | Standardization in progress; active AOUSD WGs. |
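Table A's “efficient loading” claim for glTF follows from its structure: a small JSON scene graph referencing raw typed buffers that GPUs can consume directly. A minimal but spec-valid glTF 2.0 asset (one untextured triangle with an embedded base64 buffer) can be built in plain Python:

```python
import base64
import struct

def minimal_gltf_triangle():
    """Smallest-useful glTF 2.0 asset: one triangle, POSITION-only,
    vertex data embedded as a base64 data URI."""
    positions = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
    raw = b"".join(struct.pack("<3f", *p) for p in positions)  # little-endian float32
    return {
        "asset": {"version": "2.0"},
        "scene": 0,
        "scenes": [{"nodes": [0]}],
        "nodes": [{"mesh": 0}],
        "meshes": [{"primitives": [{"attributes": {"POSITION": 0}}]}],
        "accessors": [{
            "bufferView": 0,
            "componentType": 5126,          # FLOAT
            "count": 3,
            "type": "VEC3",
            "min": [0, 0, 0],
            "max": [1, 1, 0],
        }],
        "bufferViews": [{"buffer": 0, "byteLength": len(raw), "target": 34962}],
        "buffers": [{
            "byteLength": len(raw),
            "uri": "data:application/octet-stream;base64,"
                   + base64.b64encode(raw).decode(),
        }],
    }
```

Everything a renderer needs to interpret the bytes (component type, element type, count, bounds) lives in the accessor, which is what makes streaming and partial loading straightforward.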
A pragmatic facility pattern is BIM/mesh backbone + neural appearance: keep the authoritative, editable model in BIM/mesh, layer NeRF/3DGS captures over showcase zones where realism pays off, and run neural-to-surface extraction only in areas where edits are actually expected.
Table B — Quick picks
| Use case | Prefer | Why |
|---|---|---|
| Immersive visual walkthroughs | NeRF / 3DGS | Highest realism; now feasible at interactive rates for previews. |
| Space planning / measurement | Mesh / BIM | Editability, to-scale metrics, and object semantics. |
| Mixed ops + showcase | Hybrid | BIM backbone + selective neural appearance; extract geometry only in zones that will change. |
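Table B reduces to a two-question decision rule, sketched below for teams that want it codified in tooling (the function name and flag names are illustrative):

```python
def pick_representation(needs_edits_or_metrics, needs_photorealism):
    """Codifies Table B: explicit geometry when you must change or
    measure, neural rendering when you must mostly see, hybrid when both."""
    if needs_edits_or_metrics and needs_photorealism:
        return "hybrid"
    if needs_edits_or_metrics:
        return "mesh/bim"
    return "nerf/3dgs"
```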
Start from outcomes and distribution constraints. If measurement, hand-offs, and frequent edits dominate, default to Mesh/BIM, then sprinkle neural appearance where it boosts communication. If the KPI is persuasion or remote walkthroughs, lead with NeRF/3DGS, while earmarking critical areas for later geometric extraction. As of late 2025, standards are converging (IFC 4.3/ISO 16739-1:2024; glTF 2.0; active AOUSD WGs), but the logic remains constant: model explicitly what you’ll change; render photorealistically what you’ll mostly just see.
(Where sources disagree on performance/quality claims, prefer the primary paper’s reported metrics and treat repo/blog statements as indicative rather than definitive.)