The file format you standardize on after capture — explicit point clouds or implicit neural scene representations (NSRs, e.g., NeRFs and 3D Gaussian Splatting) — quietly determines your downstream choices: registration methods, QA/QC, modeling tempo, and what owners will accept at handover. This guide compares the two (and the hybrid in-between) so teams can decide with confidence rather than habit.
Point clouds give you explicit XYZ measurements that slot into long-standing QA/QC practices and contract language. NSRs encode appearance and view synthesis in a learned field; they’re excellent for photoreal reviews and compact scene storage but still need a bridge when metrology or archival is required. In practice: clouds maximize measurability, NSRs maximize immersion.
Two families of formats anchor most AEC pipelines: explicit point-cloud standards (E57, LAS/LAZ) on one side, and the learned artifacts behind NSRs on the other.
Delivery norms increasingly expect compressed clouds. Recent USGS Lidar Base Specification updates explicitly add a requirement to deliver classified points in LAZ — a strong signal for project teams building defensible, standards-aligned handoffs.
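A quick header sanity check is an easy gate for LAZ deliverables. The sketch below parses the public-header fields a delivery audit cares about, using offsets from the LAS specification; `parse_las_header` is a hypothetical helper (not part of any cited tool), and it assumes the common LAZ convention of flagging compression in the high bit of the point data format id.

```python
import struct

def parse_las_header(buf: bytes) -> dict:
    """Read the LAS/LAZ public-header fields needed for a delivery check.
    Offsets follow the LAS 1.2 public header block; LAZ conventionally
    flips the high bit of the point data format id to mark compression."""
    if buf[0:4] != b"LASF":
        raise ValueError("not a LAS/LAZ file (missing LASF signature)")
    major, minor = struct.unpack_from("<BB", buf, 24)      # version
    fmt_id, rec_len = struct.unpack_from("<BH", buf, 104)  # point format/size
    n_points, = struct.unpack_from("<I", buf, 107)         # legacy point count
    return {
        "version": f"{major}.{minor}",
        "compressed": bool(fmt_id & 0x80),  # LAZ compression flag (assumed)
        "point_format": fmt_id & 0x7F,
        "record_length": rec_len,
        "points": n_points,
    }
```

Running this over every file in a handoff folder turns "deliver classified points in LAZ" from a contract sentence into an automated check.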
On the tool front, CloudCompare is the dependable workhorse for cleaning, sectioning, registration checks, and deviation maps. Recent release notes highlight a more robust signed-distance algorithm and ICP options — small details that matter when you’re defending tolerances.
Where clouds shine: measurement fidelity, auditable QC trails, stable archival.
Pain points: heavy I/O, long registration/decimation cycles, bulky model context.
NSRs learn a radiance field from images and camera poses, then render new views without first meshing. The big practical leap has been 3D Gaussian Splatting (3DGS), which represents scenes with anisotropic Gaussians and a visibility-aware renderer — enabling real-time view synthesis while accelerating training compared to prior NeRFs. For stakeholder walk-throughs, that responsiveness changes the conversation.
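The core of that renderer is front-to-back alpha compositing over depth-sorted splats. This is a minimal one-ray sketch of the idea, not the 3DGS rasterizer itself; the function name and the early-termination threshold are illustrative assumptions.

```python
def composite_front_to_back(colors, alphas):
    """Blend depth-sorted splats along one ray, front to back.
    colors: list of (r, g, b); alphas: per-splat opacity in [0, 1]."""
    out = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light still unblocked at this depth
    for (r, g, b), a in zip(colors, alphas):
        w = a * transmittance  # this splat's contribution weight
        out[0] += w * r
        out[1] += w * g
        out[2] += w * b
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:  # nearly opaque: stop early (illustrative cutoff)
            break
    return out, transmittance
```

Because each splat only attenuates what is behind it, a nearly opaque foreground lets the renderer skip the rest of the list, which is part of why walkthroughs stay responsive.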
Interoperability is improving. Nerfstudio provides an ns-export CLI to sample NSRs back into PLY point clouds and OBJ meshes, plus volumetric outputs (TSDF/Poisson/Marching Cubes) when explicit geometry is required for metrology or CAD context. This “back to explicit” step is the bridge into BIM ecosystems.
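The explicit artifact on the other side of that bridge is often just an ASCII PLY. As a sketch of what the export step produces (a hypothetical writer, not Nerfstudio's own code), the PLY vertex format is simple enough to emit directly:

```python
def write_ascii_ply(path, points):
    """Write an XYZ point list as an ASCII PLY file: a header declaring the
    vertex count and float x/y/z properties, then one point per line."""
    with open(path, "w") as f:
        f.write("ply\n")
        f.write("format ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\n")
        f.write("property float y\n")
        f.write("property float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")
```

Anything that can produce this kind of file can feed CloudCompare, meshing tools, or a BIM context workflow downstream.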
Where NSRs shine: photoreal reviews, compact artifacts, smooth playback on modern GPUs.
Trade-offs: when tolerances matter, you must export explicit geometry and validate; thin, glossy, or repetitive surfaces can produce sampling artifacts that warrant spot checks.
Mesh-centric workflows have become more practical in everyday authoring environments. Autodesk ReCap Pro 2026 introduced Local Scan-to-Mesh and a Mesh Editor, letting teams convert point clouds to segmented meshes locally, classify them, and hand them to Revit via a dedicated ReCap Mesh Revit (.rcmr) pipeline. Autodesk’s help and product pages document local conversion, editing, and Revit linking/plugin support — reducing reliance on cloud jobs and smoothing day-to-day modeling with lightweight context.
Pragmatically, that enables a clean split: keep point clouds for QA/QC and archival; deliver meshes (from clouds or NSR exports) for model context — without inventing a custom process. Practitioner notes and release communications also flag recent stability and loading improvements across the ReCap 2026.x line, which helps at scale.
Think of clouds vs NSRs as shifting the cost center along the pipeline:
| Stage | Point cloud cost center | NSR cost center | Notes |
|---|---|---|---|
| Ingest/registration | CPU time + manual QA; heavy I/O | Camera calibration + data curation | Good capture protocols benefit both. |
| Processing | Cleaning/decimation; LAZ compression | GPU optimization (training) | LAZ cuts storage/transfer; NSRs pay up front. |
| Review/playback | Large files; downsample for speed | Real-time viewer; compact artifacts | 3DGS enables smooth reviews. |
| Handoff | Standards (E57/LAS/LAZ) | Export to PLY/OBJ via ns-export | Sampling back is the audit path. |
In narrow, high-throughput scenarios (e.g., thousands of rooms captured in waves), tailored tiling/decimation and confidence-weighted exports can reduce overhead compared to general settings; in most projects, off-the-shelf defaults are sufficient.
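The decimation step those defaults cover is essentially voxel-grid downsampling: average every point that lands in the same cubic cell. A minimal sketch (the function name and voxel-averaging policy are assumptions, not any specific tool's algorithm):

```python
from collections import defaultdict

def voxel_downsample(points, voxel):
    """Decimate a point list by averaging all points that fall in the
    same cubic voxel of edge length `voxel`."""
    bins = defaultdict(list)
    for p in points:
        # floor-divide each coordinate to get the voxel's integer index
        key = tuple(int(c // voxel) for c in p)
        bins[key].append(p)
    # one averaged representative per occupied voxel, in sorted-key order
    return [
        tuple(sum(c) / len(pts) for c in zip(*pts))
        for pts in (bins[k] for k in sorted(bins))
    ]
```

Tuning `voxel` per zone (coarse in corridors, fine near critical interfaces) is the kind of tailored setting the high-throughput scenario above refers to.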
Auditors expect distances on explicit geometry. With clouds, you compute deviation maps directly and cite known behavior in tools like CloudCompare. With NSRs, export a mesh or point sample and validate representative sections against the original cloud (C2M or M3C2), logging thresholds and sampling strategy in the QC packet. Recent CloudCompare changes to signed distances and ICP are worth noting in your methods section.
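The numbers that go into that QC packet reduce to nearest-neighbor deviations against a tolerance. This sketch shows the shape of the check with a brute-force search (a real run would use a KD-tree and a tool like CloudCompare; `deviation_report` and its output fields are hypothetical):

```python
import math

def deviation_report(exported, reference, tol):
    """For each exported point, find the distance to its nearest
    reference-cloud point, then summarize against the tolerance.
    Brute force: fine for spot checks, not production-scale clouds."""
    dists = [min(math.dist(p, q) for q in reference) for p in exported]
    within = sum(d <= tol for d in dists)
    return {"max_dev": max(dists), "pass_rate": within / len(dists)}
```

Logging `max_dev`, the pass rate, the tolerance, and the sampling strategy alongside the tool version gives auditors the explicit trail they expect.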
Run ns-export as needed, link .rcmr meshes into Revit, and keep the raw cloud for verification.

| Scenario | Preferred form | Why | Handoff |
|---|---|---|---|
| As-built verification with tight tolerances | Point cloud | Direct, auditable distances | E57/LAS → compress to LAZ. |
| Design reviews / stakeholder buy-in | NSR (3DGS) | Photoreal, real-time walkthroughs | Viewer + selected OBJ/PLY exports. |
| Renovation with both QA and viz needs | Hybrid | Clouds for metrics; NSR for immersion | LAZ + ns-export mesh/point snippets. |
| Model authoring context | Mesh | Lightweight, classifiable in ReCap/Revit | ReCap Local Scan-to-Mesh → Revit link. |
For export details (ns-export to PLY/OBJ, plus volumetric options), see docs.nerf.studio/quickstart/export_geometry.html. As of the latest releases and specs cited, these recommendations reflect current terminology and capabilities; verify exact tool versions and agency requirements at kickoff.