ORIGINAL RESEARCH article

Front. Neurorobot., 06 March 2026

Volume 20 - 2026 | https://doi.org/10.3389/fnbot.2026.1768219

Enhancing 3D semantic scene completion with refinement module

  • 1. Chair of Robotics, Artificial Intelligence and Real-time Systems, Technical University of Munich, Munich, Germany

  • 2. National Science Center for Earthquake Engineering, Tianjin University, Tianjin, China

  • 3. School of Civil Engineering, Tianjin University, Tianjin, China

Abstract

We propose ESSC-RM, a plug-and-play Enhancing framework for Semantic Scene Completion with a Refinement Module, which can be seamlessly integrated into existing semantic scene completion (SSC) models. ESSC-RM operates in two phases: a baseline SSC network first produces a coarse voxel prediction, which is subsequently refined by a 3D U-Net–based refinement network equipped with a Progressive Neighborhood Attention Module (PNAM) and a Vision–Language Guidance Module (VLGM) under multiscale supervision. Experiments on SemanticKITTI show that ESSC-RM consistently improves semantic prediction performance. When integrated into CGFormer and MonoScene, the mIoU increases from 16.87 to 17.27% and from 11.08 to 11.51%, respectively. These results demonstrate that ESSC-RM serves as a general refinement framework applicable to a wide range of SSC models. Project page: https://github.com/LuckyMax0722/ESSC-RM and https://github.com/LuckyMax0722/VLGSSC.

1 Introduction

Accurate 3D scene understanding is fundamental to autonomous driving, robotics, and embodied perception, where downstream tasks such as detection, reconstruction, mapping, and planning rely on complete geometric and semantic representations of the environment (Guo et al., 2019; Yurtsever et al., 2020; Cao et al., 2022; Ma et al., 2022; Zhao H. et al., 2024; Cao et al., 2024a). However, real-world sensors (LiDAR and RGB cameras) provide only sparse, noisy, and partial observations due to occlusions, limited resolution, restricted field of view, and missing depth information, resulting in incomplete voxelized scenes (Roldão et al., 2021; Cao et al., 2024b). To address this, 3D semantic scene completion (SSC) aims to jointly infer voxel occupancy and semantic labels, a task first formalized by SSCNet (Song et al., 2016).

Despite extensive progress in both LiDAR-based (Roldão et al., 2020; Yan et al., 2020; Xia et al., 2023; Jang et al., 2024) and vision-based SSC (Cao and de Charette, 2021; Li Y. et al., 2023; Jiang et al., 2024; Tang et al., 2023), a considerable gap remains between predictions and ground truth. LiDAR-based models suffer from sparsity; BEV-based methods (Yang et al., 2021) lose fine-grained details; RGB-based approaches degrade due to depth ambiguity and unclear 2D–3D projection (Lee et al., 2024); and distillation pipelines depend heavily on task-specific teacher designs (Xia et al., 2023). Moreover, SSC architectures differ substantially, making it difficult to develop a unified refinement strategy that generalizes across models without modifying their internal structures.

To bridge these limitations, this paper proposes ESSC-RM, a unified coarse-to-fine refinement framework that directly enhances the voxel predictions of arbitrary SSC models. ESSC-RM performs multi-scale geometric–semantic aggregation, integrates auxiliary priors, and introduces a model-agnostic refinement pipeline that requires no architectural modification to the baseline. It supports both end-to-end joint training and fully independent plug-and-play deployment.

The main contributions of this paper are as follows:

  • We introduce ESSC-RM, a general refinement framework designed to improve heterogeneous SSC baselines via coarse-to-fine multi-scale error reduction, applicable to both LiDAR-based and vision-based methods.

  • We develop two complementary training paradigms: a joint training mode that co-optimizes the refinement and baseline networks, and a separate training mode enabling true plug-and-play enhancement without modifying the original SSC architecture.

  • We propose a neighborhood-attention-based multi-scale aggregation module that adaptively fuses geometric and semantic features, improving voxel-level reasoning across scales.

  • We introduce a novel vision–language guidance module that injects text-derived semantic priors to compensate for missing geometric cues and ambiguous visual projections, enhancing cross-modal scene understanding.

  • Extensive experiments on SemanticKITTI (Behley et al., 2019) demonstrate that ESSC-RM consistently improves strong baselines such as CGFormer and MonoScene, validating its generality, flexibility, and effectiveness.

2 Related work

In this section, we review LiDAR- and camera-based 3D perception, then summarize advances in 3D SSC, and finally discuss recent progress in vision–language models (VLMs) and text-driven multimodal fusion.

2.1 LiDAR-based 3D perception

LiDAR provides accurate 3D geometry for autonomous driving perception, enabling detection, tracking, and mapping, and has become a core sensing modality (Guo et al., 2019; Yurtsever et al., 2020; Ma et al., 2022; Zhao H. et al., 2024; Wu et al., 2022; Lin and Wu, 2025).

Early point-based and voxel-based detectors—PointNet (Qi et al., 2016), VoxelNet (Zhou and Tuzel, 2017), SECOND (Yan et al., 2018), PointPillars (Lang et al., 2018), PointRCNN (Shi et al., 2018), PV-RCNN (Shi et al., 2019), and Voxel R-CNN (Deng et al., 2020)—established effective feature extraction paradigms. Tracking frameworks such as AB3DMOT (Weng et al., 2020; Cho and Kim, 2023) leverage motion models and geometric association. Semantic segmentation approaches including PointNet++ (Qi et al., 2017), RangeNet++ (Milioto et al., 2019), and Cylinder3D (Zhou et al., 2020) demonstrate point-based, projection-based, and cylindrical-voxel inference strategies.

2.2 Camera-based 3D perception

Camera-based perception offers a cost-efficient alternative with rich semantic cues. Monocular approaches extend 2D detectors (Brazil and Liu, 2019; Duan et al., 2019; Manhardt et al., 2018) or rely on pseudo-depth and geometric priors (Xu and Chen, 2018; Wang et al., 2018; Zia et al., 2014; Mousavian et al., 2016; Hu et al., 2018), yet remain affected by depth ambiguity. Stereo-based methods (Chang and Chen, 2018; Li et al., 2019; You et al., 2019; Chen et al., 2020) mitigate this by enforcing geometric consistency (Mao et al., 2023).

With multi-camera setups becoming standard, multi-view 3D detection methods have evolved rapidly. LSS-based pipelines (Philion and Fidler, 2020; Huang et al., 2021) lift image features to Bird's-Eye View (BEV), while transformer-based designs such as DETR3D (Wang et al., 2021) and BEVFormer (Li Z. et al., 2022) aggregate cross-view features using 3D object queries. Spatiotemporal attention mechanisms (Vaswani et al., 2017; Doll et al., 2022; Mao et al., 2023) further enhance robustness.

2.3 Semantic scene completion

SSC jointly predicts occupancy and voxel-level semantics. SSCNet (Song et al., 2016) established the task on indoor data (Silberman et al., 2012); outdoor datasets such as KITTI and SemanticKITTI (Geiger et al., 2012; Behley et al., 2019, 2021; Li et al., 2024) introduce sparsity and large-scale variability.

2.4 Vision–language models

Vision–language models (VLMs) provide strong semantic priors through aligned image–text representations (Liu et al., 2025). CLIP (Radford et al., 2021) and EVACLIP (Sun et al., 2023a,b) learn powerful contrastive embeddings, while LongCLIP (Zhang et al., 2024) and JinaCLIP (Xiao et al., 2024a; Koukounas et al., 2024) improve long-text modeling.

Models such as BLIP2 (Li J. et al., 2023), InstructBLIP (Dai et al., 2023), MiniGPT-4 (Zhu et al., 2024), and LLaVA (Liu H. et al., 2023; Liu et al., 2024) leverage frozen Large Language Models (LLMs) to build efficient multimodal reasoning pipelines (OpenAI et al., 2024). Text-conditioned segmentation models such as LSeg (Li B. et al., 2022) and Grounded-SAM (Ren et al., 2024) further highlight the utility of text in perception tasks (Liu S. et al., 2023; Kirillov et al., 2023).

2.5 Multimodal fusion and text modality

Multimodal fusion traditionally combines 3D geometry (LiDAR, stereo) with rich 2D semantics. With the emergence of LLMs and VLMs, text has become a scalable, low-cost semantic modality for describing road scenes (Li and Tang, 2024; Liu et al., 2025).

Attention-based fusion (Vaswani et al., 2017; Cao et al., 2021; Xu et al., 2020; Cao et al., 2024c; Wang et al., 2026) captures long-range cross-modal dependencies but can be computationally heavy. Learnable fusion strategies such as Text-IF (Yi et al., 2024) and VLScene (Wang et al., 2025) use trainable coefficients to balance visual and linguistic cues.

3 Methodology

ESSC-RM refines the coarse voxel predictions produced by any SSC backbone. We now present the problem formulation and describe the architecture components of our refinement module, including the 3D U-Net backbone, the progressive neighborhood attention module (PNAM), and the vision–language guidance module (VLGM), as illustrated in Figure 1.

Figure 1

3.1 Problem statement

Given an RGB image It and a LiDAR point cloud Pt at time t, 3D SSC aims to predict a dense semantic voxel grid Yt ∈ {c0, c1, …, cC}^(H×W×Z) defined in the vehicle coordinate system, where each voxel is either empty (c0) or belongs to one of the C semantic classes {c1, …, cC}, and H, W, Z denote the voxel grid dimensions. A standard SSC backbone fθ learns Ŷt = fθ(It, Pt), but the coarse prediction Ŷt often exhibits broken surfaces, incomplete structures, and semantic confusions. We therefore introduce a refinement module gϕ that treats Ŷt as a noisy discrete volume and outputs a refined prediction Ŷ′t = gϕ(Ŷt, aux), where aux denotes additional cues (multi-scale voxel features and text semantics) extracted within the refinement module. The objective is to bring Ŷ′t closer to the ground truth Yt in both geometry and semantics while remaining compatible with heterogeneous SSC backbones.

3.2 Overall architecture

As shown in Figure 1, ESSC-RM has two decoupled parts:

  • SSC backbone: maps (It, Pt) to a coarse voxel grid Ŷt.

  • Refinement module: operates purely in voxel space, refining Ŷt into Ŷ′t using multi-scale U-Net features, neighborhood attention, and vision–language guidance.

This separation allows us to plug in backbones of different quality while focusing the design of gϕ on correcting geometric and semantic errors at the voxel level using additional structural and semantic cues.
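This decoupled interface can be sketched as follows; the function names and the toy smoothing rule are purely illustrative stand-ins, not the paper's released implementation, and the grid is toy-sized.

```python
import numpy as np

H, W, Z, C = 8, 8, 4, 19  # tiny grid; SemanticKITTI uses 256x256x32, C = 19

def ssc_backbone(rgb, lidar):
    """Stand-in for any frozen SSC backbone f_theta: returns a coarse
    discrete voxel grid with labels in {0..C} (0 = free space)."""
    rng = np.random.default_rng(0)
    return rng.integers(0, C + 1, size=(H, W, Z))

def refinement_module(coarse, aux=None):
    """Stand-in for g_phi: consumes only the coarse voxel volume (plus
    optional aux cues) and returns a refined volume of the same shape,
    without touching backbone internals."""
    refined = coarse.copy()
    # placeholder "refinement": agree with neighbors along one axis
    refined[1:-1] = np.where(coarse[:-2] == coarse[2:], coarse[:-2], coarse[1:-1])
    return refined

coarse = ssc_backbone(rgb=None, lidar=None)
refined = refinement_module(coarse)
assert refined.shape == coarse.shape
```

Because the refinement consumes only the discrete volume, the backbone can be swapped freely, which is exactly what enables the plug-and-play deployment mode.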

3.3 SSC backbone

ESSC-RM is model-agnostic and can refine the output of any SSC backbone. To demonstrate generality, we instantiate two monocular SSC models with different coarse prediction qualities: CGFormer (Tang et al., 2023) and MonoScene (Cao and de Charette, 2021). CGFormer represents a strong backbone with accurate voxel lifting, while MonoScene produces notably noisier volumes, providing a more challenging setting for refinement. All architectural details follow the original papers, as our refinement module does not modify or depend on the internal design of the backbone.

3.4 3D U-Net refinement backbone

The refinement module receives the coarse discrete volume and must (i) map it into a continuous feature space; (ii) aggregate multi-scale contextual information; and (iii) reconstruct a refined voxel grid . To accomplish these steps, we adopt a three-dimensional U-shaped neural network (3D U-Net) backbone (Çiçek et al., 2016; Ronneberger et al., 2015), whose overall encoder–bottleneck–decoder structure is illustrated in Figure 2. The specific computational blocks that constitute the encoder and decoder, namely the feature encoding block (FEB) and the feature aggregation block (FAB), are detailed in Figure 3.

Figure 2

Figure 3

3.4.1 Voxel embedding and encoder–decoder

We first embed the discrete labels of Ŷt into a continuous feature map:

Femb = Embed(Ŷt) ∈ ℝ^(H×W×Z×G),

where Embed(·) is a learnable class-embedding layer and G is the feature dimension. A 1 × 1 × 1 3D convolution then produces the input feature:

F1:1 = Conv1×1×1(Femb).

The encoder uses four stacked feature encoding blocks (FEBs; Figure 3) to extract multi-scale features F1:s at progressively lower resolutions. For a voxel grid of size H×W×Z and feature dimension G, the encoder outputs

F1:s ∈ ℝ^((H/s)×(W/s)×(Z/s)×Gs), s ∈ {2, 4, 8, 16},

where Gs is the channel width at scale 1:s.

A bottleneck processes F1:16, and the decoder then upsamples via four stacked feature aggregation blocks (FABs), followed by a shared prediction head that produces voxel logits at multiple scales:

ŷ1:s = Head(D1:s) ∈ ℝ^((H/s)×(W/s)×(Z/s)×(C+1)), s ∈ {1, 2, 4, 8},

where D1:s is the decoder feature at scale 1:s and C is the number of semantic classes. At inference time, we use

Ŷ′t = argmax_c ŷ1:1

as the final refined semantic voxel prediction.
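Since a 1 × 1 × 1 3D convolution acts as a per-voxel linear map, the embedding-and-head path can be sketched with plain array operations; random weights stand in for learned parameters and all shapes are toy-sized.

```python
import numpy as np

rng = np.random.default_rng(0)
C, G = 19, 16          # semantic classes, embedding width (illustrative)
H, W, Z = 16, 16, 8    # tiny voxel grid

E  = rng.normal(size=(C + 1, G))     # learnable label-embedding table
W1 = rng.normal(size=(G, G))         # 1x1x1 conv == per-voxel linear map
Wh = rng.normal(size=(G, C + 1))     # shared 1x1x1 prediction head

coarse = rng.integers(0, C + 1, size=(H, W, Z))  # coarse labels from backbone
f = E[coarse]                    # embed discrete labels: (H, W, Z, G)
f = f @ W1                       # input feature F^{1:1}
logits = f @ Wh                  # voxel logits: (H, W, Z, C+1)
refined = logits.argmax(axis=-1) # final refined semantic prediction
assert refined.shape == coarse.shape
```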

3.4.2 Feature encoding block (FEB)

Each FEB refines features at a given scale and produces both a skip feature and a downsampled feature. As in Figure 3, an FEB applies two 3D convolutions with InstanceNorm3D (Ulyanov et al., 2016) and LeakyReLU (Xu et al., 2015), followed by a residual skip and a stride-2 convolution:

Fskip = F + ϕ2(ϕ1(F)), Fdown = ψ↓2(Fskip),

where ϕ1, ϕ2 denote the convolution–InstanceNorm3D–LeakyReLU blocks and ψ↓2 is the stride-2 convolution.
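A minimal PyTorch sketch of one FEB follows; the 3 × 3 × 3 kernel size and the doubling channel width are assumptions of this sketch, not details confirmed by Figure 3.

```python
import torch
import torch.nn as nn

class FEB(nn.Module):
    """Sketch of a feature encoding block: two 3x3x3 convs with
    InstanceNorm3d + LeakyReLU, a residual skip, and a stride-2
    downsampling conv (widths are illustrative)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(c_in, c_in, 3, padding=1),
            nn.InstanceNorm3d(c_in), nn.LeakyReLU(0.01),
            nn.Conv3d(c_in, c_in, 3, padding=1),
            nn.InstanceNorm3d(c_in), nn.LeakyReLU(0.01),
        )
        self.down = nn.Conv3d(c_in, c_out, 3, stride=2, padding=1)

    def forward(self, x):
        skip = x + self.body(x)        # residual skip feature
        return skip, self.down(skip)   # (skip, downsampled)

x = torch.randn(1, 16, 16, 16, 8)      # (batch, G, H, W, Z)
skip, down = FEB(16, 32)(x)
assert skip.shape == (1, 16, 16, 16, 8)
assert down.shape == (1, 32, 8, 8, 4)  # spatial resolution halved
```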

3.4.3 Feature aggregation block (FAB) and multi-scale supervision

Each FAB upsamples low-resolution features and fuses them with encoder skip features:

F1:s = ϕ(Concat(Up(F1:2s), F1:s_skip)),

where Up denotes the upsampling operator and ϕ a convolutional fusion block.

Following PaSCo (Cao A.-Q. et al., 2024), each decoder feature map D1:s is mapped to logits by a 1 × 1 × 1 3D convolution:

ŷ1:s = Conv1×1×1(D1:s),

and all scales are supervised during training. This encourages coarse-to-fine refinement and stabilizes optimization.
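The multi-scale supervision scheme can be sketched as follows; the stride-based label downsampling and the unweighted cross-entropy are simplifying assumptions of this sketch, used only to show how per-scale losses are aggregated.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W, Z = 3, 8, 8, 4
gt = rng.integers(0, C + 1, size=(H, W, Z))  # full-resolution labels

def downsample_labels(y, s):
    """Stride-s subsampling of the label volume (simplest choice)."""
    return y[::s, ::s, ::s]

def ce(logits, y):
    """Mean cross-entropy of softmaxed logits against integer labels."""
    p = np.exp(logits - logits.max(-1, keepdims=True))
    p /= p.sum(-1, keepdims=True)
    return -np.mean(np.log(np.take_along_axis(p, y[..., None], -1)))

total = 0.0
for s in (1, 2, 4, 8):                       # every decoder scale supervised
    y_s = downsample_labels(gt, s)
    logits_s = rng.normal(size=y_s.shape + (C + 1,))  # per-scale head output
    total += ce(logits_s, y_s)
assert np.isfinite(total) and total > 0
```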

3.5 Progressive neighborhood attention module (PNAM)

Purely convolutional decoders aggregate context only within fixed local windows, limiting their ability to capture long-range and structure-aware voxel relations. To address this, we integrate the Progressive Neighborhood Attention Module (PNAM) (Liu T. et al., 2023) into the decoder of our refinement network.

As illustrated in Figure 4, the FABs at scales 1:2, 1:4, and 1:8 are replaced with PNA-based FABs, while the finest-scale FAB remains convolutional for efficiency. PNAM enhances multi-scale voxel reasoning by combining global self-attention (Vaswani et al., 2017) with localized neighborhood aggregation (Hassani and Shi, 2022; Hassani et al., 2023, 2024).

Figure 4

3.5.1 PNA-based feature aggregation block

As illustrated in Figure 5, a PNA-based FAB consists of two branches: (1) a self-attention (SA) branch operating on Fup; and (2) a neighborhood cross-attention (NCA) branch operating between Fskip and Fup.

Figure 5

Given the upsampled feature F1:ℓ_up and the corresponding skip feature F1:ℓ_skip, the two attention responses are computed as:

FSA = SA(F1:ℓ_up), FNCA = NCA(F1:ℓ_skip, F1:ℓ_up),

for ℓ ∈ {2, 4, 8}. The outputs are fused and refined via normalization and a lightweight feed-forward network (FFN):

F1:ℓ_out = FFN(Norm(FSA + FNCA)).

3.5.2 Self-attention (SA)

SA refines the upsampled voxel features by capturing long-range dependencies. Following the standard multi-head attention formulation (Vaswani et al., 2017), we use 1 × 1 × 1 and depthwise 3 × 3 × 3 convolutions to compute Q, K, V, followed by attention and a residual FFN. This propagates global geometric–semantic cues, compensating for missing structures in the coarse prediction.

3.5.3 Neighborhood cross-attention (NCA)

NCA enforces local geometric consistency. Inspired by the NATTEN family of neighborhood attention operators (Hassani and Shi, 2022; Hassani et al., 2023, 2024), it restricts attention to a 3D neighborhood window, enabling each voxel to aggregate high-confidence structural cues from spatially adjacent voxels. This makes PNAM particularly effective at restoring fine structures such as object boundaries and thin geometry.

Overall, PNAM strengthens the refinement network's ability to jointly model global context and local voxel continuity across scales.
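To make neighborhood-restricted attention concrete, here is a toy single-head version in NumPy: each voxel's query attends only over a k × k × k window of the skip features. Learned Q/K/V projections and the multi-head structure of the real operator are omitted, and the window size k = 3 is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, Z, d, k = 6, 6, 4, 8, 3
q_feat  = rng.normal(size=(H, W, Z, d))  # queries from upsampled features
kv_feat = rng.normal(size=(H, W, Z, d))  # keys/values from skip features

r = k // 2
out = np.zeros_like(q_feat)
for x in range(H):
    for y in range(W):
        for z in range(Z):
            # clip the k x k x k neighborhood at the volume borders
            win = kv_feat[max(x - r, 0):x + r + 1,
                          max(y - r, 0):y + r + 1,
                          max(z - r, 0):z + r + 1].reshape(-1, d)
            scores = win @ q_feat[x, y, z] / np.sqrt(d)
            attn = np.exp(scores - scores.max())
            attn /= attn.sum()
            out[x, y, z] = attn @ win    # neighborhood-weighted value sum
assert out.shape == q_feat.shape
```

Restricting the attention footprint this way keeps the cost linear in the number of voxels, which is why the finest scale can still use a purely convolutional FAB without losing much context.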

3.6 Vision–language guidance module (VLGM)

Even with stronger voxel–voxel reasoning, SSC remains ambiguous in occluded or sparsely observed regions. To inject high-level scene priors—such as road layout, object co-occurrence patterns, or typical urban structures—we introduce the Vision–Language Guidance Module (VLGM). As illustrated in Figure 6, the module leverages a frozen vision–language model (VLM) to produce a free-form scene description, whose textual semantics are encoded and fused into the voxel refinement pipeline.

Figure 6

3.6.1 Text acquisition and semantic encoding

Given an input image I and prompt P, a frozen VLM such as LLaVA (Liu H. et al., 2023; Liu et al., 2024) or InstructBLIP (Dai et al., 2023) generates a scene description

T = VLM(I, P),

which is precomputed offline to avoid training overhead.

To capture different levels of textual semantics, we employ two complementary encoders. (1) JinaCLIP (Xiao et al., 2024a; Koukounas et al., 2024) extracts a global embedding

tglobal = JinaCLIP(T),

providing holistic scene cues; and (2) a Q-Former (Li J. et al., 2023) produces token-level embeddings

Ttok = Q-Former(T),

which enable fine-grained cross-modal alignment. This design follows instruction-style prompting practices (Dai et al., 2023; Liu et al., 2025).

3.6.2 Text–voxel fusion modules

To integrate text cues into voxel refinement, we build a Text U-Net by inserting lightweight fusion blocks after each FEB and FAB. Each fusion block consists of two components:

3.6.2.1 Semantic interaction guidance module (SIGM)

Following Text-IF (Yi et al., 2024), global JinaCLIP features are mapped to affine parameters (γm, βm) via MLPs. Voxel features Fm are modulated as

F̃m = γm ⊙ Fm + βm,

injecting scene-level priors that guide early geometric reasoning.
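A small sketch of this FiLM-style modulation: a hypothetical two-layer MLP (its width is an assumption here) maps a text embedding to per-channel scale and shift, which modulate the voxel features.

```python
import numpy as np

rng = np.random.default_rng(0)
d_text, G = 32, 16                              # illustrative dimensions
t = rng.normal(size=(d_text,))                  # global text embedding

# hypothetical two-layer MLP producing (gamma, beta)
W1 = rng.normal(size=(d_text, 64))
W2 = rng.normal(size=(64, 2 * G))
h = np.maximum(t @ W1, 0.0)                     # ReLU hidden layer
gamma, beta = np.split(h @ W2, 2)               # per-channel affine params

F = rng.normal(size=(8, 8, 4, G))               # voxel features at one scale
F_mod = gamma * F + beta                        # channel-wise modulation
assert F_mod.shape == F.shape
```

Because (γm, βm) depend only on the text, the same scene-level prior is broadcast to every voxel at that scale, which is what makes this a cheap global guidance signal.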

3.6.2.2 Dual cross-attention module (DCAM)

Inspired by BLIP-2 (Li J. et al., 2023), SAM (Kirillov et al., 2023), and MultiRAtt-RSSC (Cai et al., 2024), DCAM alternates self- and cross-attention between Q-Former tokens and voxel features. Text self-attention yields refined tokens T̃tok, followed by text-to-voxel cross-attention producing text-aware voxel cues F̂txt, and voxel-to-text cross-attention generating voxel-aware tokens T̂tok. A residual update produces

F̃m = Fm + F̂txt.

SIGM injects global scene priors (e.g., “urban street with parked vehicles”), while DCAM provides fine-grained token-level alignment. As visualized in Figure 6, the two components operate synergistically to improve geometric completeness and semantic coherence, especially in occluded and ambiguous regions.

3.7 Loss function

ESSC-RM performs coarse-to-fine refinement across multiple spatial scales. We therefore supervise both voxel-wise predictions and scene-level consistency using two complementary terms: a class-weighted cross-entropy loss and the scene–class affinity loss (SCAL) (Cao and de Charette, 2021; Tang et al., 2023). This combination stabilizes multi-scale refinement while encouraging globally coherent semantics.

3.7.1 Cross-entropy loss

At each refinement scale l, voxel predictions are supervised using a class-weighted cross-entropy:

Lce^l = −(1/Nl) Σi Σc wc ⟦pi = c⟧ log ŷ′i,c,

where ŷ′ denotes refinement logits (after softmax), pi is the ground-truth class of voxel i, Nl the number of labeled voxels at scale l, and wc compensates for class imbalance (Roldão et al., 2020). Aggregating all scales yields:

Lce = Σ_{l ∈ {1, 2, 4, 8}} Lce^l.

3.7.2 Scene–class affinity loss (SCAL)

To promote globally consistent refinement, particularly under sparsity or ambiguous projections, we adopt SCAL (Cao and de Charette, 2021), which optimizes class-wise precision (Pc), recall (Rc), and specificity (Sc). Let pi denote the ground-truth class for voxel i, and x̂i,c the predicted probability for class c. Using Iverson brackets ⟦·⟧, the metrics are:

Pc = Σi x̂i,c ⟦pi = c⟧ / Σi x̂i,c,  Rc = Σi x̂i,c ⟦pi = c⟧ / Σi ⟦pi = c⟧,  Sc = Σi (1 − x̂i,c) ⟦pi ≠ c⟧ / Σi ⟦pi ≠ c⟧.

The per-scale affinity loss is:

Lscal^l = −(1/C) Σc (log Pc + log Rc + log Sc).

SCAL is applied to both semantic and geometric predictions across all refinement scales:

Lscal = Σl (Lscal^sem,l + Lscal^geo,l).
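SCAL can be computed directly from soft predictions; the sketch below follows the MonoScene definition, with a small ε added for numerical stability (an implementation detail assumed here) and toy shapes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 200, 4                        # flattened voxels, semantic classes
p = rng.integers(0, C, size=N)       # ground-truth class p_i per voxel
logits = rng.normal(size=(N, C))
x = np.exp(logits)
x /= x.sum(-1, keepdims=True)        # predicted probabilities x_{i,c}

eps, loss = 1e-8, 0.0
for c in range(C):
    gt_c = (p == c).astype(float)    # Iverson bracket [[p_i = c]]
    prec = (x[:, c] * gt_c).sum() / (x[:, c].sum() + eps)
    rec  = (x[:, c] * gt_c).sum() / (gt_c.sum() + eps)
    spec = ((1 - x[:, c]) * (1 - gt_c)).sum() / ((1 - gt_c).sum() + eps)
    # each affinity term is pushed toward 1 via -log
    loss += -(np.log(prec + eps) + np.log(rec + eps) + np.log(spec + eps))
loss /= C
assert np.isfinite(loss) and loss > 0
```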

3.7.3 Overall objective

The total training loss is:

L = λce Lce + λsem Lscal^sem + λgeo Lscal^geo,

with all coefficients set to 1 in our experiments, providing balanced supervision over voxel-wise accuracy, geometric completion, and scene-level semantic consistency.

4 Experiment

This section evaluates ESSC-RM on the SemanticKITTI benchmark (Behley et al., 2019, 2021). We first describe the experimental setup (datasets, metrics, and implementation), then report quantitative and qualitative results on strong and weak semantic scene completion baselines (CGFormer and MonoScene). Comprehensive ablation studies that analyze the refinement framework, the neighborhood-attention-based aggregation module, and the vision–language guidance module are provided in the Supplementary material.

4.1 Experimental setup

4.1.1 Datasets

We adopt the SemanticKITTI semantic scene completion benchmark (Behley et al., 2019, 2021), which extends the KITTI odometry dataset (Geiger et al., 2012) with dense semantic labels for each LiDAR scan. The dataset contains 22 outdoor sequences; following the official split, sequences 00–07 and 09–10 are used for training, 08 for validation, and 11–21 as a hidden test set.

For semantic scene completion, a 3D volume around the ego-vehicle is considered: 51.2 m in front, 25.6 m to each side (total width 51.2 m), and 6.4 m in height (Behley et al., 2019). This volume is voxelized into a 256 × 256 × 32 grid with a voxel size of 0.2 m (i.e., (0.2 m)³ per voxel). Each voxel is assigned one of 20 classes (19 semantic classes and 1 free-space class), obtained by voxelizing aggregated, registered semantic point clouds (Li Y. et al., 2023).

We conduct all experiments on SemanticKITTI, following its established voxelization protocol and official evaluation scripts, providing a standardized testbed for semantic scene completion.

4.1.2 Evaluation metrics

We follow standard practice (Cao and de Charette, 2021; Li Y. et al., 2023; Tang et al., 2023) and report intersection-over-union (IoU) for 3D scene completion (SC) and mean intersection-over-union (mIoU) for semantic scene completion (SSC).

For SC, evaluation is binary (occupied vs. free) and uses IoU over the occupancy grid:

IoU = TP / (TP + FP + FN),

where TP, FP, and FN denote true positives, false positives, and false negatives on the occupancy grid.

For SSC, we evaluate per-class IoU over C = 19 semantic classes and report mean IoU:

mIoU = (1/C) Σ_{c=1}^{C} TPc / (TPc + FPc + FNc),

where TPc, FPc, and FNc are computed for class c, and evaluation is carried out in known space as in Roldão et al. (2021). IoU primarily reflects geometric completion quality, whereas mIoU captures voxel-wise semantic accuracy; both are reported to assess overall scene understanding.
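The two metrics can be computed as follows (label 0 denotes free space; the mask restricting evaluation to known space is omitted for brevity):

```python
import numpy as np

def sc_iou(pred, gt):
    """Binary occupancy IoU: any non-zero label counts as occupied."""
    occ_p, occ_g = pred > 0, gt > 0
    tp = (occ_p & occ_g).sum()
    fp = (occ_p & ~occ_g).sum()
    fn = (~occ_p & occ_g).sum()
    return tp / (tp + fp + fn)

def ssc_miou(pred, gt, num_classes):
    """Mean per-class IoU over semantic classes 1..num_classes."""
    ious = []
    for c in range(1, num_classes + 1):
        tp = ((pred == c) & (gt == c)).sum()
        fp = ((pred == c) & (gt != c)).sum()
        fn = ((pred != c) & (gt == c)).sum()
        denom = tp + fp + fn
        ious.append(tp / denom if denom else 0.0)
    return float(np.mean(ious))

pred = np.array([[0, 1], [2, 2]])
gt   = np.array([[0, 1], [2, 1]])
assert sc_iou(pred, gt) == 1.0                 # occupancy matches exactly
assert abs(ssc_miou(pred, gt, 2) - 0.5) < 1e-9 # one voxel mislabeled
```

The tiny example also illustrates the trade-off discussed later: occupancy can be perfect (IoU = 1.0) while semantic labels still disagree, and conversely a semantic fix at a boundary voxel can flip an occupied/free decision.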

4.1.3 Implementation details

We consider two training paradigms for ESSC-RM: (1) joint training, where the semantic scene completion backbone is switched to inference mode while the refinement module is trained on-the-fly from its predictions; and (2) separate training, where semantic scene completion predictions are pre-computed and stored, and the refinement module is trained purely as a plug-and-play post-processor without modifying the original semantic scene completion architecture.

Unless otherwise stated, experiments are conducted on two NVIDIA RTX A5000 GPUs, with 10 epochs and a batch size of 1 per GPU. We use AdamW (Loshchilov and Hutter, 2017) with β1 = 0.9, β2 = 0.99, and a peak learning rate of 5 × 10−5. A cosine schedule (Smith and Topin, 2017) with 5% warm-up is applied. The refinement module follows a 3D U-Net (Çiçek et al., 2016) backbone; encoder and decoder feature encoding/aggregation blocks (FEB/FAB) are adapted from SemCity (Lee et al., 2024), and neighborhood-attention-based variants from NATTEN (Hassani and Shi, 2022; Hassani et al., 2023, 2024) and PNA (Liu T. et al., 2023). The vision–language guidance module (VLGM) uses frozen vision–language models [InstructBLIP (Li J. et al., 2023; Dai et al., 2023) and LLaVA (Liu H. et al., 2023; Liu et al., 2024)] together with text–voxel fusion modules inspired by Text-IF (Yi et al., 2024) and MultiAtt-RSSC (Cai et al., 2024). Following PaSCo (Cao A.-Q. et al., 2024) and HybridOcc (Zhao X. et al., 2024), we apply coarse-to-fine multi-level supervision in the decoder. Training losses are described in Section 3.7.

4.2 Evaluation results

We evaluate ESSC-RM as a refinement module on strong and weak SSC baselines and analyze its efficiency and qualitative behavior.

4.2.1 Quantitative results

ESSC-RM is designed to prioritize voxel-wise semantic correctness (mIoU) over boundary-sensitive binary occupancy smoothness (IoU); therefore, minor IoU drops may accompany consistent mIoU gains.

4.2.1.1 3D SSC performance

Table 1 reports SSC performance on SemanticKITTI, including representative image-based SSC baselines and our ESSC-RM variants. Among the listed baselines without ESSC-RM (upper block), DepthSSC (Yao et al., 2024) achieves the best SC-IoU (45.84%), while Symphonize (Jiang et al., 2024) attains the highest mIoU (14.89%). In addition to these method-level comparisons, we evaluate ESSC-RM as a plug-and-play refinement module on top of two representative SSC backbones: CGFormer (Tang et al., 2023) as a strong baseline (45.99% IoU, 16.87% mIoU) and MonoScene (Cao and de Charette, 2021) as a widely used weaker baseline (36.86% IoU, 11.08% mIoU). Due to training and storage overhead of voxel-level refinement, we instantiate ESSC-RM on these two backbones to demonstrate generality across different performance regimes; extending the plug-in evaluation to additional backbones is left for future work (see Section 5).

Table 1

| Methods | IoU | mIoU | Car (3.92%) | Bicycle (0.03%) | Motorcycle (0.03%) | Truck (0.16%) | Other-vehicle (0.20%) | Person (0.07%) | Bicyclist (0.07%) | Motorcyclist (0.05%) | Road (15.30%) | Parking (1.12%) | Sidewalk (11.13%) | Other-ground (0.56%) | Building (14.10%) | Fence (3.90%) | Vegetation (39.3%) | Trunk (0.51%) | Terrain (9.17%) | Pole (0.29%) | Traffic-sign (0.08%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Baselines (without ESSC-RM) | | | | | | | | | | | | | | | | | | | | | |
| TPVFormer (Huang et al., 2023) | 35.61 | 11.36 | 23.81 | 0.36 | 0.05 | 8.08 | 4.35 | 0.51 | 0.89 | 0.00 | 56.50 | 20.60 | 25.87 | 0.85 | 13.88 | 5.94 | 16.92 | 2.26 | 30.38 | 3.14 | 1.52 |
| OccFormer (Zhang et al., 2023) | 36.50 | 13.46 | 25.09 | 0.81 | 1.19 | 25.53 | 8.52 | 2.78 | 2.82 | 0.00 | 58.85 | 19.61 | 26.88 | 0.31 | 14.40 | 5.61 | 19.63 | 3.93 | 32.62 | 4.26 | 2.86 |
| IAMSSC (Xiao et al., 2024b) | 44.29 | 12.45 | 26.26 | 0.60 | 0.15 | 8.74 | 5.06 | 1.32 | 3.46 | 0.01 | 54.55 | 16.02 | 25.85 | 0.70 | 17.38 | 6.86 | 24.63 | 4.95 | 30.13 | 6.35 | 3.56 |
| VoxFormer-S (Li Y. et al., 2023) | 44.02 | 12.35 | 25.79 | 0.59 | 0.51 | 5.63 | 3.77 | 1.78 | 3.32 | 0.00 | 54.76 | 15.50 | 26.35 | 0.70 | 17.65 | 7.64 | 24.39 | 5.08 | 29.96 | 7.11 | 4.18 |
| DepthSSC (Yao et al., 2024) | 45.84 | 13.28 | 25.94 | 0.35 | 1.16 | 6.02 | 7.50 | 2.58 | 6.32 | 0.00 | 55.38 | 18.76 | 27.04 | 0.92 | 19.23 | 8.46 | 26.37 | 4.52 | 30.19 | 7.42 | 4.09 |
| Symphonize (Jiang et al., 2024) | 41.92 | 14.89 | 28.68 | 2.54 | 2.82 | 20.44 | 13.89 | 3.52 | 2.24 | 0.00 | 56.37 | 15.28 | 27.58 | 0.95 | 21.64 | 8.40 | 25.72 | 6.60 | 30.87 | 9.57 | 5.76 |
| HASSC-S (Wang et al., 2024) | 44.82 | 13.48 | 27.23 | 0.92 | 0.86 | 9.91 | 5.61 | 2.80 | 4.71 | 0.00 | 57.05 | 15.90 | 28.25 | 1.04 | 19.05 | 6.58 | 25.48 | 6.15 | 32.94 | 7.68 | 4.05 |
| H2GFormer-S (Wang and Tong, 2024) | 44.57 | 13.73 | 28.21 | 0.50 | 0.47 | 10.00 | 7.39 | 1.54 | 2.88 | 0.00 | 56.08 | 17.83 | 29.12 | 0.45 | 19.74 | 7.24 | 26.25 | 6.80 | 34.42 | 7.88 | 4.68 |
| MonoScene and ESSC-RM variants | | | | | | | | | | | | | | | | | | | | | |
| MonoScene (Cao and de Charette, 2021) | 36.86 | 11.08 | 23.26 | 0.61 | 0.45 | 6.98 | 1.48 | 1.86 | 1.20 | 0.00 | 56.52 | 14.27 | 26.72 | 0.46 | 14.09 | 5.84 | 17.89 | 2.81 | 29.64 | 4.14 | 2.25 |
| MonoScene + 3D U-Net | 35.70 | 11.47 | 23.46 | 0.41 | 0.87 | 10.95 | 3.69 | 2.98 | 1.64 | 0.00 | 56.24 | 14.95 | 26.63 | 1.42 | 13.11 | 6.19 | 16.75 | 2.73 | 29.57 | 3.77 | 2.62 |
| MonoScene + VLGM | 35.62 | 11.49 | 22.76 | 0.44 | 0.71 | 12.45 | 3.12 | 3.04 | 1.64 | 0.00 | 56.48 | 14.35 | 26.64 | 1.42 | 13.55 | 6.28 | 16.44 | 2.97 | 29.50 | 3.85 | 2.65 |
| MonoScene + PNAM | 36.44 | 11.51 | 23.11 | 0.40 | 0.73 | 11.38 | 3.59 | 2.95 | 1.69 | 0.00 | 56.27 | 14.65 | 26.71 | 1.45 | 13.48 | 6.20 | 17.08 | 2.96 | 29.45 | 3.84 | 2.69 |
| CGFormer and ESSC-RM variants | | | | | | | | | | | | | | | | | | | | | |
| CGFormer (Tang et al., 2023) | 45.99 | 16.87 | 34.32 | 4.61 | 2.71 | 19.44 | 7.67 | 2.38 | 4.08 | 0.00 | 65.51 | 20.82 | 32.31 | 0.16 | 23.52 | 9.20 | 26.93 | 8.83 | 39.54 | 10.67 | 7.84 |
| CGFormer + 3D U-Net | 43.53 | 17.17 | 33.99 | 5.28 | 3.11 | 22.39 | 8.22 | 2.65 | 4.05 | 0.00 | 65.29 | 20.26 | 32.14 | 0.13 | 23.11 | 8.93 | 26.84 | 11.17 | 38.99 | 11.93 | 7.84 |
| CGFormer + VLGM | 43.20 | 17.21 | 34.33 | 5.24 | 3.01 | 22.33 | 7.81 | 2.70 | 4.12 | 0.00 | 65.52 | 20.79 | 32.31 | 0.13 | 23.27 | 8.95 | 26.69 | 10.73 | 39.29 | 11.93 | 7.82 |
| CGFormer + PNAM | 44.33 | 17.27 | 34.11 | 5.69 | 2.94 | 23.71 | 8.36 | 2.64 | 4.37 | 0.00 | 65.27 | 20.87 | 31.90 | 0.16 | 22.70 | 9.08 | 26.63 | 11.42 | 38.91 | 11.78 | 7.66 |

Quantitative results on the SemanticKITTI validation set.

The upper block lists baseline image-based SSC methods without ESSC-RM. The middle and lower blocks show MonoScene- and CGFormer-based ESSC-RM variants, respectively. Within each block, the best and second-best results are shown in bold and underlined, respectively.

To assess the generality of ESSC-RM, we plug it on top of both CGFormer and MonoScene, progressively adding (i) a plain 3D U-Net refinement head; (ii) the proposed neighborhood-attention-based refinement module (PNAM); and (iii) the vision–language guidance module (VLGM). The MonoScene and CGFormer blocks in Table 1 summarize these ablation results.

4.2.1.2 ESSC-RM on CGFormer

As shown in the CGFormer block of Table 1, adding a 3D U-Net refinement head improves mIoU from 16.87 to 17.17%. Equipping the refinement with VLGM further increases mIoU to 17.21%, while PNAM achieves the best mIoU of 17.27% with only a modest IoU drop. The gains are more apparent on small and medium-scale categories (e.g., truck, bicycle, trunk, pole), suggesting that coarse-to-fine decoding and neighborhood-aware aggregation help correct local ambiguities and recover thin structures that are challenging for the backbone alone.

Despite consistent improvements, the absolute mIoU gain on CGFormer remains moderate (from 16.87 to 17.27%, +0.40, i.e., ~2% relative). This is mainly because ESSC-RM performs refinement in the voxel-prediction space by design: it takes the discrete semantic occupancy predicted by the backbone, embeds it into a continuous feature map, and refines it via a 3D U-Net style encoder–decoder (with PNAM/VLGM as optional enhancements). Consequently, the global occupancy layout and object extents remain largely inherited from the backbone prediction, while ESSC-RM mainly improves local semantic consistency and boundary delineation (e.g., thin objects and class-confusing regions), which naturally limits the headroom when the backbone output is already geometrically plausible.

4.2.1.3 ESSC-RM on MonoScene

The MonoScene block of Table 1 shows that ESSC-RM also improves the weaker MonoScene baseline. VLGM increases mIoU from 11.08 to 11.49%, and PNAM further pushes it to 11.51% with comparable IoU. These consistent gains across CGFormer and MonoScene support the plug-and-play nature of ESSC-RM, indicating that the refinement is not tied to a specific SSC backbone.

On MonoScene, ESSC-RM improves mIoU from 11.08 to 11.51% (+0.43, ~4% relative). Since the refinement module does not introduce additional sensor-level geometric observations beyond the backbone output, its improvement is mainly achieved by enforcing multi-scale voxel consistency and reducing local misclassifications. When large missing structures are completely absent in the coarse prediction Ŷt, post-hoc voxel refinement cannot fully recover them, whereas it remains effective at sharpening boundaries and improving local semantic coherence.

4.2.1.4 SC-IoU trade-off

Although refinement improves mIoU, SC-IoU (occupied vs. free) can slightly decrease in some cases; this is an expected design trade-off rather than a flaw. For example, on CGFormer, ESSC-RM increases mIoU by +0.40 (16.87% → 17.27%) while SC-IoU decreases by 1.66 (45.99% → 44.33%). This behavior is expected because SC-IoU is a binary occupancy metric that is particularly sensitive to boundary voxels: semantic refinement around thin structures and object borders may flip a small fraction of occupied/free decisions, increasing FP/FN near boundaries even when per-class semantics improve. As SC-IoU aggregates over the entire occupancy grid, such boundary perturbations can lead to a measurable IoU change, reflecting a mild trade-off between semantic correction (mIoU) and boundary-sensitive binary occupancy under discrete voxel predictions.

4.2.1.5 Refinement module efficiency

We further analyze the computational overhead of ESSC-RM on top of CGFormer (Table 2). CGFormer itself has 122.42 M parameters, requires about 19.3 GB memory during training and 6.55 GB at inference, and runs at approximately 205 ms per frame. The 3D U-Net refinement head adds only 13.36 M parameters and can be trained jointly with CGFormer on a 24 GB GPU when the backbone is set to inference mode. VLGM and PNAM increase parameter counts and inference time more noticeably, but remain practical for offline refinement or two-stage pipelines.

Table 2

| Model | IoU | mIoU | Params (M) | Train memory (MB) | Infer. memory (MB) | Infer. time (ms) |
|---|---|---|---|---|---|---|
| CGFormer | 45.99 | 16.87 | 122.42 | 19,330 | 6,550 | 205 |
| +3D U-Net | 43.53 | 17.17 | 13.36 | 12,726 | 4,904 | 215 |
| +VLGM | 43.20 | 17.21 | 43.96 | 18,942 | 5,382 | 340 |
| +PNAM | 44.33 | 17.27 | 9.59 | 20,664 | 5,042 | 265 |

Ablation study on the efficiency of the refinement module with CGFormer as backbone.

Bold values indicate the best performance (highest is best) within each comparison group.

4.2.2 Qualitative results

Figures 7, 8 present qualitative results of ESSC-RM applied respectively to CGFormer and MonoScene on the SemanticKITTI validation set. Each row displays the input RGB image, ground truth, the prediction of the baseline model, and the refined outputs after integrating the 3D U-Net, PNAM, and VLGM modules.

Figure 7

Figure 8

Across both baselines, the refinement module consistently reduces holes and misclassifications in occluded or boundary regions, restores missing vegetation and structures at scene edges, and produces smoother and more coherent semantic layouts. On large-scale structures such as roads and buildings, PNAM and VLGM further improve geometric regularity, yielding cleaner contours and more stable surface predictions. For small-scale objects like traffic signs and poles, text-derived priors in VLGM highlight distinctive semantic regions, while PNAM enhances local aggregation and sharpens object boundaries.

These results demonstrate that ESSC-RM provides robust and generalizable refinement across different SSC backbones.

5 Conclusion

In summary, ESSC-RM improves semantic scene completion by refining voxel predictions with PNAM and VLGM, but several challenges remain. First, the refinement module still relies on 3D convolutions and attention, incurring non-negligible latency and memory overhead. Second, our evaluation is centered on SemanticKITTI; broader generalization is constrained by differences in voxel resolution, label taxonomy, and scene layout, and may require re-training or lightweight structural adaptation. Likewise, although we validate plug-and-play behavior on two representative SSC backbones (CGFormer and MonoScene), extending the evaluation to additional backbones is limited by the training and storage overhead of voxel-level refinement. Third, PNAM and VLGM are incorporated as largely independent components without a unified fusion mechanism, and emphasizing semantic correction can slightly compromise geometric completeness, resulting in minor degradations in SC-IoU.

Future work will therefore explore lightweight and efficient representations (e.g., sparse convolution, tri-plane features, and Gaussian voxelization), knowledge distillation for compact deployment, as well as structured pruning and quantization-aware optimization to further reduce latency and memory footprint. We will also investigate adapter-based transfer across datasets, and broaden plug-in evaluation across diverse SSC backbones to further substantiate generality. In addition, we will study adaptive fusion layers that more tightly couple local geometric attention with textual priors. Finally, integrating generative priors (e.g., CVAE- or diffusion-based models) to pre-complete sparse voxels, together with extensive evaluation on diverse real-world benchmarks, may further improve the robustness, practicality, and scalability of ESSC-RM.

Statements

Data availability statement

Publicly available datasets were analyzed in this study. This data can be found at: https://semantic-kitti.org/.

Author contributions

DZ: Data curation, Writing – original draft, Writing – review & editing. JL: Methodology, Writing – original draft, Writing – review & editing. HY: Funding acquisition, Methodology, Supervision, Writing – review & editing. LB: Funding acquisition, Methodology, Supervision, Writing – review & editing. BS: Funding acquisition, Methodology, Supervision, Writing – review & editing.

Funding

The author(s) declared that financial support was received for this work and/or its publication. This research was funded by the Natural Science Foundation of Tianjin (No. 24PTLYHZ00290).

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The handling editor HC declared a shared affiliation with the authors DZ and JL at the time of review.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnbot.2025.1768219/full#supplementary-material

References

  • 1

BehleyJ.GarbadeM.MiliotoA.QuenzelJ.BehnkeS.GallJ.et al. (2021). Towards 3D LiDAR-based semantic scene understanding of 3D point cloud sequences: the SemanticKITTI dataset. Int. J. Rob. Res. 40, 959–967. doi: 10.1177/02783649211006735

  • 2

    BehleyJ.GarbadeM.MiliotoA.QuenzelJ.BehnkeS.StachnissC.et al. (2019). A dataset for semantic segmentation of point cloud sequences. arXiv [Preprint]. arXiv:1904.01416. doi: 10.48550/arXiv.1904.01416

  • 3

    BrazilG.LiuX. (2019). M3D-RPN: monocular 3D region proposal network for object detection. arXiv [Preprint]. arXiv:1907.06038. doi: 10.48550/arXiv.1907.06038

  • 4

    CaiJ.MengK.YangB.ShaoG. (2024). Multimodal remote sensing scene classification using VLMs and dual-cross attention networks. arXiv [Preprint]. arXiv:2412.02531. doi: 10.48550/ARXIV.2412.02531

  • 5

    CaoA.de CharetteR. (2021). MonoScene: monocular 3D semantic scene completion. arXiv [Preprint]. arXiv:2112.00726. doi: 10.48550/arXiv.2112.00726

  • 6

    CaoA.-Q.DaiA.de CharetteR. (2024). “PaSCo: urban 3D panoptic scene completion with uncertainty awareness,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

  • 7

CaoH.ChenG.LiZ.HuY.KnollA. (2022). NeuroGrasp: multimodal neural network with euler region regression for neuromorphic vision-based grasp pose estimation. IEEE Trans. Instrum. Meas. 71, 1–11. doi: 10.1109/TIM.2022.3179469

  • 8

CaoH.ChenG.XiaJ.ZhuangG.KnollA. (2021). Fusion-based feature attention gate component for vehicle detection based on event camera. IEEE Sens. J. 21, 24540–24548. doi: 10.1109/JSEN.2021.3115016

  • 9

CaoH.ChenG.ZhaoH.JiangD.ZhangX.TianQ.et al. (2024a). SDPT: semantic-aware dimension-pooling transformer for image segmentation. IEEE Trans. Intell. Transp. Syst. 25, 15934–15946. doi: 10.1109/TITS.2024.3417813

  • 10

CaoH.QuZ.ChenG.LiX.ThieleL.KnollA.et al. (2024b). GhostViT: expediting vision transformers via cheap operations. IEEE Trans. Artif. Intell. 5, 2517–2525. doi: 10.1109/TAI.2023.3326795

  • 11

CaoH.ZhangZ.XiaY.LiX.XiaJ.ChenG.et al. (2024c). “Embracing events and frames with hierarchical feature refinement network for object detection,” in European Conference on Computer Vision, eds. A. Leonardis, E. Ricci, S. Roth, O. Russakovsky, T. Sattler, and G. Varol (Milan: Springer), 161–177. doi: 10.1007/978-3-031-72907-2_10

  • 12

    ChangJ.ChenY. (2018). Pyramid stereo matching network. arXiv [Preprint]. arXiv:1803.08669. doi: 10.48550/arXiv.1803.08669

  • 13

    ChenY.LiuS.ShenX.JiaJ. (2020). DSGN: deep stereo geometry network for 3D object detection. arXiv [Preprint]. arXiv:2001.03398. doi: 10.48550/arXiv.2001.03398

  • 14

    ChoM.KimE. (2023). 3D LiDAR multi-object tracking with short-term and long-term multi-level associations. Remote Sens. 15:5486. doi: 10.3390/rs15235486

  • 15

    ÇiçekÖ.AbdulkadirA.LienkampS. S.BroxT.RonnebergerO. (2016). 3D U-Net: learning dense volumetric segmentation from sparse annotation. arXiv [Preprint]. arXiv:1606.06650. doi: 10.48550/arXiv.1606.06650

  • 16

    DaiW.LiJ.LiD.TiongA.ZhaoJ.WangW.et al. (2023). InstructBLIP: towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500.

  • 17

    DengJ.ShiS.LiP.ZhouW.ZhangY.LiH.et al. (2020). Voxel R-CNN: towards high performance voxel-based 3D object detection. arXiv [Preprint]. arXiv:2012.15712. doi: 10.48550/arXiv.2012.15712

  • 18

    DollS.SchulzR.SchneiderL.BenzinV.MarkusE.LenschH. P.et al. (2022). “SpatialDETR: robust scalable transformer-based 3D object detection from multi-view camera images with global cross-sensor attention,” in European Conference on Computer Vision (ECCV) (Tel Aviv-Yafo: ACM). doi: 10.1007/978-3-031-19842-7_14

  • 19

    DuanK.BaiS.XieL.QiH.HuangQ.TianQ.et al. (2019). CenterNet: keypoint triplets for object detection. arXiv [Preprint]. arXiv:1904.08189. doi: 10.48550/arXiv.1904.08189

  • 20

GeigerA.LenzP.UrtasunR. (2012). “Are we ready for autonomous driving? The KITTI vision benchmark suite,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition (Providence, RI: IEEE), 3354–3361. doi: 10.1109/CVPR.2012.6248074

  • 21

    GuoY.WangH.HuQ.LiuH.LiuL.BennamounM.et al. (2019). Deep learning for 3D point clouds: a survey. arXiv [Preprint]. arXiv:1912.12033. doi: 10.48550/arXiv.1912.12033

  • 22

    HassaniA.HwuW.-M.ShiH. (2024). Faster neighborhood attention: reducing the O(n2) cost of self-attention at the threadblock level. arXiv [Preprint]. arXiv:2403.04690.

  • 23

    HassaniA.ShiH. (2022). Dilated neighborhood attention transformer. arXiv preprint arXiv: 2209.15001. Available online at: https://arxiv.org/abs/2209.15001 (Accessed March 11, 2025).

  • 24

    HassaniA.WaltonS.LiJ.LiS.ShiH. (2023). “Neighborhood attention transformer,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (Vancouver, BC: IEEE). doi: 10.1109/CVPR52729.2023.00599

  • 25

    HuH.CaiQ.WangD.LinJ.SunM.KrähenbühlP.et al. (2018). Joint monocular 3D vehicle detection and tracking. arXiv [Preprint]. arXiv:1811.10742. doi: 10.48550/arXiv.1811.10742

  • 26

    HuangJ.HuangG.ZhuZ.DuD. (2021). BEVDet: high-performance multi-camera 3D object detection in bird-eye-view. arXiv [Preprint]. arXiv:2112.11790. doi: 10.48550/arXiv.2112.11790

  • 27

    HuangY.-K.ZhengW.ZhangY.ZhouJ.LuJ. (2023). “Tri-perspective view for vision-based 3d semantic occupancy prediction,” in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9223–9232. Available online at: https://api.semanticscholar.org/CorpusID:256868375 (Accessed March 11, 2025).

  • 28

    JangH.-K.KimJ.KweonH.YoonK.-J. (2024). TALoS: enhancing semantic scene completion via test-time adaptation on the line of sight. arXiv [Preprint]. arXiv:2410.15674.

  • 29

    JiangH.ChengT.GaoN.ZhangH.LinT.LiuW.et al. (2024). “Symphonize 3D semantic scene completion with contextual instance queries,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 20258–20267. doi: 10.1109/CVPR52733.2024.01915

  • 30

    KirillovA.MintunE.RaviN.MaoH.RollandC.GustafsonL.et al. (2023). Segment anything. arXiv [Preprint]. arXiv:2304.02643. doi: 10.48550/arXiv.2304.02643

  • 31

    KoukounasA.MastrapasG.WangB.AkramM. K.EslamiS.GüntherM.et al. (2024). jina-clip-v2: multilingual multimodal embeddings for text and images. arXiv [Preprint]. arXiv:2412.08802.

  • 32

    LangA. H.VoraS.CaesarH.ZhouL.YangJ.BeijbomO.et al. (2018). PointPillars: fast encoders for object detection from point clouds. arXiv [Preprint]. arXiv:1812.05784. doi: 10.48550/arXiv.1812.05784

  • 33

    LeeJ.LeeS.JoC.ImW.SeonJ.YoonS.-E. (2024). “SemCity: semantic scene generation with triplane diffusion,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

  • 34

    LiB.WeinbergerK. Q.BelongieS. J.KoltunV.RanftlR. (2022). Language-driven semantic segmentation. arXiv [Preprint]. arXiv:2201.03546. doi: 10.48550/arXiv.2201.03546

  • 35

    LiJ.LiD.SavareseS.HoiS. (2023). “BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models,” in Proceedings of the 40th International Conference on Machine Learning (ICML).

  • 36

    LiP.ChenX.ShenS. (2019). Stereo R-CNN based 3D object detection for autonomous driving. arXiv [Preprint]. arXiv:1902.09738. doi: 10.48550/arXiv.1902.09738

  • 37

    LiS.TangH. (2024). Multimodal alignment and fusion: a survey. arXiv [Preprint]. arXiv:2411.17040. doi: 10.48550/ARXIV.2411.17040

  • 38

    LiY.LiS.LiuX.GongM.LiK.ChenN.et al. (2024). SSCBench: A large-scale 3D semantic scene completion benchmark for autonomous driving. arXiv [Preprint]. arXiv:2306.09001.

  • 39

LiY.YuZ.ChoyC. B.XiaoC.ÁlvarezJ. M.FidlerS.et al. (2023). “VoxFormer: sparse voxel transformer for camera-based 3D semantic scene completion,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9087–9098.

  • 40

LiZ.WangW.LiH.XieE.SimaC.LuT.et al. (2025). BEVFormer: learning bird's-eye-view representation from LiDAR-camera via spatiotemporal transformers. IEEE Trans. Pattern Anal. Mach. Intell. 47, 2020–2036. doi: 10.1109/TPAMI.2024.3515454

  • 41

LinS.-L.WuJ.-Y. (2025). Enhancing LiDAR-based 3D classification through an improved deep learning framework with residual connections. IEEE Access 13, 42836–42849. doi: 10.1109/ACCESS.2025.3547942

  • 42

    LiuH.LiC.LiY.LeeY. J. (2024). “Improved baselines with visual instruction tuning,” in 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 26286–26296. doi: 10.1109/CVPR52733.2024.02484

  • 43

    LiuH.LiC.WuQ.LeeY. J. (2023). Visual instruction tuning. arXiv [Preprint]. arXiv:2304.08485.

  • 44

    LiuP.LiuH.LiuH.LiuX.NiJ.MaJ. (2025). VLM-E2E: Enhancing end-to-end autonomous driving with multimodal driver attention fusion. arXiv [Preprint]. arXiv:2502.18042.

  • 45

    LiuS.ZengZ.RenT.LiF.ZhangH.YangJ.et al. (2023). Grounding DINO: marrying DINO with grounded pre-training for open-set object detection. arXiv [Preprint]. arXiv:2303.05499. doi: 10.48550/arXiv.2303.05499

  • 46

    LiuT.WeiY.ZhangY. (2023). “Progressive neighborhood aggregation for semantic segmentation refinement,” in Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, AAAI'23/IAAI'23/EAAI'23 (Washington, DC: AAAI Press).

  • 47

    LoshchilovI.HutterF. (2017). Fixing weight decay regularization in adam. arXiv [Preprint]. arXiv:1711.05101. doi: 10.48550/arXiv.1711.05101

  • 48

    MaX.OuyangW.SimonelliA.RicciE. (2022). 3D object detection from images for autonomous driving: a survey. arXiv [Preprint]. arXiv:2202.02980. doi: 10.48550/arXiv.2202.02980

  • 49

    ManhardtF.KehlW.GaidonA. (2018). ROI-10D: monocular lifting of 2D detection to 6D pose and metric shape. arXiv [Preprint]. arXiv:1812.02781. doi: 10.48550/arXiv.1812.02781

  • 50

MaoJ.ShiS.WangX.LiH. (2023). 3D object detection for autonomous driving: a comprehensive survey. Int. J. Comput. Vision 131, 1909–1963. doi: 10.1007/s11263-023-01790-1

  • 51

MiliotoA.VizzoI.BehleyJ.StachnissC. (2019). “RangeNet ++: fast and accurate LiDAR semantic segmentation,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (Macau: IEEE), 4213–4220. doi: 10.1109/IROS40897.2019.8967762

  • 52

    MousavianA.AnguelovD.FlynnJ.KoseckaJ. (2016). 3D bounding box estimation using deep learning and geometry. arXiv [Preprint]. arXiv:1612.00496. doi: 10.48550/arXiv.1612.00496

  • 53

    OpenAIAchiam, J.AdlerS.AgarwalS.AhmadL.AkkayaI.et al. (2024). GPT-4 technical report. arXiv [Preprint]. arXiv:2303.08774.

  • 54

    PhilionJ.FidlerS. (2020). Lift, splat, shoot: encoding images from arbitrary camera rigs by implicitly unprojecting to 3D. arXiv [Preprint]. arXiv:2008.05711. doi: 10.48550/arXiv.2008.05711

  • 55

    QiC. R.SuH.MoK.GuibasL. J. (2016). PointNet: deep learning on point sets for 3D classification and segmentation. arXiv [Preprint]. arXiv:1612.00593. doi: 10.48550/arXiv.1612.00593

  • 56

    QiC. R.YiL.SuH.GuibasL. J. (2017). PointNet++: deep hierarchical feature learning on point sets in a metric space. arXiv [Preprint]. arXiv:1706.02413. doi: 10.48550/arXiv.1706.02413

  • 57

    RadfordA.KimJ. W.HallacyC.RameshA.GohG.AgarwalS.et al. (2021). Learning transferable visual models from natural language supervision. arXiv [Preprint]. arXiv:2103.00020. doi: 10.48550/arXiv.2103.00020

  • 58

    RenT.LiuS.ZengA.LinJ.LiK.CaoH.et al. (2024). Grounded SAM: assembling open-world models for diverse visual tasks. arXiv [Preprint]. arXiv:2401.14159.

  • 59

    RoldãoL.de CharetteR.Verroust-BlondetA. (2020). LMSCNet: lightweight multiscale 3D semantic completion. arXiv [Preprint]. arXiv:2008.10559. doi: 10.48550/arXiv.2008.10559

  • 60

    RoldãoL.de CharetteR.Verroust-BlondetA. (2021). 3D semantic scene completion: a survey. arXiv [Preprint]. arXiv:2103.07466. doi: 10.48550/arXiv.2103.07466

  • 61

    RonnebergerO.FischerP.BroxT. (2015). U-Net: convolutional networks for biomedical image segmentation. arXiv [Preprint]. arXiv:1505.04597. doi: 10.48550/arXiv.1505.04597

  • 62

    ShiS.GuoC.JiangL.WangZ.ShiJ.WangX.et al. (2019). PV-RCNN: point-voxel feature set abstraction for 3D object detection. arXiv [Preprint]. arXiv:1912.13192. doi: 10.48550/arXiv.1912.13192

  • 63

    ShiS.WangX.LiH. (2018). PointRCNN: 3D object proposal generation and detection from point cloud. arXiv [Preprint]. arXiv:1812.04244. doi: 10.48550/arXiv.1812.04244

  • 64

SilbermanN.HoiemD.KohliP.FergusR. (2012). “Indoor segmentation and support inference from RGBD images,” in Computer Vision-ECCV 2012, eds. A. Fitzgibbon, S. Lazebnik, P. Perona, Y. Sato, and C. Schmid (Berlin; Heidelberg: Springer Berlin Heidelberg), 746–760. doi: 10.1007/978-3-642-33715-4_54

  • 65

    SmithL. N.TopinN. (2017). Super-convergence: very fast training of residual networks using large learning rates. arXiv [Preprint]. arXiv:1708.07120. doi: 10.48550/arXiv.1708.07120

  • 66

    SongS.YuF.ZengA.ChangA. X.SavvaM.FunkhouserT. A.et al. (2016). Semantic scene completion from a single depth image. arXiv [Preprint]. arXiv:1611.08974. doi: 10.48550/arXiv.1611.08974

  • 67

    SunQ.FangY.WuL.WangX.CaoY. (2023a). EVA-CLIP: improved training techniques for CLIP at scale. arXiv [Preprint]. arXiv:2303.15389. doi: 10.48550/arXiv.2303.15389

  • 68

    SunQ.WangJ.YuQ.CuiY.ZhangF.ZhangX.et al. (2023b). EVA-CLIP-18B:scaling CLIP to 18 billion parameters. arXiv [Preprint]. arXiv:2402.04252.

  • 69

    TangJ.ZhengG.ShiC.YangS. (2023). “Contrastive grouping with transformer for referring image segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 23570–23580. doi: 10.1109/CVPR52729.2023.02257

  • 70

    UlyanovD.VedaldiA.LempitskyV. S. (2016). Instance normalization: the missing ingredient for fast stylization. arXiv [Preprint]. arXiv:1607.08022. doi: 10.48550/arXiv.1607.08022

  • 71

    VaswaniA.ShazeerN.ParmarN.UszkoreitJ.JonesL.GomezA. N.et al. (2017). Attention is all you need. arXiv [Preprint]. arXiv:1706.03762. doi: 10.48550/arXiv.1706.03762

  • 72

WangM.PiH.LiR.QinY.TangZ.LiK. (2025). “VLScene: vision-language guidance distillation for camera-based 3D semantic scene completion,” in Proceedings of the AAAI Conference on Artificial Intelligence, 7808–7816.

  • 73

    WangM.WuF.QinY.LiR.TangZ.LiK. (2026). Vision-based 3D semantic scene completion via capturing dynamic representations. Knowledge Based Syst. 331:114550. doi: 10.1016/j.knosys.2025.114550

  • 74

    WangS.YuJ.LiW.LiuW.LiuX.ChenJ.et al. (2024). “Not all voxels are equal: Hardness-aware semantic scene completion with self-distillation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

  • 75

    WangY.ChaoW.GargD.HariharanB.CampbellM.WeinbergerK. Q.et al. (2018). Pseudo-LiDAR from visual depth estimation: bridging the gap in 3D object detection for autonomous driving. arXiv [Preprint]. arXiv:1812.07179. doi: 10.48550/arXiv.1812.07179

  • 76

    WangY.GuiziliniV.ZhangT.WangY.ZhaoH.SolomonJ.et al. (2021). DETR3D: 3D object detection from multi-view images via 3D-to-2D queries. arXiv [Preprint]. arXiv:2110.06922. doi: 10.48550/arXiv.2110.06922

  • 77

WangY.TongC. (2024). H2GFormer: horizontal-to-global voxel transformer for 3D semantic scene completion. Proc. AAAI Conf. Artif. Intell. 38, 5722–5730. doi: 10.1609/aaai.v38i6.28384

  • 78

    WengX.WangJ.HeldD.KitaniK. (2020). AB3DMOT: a baseline for 3D multi-object tracking and new evaluation metrics. arXiv [Preprint]. arXiv:2008.08063. doi: 10.48550/arXiv.2008.08063

  • 79

WuD.LiangZ.ChenG. (2022). Deep learning for LiDAR-only and LiDAR-fusion 3D perception: a survey. Intell. Robot. 2, 105–129. doi: 10.20517/ir.2021.20

  • 80

XiaZ.LiuY.-C.LiX.ZhuX.MaY.LiY.et al. (2023). “SCPNet: semantic scene completion on point cloud,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 17642–17651.

  • 81

    XiaoH.MastrapasG.WangB. (2024a). “Jina CLIP: your CLIP model is also your text retriever,” in ICML 2024 Workshop on Multi-modal Foundation Models Meets Embodied AI.

  • 82

XiaoH.XuH.KangW.LiY. (2024b). Instance-aware monocular 3D semantic scene completion. IEEE Trans. Intell. Transp. Syst. 25, 6543–6554. doi: 10.1109/TITS.2023.3344806

  • 83

XuB.ChenZ. (2018). “Multi-level fusion based 3D object detection from monocular images,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (Salt Lake City, UT: IEEE), 2345–2353. doi: 10.1109/CVPR.2018.00249

  • 84

    XuB.WangN.ChenT.LiM. (2015). Empirical evaluation of rectified activations in convolutional network. arXiv [Preprint]. arXiv:1505.00853. doi: 10.48550/arXiv.1505.00853

  • 85

XuX.WangT.YangY.ZuoL.ShenF.ShenH. T.et al. (2020). Cross-modal attention with semantic consistence for image–text matching. IEEE Trans. Neural Netw. Learn. Syst. 31, 5412–5425. doi: 10.1109/TNNLS.2020.2967597

  • 86

    YanX.GaoJ.LiJ.ZhangR.LiZ.HuangR.et al. (2020). Sparse single sweep LiDAR point cloud segmentation via learning contextual shape priors from scene completion. arXiv [Preprint]. arXiv:2012.03762. doi: 10.48550/arXiv.2012.03762

  • 87

YanY.MaoY.LiB. (2018). Second: sparsely embedded convolutional detection. Sensors 18:3337. doi: 10.3390/s18103337

  • 88

    YangX.ZouH.KongX.HuangT.LiuY.LiW.et al. (2021). Semantic segmentation-assisted scene completion for LiDAR point clouds. arXiv [Preprint]. arXiv:2109.11453. doi: 10.48550/arXiv.2109.11453

  • 89

    YaoJ.ZhangJ.PanX.WuT.XiaoC. (2023). “DepthSSC: monocular 3D semantic scene completion via depth-spatial alignment and voxel adaptation,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 21542163.

  • 90

    YiX.XuH.ZhangH.TangL.MaJ. (2024). “Text-IF: leveraging semantic text guidance for degradation-aware and interactive image fusion,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Available online at: https://openaccess.thecvf.com/content/CVPR2024/html/Yi_Text-IF_Leveraging_Semantic_Text_Guidance_for_Degradation-Aware_and_Interactive_Image_CVPR_2024_paper.html

  • 91

    YouY.WangY.ChaoW.GargD.PleissG.HariharanB.et al. (2019). Pseudo-LiDAR++: accurate depth for 3D object detection in autonomous driving. arXiv [Preprint]. arXiv:1906.06310. doi: 10.48550/arXiv.1906.06310

  • 92

YurtseverE.LambertJ.CarballoA.TakedaK. (2020). A survey of autonomous driving: common practices and emerging technologies. IEEE Access 8, 58443–58469. doi: 10.1109/ACCESS.2020.2983149

  • 93

    ZhangB.ZhangP.DongX.ZangY.WangJ. (2024). “Long-CLIP: unlocking the long-text capability of CLIP,” in Computer Vision - ECCV 2024 (Lecture Notes in Computer Science, Vol. 15109) (Springer). doi: 10.1007/978-3-031-72983-6_18

  • 94

ZhangY.ZhuZ.DuD. (2023). “OccFormer: dual-path transformer for vision-based 3D semantic occupancy prediction,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 9399–9409.

  • 95

ZhaoH.LiX.XuC.XuB.LiuH. (2024). “A survey of automatic driving environment perception,” in 2024 IEEE 24th International Conference on Software Quality, Reliability, and Security Companion (QRS-C) (Cambridge: IEEE), 1038–1047. doi: 10.1109/QRS-C63300.2024.00137

  • 96

ZhaoX.ChenB.SunM.YangD.WangY.ZhangX.et al. (2024). HybridOcc: NeRF enhanced transformer-based multi-camera 3D occupancy prediction. IEEE Robot. Autom. Lett. 9, 7867–7874. doi: 10.1109/LRA.2024.3416798

  • 97

    ZhouH.ZhuX.SongX.MaY.WangZ.LiH.et al. (2020). Cylinder3D: an effective 3D framework for driving-scene LiDAR semantic segmentation. arXiv [Preprint]. arXiv:2008.01550. doi: 10.48550/arXiv.2008.01550

  • 98

    ZhouY.TuzelO. (2017). VoxelNet: end-to-end learning for point cloud based 3D object detection. arXiv [Preprint]. arXiv:1711.06396. doi: 10.48550/arXiv.1711.06396

  • 99

    ZhuD.ChenJ.ShenX.LiX.ElhoseinyM. (2024). “MiniGPT-4: enhancing vision-language understanding with advanced large language models,” in International Conference on Learning Representations (ICLR).

  • 100

ZiaM. Z.StarkM.SchindlerK. (2014). “Are cars just 3D boxes? Jointly estimating the 3D shape of multiple objects,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition (Columbus, OH: IEEE), 3678–3685. doi: 10.1109/CVPR.2014.470

Summary

Keywords

plug-and-play, PNAM, refinement, semantic scene completion, vision-language guidance

Citation

Zhang D, Lu J, Yang H, Bao L and Song B (2026) Enhancing 3D semantic scene completion with refinement module. Front. Neurorobot. 20:1768219. doi: 10.3389/fnbot.2026.1768219

Received

15 December 2025

Revised

31 December 2025

Accepted

29 January 2026

Published

06 March 2026

Volume

20 - 2026

Edited by

Hu Cao, Technical University of Munich, Germany

Reviewed by

Qi Zhang, City University of Macau, Macao SAR, China

Lei Zhu, Guangzhou University, China

Simone Mosco, University of Padua, Italy

Copyright

*Correspondence: Han Yang,
