
ORIGINAL RESEARCH article

Front. Remote Sens., 16 December 2025

Sec. Lidar Sensing

Volume 6 - 2025 | https://doi.org/10.3389/frsen.2025.1622210

3D reconstruction and morphological characteristic study of Abdullahpuram palace building in Vellore-Bangalore NH using terrestrial LiDAR data

  • School of Civil Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India

Effective preservation of cultural heritage structures requires precise, non-destructive, and scalable documentation techniques. However, conventional survey methods often fail to capture intricate geometric features and to quantify localized surface deterioration such as spalling and plaster loss. Terrestrial LiDAR scanning provides high-resolution point cloud data well-suited for such applications, though challenges persist in data registration, segmentation, and deterioration quantification. This study applies terrestrial LiDAR technology to the documentation of the Abdullahpuram Palace, a 19th-century heritage building located in Vellore, Tamil Nadu, India, which exhibits Indo-Saracenic architectural influences (as reported by the Tamil Nadu Heritage Commission, 2019). Multiple scans were registered using Cyclone REGISTER 360, and the data were pre- and post-processed in CloudCompare for noise filtering, segmentation, and geometric refinement. Surface deterioration was assessed by extracting 3D surface profiles and quantifying the volume of material loss using convex hull and raster-based analyses in MeshLab and ArcGIS, respectively. It is to be noted that material loss represents surface-level deterioration rather than direct evidence of structural failure. Additionally, an octree-based downscaling approach was implemented to facilitate multi-scale visualization and improve computational efficiency for large datasets. The methodology enhances heritage documentation, supports objective condition assessment, and aligns with sustainable conservation principles articulated in SDG 9 and SDG 11.4. The findings highlight the potential of terrestrial LiDAR and advanced point cloud processing to develop accurate, scalable, and non-invasive documentation strategies for heritage conservation globally.

1 Introduction

India is celebrated for its diverse cultures and rich architectural heritage, shaped by various dynasties that have ruled the region. Each empire has left a unique imprint, contributing distinct architectural styles, materials, and construction techniques. Preserving this cultural legacy is crucial not only as a testament to history but also as an asset for future generations. Conservation and documentation of heritage structures offer numerous benefits, including preserving community identity, enhancing awareness of historical and cultural contexts, and inspiring innovative architectural designs.

Traditionally, the preservation of heritage structures relied on manual surveying and mapping techniques such as tape measurements, theodolites, and photogrammetry (Chenaux et al., 2011). These methods, though foundational, often proved insufficient for comprehensive documentation due to their labor-intensive nature and limited accuracy in capturing intricate details. In response, non-destructive techniques (NDT) have emerged, utilizing advanced imaging technologies to create detailed 2D and 3D models of historical structures (Moyano et al., 2020; Sánchez-Aparicio et al., 2023). Among these, Terrestrial Laser Scanner (TLS) has demonstrated high accuracy in capturing geometric characteristics and is significantly more time-efficient than Unmanned Aerial Vehicle (UAV)-based photogrammetry or 3D surveys (Llabani and Lubonja, 2024). The ability to acquire a large number of data points ensures precise documentation of architectural components and complex geometries, supporting the generation of 2D CAD, 3D BIM, animations, and rendered imagery (Pritchard et al., 2017). The dense point cloud results in detailed representations with minimal errors, typically within 2–6 mm, which falls within the acceptable deviation range specified by international standards such as the English Heritage Metric Survey Specifications (Bryan et al., 2009) and CIPA Heritage Documentation guidelines (Stylianidis, 2019). This enhances the reliability of geometric measurements (Bouziani et al., 2021; Marčiš et al., 2024; Mohammadi et al., 2021). TLS has thus become a benchmark method for 3D reconstruction and geometric evaluation in heritage documentation and condition assessment due to its precision and data richness.

Among these advancements, Terrestrial LiDAR (Light Detection and Ranging) has emerged as a revolutionary tool for documenting and assessing heritage structures. It provides high-resolution 3D point cloud data that capture complex geometries, measure deformations, and detect and map surface deterioration during site documentation, enabling precise condition mapping (González-Aguilera et al., 2012; Wood et al., 2017; Yin and Antonio, 2020). The 3D point cloud offers a comprehensive visualization of the current state of a structure, facilitating detailed analysis of its architectural components, including arches, piers, and decks (Liu and Li, 2024). The quantification of material erosion through volume estimation informs restoration planning and helps prioritize maintenance actions, indirectly contributing to the long-term resilience of heritage assets by preventing progressive deterioration (Kushwaha et al., 2019). Recent studies have also explored integrating TLS with UAV-based imagery to produce spatially accurate 3D datasets referenced in a global coordinate system, ensuring high georeferencing precision for both situational and elevation measurements (Balestrieri et al., 2024). These datasets are often analyzed within Building Information Modeling (BIM) and Heritage Building Information Modeling (HBIM) frameworks to support conservation planning (Klapa and Gawronek, 2023). Moreover, the emergence of affordable LiDAR sensors in smartphones offers promising results for rapid, low-cost heritage mapping (Martino et al., 2024).

The LiDAR data processing workflow begins with preprocessing, which is essential for optimizing raw point cloud data. This phase includes noise filtering, outlier removal, and local registration. Noise filtering eliminates irrelevant data caused by environmental effects or scanning errors, while outlier removal isolates inaccurate points that could skew analysis. The scans were registered within a local coordinate system using reference targets, as absolute georeferencing was not required for the structural-level analysis. This approach maintains relative spatial consistency across multiple scans, which is adequate for morphometric characterization and damage assessment of the building.

Following this, post-processing enables extraction of structural features through segmentation, surface modelling, and data reduction. In the methodology section, this workflow is represented schematically through an algorithm-based sequence of operations detailing each phase as Statistical Outlier Removal (SOR) for noise reduction, Iterative Closest Point (ICP) for fine registration, octree partitioning for hierarchical data management, and region-growing segmentation for architectural feature extraction.

The processing steps were implemented using Cyclone REGISTER 360 (for scan registration and preprocessing), CloudCompare (for filtering, segmentation, and deviation analysis), MeshLab (for surface reconstruction and mesh optimization), and ArcGIS Pro (for visualization and spatial analysis), ensuring methodological transparency and reproducibility.
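The Statistical Outlier Removal step named in the workflow above can be illustrated with a minimal sketch in Python. This is an illustrative brute-force version (the `sor_filter` name and its parameters are ours), not the optimized k-d-tree implementation used by CloudCompare: a point is discarded when its mean distance to its k nearest neighbours exceeds the global mean of those distances by more than a chosen number of standard deviations.

```python
import math
import statistics

def sor_filter(points, k=4, sigma=1.0):
    """Statistical Outlier Removal (SOR) sketch: drop points whose mean
    distance to their k nearest neighbours exceeds mu + sigma * std,
    where mu and std are computed over all points' mean k-NN distances."""
    mean_knn = []
    for p in points:
        # Brute-force nearest-neighbour distances (O(n^2); fine for a sketch).
        d = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(d[:k]) / k)
    mu = statistics.mean(mean_knn)
    sd = statistics.pstdev(mean_knn)
    return [p for p, m in zip(points, mean_knn) if m <= mu + sigma * sd]
```

With a tight cluster plus one distant stray point, the stray is removed while the cluster survives, mirroring how scan noise and mis-returns are filtered before segmentation.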

Structural condition evaluation was conducted through detailed analyses of the point cloud data. Surface irregularities and material loss were assessed through visual inspections and quantitative methods. Profile evaluations and scalar field representations were employed to identify relationships between LiDAR return intensity values and the degree of surface deterioration, thereby pinpointing areas requiring conservation attention. While most prior studies have focused on octree-based and point cloud analyses, relatively few have employed raster-based approaches for material loss quantification. This presents a research gap in achieving precise volumetric assessment, which the present study addresses through the integration of raster analysis and convex hull algorithms for comparative validation of volume estimation results.

This paper focuses on applying terrestrial LiDAR to assess the deteriorating condition of the Abdullahpuram Palace using advanced 3D reconstruction techniques to generate comprehensive point cloud and octree models. By leveraging detailed geometric data obtained through preprocessing and post-processing, the study aims to quantify material loss on palace walls rather than evaluate structural integrity. Comparative analyses between LiDAR-derived measurements and field observations demonstrate the reliability of modern surveying techniques in heritage conservation. This research contributes valuable insights for the restoration of the Abdullahpuram Palace and similar heritage sites, showcasing the potential of advanced spatial technologies to safeguard cultural heritage.

The study aligns with two Sustainable Development Goals (SDGs) viz., (a) SDG 9 (Industry, Innovation and Infrastructure) - promoting resilient infrastructure and sustainable innovation, and (b) SDG 11.4 - strengthening efforts to protect the world’s cultural and natural heritage. The integrated methodology emphasizes non-destructive, data-driven documentation that supports long-term resilience through accurate detection, documentation, and quantification of surface deterioration.

1.1 Research objectives

The objectives of the study are as follows:

1. 3D reconstruction of Abdullahpuram Palace.

2. Architectural detailing of the palace through 3D reconstruction.

3. Comparative study of geometric precision between 3D point cloud and octree models.

4. Quantification of surface loss using raster analysis and convex hull algorithm.

This study provides insights into the 3D reconstruction of the palace using point cloud and octree models and evaluates their precision and accuracy. It further focuses on quantifying surface loss through raster analysis, an approach rarely applied in heritage studies, validated through a comparative evaluation using convex hull-based volume estimation methods.

2 The structure under study

The structure examined in this study is the Abdullahpuram Palace, located approximately 6 km from Vellore along the Chennai–Bengaluru National Highway (NH-48), adjacent to the Vellore–Krishnagiri trunk road in Tamil Nadu, India (Figure 1). Historical sources indicate that Abdullah Khan, a Mughal nobleman, governed the region and established the settlement of Abdullahpuram. The palace, constructed circa 1676 AD, is locally referred to as Abdullah Khan Mahal and represents a characteristic example of Mughal-period provincial architecture in South India.

Figure 1
An old, dilapidated building with arched doorways and intricate stonework, partly obscured by trees. A blue sign with text is visible on the right. The ground is covered with dry leaves and debris.

Figure 1. Field photograph of the palace.

At present, the palace survives only as partial remains comprising two floors and four rooms, constructed primarily of brick masonry bonded with lime mortar. The structure exhibits typical Mughal architectural elements, such as pointed arches, stucco decorations, and floral-geometric motifs that reflect the artistic vocabulary of the 17th century (Asher, 1992; Azmat et al., 2018). The facade shows symmetrical openings, while the horseshoe-shaped arches and pendentives demonstrate Indo-Islamic engineering principles used to transition loads from domes to square spaces (Asher, 1992). These elements are complemented by a terraced roofline and an inner dome that enhances acoustics and vertical proportion. These features are documented in several Mughal monuments of the Deccan region (Vajiram and Ravi, 2025).

Over the years, environmental exposure and neglect have caused progressive material degradation, particularly lime plaster loss and surface spalling, resulting in the palace’s near-ruinous state. Despite its cultural value, very limited documentation exists in scholarly or government archives, and the site remains absent from most regional conservation inventories.

This study therefore undertakes the first high-resolution terrestrial LiDAR (TLS) survey of the Abdullahpuram Palace to digitally preserve its morphology and architectural features before potential loss. The TLS dataset enables the creation of a 3D digital twin, offering a permanent and measurable record of the structure’s geometry. Similar TLS applications in cultural heritage studies (Kurdi, 2023; Lerma et al., 2010) have demonstrated the reliability of laser scanning for accurate reconstruction, deformation analysis, and deterioration mapping. The generated model supports future research on structural conservation, heritage visualization, and restoration planning.

A field photograph of the palace is shown in Figure 1 to illustrate its current deteriorated condition and confirm the correspondence between the on-site structure and its 3D reconstructed model.

3 Methodology

The overall workflow adopted in this study is illustrated in Figure 2, which outlines the stepwise process from data acquisition to damage quantification. Each step is described in detail below.

Figure 2
Flowchart detailing a process for 3D model reconstruction and analysis. Steps include data acquisition via laser scanning, initial registration in CYCLONE 360, preprocessing, post-processing, final registration, model reconstruction, accuracy assessment, and validation. Morphometric analysis and damage assessment using Convex Hull and Raster methods are also highlighted, with comparative validation and volume computation.

Figure 2. Algorithm-based workflow of the study.

3.1 3D data acquisition

Data acquisition was performed using the Leica BLK360 Terrestrial Laser Scanner (Figure 3), which operates on the Time-of-Flight (ToF) principle to measure distances based on the travel time of laser pulses reflected from target surfaces. Each scan captures distance, horizontal, and vertical angle measurements in instrument-centred polar coordinates, generating dense 3D point clouds that accurately map surface geometry (Lichti, 2007).

Figure 3
A laser scanner placed on a tripod is set up inside an old stone chamber with arched ceilings and weathered walls. The texture and aging of the structure are evident.

Figure 3. Data collection using Leica BLK360.
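The range computation from the ToF principle and the conversion from instrument-centred polar coordinates to Cartesian points described above can be sketched as follows. The function names and the vertical-angle convention (measured from the horizontal plane) are illustrative assumptions, not the BLK360 firmware:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_to_range(round_trip_time_s):
    """Time-of-Flight principle: range = c * t / 2, since the laser
    pulse travels to the target and back."""
    return C * round_trip_time_s / 2.0

def polar_to_cartesian(r, h_angle, v_angle):
    """Instrument-centred polar (range, horizontal angle, vertical angle,
    in radians) to Cartesian x, y, z. The vertical angle is assumed to be
    measured from the horizontal plane."""
    x = r * math.cos(v_angle) * math.cos(h_angle)
    y = r * math.cos(v_angle) * math.sin(h_angle)
    z = r * math.sin(v_angle)
    return x, y, z
```

A pulse returning after the round-trip time for a 10 m target yields a 10 m range; with zero horizontal and vertical angles the point lies on the instrument's x-axis.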

For this study, six individual scans were conducted around the Abdullahpuram Palace to ensure full coverage and minimize shadowing effects caused by structural occlusions as shown in Figure 4. A local coordinate system was adopted because the focus of the study was on structural morphology rather than geospatial referencing. The scanner was strategically positioned to maintain optimal overlap between scans, thereby improving registration accuracy and coverage completeness.

Figure 4
Map showing six red markers indicating the relative locations of Terrestrial Laser Scanning (TLS) sites. Each marker is labeled with a number from 1 to 6, spread across a textured landscape.

Figure 4. Registration of the point clouds in Cyclone REGISTER 360.

The Leica BLK360 provides a range accuracy of ±4 mm at 10 m and a scan resolution of 3–5 mm point spacing, ensuring the geometric precision necessary for heritage documentation (England, 2015). The scanner also captures RGB data via an integrated camera, which was subsequently mapped to the 3D points using internal calibration parameters to produce a colorized point cloud. Table 1 summarizes the scanner specifications.

Table 1

Table 1. Leica BLK360 specifications.

3.2 Data processing

3.2.1 Registration of point cloud

The raw point cloud data from each scan were initially referenced to their respective local coordinate systems. To create a unified model, registration was performed using Cyclone REGISTER 360 software. Both target-based and cloud-to-cloud registration were employed to ensure accurate alignment. Target-based registration utilized reflective spheres and planar features to establish control correspondences, while cloud-to-cloud registration relied on geometric feature matching, optimized by the Iterative Closest Point (ICP) algorithm (Besl and McKay, 1992). The ICP method iteratively minimizes the Euclidean distances between overlapping point sets until convergence is reached.

The registration accuracy was assessed using Root Mean Square Error (RMSE) values, as reported in the registration summary in Table 2. A lower registration error indicates improved geometric fidelity of the reconstructed structure, reflecting minimal misalignment between the constituent point clouds; minimizing registration error is therefore essential for achieving high geometric accuracy in the final 3D model. Upon successful registration, the unified point cloud was exported in .e57 format, which supports efficient storage of point cloud data, images, and metadata (ASTM E 2087, 2019).

Table 2

Table 2. The registration report.
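The RMSE used for the registration assessment can be computed directly from the per-point residual distances once corresponding points are paired after alignment; a minimal sketch, assuming the residuals have already been extracted:

```python
import math

def rmse(residuals):
    """Root Mean Square Error of registration residuals, e.g. the
    point-to-point distances between corresponding points in
    overlapping scans after alignment."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))
```

Perfectly aligned scans give zero RMSE; residuals of 3 mm and 4 mm give an RMSE of about 3.54 mm, illustrating how the measure penalizes larger misalignments more heavily than a plain mean would.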

3.2.2 3D model reconstruction

The registered point cloud was post-processed using CloudCompare (v2.10-alpha) for segmentation, noise filtering, and reconstruction. Non-structural elements such as vegetation and debris were removed to isolate the palace geometry.

Each point in the registered dataset was assigned RGB attributes from the onboard panoramic camera of the Leica BLK360 scanner to enhance visual realism and facilitate the interpretation of material textures and surface finishes. The colour integration was performed after geometric alignment using the ICP algorithm, which minimizes positional discrepancies between overlapping scans through iterative rigid-body transformations as shown in Equation 1 to preserve the spatial relationships among points. This ensured sub-centimetre alignment precision, forming the geometric foundation for accurate colour mapping.

P′ = R·P + t   (1)

where P denotes the original point, P′ is the transformed point, R is the rotation matrix, and t is the translation vector.
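Equation 1 amounts to a rigid-body transformation of each point. A minimal sketch applying it with nested-list matrices (the `transform_point` helper name is ours; production pipelines would use a linear-algebra library):

```python
def transform_point(R, t, p):
    """Apply Equation 1, p' = R @ p + t, where R is a 3x3 rotation
    matrix given as nested lists and t, p are 3-vectors."""
    return tuple(
        sum(R[i][j] * p[j] for j in range(3)) + t[i]
        for i in range(3)
    )
```

For example, a 90° rotation about the z-axis followed by a unit translation along x maps the point (1, 0, 0) to (1, 1, 0), preserving distances between points as required for rigid registration.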

However, it is recognized that lighting heterogeneity, hue variations, and surface reflectivity can affect the quality of colour representation in point cloud datasets. To mitigate these influences, a series of corrective measures were applied both during data acquisition and processing:

• Controlled illumination: All scans were conducted under diffuse daylight conditions to reduce shadowing, glare, and specular reflections on the stone surfaces.

• Uniform exposure calibration: The automatic exposure control feature embedded in the Leica BLK360 system helped to maintain radiometric consistency across sequential scans.

• Intensity balancing: Overlapping regions were post-processed through uniform intensity scaling and exposure normalization within Cyclone REGISTER 360 to ensure colour uniformity across merged point sets.

• Alignment verification: The ICP-based registration was visually verified in CloudCompare to ensure there were no local misalignments or distortions in the RGB-mapped data.

Following colour correction and alignment, segmented entities were converted into polygonal meshes to delineate distinct architectural features, as illustrated in Figure 5. This meshing process ensured that all visible and measurable geometric and textural details were retained for downstream analysis, including structural deformation assessment and surface morphology interpretation.

Figure 5
Dilapidated, two-story building with archways and overgrown vegetation on a dark blue background. The structure shows significant wear, with missing sections and visible decay.

Figure 5. 3D model after segmentation.

These integrated procedures resulted in a geometrically robust and visually coherent 3D model, minimizing both spatial and radiometric discrepancies. The resulting coloured point cloud not only preserved fine architectural details but also supported subsequent texture mapping and mesh-based visualization processes with enhanced photorealistic fidelity.

3.2.3 Octree based data structuring

An octree is a hierarchical data structure that organizes and indexes three-dimensional spatial data in a tree-like form. It extends the principles of binary trees and quadtrees, which manage one-dimensional and two-dimensional data, respectively. In an octree, a 3D finite volume is recursively divided into eight smaller cubic volumes or octants at each subdivision level. The divisions in this structure are referred to as nodes in data structures and cells in the spatial context. The root node encompasses the entire dataset, while each subsequent child node represents a smaller sub-volume of space. Every node within the octree corresponds to a cubic volume called a voxel, which represents a particular portion of the spatial domain. The recursive subdivision continues until the minimum voxel size is reached, thereby defining the octree’s spatial resolution (Cha et al., 2019).

Within a fully developed octree, each internal node produces eight child nodes, and the terminal nodes (leaf nodes) exist at the defined tree depth or spatial division level D. Consequently, a complete octree contains 8^D leaf nodes, forming a structured hierarchy equivalent to a uniform 3D grid with a resolution of 2^D × 2^D × 2^D. The total number of nodes N_T in the tree can be computed using Equation 2:

N_T = Σ_{i=0}^{D} 8^i = (8^{D+1} − 1)/7   (2)

A node without children indicates that the corresponding volume can be represented uniformly, meaning no further subdivision is necessary to capture spatial variability (Elseberg et al., 2013). In general, when the octree depth (or spatial division) decreases, each voxel encompasses more points, producing a coarser model with lower spatial detail. Conversely, increasing the depth refines the voxel size, yielding higher spatial resolution but also increasing computational load and memory consumption (Chen et al., 2021). Figure 6 illustrates this hierarchical subdivision concept.

Figure 6
Diagram showing an octree representation of a 3D car model at three levels of detail: level one at 32³, level two at 64³, and level three at 128³. Each increasing level shows a more detailed and refined model.

Figure 6. The octree hierarchy (Chen et al., 2021).
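Equation 2 and the leaf-node count can be verified numerically; a short sketch with illustrative function names:

```python
def octree_total_nodes(depth):
    """Total nodes in a complete octree of depth D (Equation 2):
    sum_{i=0}^{D} 8^i = (8^(D+1) - 1) / 7."""
    return (8 ** (depth + 1) - 1) // 7

def octree_leaf_nodes(depth):
    """Leaf nodes of a complete octree: 8^D, equivalent to a uniform
    2^D x 2^D x 2^D voxel grid."""
    return 8 ** depth
```

For D = 2 the closed form gives 1 + 8 + 64 = 73 nodes, matching the explicit summation, and D = 3 gives 512 leaves (an 8 × 8 × 8 grid).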

The key characteristics of an octree model can be summarized as follows:

• The relationship between cell size, cell count, and filled volume is complex and depends on the intrinsic properties of the scanned data.

• Larger cell sizes generally cover greater volumes with fewer cells, though this trend may vary based on data heterogeneity.

• Smaller cell sizes enhance local geometric detail but do not always increase filled volume.

• The optimal balance between cell size and cell count must be chosen based on the study’s objective, maintaining a trade-off between computational efficiency and geometric fidelity.

In this study, the octree-based structuring was applied to the palace point cloud dataset to enable hierarchical spatial representation and efficient data management. The maximum subdivision level (tree depth) obtained was 21, corresponding to the software’s default configuration for fine spatial partitioning. The display mode was set to plain cubes, and the visualization level was maintained at eight to represent the geometric density effectively. By grouping points into cubic cells, operations such as spatial querying, feature extraction, and visualization were executed more efficiently on localized data subsets rather than the full dataset. This structuring facilitated the analysis of geometric consistency, filled volume estimation, and volumetric distribution across the entire 3D model.
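The grouping of points into cubic cells described above can be sketched as a single-level voxel grid in Python. This is an illustrative simplification of the full hierarchical octree (the `voxelize` helper is ours): each point is assigned to a cell keyed by its integer voxel indices, so spatial queries can operate on localized subsets rather than the full dataset.

```python
import math
from collections import defaultdict

def voxelize(points, cell_size):
    """Group 3D points into cubic cells of edge length cell_size.
    Returns a dict mapping integer voxel indices (ix, iy, iz) to the
    list of points falling inside that cell."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (math.floor(x / cell_size),
               math.floor(y / cell_size),
               math.floor(z / cell_size))
        cells[key].append((x, y, z))
    return cells
```

Shrinking `cell_size` increases the cell count and spatial detail at the cost of memory, the same trade-off noted for the octree depth above.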

Figure 7 illustrates the octree representation of the palace structure, and Table 3 presents the computed parameters for each scan. Variations in cell size, cell count, and filled volume were observed across the six scans, reflecting differences in the captured geometry and surface complexity. The balance between cell size and cell count was found to be crucial for accurate volumetric representation. Smaller cell sizes increased the total cell count, enhancing the capture of fine architectural features but often resulting in reduced filled volumes. Conversely, larger cells captured broader regions efficiently, as noted in Scan 1 (Table 3), though this relationship was not linear across all datasets.

Figure 7
A pixelated representation of a historic stone structure with arches and columns, set against a solid dark background. The building appears partially obscured and abstract due to the pixelation.

Figure 7. Octree structure of the palace.

Table 3

Table 3. The properties of the octree structure in each scan.

Overall, no clear correlation was observed between cell count and filled volume, indicating that filled volume depends not only on voxel count but also on spatial content and surface morphology. For example, Scans 5 and 6 demonstrated that finer voxel resolutions yielded detailed structural patterns (high cell counts) with smaller filled volumes, whereas Scan 1 reflected efficient coverage of large structural components with fewer, larger voxels. This analytical comparison emphasizes the importance of adaptive voxel sizing in accurately characterizing spatial heterogeneity within architectural LiDAR datasets.

3.3 Accuracy assessment of 3D point cloud and octree model through morphometric analysis and its validation with the field measurements

Error evaluation determines how accurately the point cloud and octree models represent the actual structure. The error is calculated as the difference between the actual (field-measured) value and the observed (model-derived) value, indicating how close each observed value is to the true measurement. Field measurements therefore serve as the reference against which the model-derived results are tested.

Two statistical measures, Mean Absolute Error (MAE) and Standard Deviation (SD), were used to evaluate the accuracy of the observations, as shown in Equations 3 and 4, respectively. MAE estimates the average absolute difference between the field measurements and those obtained from the point cloud and octree models. SD measures the variability of the measurements around their mean.

MAE = (1/n) Σ_{i=1}^{n} |x_i − y_i|   (3)

where:

x_i - estimated value

y_i - actual value

n - number of measurements

SD = √[(1/n) Σ_{i=1}^{n} (x_i − x̄)²]   (4)

where:

n - total number of data points

x_i - each individual data point

x̄ - mean (average) of all data points
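Equations 3 and 4 can be sketched directly in Python (the helper names are illustrative):

```python
import math

def mae(estimated, actual):
    """Mean Absolute Error (Equation 3) between model-derived and
    field-measured values, paired element-wise."""
    return sum(abs(x - y) for x, y in zip(estimated, actual)) / len(actual)

def std_dev(values):
    """Population standard deviation about the mean (Equation 4)."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
```

For instance, model lengths of 11.49 m and 8.47 m against field values of 11.50 m and 8.50 m give an MAE of 0.02 m, the kind of centimetre-level discrepancy reported in Table 4.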

In this study, the dimensions of the palace were measured on-site. The palace has two distinct width measurements: one obtained from the front perspective (the roadside) and the other from the back view (the opposite end). The same dimensions, including length, width, and height, were extracted from the TLS-derived models, as shown in Figure 8. The lengths L1 and L2, corresponding to the widths W1 and W2, were measured individually to calculate two partial areas, which were then added together to determine the total area of the palace. The heights of the two floors were also measured, with H1 and H2 denoting the ground floor and first floor, respectively. The spans of the main and secondary doorways and the widths of the columns were measured as well. The field measurements were then compared with those obtained from the two models, and the percentage error of each parameter was determined. Table 4 provides the dimensional details of these parameters. The table shows differences between the point cloud and octree measurements, which may arise from equipment inaccuracy, measurement error (human or instrumental), or adjustments in the calculation methods (such as rounding). This error calculation establishes the accuracy of the models relative to the field results.

Figure 8
Two 3D renderings of a building are shown for comparison. Figure a displays two measurements with distances of 11.491165 and 8.474359, marked by magenta lines. Figure b shows similar measurements with values of 11.494180 and 8.574274, also marked with magenta lines. Both images include tables with delta X, Y, and Z values in different colors.

Figure 8. Typical length measurement taken from (a) the point cloud model (top) and (b) the octree model (bottom).

Table 4

Table 4. Dimensional details.

3.4 Damage assessment

This study focuses on assessing surface damage, primarily spalling and erosion, observed on masonry walls. Surface quality inspection refers to the evaluation of the existing surface condition, which plays a critical role in assessing the safety, durability, and reliability of historical and structural assets (Wu et al., 2022). The investigation concentrated on two selected walls exhibiting visible surface irregularities and volumetric degradation. These volumetric variations in bricks and mortar arise due to natural expansion, shrinkage, thermal fluctuations, and corrosion-related processes (Prizeman et al., 2017).

The evaluation aimed to quantify the volume of material loss caused by the removal or deterioration of plaster and constituent masonry materials in different wall regions. For detailed spatial analysis, two damaged walls, designated wall1 and wall2, were examined, with four representative sections (A1, A2, B1, and B2) delineated for close study, as shown in Figure 9. Figure 10 presents scalar field representations of these walls, where the colour gradient (Blue < Green < Yellow < Red) indicates the intensity of surface irregularities. Among these, A1 and B1 correspond to brick masonry zones, while A2 and B2 correspond to stone masonry zones.

Figure 9

Figure 9. (a) wall1 and (b) wall2 from the laser scan.

Figure 10

Figure 10. Scalar fields with intensities of (a) wall1 and (b) wall2.

3.4.1 Volumetric assessment using raster analysis

The material loss across sections A1, A2, B1, and B2 was quantified using a 3D spatial analysis workflow that integrates LiDAR-derived point cloud processing with raster-based volumetric estimation. Deterioration was recorded in both mortar joints and brick/stone units, with loss expressed in volumetric terms. Segmented data for each section were exported in standard GIS-compatible formats for integration into ArcMap, a platform suitable for 3D visualization, surface analysis, and geometric quantification.

In ArcGIS, the point cloud data were initially visualized as discrete points and then converted to a Triangulated Irregular Network (TIN) model using 3D Analyst tools (Kushwaha et al., 2019). The TIN framework enables detailed surface representation and facilitates elevation-based analysis (López-Herrera et al., 2025). Since no pre-damage or archival 3D models were available, the relatively undamaged portions of each wall were used as local reference surfaces, representing the presumed original geometry. While this assumption introduces some uncertainty due to natural irregularities in historical masonry, it provides a reasonable baseline for quantifying relative volumetric losses caused by spalling and erosion. Separate TIN models were constructed for both damaged and undamaged (reference) areas. Figure 11 shows the resulting TIN models for all four sections and their corresponding reference surfaces. Even the reference model exhibited minor surface undulations, demonstrating the sensitivity of LiDAR data in capturing minute textural variations on the wall surface.

Figure 11. TIN models with elevation for sections A1, A2, B1, and B2, and the reference portions.

In the TIN representation, white zones denote higher elevations (surface protrusions), while blue zones represent lower elevations (surface recessions), enabling intuitive visual identification of damage extents. The TIN model thus served as a spatial foundation for differentiating degraded areas from intact regions.

Following TIN creation, the point data were converted into raster format, with each raster cell storing an elevation value. Using the Raster Calculator tool, the elevation of the damaged wall surface was subtracted from that of the reference (undamaged) wall:

ΔElevation = E_reference − E_damaged

This difference map isolates the elevation discrepancies corresponding to material loss zones. The resulting raster delineates areas of reduced elevation, representing spalled or eroded regions. The total volume of material loss was then computed by summing the positive elevation differences (cells where the damaged surface lies below the reference) across all raster cells and multiplying by the cell area, producing a spatially resolved quantification of surface degradation.
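The differencing-and-summation step can be sketched in a few lines. The following is a minimal illustration of the principle, not the ArcGIS Raster Calculator implementation; plain Python lists stand in for elevation rasters, and all values are synthetic:

```python
# Raster-differencing sketch: elevations in metres on a regular grid.
# Cells where the damaged surface lies below the reference contribute
# (E_reference - E_damaged) * cell_area to the loss volume.

def material_loss_volume(ref, dam, cell_area):
    """Sum positive (reference - damaged) differences times cell area."""
    volume = 0.0
    for ref_row, dam_row in zip(ref, dam):
        for e_ref, e_dam in zip(ref_row, dam_row):
            diff = e_ref - e_dam          # ΔElevation = E_reference - E_damaged
            if diff > 0:                  # only recessed (spalled/eroded) cells
                volume += diff * cell_area
    return volume

# 2 x 2 grid with 0.01 m^2 cells: one cell spalled 5 cm deep
reference = [[1.00, 1.00], [1.00, 1.00]]
damaged   = [[1.00, 0.95], [1.00, 1.00]]
print(material_loss_volume(reference, damaged, cell_area=0.01))  # ~0.0005 m^3
```

In the actual workflow the same subtraction is performed per raster cell on the TIN-derived elevation grids, so the result scales directly with raster resolution.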

This workflow provides both a numerical estimate of material loss and a visual map of deterioration intensity for brick and stone masonry. Such information is critical for designing targeted restoration strategies and understanding weathering mechanisms that affect long-term material performance in heritage structures.

3.4.2 Volumetric assessment by convex hull algorithm

For further accuracy, volumetric computations were also performed using a Convex Hull-based approach following surface reconstruction. Since raw LiDAR point clouds consist of unstructured points without explicit geometric continuity, a surface reconstruction step was essential to generate a continuous mesh that represents the wall geometry. This surface continuity not only facilitates visual analysis but also supports Finite Element Analysis (FEA), digital twin synchronization, and structural deformation studies by ensuring geometric smoothness and topological coherence. Figure 12a illustrates a typical reconstructed surface, while Figure 12b shows convex hull fitting around the reconstructed geometry.

Figure 12. (a) Typical surface reconstruction. (b) Typical convex hull fitting.

The Ball Pivoting Algorithm (BPA) was adopted for surface reconstruction. BPA generates a triangle mesh from a dense point cloud by simulating a virtual sphere that “rolls” over the points, forming a triangular facet wherever the sphere rests on three points without containing any other point (Bernardini et al., 1999; Maiti and Chakravarty, 2016). The process starts with a seed triangle, and the sphere pivots around its edges to form adjacent triangles, resulting in a geometrically coherent mesh. The algorithm is geometrically intuitive, memory-efficient, and capable of producing topologically consistent manifold surfaces. Its effectiveness, however, depends on point-cloud density, noise level, and ball-radius selection.

Incorporating voxel-based indexing further enhances BPA robustness for large and irregular datasets, making it suitable for surface modelling and photogrammetry-driven 3D reconstruction from dense multi-view imagery (Ma and Li, 2019). Surface reconstruction prior to convex hull generation ensures volumetric consistency, minimizing noise-related irregularities and enabling the convex hull to tightly envelop the true geometry of the damaged surface.

Subsequently, the Convex Hull algorithm was applied to the reconstructed sections (A1, A2, B1, B2) to compute the volumetric loss due to material detachment. The convex hull fitting technique (Gao et al., 2013) estimates the volume of a damaged region by enclosing all surface points within the smallest convex polyhedron. The resulting envelope tightly wraps the damaged surface, providing a straightforward and efficient volumetric estimation method. This approach is particularly well-suited for identifying spalling, surface delamination, and other forms of localized material loss.
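The convex-hull volume computation can be illustrated compactly, assuming SciPy is available; the point set below is a synthetic stand-in for a segmented damage region, not project data:

```python
# Convex-hull volume sketch for a segmented damage region (synthetic data).
import numpy as np
from scipy.spatial import ConvexHull

# Synthetic "cavity" points: corners of a 0.2 x 0.1 x 0.05 m box plus
# interior scatter, standing in for the points of a segmented spall zone.
rng = np.random.default_rng(0)
corners = np.array([[x, y, z] for x in (0, 0.2) for y in (0, 0.1) for z in (0, 0.05)])
interior = rng.uniform([0, 0, 0], [0.2, 0.1, 0.05], size=(200, 3))
points = np.vstack([corners, interior])

# Smallest convex polyhedron enclosing all points; .volume gives its 3D volume.
hull = ConvexHull(points)
print(f"hull volume: {hull.volume:.6f} m^3")  # 0.001000 m^3 for this box
```

Because the hull is the smallest convex envelope, any concavity in the real damage geometry is bridged over, which is why the text notes a tendency to overestimate relative to the raster method.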

The combined use of BPA-based surface reconstruction and convex hull fitting offers a balanced framework ensuring geometric precision, minimizing interpolation errors, and yielding accurate volumetric estimates of the deteriorated regions.

4 Results and discussions

4.1 3D reconstruction of Abdullahpuram palace using LiDAR technology

The 3D reconstruction of the Abdullahpuram Palace was conducted using terrestrial LiDAR (Light Detection and Ranging), an advanced non-destructive imaging technique also referred to as Terrestrial Laser Scanning (TLS). The process produced a dense, georeferenced point cloud model that captured, with millimetric accuracy, the intricate architectural details, spatial geometry, and surface morphology of the heritage structure, including its ancient architectural style, design, and construction materials. The dataset provides both a digital replica of the physical structure and a spatial archive for heritage documentation, structural diagnostics, and future restoration planning.

4.1.1 Architectural interpretation through 3D reconstruction of the palace

The reconstructed model provides an interpretive visualization of the architectural elements, revealing the stylistic and structural coherence of the palace’s design. All identified features were interpreted directly from the regenerated 3D model shown in Figure 5, substantiating the relevance of model-based architectural analysis in heritage documentation.

4.1.1.1 Symmetry of the palace

The facade exhibits bilateral symmetry and modular proportions, a hallmark of Indo-Islamic architecture (Ebba Koch, n.d.; Gupta et al., 2024). The analysis was performed by drawing a central axis through the facade and measuring corresponding halves. The equal dimensions validated the bilateral symmetry, as shown in Figure 5. This structural regularity, reconstructed digitally through TLS, confirms the geometric discipline followed by traditional craftsmen and ensures objective quantification of spatial uniformity. The symmetry reflects aesthetic balance and structural harmony, enhancing both visual rhythm and spatial coherence.

4.1.1.2 Arches and openings

The 3D model identifies segmental and pointed arches, as shown in Figure 5, which are typical of Mughal-Deccan influence (Kishor and Hadi Ensaif, 2025). The larger arched openings serve as doorways or iwans leading into the main halls, while vertically aligned smaller arches act as ventilating niches, consistent with climatic adaptation strategies. The ability of the 3D reconstruction to distinctly segregate arch typologies demonstrates the potential of TLS-based modelling in architectural taxonomy and digital archiving. The ground floor comprises small arched niches, known as lamp niches, used for decorative or illumination purposes, whereas the first floor exhibits repetitive arched niches functioning as ventilation openings.

4.1.1.3 Decorative motifs and surface articulation

Motifs, stucco ornamentation, and carved recesses surrounding arches are discernible from the 3D model, reflecting Indo-Islamic decorative traditions (Gupta et al., 2024). These ornamentations, extracted from the reconstructed data, reveal the fine surface reliefs that are difficult to measure in field surveys alone, emphasizing the value of 3D scanning in micro-level documentation.

4.1.1.4 Interior dome

The interior features a domical ceiling resting on squinches, concealed externally by a flat terrace roof with gentle falls for drainage. The reconstruction clarifies this structural duality, an inner domical load-bearing system beneath an outer flat terrace, characteristic of Indo-Islamic geometry. The dome enhances acoustic quality and visual aesthetics, while stucco ornamentation further enriches the interior detailing.

Overall, the regenerated 3D model successfully identified, classified, and documented critical architectural typologies, proving that the reconstruction is not merely visual but a tool for geometric validation, stylistic interpretation, and digital preservation.

4.1.2 Point cloud model

The initial TLS point cloud data were processed to remove noise and redundant points, yielding a refined dataset with high spatial accuracy. Each point in the model was defined by XYZ coordinates and colour (RGB) intensity values. This dataset forms the foundation for geometric measurements, volumetric computations, and surface analysis.

The point cloud captured intricate geometries and was further processed for alignment, segmentation, and feature extraction. The data were versatile and easily exportable into formats compatible with modelling software for conservation design, structural analysis, and reconstruction planning. Additionally, the point cloud serves as a digital heritage archive that can be repeatedly accessed for non-invasive structural analysis and restoration documentation.

4.1.3 Octree model

The octree model, derived from the 3D point cloud, organizes spatial data hierarchically. Each node subdivides into eight octants, resulting in a tree-like structure. This structuring enhances computational efficiency, supports multi-resolution visualization, and allows faster data queries without loss of geometric integrity.
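The hierarchical subdivision can be illustrated with a minimal occupancy count. The sketch below (names and values are illustrative, not the software's implementation) shows how cell size, cell count, and filled volume trade off as octree depth increases:

```python
# Minimal octree-occupancy sketch: subdivide a cubic bounding box to a fixed
# depth and count the leaf cells that contain at least one point. At depth d
# there are (2**d)**3 candidate cells, each of side length size / 2**d.

def occupied_cells(points, origin, size, depth):
    """Return the set of occupied leaf-cell indices at the given octree depth."""
    n = 2 ** depth                      # cells per axis; cell size = size / n
    cells = set()
    for x, y, z in points:
        i = min(int((x - origin[0]) / size * n), n - 1)
        j = min(int((y - origin[1]) / size * n), n - 1)
        k = min(int((z - origin[2]) / size * n), n - 1)
        cells.add((i, j, k))
    return cells

pts = [(0.1, 0.1, 0.1), (0.9, 0.9, 0.9), (0.12, 0.11, 0.09)]
for depth in (1, 2, 3):
    filled = occupied_cells(pts, (0, 0, 0), 1.0, depth)
    cell_vol = (1.0 / 2 ** depth) ** 3
    # Filled volume = occupied cells x single-cell volume; shrinks with depth
    print(depth, len(filled), len(filled) * cell_vol)
```

The same mechanism underlies the Table 3 observations: finer cells track the true surface more closely (higher cell count) while the summed filled volume decreases, because fewer empty margins are counted.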

Table 3 presents variations in cell size, cell count, and filled volume across the different scans, reflecting differences in surface complexity and feature density. Smaller cell sizes raise the cell count and capture finer surface detail, but at greater computational cost and without necessarily increasing the filled volume, as Scans 5 and 6 illustrate. Larger cells simplify the geometry and can capture greater filled volumes with fewer cells, as in Scan 1, though this pattern is not consistent across all scans. The absence of a clear correlation between cell count and filled volume indicates that volumetric representation depends not only on cell count but also on cell size, octree depth, and the specific structure or content being scanned; these parameters must therefore be tuned to the feature scale and analytical purpose.

The octree structure thus provides a scalable spatial framework that supports resource allocation, infrastructure planning, and volumetric modelling in digital conservation contexts.

4.1.4 Comparative analysis of point clouds and octree models

Both point cloud and octree models serve distinct but complementary purposes. The point cloud directly represents spatial coordinates, while the octree conveys hierarchical spatial organization. While point clouds retain maximum geometric detail, octree-based models enhance storage efficiency, rendering speed, and analytical scalability.

The point cloud offers intricate detail suitable for architectural documentation and fine-scale measurement. The octree model, though less detailed, is computationally optimized, making it ideal for volumetric analyses and integration into GIS or BIM systems. Their combined application thus provides an optimal framework: the point cloud for micro-level accuracy and the octree for macro-level efficiency.

4.1.5 Validation of the 3D model (cloud-to-mesh distance analysis)

Geometric validation was conducted using Cloud-to-Mesh (C2M) analysis to assess the alignment accuracy between the reconstructed model and the reference TLS mesh. Distances were computed as orthogonal offsets from evaluation points to the reference mesh with a maximum search radius of 5 mm.

The mean deviation was 0.185 mm, standard deviation 0.063 mm, and RMSE 0.196 mm. These minimal deviations confirm the geometric reliability of the reconstruction, with negligible distortion or registration error. Figure 13 presents the deviation map and corresponding Gauss distribution.
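For signed offsets, the three reported statistics are linked by the identity RMSE² = mean² + SD², which the reported values satisfy (0.185² + 0.063² ≈ 0.196²). A small sketch with synthetic distances (in mm, not the project's C2M output) illustrates the computation:

```python
# C2M summary statistics from a list of signed point-to-mesh distances (mm).
import math
import statistics

def c2m_stats(distances):
    mean = statistics.fmean(distances)
    sd = statistics.pstdev(distances)   # population SD over all evaluation points
    rmse = math.sqrt(sum(d * d for d in distances) / len(distances))
    return mean, sd, rmse

signed_mm = [0.12, 0.18, 0.25, 0.19, 0.15, 0.22]   # synthetic C2M offsets
mean, sd, rmse = c2m_stats(signed_mm)
print(f"mean={mean:.3f} mm  sd={sd:.3f} mm  rmse={rmse:.3f} mm")
```

The identity provides a quick internal consistency check on any reported C2M triple before accepting a validation result.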

Figure 13. (a) Deviation map and (b) Gauss distribution of the model.

The statistical consistency of these parameters demonstrates that the 3D reconstruction retained the precision of the original TLS data, validating its applicability for structural analysis, deformation monitoring, and heritage documentation.

4.2 Accuracy assessment

Accuracy was quantified by comparing TLS-derived measurements against field-surveyed dimensions using Mean Absolute Error (MAE) and Standard Deviation (SD). The point cloud model exhibited an MAE of 0.0947 m and SD of 0.2270 m, while the octree model showed 0.1652 m and 0.25 m, respectively.
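The two metrics can be reproduced as follows; the dimension values below are synthetic placeholders standing in for the field-surveyed and model-derived measurements, not the study's data:

```python
# MAE and SD of absolute errors between model-derived and field-surveyed
# dimensions (synthetic values, in metres).
import statistics

def mae_sd(model, field):
    errors = [abs(m - f) for m, f in zip(model, field)]
    return statistics.fmean(errors), statistics.pstdev(errors)

field  = [3.20, 1.05, 2.40, 0.90]   # tape-measured reference dimensions
points = [3.28, 1.01, 2.33, 0.95]   # same dimensions from the point cloud
octree = [3.05, 1.20, 2.22, 1.04]   # same dimensions from the octree model

for name, model in (("point cloud", points), ("octree", octree)):
    mae, sd = mae_sd(model, field)
    print(f"{name}: MAE={mae:.4f} m, SD={sd:.4f} m")
```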

The lower MAE and narrower error distribution for the point cloud in Figure 14 demonstrate its superior precision, particularly for dimensional and surface-based analyses.

Figure 14. Error distribution of the point cloud and octree models.

Both models produced occasional outliers, primarily associated with area and opening measurements. The octree model exhibited broader deviations due to voxel approximation and discretization. Thus, while the octree facilitates rapid spatial assessment, the point cloud remains indispensable for sub-decimeter accuracy.

While the overall accuracy is commendable, the minor discrepancies in area estimation arise from voxel discretization and projection rounding, rather than from measurement error.

Based on the findings of the statistical analysis, the following hypotheses can be proposed:

1. The larger 3D cells in the octree model may represent multiple points, leading to a less precise approximation when the depth is not adequately refined. This condition results in higher inaccuracies during dimension measurements compared to the point cloud model.

2. Additional measurement errors might occur if the distribution of points within the octree cells is not uniform when compared to the actual point cloud data.

3. Errors in the point cloud data may increase due to inaccurate point selection during the measurement process.

Overall, the analysis suggests that both measurement techniques have their strengths and weaknesses, warranting careful consideration of their applications in precise geometric assessments.

4.3 Damage assessment

4.3.1 Visual analysis of damaged walls

Figures 9, 10 depict surface damage visualized through intensity and scalar field representations. Low-intensity (blue) regions denote degraded surfaces with low reflectivity, whereas high-intensity (red) regions correspond to intact and smooth surfaces.

The correlation between laser return intensity and surface roughness validates the efficacy of LiDAR in differentiating between damaged and undamaged zones.

Raster-based elevation models shown in Figure 11 corroborate scalar field observations, reinforcing the consistency of damage representation across analytical techniques.

4.3.2 Profile analysis of surface irregularities

Profiles derived from wall sections A1, A2, B1, and B2 as in Figure 15 illustrate variable depths of deterioration. Pronounced cavities and uneven contours reflect localized spalling and erosion. The reference section demonstrates an undisturbed alignment, serving as the comparative baseline.

Figure 15. Side-view profiles of (a) A1, (b) A2, (c) B1, (d) B2 and (e) the reference section.

This cross-sectional evaluation establishes a direct geometric correlation between the depth of damage and its spatial extent, validating the utility of TLS in quantifying surface loss.

4.3.3 Quantitative assessment of material loss

Material loss was quantified using two complementary methods, namely raster-based volumetric estimation and convex hull fitting, as summarized in Table 5. The raster approach computed elevation differences between damaged and reference surfaces, while convex hull fitting approximated the enclosing envelope of damaged zones.

Table 5. Quantification of surface damage.

Raster analysis yielded more realistic estimates due to its sensitivity to surface irregularities, whereas convex hull fitting tended to overestimate volumes owing to its convex assumption. Both methods, however, produced comparable results within acceptable error margins, confirming the reliability of TLS-based volumetric evaluation for restoration planning.

4.4 Applicability to conservation and retrofitting

The 3D point cloud and octree models extend beyond visualization, offering quantifiable insights into surface degradation, volumetric deformation, and material erosion. These models form a digital baseline for monitoring deterioration trends and guiding conservation priorities. Segmented elements such as arches, walls, and cornices can be individually assessed for structural integrity and load response. Furthermore, integration with Finite Element Modelling (FEM) and Building Information Modeling (BIM) frameworks allows simulation of stress distribution and retrofitting strategies.

Thus, the study bridges the gap between 3D documentation and actionable conservation planning, demonstrating the operational value of LiDAR-based data for informed heritage management.

5 Conclusion

The study demonstrates the effectiveness of terrestrial LiDAR in achieving high-precision 3D reconstruction, geometric validation, and quantitative damage assessment of heritage structures.

The point cloud model provided sub-centimeter accuracy, enabling detailed architectural interpretation, while the octree model optimized volumetric computation and visualization efficiency. The dual application of these models supports both detailed documentation and large-scale structural assessment.

Through raster and convex hull analyses, material loss was quantified objectively, highlighting the potential of TLS in condition monitoring and restoration planning.

Overall, this integrative framework advances the scope of digital heritage documentation by coupling geometric accuracy with conservation applicability, establishing a replicable model for architectural and structural evaluation of historic monuments. It establishes a transferable model for interdisciplinary domains including digital preservation, archaeology, and architectural forensics. Furthermore, it aligns with and directly supports sustainable infrastructure development as articulated in UN Sustainable Development Goals, particularly SDG 9 (Industry, Innovation, and Infrastructure) and SDG 11.4 (Conservation of cultural and natural heritage).

For subsequent research, the integration of Machine Learning (ML), Deep Learning (DL), and Building Information Modeling (BIM), coupled with Finite Element Analysis (FEA), is recommended to enable predictive modelling of structural behavior under varied stress conditions. Such advancements would further enhance the precision, automation, and scalability of heritage documentation workflows.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

AM: Investigation, Conceptualization, Methodology, Writing – original draft, Software. VN: Writing – review and editing, Validation, Visualization, Supervision.

Funding

The authors declare that no financial support was received for the research and/or publication of this article.

Acknowledgements

The authors sincerely thank the management of Vellore Institute of Technology (VIT), Vellore, Tamil Nadu, for consistently providing the facilities necessary to carry out this research.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The authors declare that no Generative AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Asher, C. B. (1992). Architecture of Mughal India. Cambridge University Press. doi:10.1017/CHOL9780521267281

CrossRef Full Text | Google Scholar

ASTM E 2087 (2019). Specification for 3D imaging data exchange, version 1.0. West Conshohocken, PA: ASTM International. doi:10.1520/E2807-11R19

CrossRef Full Text | Google Scholar

Azmat, S., Hadi, A., and Soni Azmat, C. (2018). Geometrical pattern designs used in Mughal architecture in India during the period of 1526-1737. Int. J. Home Sci. 4 (Issue 2). Available online at: www.homesciencejournal.com.

Google Scholar

Balestrieri, M., Valmori, I., and Montuori, M. (2024). UAS and TLS 3D data fusion for built cultural heritage assessment and the application for St. Catherine monastery in Ferrara, Italy. Int. Arch. Photogrammetry Remote Sens. Spatial Inf. Sci. - ISPRS Archives 48, 9–16. doi:10.5194/isprs-archives-XLVIII-M-4-2024-9-2024

CrossRef Full Text | Google Scholar

Bernardini, F., Mittleman, J., Rushmeier, H., Silva, C., Taubin, G., and Member, S. (1999). The bal I-Pivoting algorithm for surface reconstruction. IEEE Trans. Vis. Comput. Graph. 5 (Issue 4).

Google Scholar

Besl, J. P., and McKay, D. N. (1992). A method for registration of 3-D shapes. IEEE Trans. Pattern Analysis Mach. Intell. 14 (2), 239–256. doi:10.1109/34.121791

CrossRef Full Text | Google Scholar

Bouziani, M., Chaaba, H., and Ettarid, M. (2021). Evaluation of 3D building model using terrestrial laser scanning and drone photogrammetry. Int. Archives Photogrammetry, Remote Sens. Spatial Inf. Sci. - ISPRS Archives 46, 39–42. doi:10.5194/isprs-archives-XLVI-4-W4-2021-39-2021

CrossRef Full Text | Google Scholar

Bryan, P., Blake, B., and Bedford, J. (2009). Metric survey specifications for cultural heritage.

Google Scholar

Cha, G., Lee, D., Park, S., and Park, I. (2019). Development of structural shape information model using terrestrial laser scanning based on the octree space division method. Indian J. Eng. Mater. Sci. 26, 168–175. doi:10.1061/(ASCE)CO.1943-7862.0001701

CrossRef Full Text | Google Scholar

Chen, S. Y., Chang, S. F., and Yang, C. W. (2021). “Generate 3D triangular meshes from spliced point clouds with cloudcompare,” in Proceedings of the 3rd IEEE Eurasia conference on IOT, communication and engineering 2021, ECICE 2021, 72–76. doi:10.1109/ECICE52819.2021.9645689

CrossRef Full Text | Google Scholar

Chenaux, A., Murphy, M., Keenaghan, G., Jenkins, J., McGovern, E., and Pavia, S. (2011). Combining a virtual learning tool and onsite study visits of four conservation sites in Europe. Geoinformatics FCE CTU 6, 157–169. doi:10.14311/gi.6.21

CrossRef Full Text | Google Scholar

Ebba Koch. (n.d.). Mughal architecture.

Google Scholar

Elseberg, J., Borrmann, D., and Nüchter, A. (2013). One billion points in the cloud - an octree for efficient processing of 3D laser scans. ISPRS J. Photogrammetry Remote Sens. 76, 76–88. doi:10.1016/j.isprsjprs.2012.10.004

CrossRef Full Text | Google Scholar

England, H. (2015). Geospatial survey specifications for cultural heritage.

Google Scholar

Gao, M., Cao, T. T., Nanjappa, A., Tan, T. S., and Huang, Z. (2013). GHull: a GPU algorithm for 3D convex hull. ACM Trans. Math. Softw. 40 (1), 1–19. doi:10.1145/2513109.2513112

CrossRef Full Text | Google Scholar

González-Aguilera, D., Rodriguez-Gonzalvez, P., Armesto, J., and Lagüela, S. (2012). Novel approach to 3D thermography and energy efficiency evaluation. Energy Build. 54, 436–443. doi:10.1016/j.enbuild.2012.07.023

CrossRef Full Text | Google Scholar

Gupta, A., Amir Khan, M., and Arshad Ameen, M. (2024). Geometric patterns in Mughal architecture: under a quantitative lens. ShodhKosh J. Vis. Perform. Arts 5 (6), 2746–2759. doi:10.29121/shodhkosh.v5.i6.2024.61

CrossRef Full Text | Google Scholar

Kishor, S., and Hadi Ensaif, H. (2025). Mughal Islamic architecture and its impact on Eastern civil architecture. Mon. Peer-Reviewed J. IC Value 87 (11), 2455. doi:10.2015/IJIRMF/202503019

CrossRef Full Text | Google Scholar

Klapa, P., and Gawronek, P. (2023). Synergy of geospatial data from TLS and UAV for heritage building information modeling (HBIM). Remote Sens. 15 (1), 128. doi:10.3390/rs15010128

CrossRef Full Text | Google Scholar

Kurdi, F. T. (2023). Efficiency of terrestrial laser scanning in survey works: assessment, modelling, and monitoring. Int. J. Environ. Sci. and Nat. Resour. 32 (2). doi:10.19080/ijesnr.2023.32.556334

CrossRef Full Text | Google Scholar

Kushwaha, S. K. P., Pande, H., and Raghavendra, S. (2019). Concrete volume loss calculation of structures using terrestrial laser scanner (TLS). J. Geomatics 13 (2). doi:10.1002/pbc

CrossRef Full Text | Google Scholar

Lerma, J. L., Navarro, S., Cabrelles, M., and Villaverde, V. (2010). Terrestrial laser scanning and close range photogrammetry for 3D archaeological documentation: the upper Palaeolithic cave of Parpalló as a case study. J. Archaeol. Sci. 37 (3), 499–507. doi:10.1016/j.jas.2009.10.011

CrossRef Full Text | Google Scholar

Lichti, D. D. (2007). Error modelling, calibration and analysis of an AM-CW terrestrial laser scanner system. ISPRS J. Photogrammetry Remote Sens. 61 (5), 307–324. doi:10.1016/j.isprsjprs.2006.10.004

CrossRef Full Text | Google Scholar

Liu, J., and Li, B. (2024). Terrestrial laser scanning (TLS) survey and building information modeling (BIM) of the edmund pettus bridge: a case study. Int. Archives Photogrammetry, Remote Sens. Spatial Inf. Sci. - ISPRS Archives 48 (1), 379–386. doi:10.5194/isprs-archives-XLVIII-1-2024-379-2024

CrossRef Full Text | Google Scholar

Llabani, A., and Lubonja, O. (2024). Integrating UAV photogrammetry and terrestrial laser scanning for the 3D surveying of The Fortress of Bashtova. WSEAS Trans. Environ. Dev. 20, 306–315. doi:10.37394/232015.2024.20.30

CrossRef Full Text | Google Scholar

López-Herrera, J., López-Cuervo, S., Pérez-Martín, E., Maté-González, M. Á., Izquierdo, C. V., Peñarroya, J. M., et al. (2025). Evaluation of 3D models of archaeological remains of Almenara castle using two UAVs with different navigation systems. Heritage 8 (1), 22. doi:10.3390/heritage8010022

CrossRef Full Text | Google Scholar

Ma, W., and Li, Q. (2019). An improved ball pivot algorithm-based ground filtering mechanism for LiDAR data. Remote Sens. 11 (10), 1179. doi:10.3390/rs11101179

CrossRef Full Text | Google Scholar

Maiti, A., and Chakravarty, D. (2016). Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images. SpringerPlus 5 (1), 932. doi:10.1186/s40064-016-2425-9

PubMed Abstract | CrossRef Full Text | Google Scholar

Marčiš, M., Fraštia, M., and Vošková, K. T. (2024). Potential of low-cost UAV photogrammetry for documenting hard-to-access interior spaces through building openings. Heritage 7 (11), 6173–6191. doi:10.3390/heritage7110290

CrossRef Full Text | Google Scholar

Martino, A., Maria Lingua, A., and Maschio, P. (2024). Affordable sensors for speditive and accurate documentation of built heritage: first tests and preliminary results. Int. Archives Photogrammetry, Remote Sens. Spatial Inf. Sci. - ISPRS Archives 48 (1), 487–493. doi:10.5194/isprs-archives-XLVIII-1-2024-487-2024

CrossRef Full Text | Google Scholar

Mohammadi, M., Rashidi, M., Mousavi, V., Karami, A., Yu, Y., and Samali, B. (2021). Quality evaluation of digital twins generated based on uav photogrammetry and tls: bridge case study. Remote Sens. 13 (17), 3499. doi:10.3390/rs13173499

CrossRef Full Text | Google Scholar

Moyano, J., Nieto-Julián, J. E., Antón, D., Cabrera, E., Bienvenido-Huertas, D., and Sánchez, N. (2020). Suitability study of structure-from-motion for the digitisation of architectural (heritage) spaces to apply divergent photograph collection. Symmetry 12 (12), 1–25. doi:10.3390/sym12121981

CrossRef Full Text | Google Scholar

Pritchard, D., Sperner, J., Hoepner, S., and Tenschert, R. (2017). Terrestrial laser scanning for heritage conservation: the cologne cathedral documentation project. ISPRS Ann. Photogrammetry, Remote Sens. Spatial Inf. Sci. 4 (2W2), 213–220. doi:10.5194/isprs-annals-IV-2-W2-213-2017

CrossRef Full Text | Google Scholar

Prizeman, O. E. C., Sarhosis, V., D’Altri, A. M., Whitman, C. J., and Muratore, G. (2017). Modelling from the past: THE leaning soutwest tower of caerphilly castle 1539-2015. ISPRS Ann. Photogrammetry, Remote Sens. Spatial Inf. Sci. 4 (2W2), 221–227. doi:10.5194/isprs-annals-IV-2-W2-221-2017

CrossRef Full Text | Google Scholar

Sánchez-Aparicio, L. J., del Blanco-García, F. L., Mencías-Carrizosa, D., Villanueva-Llauradó, P., Aira-Zunzunegui, J. R., Sanz-Arauz, D., et al. (2023). Detection of damage in heritage constructions based on 3D point clouds. A systematic review. J. Build. Eng. 77, 107440. doi:10.1016/j.jobe.2023.107440

CrossRef Full Text | Google Scholar

Stylianidis, E. (2019). CIPA - Heritage documentation: 50 years: looking backwards. Int. Archives Photogrammetry, Remote Sens. Spatial Inf. Sci. XLII-2/W14, 1–130. doi:10.5194/isprs-archives-xlii-2-w14-1-2019

Vajiram and Ravi (2025). Indo-islamic architecture: evolution, features, style, types. Available online at: https://vajiramandravi.com/upsc-exam/indo-islamic-architecture/.

Wood, R. L., Mohammadi, M. E., Barbosa, A. R., Abdulrahman, L., Soti, R., Kawan, C. K., et al. (2017). Damage assessment and modeling of the five-tiered pagoda-style nyatapola temple. Earthq. Spectra 33 (Special issue 1), S377–S384. doi:10.1193/121516EQS235M

Wu, C., Yuan, Y., Tang, Y., and Tian, B. (2022). Application of terrestrial laser scanning (TLS) in the architecture, engineering and construction (AEC) industry. Sensors 22 (1), 265. doi:10.3390/s22010265

Yin, Y., and Antonio, J. (2020). Application of 3D laser scanning technology for image data processing in the protection of ancient building sites through deep learning. Image Vis. Comput. 102, 103969. doi:10.1016/j.imavis.2020.103969

Keywords: heritage documentation, 3D reconstruction, damage quantification, raster analysis, convex hull algorithm

Citation: Mahendra Rekha A and Nagarajan V (2025) 3D reconstruction and morphological characteristic study of Abdullahpuram palace building in Vellore-Bangalore NH using terrestrial LiDAR data. Front. Remote Sens. 6:1622210. doi: 10.3389/frsen.2025.1622210

Received: 02 May 2025; Accepted: 25 November 2025;
Published: 16 December 2025.

Edited by:

Wei Lang, Sun Yat-sen University, China

Reviewed by:

Ahmed Mohamed Sallam, Aswan University, Egypt
Suchi Priyadarshani, Free University of Bozen-Bolzano, Italy

Copyright © 2025 Mahendra Rekha and Nagarajan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Vaani Nagarajan, nvaani@vit.ac.in

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.