Inspection of histological 3D reconstructions in virtual reality

3D reconstruction is a challenging current topic in medical research. We perform 3D reconstructions from serial sections stained by immunohistological methods. This paper presents an immersive visualisation solution for quality control (QC), inspection, and analysis of such reconstructions. QC is essential to establish correct digital processing methodologies. Visual analytics, such as annotation placement, mesh painting, and a classification utility, facilitates medical research insights. We propose a visualisation in virtual reality (VR) for these purposes. In this manner, we advance the microanatomical research of human bone marrow and spleen. Both 3D reconstructions and original data are available in VR. Data inspection is streamlined by subtle implementation details and by the general immersion in VR.


Introduction
Visualisation is a significant part of modern research. It is important not only to obtain images, but to be able to grasp and correctly interpret the data. There is always the question whether the obtained model is close enough to reality. Further, the question arises how to discern important components and derive conclusions from the model. The method we present in this paper is a typical example of this very generic problem statement, but it is quite novel in the details. We use virtual reality for quality control (QC) and visual analytics (VA) of our 3D reconstructions in medical research. Figure 1 positions our tool in the visual computing methodology. 3D reconstruction from histological serial sections closes a gap in medical research methodology. Conventional MRI, CT, and ultrasound methods do not have the desired resolution. This is also true for micro-CT and similar methods. Even worse, there is no way to identify the structures of interest in human specimens with non-invasive imaging techniques. In contrast, immunohistological staining provides a reliable method to mark specific kinds of molecules in the cells, as long as these molecules can be adequately fixed after specimen removal. It is possible to unequivocally identify, e.g., the cells forming the walls of small blood vessels, the so-termed endothelial cells. Only a thin section of the specimen (typically about 7 µm thick) is immunostained to improve recognition in transmitted light microscopy. If larger structures, such as microvessel networks, are to be observed, multiple sections in a series ('serial sections') need to be produced. These series are necessary because paraffin sections cannot be cut at more than about 30 µm thickness due to mechanical restrictions. Neither can the staining solution penetrate more than about 30 µm of tissue. It is not possible up to now to generate focused z-stacks from immunostained thick sections in transmitted light. Hence, the information gathered from single sections is limited. Thus, registration is a must.
For 3D reconstruction, serial sections are digitally processed after obtaining large images of each section with a special optical scanning microscope. The resolution is typically in the range 0.11-0.5 µm/pixel; the probe may cover up to 1 cm². With our registration method (Lobachev et al., 2017b) we produce stacks of serial sections that are spatially correct. After some post-processing (e.g., morphological operations, but also interpolation; Lobachev et al., 2017a), a surface polygon structure (a mesh) is obtained from the volume data with the marching cubes algorithm (Lorensen and Cline, 1987; Ulrich et al., 2014). Both the actual mesh construction and the post-processing operations involve some subjective decisions, most prominently the choice of an iso-value for mesh construction. Thus, it is necessary to demonstrate that the 3D reconstruction is correct.
We now present a method to verify that 3D reconstructions tightly correspond to the original immunostained sections by directly comparing the reconstructions to the original serial sections. This method accelerates QC and mesh colouring. QC is facilitated by showing single sections in the visualised mesh, without volume rendering. We inspect, annotate, and colour 3D models (directly compared to the original data, the serial sections) in virtual reality (VR). Figure 2 documents a QC session from 'outside'. The presented method has been extensively used in microanatomical research (Steiniger et al., 2018a,b; Lobachev, 2018; Lobachev et al., 2019).
Our domain experts are much better trained in distinguishing and analysing details in stained histological sections than in reconstructed meshes. However, only 3D reconstructions provide an overview of continuous structures spanning multiple sections, e.g., blood vessels.
Further, the reconstructed mesh permits novel findings. In our prior experience, domain experts often encounter problems when trying to understand 3D arrangements using conventional mesh viewers. For this reason, we previously showed pre-rendered videos to the experts to communicate the reconstruction results and to enable detailed inspection. Videos are, however, limited by the fixed direction of movement and the fixed camera angle. Our experience with non-immersive interactive tools has been negative, both with standard (Cignoni et al., 2008) and with custom (Fig. 16) software. Further, our data suffer from a high degree of self-occlusion. In VR the user can move freely and thus intuitively control the angle of view and the movement through the model. In our experience, immersion allows for a much easier and more thorough inspection of the visualised data. Occlusion of decisive structures in the reconstruction no longer poses a problem.

Contributions
We present a modern, immersive VR approach for inspection, QC, and VA of histological 3D reconstructions. Some of the reconstructed meshes are highly self-occluding; we are able to cope with this problem. For QC, the original data is displayed simultaneously with the reconstruction. User annotations and mesh painting facilitate VA. With our application, novel research results concerning the microanatomy of human spleens became viable for the first time; our findings have been established and published in a series of papers (Steiniger et al., 2018a,b, 2020).

Immersive visualisation
Immersive visualisation is not at all a new idea; Brooks Jr. (1999) quotes a vision of Sutherland (1965, 1968, 1970). However, an immersive scientific visualisation was quite hard to obtain in earlier years, if a multi-million-dollar training simulation was to be avoided (van Dam et al., 2000). The availability of inexpensive hardware such as the Oculus Rift or HTC Vive head-mounted displays (HMD) has massively changed the game recently. This fact (and the progress in GPU performance) allows for good VR experiences on commodity hardware. Immersive visualisation has been previously suggested for molecular visualisation (Stone et al., 2010), for medical volumetric data (Shen et al., 2008; Scholl et al., 2018), for dentistry (Shimabukuro and Minghim, 1998), and for computational fluid dynamics (Quam et al., 2015). More relevant to our approach are the visualisations of the inside of large arterial blood vessels (Forsberg et al., 2000; Egger et al., 2020). A recent trend is to utilise VR in medical education and training (Chan et al., 2013; Mathur, 2015). The availability of head-mounted displays has sparked some new research (Chen et al., 2015; Choi et al., 2016; Inoue et al., 2016) in addition to the already covered fields. Mann et al. (2018) present a taxonomy of various related approaches (virtual reality, augmented reality, etc.). Checa and Bustillo (2020) review VR applications in the area of serious games.
A radically different approach is to bypass mesh generation altogether and to render the volumes directly. Scholl et al. (2018) do so in VR, although with quite small volumes (with a largest side of 256 or 512 voxels). Faludi et al. (2019) also use direct volume rendering and add haptics. In this case the largest volume side was 512 voxels. The key issue in such direct volume renderings is the VR-capable frame rate, which is severely limited in volume rendering of larger volumes. The volumes we used for the mesh generation had 2000 voxels or more at the largest side.
In the next section we discuss additional modern medical applications of VR; however, VR visualisation is a much broader topic (Berg and Vance, 2017; Misiak et al., 2018; Rizvic et al., 2019; Slater et al., 2020).

Virtual reality in medicine
Although there have been precursors long before the 'VR boom', e.g., Tomikawa et al. (2010), most relevant publications on the use of VR in medical research, training, and in clinical applications appeared after 2017. Stets et al. (2017) work with a point cloud; we work with surface meshes. Esfahlani et al. (2018) reported on non-immersive VR in rehabilitation; we use immersive VR in medical research. Uppot et al. (2019) describe VR and AR for radiology; we use histological sections as our input data. Knodel et al. (2018) discuss the possibilities of VR in medical imaging. Stefani et al. (2018) show confocal microscopy images in VR; we use images from transmitted light microscopy. Cal et al. (2019) visualise glial and neuronal cells in VR. We visualise blood vessels and accompanying cell types in lymphatic organs, mostly in the spleen.
Visualisation support for the HTC Vive in popular medical imaging toolkits has been presented before (Egger et al., 2017). Unlike our approach, this method is tied into existing visualisation libraries. Our method is a stand-alone application, even if it is easily usable in our tool pipeline. Further, visualising both the reconstructed meshes and the original input data was a must in our use case. We also implemented a customised mesh painting module for visual analytics. Both our approach and the works of Egger et al. (2017, 2020) generate meshes prior to the visualisation. We discuss the differences between Egger et al. (2020) and our approach on page 23.
El Beheiry et al. (2019) analyse the consequences of VR for research. In their opinion, VR means navigation, but it also allows for better education and provides options for machine learning. They can place annotations in their program, but focus on (immersed) measurements between the selected points. El Beheiry et al. perform some segmentations in VR, but primarily work with image stacks. Our mesh painting in VR can be seen as a form of segmentation, but we perform it on the 3D models, not on image data. Mesh painting uses geodesic distances, as detailed in Section 5.7. Daly (2018; 2019a,b) has similar goals to this work; however, he uses a radically different tool pipeline, relying more on off-the-shelf software, which alleviates a larger part of the software development, but also limits the amount of features. Daly (and also others, e.g., Preim and Saalfeld, 2018) focuses a lot on teaching; we use our system at the moment mostly for research purposes.
The work by Liimatainen et al. (2020) allows the user to inspect 3D reconstructions from histological sections (created in a radically different manner from how we section: they skip a lot of tissue in an effort to cover a larger volume). The user can view the sections and 'interact with single areas of interest'. This is elaborated to be a multi-scale selection of the details, allowing the user to zoom in. We stay mostly at the same detail level, but allow for more in-depth analysis. They put histological sections of tumours in their correct location in the visualisation, which was also one of the first requirements for our tool, as Section 3.1 details.
We are not aware of other implementations of advanced VR- and mesh-based interactions, such as our mesh painting that follows blood vessels (Section 5.7). To our knowledge, annotations have never before been implemented in the manner we use: the markers are preserved after the VR session and can be used in mesh space for later analysis. This paper presents both of those features. In general, most VR-based visualisations focus on presentation and exploration of the data. We do not stop there, but also perform a lot of visual analytics.

Why non-VR tools do not suffice
While enough non-VR tools for medical visualisation exist, such as 3D Slicer (Pieper et al., 2004; Kikinis et al., 2014), ParaView (Ahrens et al., 2005; Ayachit, 2015), or MeVisLab (Silva et al., 2009; Ritter et al., 2011), we are proponents of VR-based visualisation. Rudimentary tasks in QC can be done, e.g., in 3D Slicer or using our previous work, a custom non-VR tool (detailed below on page 21), but in our experience, our VR-based QC was much faster and also easier for the user. The navigation and the generation of insights are a larger problem with non-VR tools. The navigation in VR is highly intuitive. A lot of insight can be gathered by simply looking at the model from various views.
The relation of implementation effort to usability impact was favourable for our VR tool. The complexity of software development with large standard components also plays a role here. We base our VR application heavily on available components, such as Steam VR and the VCG mesh processing library, as Section 4.1 details. However, our tool is not an amalgamation of multiple existing utilities (e.g., using Python or shell scripts as glue), but a stand-alone application written in C++.
Merely paging through the registered stack of serial sections does not convey a proper 3D perception. Single entities in individual sections (e.g., capillaries) have a highly complex shape and are entangled among similar objects. While it is possible to trace a single entity through a series, gaining a full 3D perception is impossible without a full-fledged 3D reconstruction. An inspection of our reconstructions in VR (Steiniger et al., 2018a,b, 2020) was much faster than a typical inspection of 3D data without VR (Steiniger et al., 2016), as Section 6 details.

VR visualisation: Requirements
Our domain experts provided feedback on the earlier versions of the software in order to shape our application. The following features were deemed necessary by medical researchers:
• Load multiple meshes corresponding to parts of the model and switch between them. This allows for the analysis of multiple 'channels' from different stainings.
• Load the original data as a texture on a plane and blend it in VR at will at the correct position. The experts need to discriminate all details in the original stained sections.
• Remove the reconstructed mesh to see the original section underneath.
• Provide a possibility to annotate a 3D position in VR. Such annotations are crucial for communication and analysis.
• Adjust the perceived roles of parts of the meshes by changing their colour. Colour changes form the foundation of visual analytics.
• Cope with very complex, self-occluding reconstructions. Otherwise it is impossible to analyse the microvasculature in thicker reconstructions (from about 200 µm in the z axis onward).
• Allow free user movement. This issue is essential for long VR sessions. Basically, no movement control (e.g., flight) is imposed on the user. In our experience, free user movement drastically decreases the chances of motion sickness.
• Provide a possibility for voice recording in annotations (work in progress).
• Design a method for sharing the view point and current highlight with partners outside VR (trivial with Steam VR and its display mirroring), and for communicating the findings from live VR sessions as non-moving 2D images in research papers (an open question).
Figure 3 showcases some important steps of this pipeline.

Histological background
The human spleen is a secondary lymphatic organ which serves to immunologically monitor the blood. In order to intensify the contact between blood-borne molecules, which provoke immune reactions, and the specific immunocompetent lymphocytes and macrophages, the spleen harbours a so-termed 'open circulation system'. This system is unique to the spleen. It does not comprise continuous arteries, arterioles, capillaries, venules, and veins as in other organs; instead, there is an open vessel-free space between the capillaries and the beginning of the draining venous system, which is passed by all constituents of the blood. In addition, the initial venous vessels which re-collect the blood into the circulation system are also organ-specific and are termed 'sinuses'.
In humans, the initial parts of the splenic capillary network are covered by peculiar multicellular arrangements termed 'capillary sheaths'. The detailed function of these sheaths is unknown, but comparative anatomy suggests that they collect certain foreign molecules from the blood and guide the immigration of special immunocompetent lymphocytes (B-lymphocytes) into the spleen.
Prior to our works (Steiniger et al., 2018a,b), the arrangement of capillary sheaths had never been shown in three dimensions. It was unknown whether all splenic capillaries are covered by sheaths, how long the sheaths are, what shape they have and, finally, which cell types they consist of. In addition, the location of the sheaths with respect to the open ends of the capillaries feeding the 'open circulation' had remained enigmatic.
During recent years our research (Steiniger et al., 2018a,b, 2020) has clarified many of these questions. We now report an advanced study comprising 3D models derived from up to 150 serial paraffin sections stained for conventional transmitted light microscopy, utilising three different chromogens (brown, blue, and red) to visualise four molecules by immunohistological methods. In detail, we demonstrate smooth muscle alpha-actin (SMA), CD34, CD271, and CD20 (Table 1).

Input data
The input data was generated from the registered stack of serial sections. A typical data volume was 2.3k × 2.3k × 161 voxels, with z-axis interpolation. The original data typically featured 21 to 24 sections, but we have used up to 150 sections in some reconstructions. The final meshes typically had 1.7-2.3 M vertices. (Most of the GPU memory used by the application was occupied by textures anyway.) Real-time rendering was possible at the Vive-native resolution at 90 fps. Our experiments with the Valve Index ran at even higher frame rates. Original data were projected as textures, typically at 2.3k × 2.3k pixels. Although we experimented with showing all sections at once, in productive use only one section was shown at a time.
We quality control the following data sets, derived from the bone marrow of a 53-year-old male and from the spleen of a 22-year-old male. Acquisition of the specimens complied with the ethical regulations of Marburg University Hospital at the time of processing.

Components
Our application makes use of existing software libraries. We load meshes with the VCG library. Multiple meshes with vertex colours are supported. We utilise OpenGL 4.2. We enable back face culling, multisampling, and mipmaps. The section textures are loaded with the FreeImage library. The Steam VR library is used for interaction with the Vive controllers and the headset.
With respect to hardware, the system consists of a desktop computer with a Steam VR-capable GPU and a Steam VR-compatible headset with controllers.

Controls
For control, a simple keyboard-and-mouse interface (for debugging outside VR), an XBox One controller, and Steam VR-compatible controllers can all be used. Our initial idea was to use an XBox 360 or an XBox One controller, as such controllers provide a simple control metaphor. However, the expert users were not acquainted with gaming controllers and could not see the XBox One controller in VR. Thus, initial error rates were high when they, e.g., tried to simultaneously use an 'X' key and the 'D-pad' blind. (All images are produced with our VR tool. Similar illustrations can be found in Steiniger et al., 2020.)
Hence, a more intuitive approach with the native Vive controllers was targeted. We have kept the keyboard-and-mouse and the XBox controller options, but duplicated the required input actions on the Vive controllers. The native HTC Vive controllers proved their benefits. Although the metaphors were much more complicated, the intuitive control paid off immediately. Further, the visibility of the tracked controllers in VR helped a lot. Later on, we extended the application to support the Valve Index knuckle controllers.
Further spectators can follow the domain expert from 'outside' of the immersion, as the HMD feed is mirrored on a monitor. Further, annotations significantly improve communication (Figs. 7-9). The main controller actions of the domain expert are:
• blending the original data (the stained sections) in or out;
• blending the mesh(es) in or out;
• advancing the currently displayed section of original data;
• placing an annotation;
• mesh painting.
The most significant user interaction happens with intuitive movements of the immersed user around (and through) the displayed entities in VR.
Figure 6: This figure demonstrates why we need geodesic distances for mesh painting in our VR application. The yellow circle is the painting tool. We would like to mark the green blood vessels inside the circle, but do not want to co-mark the red blood vessel, even if it is also inside the circle. Red and green blood vessels might even be connected somewhere outside the circle, but the geodesic distance from the centre of the circle (the black dot) to any vertex of the red blood vessel is too large, even if they are reachable. Conversely, a part of the green blood vessel is selected, as a vertex of the green mesh is closest to the centre of the circle. As many vertices are selected as the geodesic distance (corresponding to the radius of the circle, with some heuristics) allows for.

Communication
Without a beacon visible in VR it is almost impossible to understand what the expert tries to show.With a VR controller and our annotation tool, interesting areas in the visualisation can be shown to the outside spectators in real time.

Annotation markers
We designed a spherical selection tool for marking points in space (Fig. 5). The sphere is located at the top front of a Vive or Index controller and can be seen in the virtual space (and, by proxy, also in the mirrored display, Fig. 2). We need to note, however, that the annotation sphere appears much more vivid to the VR user than it does on screenshots. The user's movement and the live feedback are, in our opinion, a major reason for this difference in perception. Figures 4d-4f, 5, 7b, and 8 show our annotation tool in images captured from VR sessions.
The annotations and mesh modifications are saved for further analysis. For example, after the domain expert has marked suspicious areas, the 3D reconstruction expert can inspect them in a later VR session. Reconstruction improvements can be deduced from this information.

Anti-aliasing
If a 'direct' rendering approach is used, there is a very dominant aliasing effect at certain points. We used multisampling (MSAA) on meshes and mipmaps on textures to alleviate this problem.

Figure 7: We can look at the meshes only (c) or also show the original data (d). A closer view (e), (f) confirms: the reconstruction is correct, these structures are CD34+ objects inside the follicle. As the structures in question continue through multiple sections, they do not represent single CD34+ cells. Hence the objects in question must be blood vessels. The reconstruction is correct, the brown structures are real. All images in this figure are screenshots from our application. Similar results can be found in (Steiniger et al., 2018a).
Figure 8: A VR screenshot showing mesh reconstructions of blood vessels in a human spleen specimen, anti-CD34 staining, 'follicle-single' data set (Steiniger et al., 2018a). Unconnected mesh components were set to distinct colours. The user is highlighting a smaller blood vessel that follows larger ones with the HTC Vive controller.

Front face culling
Consider the interplay between the model and the original serial sections. A section is not an infinitely thin plane. We show the original data as an opaque cuboid that is one section thick in the z direction and spans the full surface in the xy plane. The actual data points of the mesh corresponding to the displayed section are inside the opaque block. Decisive parts of the mesh are occluded by the front face of the cuboid. On the one hand, this is, of course, not desired and requires correction. On the other hand, the behaviour of the parts of the model spanning multiple sections in front of the current section is best studied when the model ends at the front face of the cuboid. The solution is to enable or disable front face culling of the original data display at will.
With front face culling enabled, the user can look inside the opaque block with the original section texture. This is well suited for the inspection of finer details and small artefacts. (Figure 9, (d), (e) features a real-life example; observe the edge of the section shown.) The general behaviour of the model across multiple sections can be tracked more easily with the front faces of the original section on display. The presence of both representations accelerates QC.

Geodesic distances for mesh painting
We also implemented a VR-based mesh painting facility, mostly based on the MeshLab code base (Cignoni et al., 2008). In this mode the colour spheres, which our user can place with the controller, produce a geodesically coloured region on the mesh instead of an annotation. These two functions, annotations and mesh painting, are presented to the user as clearly distinct.
The selected colour is imposed on all vertices inside the geodesic radius from the centre of the sphere. We would like to paint, for example, a part of a blood vessel that has a specific property. At the same time, we would not like to colour other blood vessels that might be inside the painting sphere, but are not immediately related to the selected blood vessel. This is facilitated with geodesic distances, as Figure 6 shows.
The markings from mesh painting lead to the final separation of the entities (such as blood vessel types, kinds of capillary sheaths, etc.) in the visualisation.

Front plane clipping
The classic view frustum in computer graphics consists of six planes: four 'display edges' building the frustum sides, a back plane (the farthest visible boundary), and the front plane, the closest boundary. The clipping planes are recomputed when the camera (i.e., the user) is moving. When there are too many self-occluding objects in the scene, the observer cannot 'pierce through' further than the few closest objects. In other words, the observer can only see the closest objects. (This fact motivates occlusion culling.) Such occlusion was the case with our denser data sets. With a simple change in the code, we moved the front plane of the view frustum further away, in an adjustable manner. Basically, the user 'cuts' parts of the reconstruction in front of their eyes, allowing for a detailed inspection of the inside of the reconstruction.
This adjustment is very minor from the computer graphics point of view, but it was very much welcomed by our actual users, the medical experts. With an appropriate front plane clipping distance of about 60 cm from the camera, it becomes possible to inspect very dense medical data sets from 'inside'. (Figs. 5, 10, 14, and 15 demonstrate this effect.) The user 'cuts away' the currently unneeded layers with their movements.

Hardware
We conducted most of our investigations on a 64-bit Intel machine with an i7-6700K CPU at 4 GHz, 16 GB RAM, and Windows 10. We used an NVidia GTX 1070 with 8 GB VRAM, and an HTC Vive.
Our VR application was initially developed with the HTC Vive in mind; it performed well on other headsets, such as the HTC Vive Pro Eye and the Valve Index. We observed convincing performance on an Intel i7-9750H at 2.6 GHz with 64 GB RAM (MacBook Pro 16) and an NVidia RTX 2080 Ti with 11 GB VRAM in a Razer Core X eGPU with the HTC Vive Pro Eye, as well as on an AMD Ryzen 2700X, 32 GB RAM, and NVidia RTX 2070 Super with 8 GB VRAM with the Valve Index. Our application should also perform well with further headsets such as the Oculus Rift. It was possible to use previous-generation GPUs; we also tested our application with an NVidia GTX 960. Overall, it is possible to work with our application using an inexpensive setup.
The largest limiting factor seems to be the VRAM used by the uncompressed original image stack. The second largest limitation is the number of vertices of the visualised meshes and the rasteriser performance in the case of very large, undecimated reconstructions.

'Bone marrow' data set
We have reconstructed the 3D shape of smaller and larger microvessels in hard, undecalcified, methacrylate-embedded human bone serial sections (Steiniger et al., 2016). The shape diameter function on the reconstructed mesh allows us to distinguish capillaries from sinuses. Figure 4 shows (a) a single section (part of the input data to the reconstruction), (b) a volume rendering of all 21 sections, and (c) our 3D reconstruction. In Steiniger et al. (2016) we did not use VR. Here, we use the same data set to showcase some features of our VR-based method. It took us much more manpower and time to validate the reconstructions then, without VR, as Section 6 details (Fig. 16).
The process of annotation is demonstrated in Fig. 4, (d). The next subfigures show further investigation of the annotated area in VR, either in a combined mesh-section view (e), or showing the corresponding section only (f). To discriminate between capillaries (smaller, coloured red) and sinuses (larger, coloured green), we computed the shape diameter function on the reconstructed meshes and colour-coded the resulting values on the mesh, as shown in (c)-(e). The handling of the reconstruction and serial section data in VR showcases the annotation process.

'Follicle-double' data set
The human spleen contains accumulations of special migratory lymphocytes, the so-termed follicles. We reconstructed the capillaries inside and outside the follicles (Steiniger et al., 2018a). We show some results from this work in this section and in the next. Fig. 7 presents one of the three ROIs that were quality controlled.
Our 3D reconstruction demonstrates that follicles are embedded in a superficial capillary meshwork resembling a basketball basket. Figure 7 shows that our VR tool enables easy annotation and projection of the original data, leading to further results (Steiniger et al., 2018a). In Fig. 7, (e), some brown dots have been marked inside a follicle. The 3D model shows that the dots indeed represent capillaries cut orthogonally to their long axis. Thus, we additionally find that infrequent capillaries also occur inside the follicles. The superficial capillary network of the follicles is thus connected to very few internal capillaries and to an external network of capillaries in the red pulp. We observed the latter two networks to have a shape which is totally different from that of the superficial follicular network. The external network is partially covered by capillary sheaths stained in blue colour. In total, we examined three 'follicle-double' data sets in VR.

'Follicle-single' data set
To continue the investigation of capillaries inside and outside the follicles, Fig. 8 shows that the annotated elongated structures in the follicles and in the T-cell zone at least partially belong to long capillaries which accompany the outside of larger arteries, the so-termed vasa vasorum. With our VR-based method, we investigated this 4k × 4k ROI at 0.3 µm/pixel and three further ROIs (not shown) with 1600 × 1600 pixels at 0.6 µm/pixel (Steiniger et al., 2018a).

'Red pulp' data set
The location of capillary sheaths in human spleens had not been clarified in detail until recently (Steiniger et al., 2018b). Our 3D reconstructions indicate that sheaths primarily occur in a post-arteriolar position in the part of the organ which does not contain lymphocyte accumulations (the so-termed red pulp), although the length and diameter of the sheaths are variable. Many sheaths are interrupted by the boundaries of the ROI. (The remedy was a longer series of sections, as presented in Section 5.6.) For this reason it makes sense to collect only sheaths which are completely included in the reconstruction. Such a selection was done with our VR classification tool.
Figure 9, (a) shows an overview of the annotations. In Figs. 9, (b)-(d) it becomes clear that the sheaths indeed end at the marked positions. Notice the enabled front face culling on the section cuboid in the closeups. Figure 9, (e) additionally shows the reconstructed meshes for the sheaths. We show a single ROI at 2k × 2k pixels. We have inspected 11 such ROIs in VR.
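The selection criterion described above, keeping only sheaths that lie completely inside the reconstruction, can be sketched as a bounding test per mesh component. This is a minimal illustration, not the tool's actual implementation; the function name, the margin parameter, and the coordinate convention are our assumptions.

```python
def fully_inside(component_vertices, roi_min, roi_max, margin=1.0):
    """Check whether a mesh component lies strictly inside the ROI.

    component_vertices: iterable of (x, y, z) tuples.
    roi_min/roi_max: opposite corners of the ROI box; margin keeps a
    safety distance from the boundary, so components touching a cut
    face of the reconstruction are rejected.
    """
    for x, y, z in component_vertices:
        if not (roi_min[0] + margin <= x <= roi_max[0] - margin and
                roi_min[1] + margin <= y <= roi_max[1] - margin and
                roi_min[2] + margin <= z <= roi_max[2] - margin):
            return False
    return True

# A sheath touching the ROI boundary (here: z close to 0) is excluded.
inside = fully_inside([(10, 10, 10), (20, 20, 20)], (0, 0, 0), (100, 100, 24))
clipped = fully_inside([(10, 10, 0.5), (20, 20, 20)], (0, 0, 0), (100, 100, 24))
```

In the VR tool this decision is made interactively per sheath; the sketch only captures the geometric rule of thumb.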

'Sheaths alternating' data set and clipping
The 'sheaths alternating' data set with up to 150 sections was created to further investigate the morphology and (to some extent) the function of capillary sheaths (Steiniger et al., 2020). The resulting 3D data set was extremely dense. The increased number of 'channels' and the nature of the study (tracking the blood vessels) posed a big challenge. The amount of reconstructed blood vessels and their self-occlusion prohibited any possible insight when viewing them from the outside. Here we utilised front plane clipping (Section 4.8). Figures 5b, 10, and 14 (and also Fig. 11 for the 'sinus' data set) showcase this minor, but important adjustment. Figs. 14 and 15 further demonstrate the complexity of the 'sheaths alternating' data set.
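The geometry behind front plane clipping is simple: everything between the user's head and a plane at a fixed distance along the view direction is cut away, so moving forward cuts deeper into the dense meshes. In the actual renderer this test would live in a shader; the following Python sketch only illustrates the distance test, and all names are ours.

```python
def front_plane_clip(vertices, eye, view_dir, clip_dist):
    """Return the vertices that survive front plane clipping.

    A vertex is discarded when its signed distance from the eye,
    measured along the (unit) view direction, is smaller than
    clip_dist -- i.e. geometry in front of the clip plane is cut
    away, exposing the interior of self-occluding reconstructions.
    """
    kept = []
    for v in vertices:
        d = sum((v[i] - eye[i]) * view_dir[i] for i in range(3))
        if d >= clip_dist:
            kept.append(v)
    return kept

# Points along the view axis: only those beyond the clip plane survive.
pts = [(0, 0, 1), (0, 0, 3), (0, 0, 5)]
kept = front_plane_clip(pts, (0, 0, 0), (0, 0, 1), 2.0)
```

Because the plane is tied to head tracking, controlling movement dynamics also controls the clipping dynamics, as noted for Fig. 14.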

Mesh painting and obtaining insights
As already seen in Figs. 5, 7, (a), (b), and 8, we can point to anatomical structures with the Valve Index controller. Similarly, annotations can be placed and mesh fragments can be painted in different colours. An example of real-life mesh painting with geodesic distances is shown in Figure 12. The arrows show a part of a structure already painted by the user in red in (a). It is painted back to blue in (b).
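Mesh painting bounded by geodesic distance can be sketched as a Dijkstra search over the mesh edge graph, stopped at the painting radius. This is an illustrative sketch under our own naming, not the tool's code; it mirrors the behaviour described for Fig. 12, where vertices inside the marking ball but not surface-connected to its centre stay unpainted.

```python
import heapq

def geodesic_paint(adjacency, start, radius):
    """Collect vertices within a geodesic radius of a start vertex.

    adjacency: dict vertex -> list of (neighbour, edge_length).
    Vertices that are Euclidean-close to the marking ball's centre
    but not connected to it (or 'too far' along the surface) are
    never reached, hence never painted.
    """
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adjacency[u]:
            nd = d + w
            if nd <= radius and nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return set(dist)

# Vertex 3 is nearby in space but only reachable via a long surface path.
adj = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)],
       2: [(1, 1.0), (3, 5.0)], 3: [(2, 5.0)]}
painted = geodesic_paint(adj, 0, 2.5)
```

Repainting with a different colour (as in Fig. 12, (c)) is the same operation with a new colour assignment for the returned vertex set.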
In this manner we have refined an automatic heuristic for arterioles (larger blood vessels, red in Fig. 13) so that it is always correct and leads up to the capillary sheaths.
With a similar tool, working on unconnected components, we changed the colour of the sheaths in the 'red pulp' and 'sheaths alternating' data sets. The colour change effected the classification of the sheaths. The sheaths were initially all blue in our visualisations of the 'red pulp' data set. Sheaths around capillaries following known arterioles were then coloured green. We also annotated the very few capillaries that should have a sheath, but did not (white). Figure 13 shows one of the final results: a connected vascular component with accompanying sheaths, follicle structures, and smooth muscle actin.
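Recolouring an unconnected component amounts to a flood fill over mesh connectivity: one pick selects a whole sheath. The following sketch, under assumed names, shows the idea with a simple breadth-first traversal.

```python
from collections import deque

def connected_component(adjacency, seed):
    """Return all vertices of the unconnected component containing seed.

    adjacency: dict vertex -> list of neighbouring vertices.
    Recolouring one sheath then means assigning the new colour to
    every vertex of its component -- one controller click per sheath.
    """
    seen = {seed}
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# Two separate sheaths: picking vertex 0 recolours only the first one.
adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
colour = {v: 'blue' for v in adj}
for v in connected_component(adj, 0):
    colour[v] = 'green'
```

This is what distinguishes the component-based tool from the geodesic painting tool: the former works per disconnected piece, the latter per surface region.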
Figure 14 underlines the complexity of the 'sheaths alternating' data set. Both images in this figure show the final results of mesh painting; the actual work is already done. Still, they convey how interwoven and obscure the original situation is. CD271+ cells, mostly present in capillary sheaths, are in various shades of green in this figure. Figure 14 is highly complex; the fact that anything can be seen in still images at all is the merit of applied annotations and mesh painting, of active front plane clipping, and of a proper choice of view point. A viewer in VR has no problem navigating such data sets because of the active control of view direction and movement dynamics. The latter also means control of the front plane clipping dynamics.
Figure 15 shows a further development: a separated arteriole with its further capillary branches and capillary sheaths (Steiniger et al., 2020). The sheath in the figure was cut open by front plane clipping. This 'separation' was generated from user inputs in VR, similar to the previous figure. Now, however, the complexity is reduced to a degree that allows showing the still image in a medical research publication.
Summarising, our visualisations in VR were used to obtain insights (Steiniger et al., 2018b, 2020) on the position of the capillary sheaths, a problem that was initially discussed (Schweigger-Seidel, 1862) more than 150 years ago!

Discussion
The novelty of this work stems from how VR streamlines and facilitates better QC and VA of our reconstructions. The presented VR-based tool plays an important role in our pipeline. 'Usual' 3D reconstructions from serial histological sections are known, but quite rare, because they involve cumbersome tasks. The reasons for this are threefold: technical difficulties in creating them; struggles to QC the reconstructions; and investigation and comprehension problems in dense, self-occluding reconstructions. Proper registration, correct reconstruction options, and possibly also inter-slice interpolation are necessary for creating a satisfying reconstruction. For QC we need to visually ensure the correctness of processing, identify what the reconstruction actually shows, and keep artefacts at bay by creating a better reconstruction if needed. Finding a good reconstruction from previously unexplored data with a new staining is an iterative process. While we create the reconstructions quite efficiently, QC was a lot of work in the past. With an immersive VR application, QC is much easier and faster, in our experience.
Annotations and mesh colouring provide visual analytics abilities, and these facilitate a better distinction between various aspects of the reconstruction. To give one example, capillary sheaths that clearly follow arterioles can be separated from capillary sheaths whose origins or ends lie outside the ROI. Such distinctions allow for better understanding in microanatomical research.
Our experience emphasises the importance of VR-based QC. Our older 3D reconstruction study (Steiniger et al., 2016) featured 3500 × 3500 × 21 voxels in four regions. From each reconstruction a further version was derived. These versions did not need to be quality controlled again, but their inspection was crucial to produce a diagnosis for medical research. We used both a non-VR visualisation tool for QC and pre-rendered videos for inspection. It took a group of 3 to 5 experts multiple day-long meetings to QC these reconstructions with the non-VR tool (Fig. 16). Deducing the anatomical findings from pre-rendered videos was also not easy for the domain experts. We found free user movement essential for the long-term usability of our application: our users spend hours immersed in consecutive sessions. Basically, the model is not otherwise translated, rotated, or scaled in productive use, but only in response to tracking and reacting to the user's own movements. Such free user movement allows the immersed user to utilise their brain's systems for spatial orientation and spatial memory. In turn, the recognition and annotation of structures become easier. Free user movement also distinguishes our application from Egger et al. (2020): they used a VR flight mode on 3D models from CT.

Figure 13: Final result of mesh annotation and painting, 'red pulp' data set (Steiniger et al., 2018b). Blood vessels are yellow. Certain support structures in the spleen that feature smooth muscle actin are also reconstructed and displayed in yellow. (A trained histologist can discern these structures from various kinds of blood vessels, though.) Some of the blood vessels (red) lead from larger blood vessels (arterioles) to capillary sheaths (green). Some sheaths are fed by arterioles not traced in the reconstruction. These sheaths are marked blue. Finally, while some capillaries are red (having green sheaths), some other capillaries, coming from the same arteriole, do not have a sheath at all. Such capillaries are coloured in white. The background is black to better discern the white colour. A similar, but different image appeared in (Steiniger et al., 2018b) under the CC-BY 4.0 license.
We first found the benefits of VR-based visualisation during the preparation of Steiniger et al. (2018a). Compared with the bone marrow data set (Steiniger et al., 2016), the total number of voxels in our next work (Steiniger et al., 2018b) was slightly larger, yet QC was much faster with the new method. Our domain expert alone quality controlled with our VR-based method eleven regions of 2000 × 2000 × 24 voxels per day in one instance (Steiniger et al., 2018b) and two to four regions of ≈ 2000 × 2000 × 84 voxels per day in another instance (Steiniger et al., 2020). These sum up to slightly more than 10⁹ voxels per day in the first case and up to 1.36 · 10⁹ voxels per day in the second case. We would like to highlight that these amounts of data were routinely quality controlled by a single person in a single day. Thus VR immersion saved an order of magnitude of man-hours for the QC of our medical research 3D reconstructions (Steiniger et al., 2018a,b, 2020).
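The throughput figures follow from simple arithmetic on the region counts quoted above (a sketch; the quoted 1.36 · 10⁹ corresponds to ROIs slightly larger than 2000 pixels per side):

```python
# Voxels quality-controlled per day, from the region counts above.
first_case = 11 * 2000 * 2000 * 24   # eleven ROIs (Steiniger et al., 2018b)
second_case = 4 * 2000 * 2000 * 84   # up to four ROIs (Steiniger et al., 2020)

# first_case is 1 056 000 000, slightly more than 10**9;
# second_case is 1 344 000 000 for exactly 2000-pixel ROIs.
```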
Our immersive application also enabled VA of the same reconstructions. Without immersive VA and (later on) interactive 'cutting' into the reconstructions with front plane clipping in VR, it would be exorbitantly harder or even impossible for us to obtain the research results summarised in Figs. 13-15 (Steiniger et al., 2018b, 2020).

Conclusions
3D reconstructions from histological serial sections require quality control (QC) and further investigations. Domain experts were not satisfied by previously existing QC methods. We present a VR-based solution to explore mesh data. Our application also allows superimposing the original serial sections. Such a display is essential for QC. In our experience, immersion accelerates QC by an order of magnitude. Our users can annotate areas of interest and communicate the annotations. VR-powered VA allowed for a more exact and faster distinction and classification of various microanatomical entities, such as post-arteriolar capillaries and other kinds of capillaries. The classification of arterial blood vessels in turn facilitated the classification of capillary sheaths. Summarising, our VR tool greatly enhances productivity and allows for more precise reconstructions that enable new insights (Steiniger et al., 2018a,b, 2020) in microanatomical research.

Future work
Making our application an even better visual analytics tool is always viable. Minor improvements to user input handling include more input combinations and gestures. A planned feature is to record a spoken memo for every annotation marker. Recorded memos would facilitate a better explanation of the markings when they are revisited. The application has the potential to evolve in the direction of a non-medical 3D sculpting utility. Better maintainability of the code base through an extensive use of software product lines (Apel et al., 2016) is an important goal. Not all builds need all features, and software product lines can accommodate this point.
Improvements of the rendering performance are both important and viable. Possible points of interest are better occlusion culling (e.g., Mattausch et al., 2008; Hasselgren et al., 2016) and progressive meshes (e.g., Derzapf and Guthe, 2012). There are further ways to improve the anti-aliasing and thus the immersive user experience. A possibility to consider is an advanced interpolation for higher internal frame rates.
A promising idea is to learn better view angles (similar to Burns et al., 2007) from the transformation matrices saved as parts of annotations. Better pre-rendered videos might be produced in this manner. (Gutiérrez et al., 2018, have a similar motivation.) Texture compression in general and volume compression techniques in particular (e.g., Guthe et al., 2002; Guthe and Goesele, 2016; Guarda et al., 2017) would help to reduce the GPU memory consumption caused by the data for the original slices.
VR might be the pivotal instrument for better understanding in teaching complex 3D structures, e.g., in anatomy or machine engineering.An effect of VR in training and education in such professions might need a more detailed assessment.
Of course, viable future work includes applications of our visualisations to reconstructions of further organs and tissues (e.g., future bone marrow, lung, heart, or tonsil specimens) and expansion to further modalities of medical (such as MRI or CT) or non-medical data. Recently, we experimented with a VR presentation of serial block face electron microscopy data. Multi-modality is an interesting topic, too (Tang et al., 2020). Possible examples of further applications include materials science, computational fluid dynamics, and, most surely, computer graphics.

Figure 16: Showcasing our non-VR renderer. Endothelia of blood vessels are stained brown in the 'bone marrow' data set. The blended-in mesh is blue. The volume renderer played an important role in data verification for our publication (Steiniger et al., 2016). (a) shows the volume data representation, (b) presents the visualisation of the final, filtered mesh vs. the corresponding single section.
• Spleen: one of the organs which essentially acts as a blood filter. The spleen has a unique vasculature, where blood flow also occurs outside blood vessels. It features some unique structures both with respect to the vasculature and to the arrangement of two major types of migratory white blood cells called lymphocytes, which either occupy follicles or T-cell zones.
• Bone marrow: the inside of bones not only contains fatty tissue, but also regions responsible for the generation of new blood cells. Bone marrow features a unique vasculature with capillaries and sinuses.
• Staining: for better visual inspection, specific parts of the tissue sections can be coloured.
• Immunohistology: specific detection of different molecules in tissue sections using antibody solutions. Binding of an antibody to a tissue component is finally visualised by deposition of a coloured insoluble polymerisation product of a previously uncoloured soluble stain. The stainings can, for example, detect membrane glycoproteins in the innermost cells of blood vessels, so-termed endothelial cells. Membrane glycoproteins have been numbered in the order of their discovery using the CD (cluster of differentiation) nomenclature.
• MRI, magnetic resonance imaging, and CT, computed tomography from a series of X-ray images, are non-invasive imaging techniques that revolutionised diagnostics. Unfortunately, the spatial resolution and selectivity of these techniques are not sufficient for our goals.
• Anti-CD34 staining: primarily stains endothelial cells of arterial vessels and capillaries in human spleen and bone marrow. Typically used colour is brown. Some stem cells in bone marrow are also stained. CD34 is also weakly present in sinus endothelia in the proximity of follicles.
• Anti-CD141 staining: stains sinus endothelial cells in human bone marrow and in human spleen. Typically used colour is brown.
• Anti-SMA staining: stains smooth muscle alpha-actin. It is present, e.g., in walls of larger blood vessels on the 'input' arterial side, the so-called 'arterioles.' Typically used staining colour is brown.
• Anti-CD271 staining: stains capillary sheath cells and additional fibroblast-like cells in human spleen. The sheaths are multi-cellular structures around the initial segment of human splenic capillaries. Sheath cells obviously represent the sessile fibroblast-derived part of capillary sheaths. Typically used staining colour is blue or red. Specialised fibroblasts inside the follicles are more weakly stained with this antibody.

Figure 1 :
Figure 1: The general life cycle of visual computing (black) and our specific case (blue). The intensity of the blue shading in the circle highlights the major focus of this paper. Blue arrows show facets of our implementation.

Figure 2 :
Figure 2: A user in VR.The large display mirrors the headset image.

Figure 3 :
Figure 3: From section images to final results: a human spleen section is stained for SMA (brown), CD34 (blue), and either CD271 (red) or CD20 (red); this is the 'sheaths alternating' data set. (a): The region of interest (ROI), staining of B-lymphocytes (CD20) in red. (b): The ROI, staining of capillary sheaths (CD271) in red. (c): Result of colour deconvolution for CD271 of (b), a single image. (d): Same, but for CD34. (e): A volume rendering of the first 30 sheath-depicting sections of the ROI. (f): Final meshes. The colours highlight the different functions. The arterial blood vessels are in blue and red. The red colour highlights a specific tree of blood vessels. The sheaths related to this tree are green, the unrelated sheaths are dark green. The follicular dendritic cells (which are also weakly CD271+) are depicted in light green. The SMA mesh was used for a heuristic to find arterioles among blood vessels. SMA and B-lymphocytes are not shown in the rendering.

Figure 4 :
Figure 4: Images, renderings, and VR screenshots showing mesh reconstructions of blood vessels in a human bone marrow specimen, stained with anti-CD34 plus anti-CD141. (a): A single section. The staining colour is brown for both molecules in the original data. (b): A volume rendering of 21 consecutive serial sections. (c): The reconstructed mesh. It shows shape diameter function values, colour-coded from red to green. (d): We annotate a position of interest in the mesh in VR. An original section is seen in the background. (e): We have found the section containing the annotation; the mesh is still visible. (f): Only the section with the annotation is shown in VR. Domain experts can now reason on the stained tissue at the marked position.

Figure 5 :
Figure 5: Working with our annotation tool. We show VR screenshots of our application in the human spleen 'sheaths alternating' data set (Steiniger et al., 2020). In (b) the front plane clipping is evident, viz. Fig. 10. Notice the Valve Index controller with a ball, showing an anatomical structure. In this manner, the VR user can clarify some morphological details or demonstrate an issue to an audience outside VR. All images are produced with our VR tool. Similar illustrations can be found in (Steiniger et al., 2020).

Figure 7 :
Figure 7: Real or artefact? The models are derived from human spleen sections from the 'follicle-double' data set. These sections were stained for CD34 (brown in staining, yellow in the reconstruction) and for CD271 (blue). In VR we spotted and annotated putative capillaries inside follicles (large blue structures, a, b). We can look at the meshes only (c) or also show the original data (d). A closer view (e), (f) confirms: the reconstruction is correct, these structures are CD34+ objects inside the follicle. As the structures in question continue through multiple sections, they do not represent single CD34+ cells. Hence the objects in question must be blood vessels. The reconstruction is correct, the brown structures are real. All images in this figure are screenshots from our application. Similar results can be found in (Steiniger et al., 2018a).
Fig. 8 also shows a Vive controller tracing one of the longer capillaries with the annotation ball as a form of communication of the findings to spectators outside VR.

Figure 9 :
Figure 9: Investigating the annotated regions, VR screenshots of our application. The human spleen 'red pulp' data set is used (Steiniger et al., 2018b); we have annotated some ends of capillary sheaths in meshes reconstructed from human spleen data. (a): Overview. (b)-(d): Original data and an annotation. Experts can more easily reason on such visualisations because of 3D perception and intuitive navigation. (e): The same annotation as in (d), showing additionally the mesh for sheaths.

Figure 10 :
Figure 10: Showcasing front plane clipping on the 'sheaths alternating' data set (Steiniger et al., 2020). (a): A complete data set in a frontal visualisation. (b): The user cuts into objects of interest using clipping. (c)-(d): Utilisation of clipping during the exploration of the data set. All images are produced with our VR tool, either directly or with the Steam VR interface. (a), (b) were featured in a poster (Lobachev et al., 2019). (a), (c), (d): Similar illustrations can be found in (Steiniger et al., 2020).

Figure 11 :
Figure 11: Cutting structures open with the front clipping plane, using the 'sinus' data set. Capillaries are blue, capillary sheaths are green in the reconstruction. An original section is visible on the right.

Figure 12 :
Figure 12: An ongoing session of mesh painting with geodesic distances, as VR screenshots. We use the 'sinus' data set. (a): Notice the huge annotation ball on the controller; the bright red dot is its centre. This centre is the starting point of the geodesic computation, initiated by the trigger on the controller. The large radius of the marking tool is bounded by the connectivity: the vertices which are within the radius, but are not connected to the starting point or are 'too far' geodesically, are not painted. (b): For better visibility, we show a crop from the left eye view of (a). The white arrow shows a point of interest. (c): An excessive marking (white arrow) is removed with a repeated painting operation. On the bottom left in (a), (c) a Valve Index controller is visible. The background colour in this figure signifies the highlighting mode used.

Figure 14 :
Figure 14: The natural complexity of the 'sheaths alternating' data set (Steiniger et al., 2020) is very high. With mesh annotation, painting, and removal of irrelevant details we were able to keep the complexity at a tolerable level. (a): The capillary network of the splenic red pulp is blue. Arterioles have been highlighted in red. Capillary sheaths are green or dark green, depending on whether they belong to the red vessels. Special supportive cells in a follicle are light green. An arteriole (red) is entering from the left; one of the branches is splitting up into capillaries (still red) that immediately enter capillary sheaths. One such sheath (green) is cut open, showing the capillary inside. (b): Arterioles and sheathed capillaries are light blue, capillary sheaths are green. The open-ended side branches of sheathed capillaries are red. This figure shows a single system, starting with an arteriole. It has been separated from other arterial vessel systems in the surroundings. Front plane clipping opens the capillary sheath and shows the blue capillary inside. We see some open green capillary sheaths with light blue 'main line' blood vessels inside. Similar, but different figures can be found in Steiniger et al. (2020).

Figure 15 :
Figure 15: Final result of mesh annotation, painting, and removal of irrelevant details. The complexity is now greatly reduced. This is the 'sheaths alternating' data set (Steiniger et al., 2020). The meshes are cut open by the front clipping plane. Blood vessels are red, capillary sheaths are green, cells in the follicle are light green. An arteriole (red) is entering from the left in the proximity of a follicle; this arteriole splits further, and one of the branches is splitting up into capillaries (still red) that immediately enter capillary sheaths. One such sheath (green) is curved around the follicle. The sheath is cut open, showing the capillary inside. A similar, but different figure, depicting a different sheath, can be found in Steiniger et al. (2020).

Table 1 :
Cell distribution of the molecules detected in human spleens.