3D Reconstruction


Archaeological 3D computational science is devoted to representing sites and artefacts in Virtual Environments (VEs): computer applications that support the cognitive procedures behind the interpretation of the archaeological record. VEs are a subset of Virtual Archaeology (VA), which deals with the management and representation of digitally reconstructed archaeological evidence through interactive, real-time computer-generated imagery (CGI) aided by 3D computer graphics techniques. The need to archive, visualize and experience this reconstructed evidence has driven research communities to develop efficient computer science methods (hardware and reconstruction algorithms) for recording accurate three-dimensional data.

The cognitive experience of virtual archaeological evidence is divided into passive and active forms of interaction. The passive form mainly serves the primary need of monitoring and analyzing artefacts individually, while the active form recreates reality by contextualizing artefacts in a simulated, dynamic VE. The active form also enhances exploration through interactive means and renders semantic content through an interactive virtual museum or application, reachable on digital media or on the web. 3D reconstruction and documentation of still-existing archaeological sites and artefacts can support archaeology by offering scholars a contemporary means of experiencing and understanding the traces of the past.

The acquisition of three-dimensional data and its processing into solid multidimensional object representations, known as 3D surrogates, is useful for metric analysis, documentation and visualization. ICACH is experimenting with producing 3D digital surrogates suitable for multimodal VEs, using the latest methods and algorithms for generating sparse and dense point clouds and the meshes derived from them.
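
As an illustration of that last processing step, the sketch below turns a scanned point cloud into a watertight triangle mesh with the open-source Open3D library. The file names and reconstruction parameters are hypothetical placeholders, not ICACH's actual pipeline.

```python
import open3d as o3d  # pip install open3d

# Load a point cloud produced by photogrammetry or scanning
# ("artefact_scan.ply" is a placeholder file name).
pcd = o3d.io.read_point_cloud("artefact_scan.ply")

# Poisson surface reconstruction needs oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30)
)

# Reconstruct a watertight triangle mesh: the 3D surrogate.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
o3d.io.write_triangle_mesh("artefact_surrogate.ply", mesh)
```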

Photogrammetry: from photos to models

Photogrammetry is an image-based technique often compared with laser scanning as a method for generating point-accurate XYZ meshes of a surface. Rather than using a two-dimensional photograph to give the illusion of a surface's spatial arrangement, photogrammetric image-based methods derive high-polygon 3D models and high-fidelity textures from a sequence of overlapping 2D photos, typically yielding large files of hundreds of thousands to millions of points.
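
The first step in any such pipeline is finding corresponding points across overlapping photos. A minimal sketch with OpenCV follows; the image file names and the 0.75 ratio threshold are illustrative assumptions.

```python
import cv2

img1 = cv2.imread("photo_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect scale-invariant keypoints and descriptors in each photo.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep only distinctive matches (Lowe's ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} corresponding points found between the two photos")
```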

Photogrammetry is a dynamic and changing field: new tools are frequently developed, and existing algorithms are continually refined to be more efficient and better adjusted to current needs. Following academic and industrial research standards, ICACH uses efficient computer algorithms to calculate the camera position for each photograph in a series of overlapping images, and then derives corresponding points from the photographs and camera positions with the help of EXIF data (intrinsic technical metadata). From these procedures, a point cloud in 3D space is created, reconstructed into a multidimensional mesh and then placed in a VE with its corresponding metadata (provenance, material, location, etc.).
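
A hedged sketch of those two steps, camera-pose recovery and sparse triangulation, is shown below with OpenCV and Pillow, continuing the matching sketch above (it reuses kp1, kp2 and good). The sensor width, file names and EXIF handling are simplifying assumptions; real pipelines rely on full calibration and bundle adjustment.

```python
import cv2
import numpy as np
from PIL import Image
from PIL.ExifTags import TAGS

def focal_px(path, sensor_width_mm=36.0):
    """Estimate the focal length in pixels from EXIF intrinsic metadata."""
    img = Image.open(path)
    exif = {TAGS.get(tag): value for tag, value in img.getexif().items()}
    return float(exif["FocalLength"]) * img.width / sensor_width_mm

f = focal_px("photo_01.jpg")
w, h = Image.open("photo_01.jpg").size
K = np.array([[f, 0, w / 2], [0, f, h / 2], [0, 0, 1]])  # intrinsic matrix

# Matched pixel coordinates from the feature-matching step.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Relative pose of the second camera with respect to the first.
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# Triangulate corresponding points into a sparse point cloud in 3D space.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
sparse_cloud = (pts4d[:3] / pts4d[3]).T
```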

Photorealistic, highly detailed 3D models, classified as dense point clouds (DPCs), are built with Structure-from-Motion (SfM) algorithms that use stereo-matching procedures, and they are known for their ability to render geometry at a 1:1 physical-to-virtual ratio.
Since the VE is exploited in real time, the processing unit driving the application must be used efficiently: DPCs and high-polygon meshes generate heavy draw calls on screen, so models and textures must be optimized and decimated across multiple dimensions while still simulating accurate data. A trade-off between detail and size therefore has to be assessed. All DPC meshes presented in a VE are optimized into several level-of-detail (LOD) datasets, as sketched below. DPCs can be obtained not only from photogrammetry but also from laser-based and structured-light techniques.
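
A minimal sketch of that detail-versus-size trade-off with Open3D follows: quadric decimation reduces a dense mesh into several LOD datasets. The input file name and the LOD fractions are hypothetical placeholders.

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("artefact_surrogate.ply")
print(f"full resolution: {len(mesh.triangles)} triangles")

# Quadric decimation preserves overall shape while cutting polygon
# count; each LOD targets a fraction of the original triangle budget.
for fraction in (0.5, 0.1, 0.02):
    target = int(len(mesh.triangles) * fraction)
    lod = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
    o3d.io.write_triangle_mesh(f"artefact_lod_{fraction}.ply", lod)
```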

Structured light: real-time 3D reconstruction

Structured-light techniques rely on projecting light patterns onto the subject in order to acquire a 3D range map of the surface directly, typically using a single camera and a single projector.

Structured-light 3D scanners project a pattern of light onto the subject and observe how the pattern deforms on its surface. Consider an array of parallel vertical laser stripes sweeping horizontally across a target: the pattern is projected onto the subject using either an LCD projector or an infrared laser attached to a device with an RGB-D sensor (chromaticity RGB plus depth information). A camera, offset slightly from the pattern projector, observes the shape of each line and uses a technique similar to triangulation to calculate the distance of every point on it. With a single-line pattern, the line is swept across the field of view to gather distance information one strip at a time. The advantage of this technique over the others is that the point clouds are computed in real time and the reconstructed three-dimensional model is drawn on screen as it is captured; instead of scanning one point at a time, structured-light scanners scan multiple points, or the entire field of view, at once. The disadvantages are lower geometric accuracy and texture detail; the method also demands a dark environment so that the camera can detect the intensity of the projected pattern, a limitation addressed by a technique called Multistripe Laser Triangulation (MLT).
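
A simplified numeric sketch of the triangulation step: a camera offset from the projector by a baseline b observes the projected stripe displaced by d pixels, and depth follows the standard relation Z = f * b / d. All the numbers below are illustrative assumptions, not calibration data.

```python
import numpy as np

f = 1400.0   # camera focal length in pixels (assumed)
b = 0.12     # projector-camera baseline in metres (assumed)
d = np.array([35.0, 28.0, 14.0])  # observed stripe displacement (pixels)

Z = f * b / d  # distance of each sampled point on the stripe (metres)
print(Z)       # -> [ 4.8  6.  12. ]
```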

Structured light elaborates on and combines two classic computer vision techniques, depth from focus and depth from stereo, and it remains a very active area of research, with many papers published each year.
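
As a sketch of the depth-from-stereo component, the snippet below computes a disparity map with OpenCV's semi-global block matcher; the rectified input file names and matcher parameters are illustrative assumptions.

```python
import cv2

left = cv2.imread("rect_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("rect_right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global matching finds, per pixel, the horizontal shift (disparity)
# between the rectified views; larger disparity means a closer surface.
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,  # must be divisible by 16
    blockSize=5,
)
disparity = sgbm.compute(left, right).astype("float32") / 16.0  # fixed-point
# With calibration (focal length f, baseline b): depth = f * b / disparity.
```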


Images presented are courtesy of the Cyprus Department of Antiquities.

CC BY-NC-ND 4.0
3D Reconstruction by ICACH is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
