Additive Manufacturing in Construction
AMC TRR 277

Research Summary Report of C06

Integration of Additive Manufacturing in the Construction Process

13.06.2024

Mawas, Karam; Doctoral researcher, k.mawas@tu-braunschweig.de, TU Braunschweig, Institute of Geodesy and Photogrammetry (IGP)

Gerke, Markus; Project leader, m.gerke@tu-braunschweig.de, TU Braunschweig, Institute of Geodesy and Photogrammetry (IGP)

Maboudi, Mehdi; Associated scientist, m.maboudi@tu-braunschweig.de, TU Braunschweig, Institute of Geodesy and Photogrammetry (IGP)


Quality control plays a pivotal role in enabling the seamless integration of printed components into objects. To ensure a resilient process and the faithful realization of the designed model in the printed object, continuous and automated data capture and in-process inspection are essential. Based on the outcomes of our quality control measures, we investigated how to integrate these practices into Construction Industry 4.0, and we will continue this research during the second phase.


Summary

The architecture, engineering, and construction (AEC) industry continuously evolves to meet the demand for sustainable and effective design and construction of the built environment. Two primary techniques for additive manufacturing are extrusion and shotcrete: in extrusion, the material is deposited through a digitally controlled nozzle in a layer-by-layer process, whereas in shotcrete, the material is sprayed through the nozzle. The continuous flow of concrete material, termed a filament or layer, shapes the structure of the object. Quality control of the filament geometry is crucial, as it is influenced by factors such as the material, the nozzle dimensions and shape, the extrusion speed, and, in the case of shotcrete, the air pressure. We therefore introduce an automated procedure to assess filament geometry by generating images from point clouds of printed objects, captured with techniques or sensors such as photogrammetry, structured light systems (SLS), or terrestrial laser scanners (TLS). Using a deep learning model, the generated images are segmented to control and evaluate the filament geometry.


Current state of research

Our process begins with data capture of the printed object to obtain a point cloud, as shown in Fig. 2. We then clean and filter the point cloud if necessary; co-registration of TLS stations is accomplished through target-based and plane-based registration. Next, we build a virtual camera model to project the point cloud onto a camera sensor, generating an image for subsequent image-based segmentation. The ground sampling distance (GSD) of this virtual camera is defined according to Shannon's sampling theorem, ensuring it is less than half the height of the area between adjacent filaments. Object points are back-projected into the camera together with their associated information (RGB or intensity). We then train a deep learning model with a U-Net architecture on these images for filament segmentation. The model outputs a binary mask indicating the interlayer line locations in the input image, as illustrated in Fig. 1; this mask can then be fed back to the designed model to update it with the current as-built state.
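To make the segmentation step concrete, the following is a minimal, self-contained sketch of a two-level U-Net in PyTorch. It is illustrative only: the model used in our pipeline may differ in depth, channel counts, loss, and training details. The network maps a generated image to a per-pixel probability of an interlayer line.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal two-level U-Net sketch for binary interlayer segmentation."""

    def __init__(self, in_ch: int = 3, out_ch: int = 1, base: int = 16):
        super().__init__()
        self.enc1 = self._block(in_ch, base)
        self.enc2 = self._block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bott = self._block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = self._block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = self._block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)

    @staticmethod
    def _block(cin: int, cout: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        e1 = self.enc1(x)                      # encoder, full resolution
        e2 = self.enc2(self.pool(e1))          # encoder, half resolution
        b = self.bott(self.pool(e2))           # bottleneck, quarter resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))    # per-pixel interlayer probability

# Example on a dummy batch (H and W must be divisible by 4 for this sketch):
model = TinyUNet()
mask = model(torch.rand(1, 3, 128, 128))       # -> (1, 1, 128, 128) in [0, 1]
```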

Fig. 1: Workflow starting from a point cloud, establishing a virtual camera and frustum, projecting points onto the virtual sensor, and deploying a deep learning model for filament segmentation. / Credit: modified from Mawas et al. 2024 [to be published].

Fig. 2: Workflow of the image generation used to train the deep learning model.
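One detail of the image-generation workflow in Fig. 2 is the choice of GSD. The short Python sketch below illustrates the Shannon constraint stated above; the feature height and object width are hypothetical numbers, and the function names are ours, not part of the published pipeline.

```python
import math

def max_gsd(feature_height_mm: float) -> float:
    """Shannon/Nyquist: one pixel must be smaller than half the smallest
    feature (here, the groove height between adjacent filaments)."""
    return feature_height_mm / 2.0

def sensor_width_px(object_width_mm: float, gsd_mm: float) -> int:
    """Virtual-sensor width needed to cover the object at the chosen GSD."""
    return math.ceil(object_width_mm / gsd_mm)

# Hypothetical example: a 3 mm interlayer groove on a 600 mm wide object.
gsd = max_gsd(3.0)                        # GSD must stay below 1.5 mm/px
print(gsd, sensor_width_px(600.0, gsd))   # -> 1.5 400
```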

Projection onto virtual camera model
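A minimal sketch of this step, assuming a standard pinhole model with intrinsics K derived from the chosen GSD and a known virtual camera pose (R, t); the NumPy-based function and its names are illustrative assumptions, not our exact implementation.

```python
import numpy as np

def project_to_virtual_camera(points, values, K, R, t, width, height):
    """Project 3D object points and their values (RGB or intensity) onto a
    virtual pinhole camera and rasterize them into an image.

    points: (N, 3) object coordinates; values: (N, C) colors or intensities.
    K: (3, 3) intrinsics built from the chosen GSD; R, t: camera pose.
    """
    cam = points @ R.T + t                 # world -> camera coordinates
    front = cam[:, 2] > 0                  # discard points behind the camera
    cam, vals = cam[front], values[front]

    uvw = cam @ K.T                        # pinhole projection
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z, vals = u[inside], v[inside], cam[inside, 2], vals[inside]

    # Painter's algorithm: write far-to-near so the nearest point per pixel
    # ends up on top of the rasterized image.
    order = np.argsort(-z)
    image = np.zeros((height, width, vals.shape[1]), dtype=vals.dtype)
    image[v[order], u[order]] = vals[order]
    return image
```

A full implementation would additionally handle occluded points and the finite footprint of each point on the sensor; the sketch above captures only the geometric core of the image-generation step.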
