Additive Manufacturing in Construction
AMC TRR 277

Research Summary Report of C06

Integration of Additive Manufacturing in the Construction Process

 

13.06.2025

M. Sc. Karam Mawas                Researcher, k.mawas@tu-braunschweig.de

M. Sc. Mohammad Savadkouhi        Researcher, mohammad.savadkouhi-aghamolki@tu-braunschweig.de

Dr.-Ing. Mehdi Maboudi            Researcher, m.maboudi@tu-braunschweig.de

Prof. Dr.-Ing. Markus Gerke       Project leader, m.gerke@tu-braunschweig.de

Technical University of Braunschweig, Institute of Geodesy and Photogrammetry (IGP)

 

Quality control plays a pivotal role in enabling the seamless integration of components into objects and, more generally, in maintaining pre-defined tolerances. To ensure a resilient process and the faithful realization of the designed model in the printed object, it is essential to implement continuous and automated data capture and process inspection. Based on the outcomes of our quality control measures, we investigated how to integrate these practices into Construction Industry 4.0. We will continue our research in this field during the second phase.

Summary

The architecture, engineering, and construction (AEC) industry continuously evolves to meet the demand for sustainable and effective design and construction of the built environment. Two primary techniques for additive manufacturing are extrusion and shotcrete. With extrusion, the material is extruded through a digitally controlled nozzle in a layer-by-layer process; in shotcrete, the material is sprayed through the nozzle. The continuous flow of concrete material, termed filament or layer, shapes the object structure. Quality control of the filament geometry is crucial, as the geometry is influenced by factors such as material properties, nozzle dimensions and shape, extrusion speed, and, in the case of shotcrete, air pressure. We therefore introduce an automated procedure to assess filament geometry by generating images from point clouds of printed objects captured with techniques or sensors such as photogrammetry, structured light systems (SLS), or terrestrial laser scanners (TLS). Using a deep learning model, the generated images are segmented to control and evaluate the filament geometry.

Integrating automated quality control into the production cycle can significantly enhance productivity and uphold stringent quality standards in rapid construction. Additionally, effective automated methodologies can replace labor-intensive manual inspections, ensuring the structural attributes of 3D-printed items align with design specifications and enabling prompt defect identification.

 

Current state of research

Our process begins with capturing data of the printed object to obtain a point cloud or images. Fig. 1a shows the data used for training and testing the model, which comes from several sources. One source is fresh extrusion-based data obtained from Rill-García et al. (2022). A second source consists of 2D images of a Contour Crafting (CC) 3D-printed object, which were processed photogrammetrically to generate 3D data. Further 3D point-cloud data, acquired via structured-light scanning (SLS), were sourced from Mendricky and Keller (2023). The TLS dataset for SC3DP and the extruded clay object encompasses multiple objects with a range of geometric complexities, as shown in Fig. 1b.

Captured images can be fed directly into the deep learning model, but point cloud data requires further processing, as shown in Fig. 2. After obtaining the point cloud of the object, a virtual camera model is built to project the point cloud onto a camera sensor, generating an image for the subsequent image-based instance-segmentation deep learning model. We define the ground sampling distance (GSD) of the virtual camera according to Shannon's sampling theorem, ensuring that it is smaller than half the size of the gap between adjacent filaments.
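The projection step can be illustrated with a short, simplified sketch (not the implementation used in the project): a point cloud is rendered into a top-down image with a virtual camera whose GSD is set to half of the smallest feature to be resolved, following Shannon's sampling theorem. All function and variable names below are hypothetical.

import numpy as np

def point_cloud_to_image(points, colors, min_feature_size):
    """points: (N, 3) object coordinates [m]; colors: (N, 3) uint8 RGB or intensity;
    min_feature_size: smallest geometry to resolve, e.g. the gap between filaments [m]."""
    gsd = min_feature_size / 2.0                    # GSD <= half the smallest feature (Shannon)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = ((x - x.min()) / gsd).astype(int)        # pixel column of each point
    rows = ((y.max() - y) / gsd).astype(int)        # pixel row (image y axis points down)
    h, w = rows.max() + 1, cols.max() + 1
    image = np.zeros((h, w, 3), dtype=np.uint8)
    depth = np.full((h, w), -np.inf)                # z-buffer: keep the highest point per pixel
    for r, c, zi, rgb in zip(rows, cols, z, colors):
        if zi > depth[r, c]:
            depth[r, c] = zi
            image[r, c] = rgb
    return image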

The points, together with their associated attributes (RGB or intensity), are projected into the virtual camera image. Finally, since the objects differ in size, a sliding window is applied to crop the generated images into 512×512 tiles and obtain a consistent input size. We then train a deep learning model on the images in Fig. 1b for filament instance segmentation, using the YOLOv11 architecture.
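The tiling and training steps could look roughly as follows. This is a hedged sketch that assumes the Ultralytics Python package for YOLOv11 instance segmentation; the dataset configuration file "filament.yaml" and the hyper-parameters are illustrative, not the values used in the project.

from ultralytics import YOLO

def sliding_window(image, tile=512, stride=512):
    """Crop fixed-size tiles so that objects of different sizes reach the
    network without resizing."""
    h, w = image.shape[:2]
    tiles = []
    for top in range(0, max(h - tile, 0) + 1, stride):
        for left in range(0, max(w - tile, 0) + 1, stride):
            tiles.append(image[top:top + tile, left:left + tile])
    return tiles

# Instance-segmentation training on the tiled images; weights file, dataset
# config, and hyper-parameters are placeholders.
model = YOLO("yolo11n-seg.pt")                      # pretrained YOLOv11 segmentation weights
model.train(data="filament.yaml", imgsz=512, epochs=100)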

As shown in Fig. 3, the model produces a colorized mask indicating the filament area, superimposed on the input image. Further model enhancements are required, especially for images with strong perspective distortion. Nevertheless, the results of the deep learning model can be fed back into BIM/FIM models to update the designed model with the current as-built state.
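For illustration, inference on a generated tile and the overlay of the predicted filament masks could be sketched as below, again assuming the Ultralytics API and OpenCV; "best.pt" and the file names are placeholders. The resulting per-instance masks are the kind of information that could then be propagated back to the BIM/FIM model.

import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("runs/segment/train/weights/best.pt")   # trained weights (placeholder path)
result = model("tile_0001.png")[0]                   # one generated 512x512 tile

overlayed = result.orig_img.copy()
if result.masks is not None:
    for mask in result.masks.data.cpu().numpy():     # one binary mask per filament instance
        mask = cv2.resize(mask, (overlayed.shape[1], overlayed.shape[0]))
        color = np.random.randint(0, 255, 3, dtype=np.uint8)
        colored = np.where(mask[..., None] > 0.5, color, overlayed).astype(np.uint8)
        overlayed = cv2.addWeighted(overlayed, 0.6, colored, 0.4, 0)
cv2.imwrite("tile_0001_masks.png", overlayed)        # colorized masks superimposed on the input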

Fig. 1: Dataset used for the instance-segmentation deep learning model. (a) Classification of the different datasets used for deep learning.

Fig. 1: Dataset used for the instance-segmentation deep learning model. (b) Complete objects used for each step of deep learning: training, validation, and testing. A sliding window is applied to each image shown in the figure to obtain an image size of 512×512 (avoiding resizing). The sliding window yields 843 training images, 199 validation images, and 138 test images.

Fig. 2: Workflow for generating images from point clouds for training the deep learning model.

Fig. 3: Results of the deep learning model on the test dataset. (a) 2D image with perspective distortion from extrusion-based 3D printing; (b) image generated by our virtual camera model from a 3D point cloud of an extrusion-based print; (c) image of an SC3DP object generated by our virtual camera model; (d) image of fresh extruded concrete material.
