
Visual quality control in additive manufacturing: Building a complete pipeline

Automated visual quality control (VQC) is a critically important capability for many manufacturers because it helps to dramatically reduce quality control costs, waste, return rates, and reputation damage. Modern computer vision technologies make it relatively easy to develop custom VQC solutions for many types of manufacturing processes, and many off-the-shelf products are readily available. However, the development of VQC solutions for additive manufacturing is generally more challenging than for many other applications such as discrete manufacturing, and requires specialized machine learning techniques and, in some cases, specialized equipment as well.

In this article, we share a reference implementation of a VQC pipeline for additive manufacturing that detects defects and anomalies on the surface of printed objects using depth-sensing cameras. We show how we developed an innovative solution to synthetically generate point clouds representing variations on 3D objects, and propose multiple machine learning models for detecting defects of different sizes. We also provide a comprehensive comparison of different architectures and experimental setups. The complete reference implementation is available in our git repository.

What is additive manufacturing?

Additive manufacturing, also known as 3D printing, is a revolutionary manufacturing process that builds objects layer by layer using digital models. Unlike traditional subtractive manufacturing methods that involve cutting or shaping materials, additive manufacturing adds material to create the final product. This innovative approach offers several key principles that set it apart from conventional manufacturing techniques.

One of the fundamental principles of additive manufacturing is the use of digital design files, typically in the form of Computer-Aided Design (CAD) models. These digital files serve as blueprints for the 3D printing process, guiding the additive manufacturing system in constructing the desired object with precision and accuracy.

Additive manufacturing encompasses various techniques and technologies, each with its own unique characteristics and applications. Some of the commonly used additive manufacturing techniques include:

  • Vat Photopolymerization
  • Material Jetting
  • Binder Jetting
  • Material Extrusion
  • Powder Bed Fusion
  • Sheet Lamination
  • Directed Energy Deposition

Two of these techniques, Stereolithography and Selective Laser Sintering, are best suited to the fashion industry reference case study analyzed later in this blog post. They typically produce high-resolution parts with smooth surface finishes, which are conducive to accurate depth sensing and inspection.

Moreover, additive manufacturing offers several advantages over traditional manufacturing methods. These include:

  • Design freedom: Additive manufacturing enables the production of highly complex geometries, intricate details, and customized designs that would be challenging or impossible to achieve using traditional methods. It allows for design optimization and the integration of multiple components into a single printed part.
  • Reduced material waste: Unlike subtractive manufacturing, where excess material is removed, additive manufacturing adds material only where it is needed. This results in minimal waste and efficient use of resources, making it a more sustainable manufacturing process.
  • Rapid prototyping and iteration: Additive manufacturing allows for quick and cost-effective production of prototypes, enabling rapid design iterations, testing, and validation. This accelerated development cycle can significantly speed up product development processes.

Additive manufacturing finds applications across a wide range of industries. In aerospace, it is used to create lightweight components with complex internal structures, leading to improved fuel efficiency. In healthcare, 3D printing enables the production of patient-specific implants, prosthetics, and medical devices. The automotive industry utilizes additive manufacturing for rapid tooling, customized car parts, and the production of concept cars. Additive manufacturing also has applications in architecture, consumer goods, and fashion, among other fields. In this blog post, we focus on the fashion industry, using a high-end shoe sole model with complex geometry as the use case for demonstrating the VQC pipeline.

Visual quality control in additive manufacturing

One of the primary reasons VQC is essential in additive manufacturing is its direct influence on the functionality of printed parts. While structural integrity and dimensional accuracy are crucial, visual defects can also compromise the performance and reliability of the manufactured objects. For example, in industries such as aerospace or automotive, visual defects like surface roughness, layer misalignment, or distortions can hinder aerodynamics, impair functionality, and compromise safety.

VQC plays a crucial role in verifying the overall integrity, visual appeal, and functionality of printed parts. A product’s appearance significantly affects its value and popularity, while defects undermine its quality and reliability. Product defects can therefore strongly affect consumer satisfaction and the market perception of the brand, which may lead to declining sales and reputational damage. Beyond these considerations, effective VQC also has a significant impact on other factors, including the optimization of production processes, waste reduction, and increased productivity. Early detection and rectification of defects prevent the production of faulty parts, saving time and materials. Ultimately, these efforts contribute to cost savings and enhance the overall efficiency of the manufacturing process.

Product defects can stem from a variety of factors, such as material inconsistencies, process parameters, machine calibration, and post-processing. Several types of visual defects are commonly encountered in additive manufacturing processes, including the following:

  • Cracking: occurs due to thermal stresses and can significantly impact part performance and structural integrity.
  • Residual stresses: result from the cooling and solidification process, leading to distortion and dimensional inaccuracies.
  • Porosity: occurs when voids or air pockets are present within the printed object, which can weaken its strength and durability.
  • Balling: occurs when excessive heat causes the material to form irregular-shaped droplets, affecting surface quality and precision.

Additionally, VQC is typically conducted using one or more of the following tools for data capture:

  • Traditional Tools
  • Coordinate Measuring Machine
  • Optical 3D Scanners
  • Computed Tomography
  • In-Situ Inspection

VQC in additive manufacturing involves the implementation of various techniques and methods to ensure accurate and efficient assessment of visual defects. These approaches range from traditional visual inspection to advanced automated inspection systems incorporating imaging technologies, computer vision, and machine learning algorithms. Types of VQC may be classified as follows [1]:

Traditional non-destructive defect detection technology

  • Infrared imaging: employs the thermal radiation intensity of printed objects to visualize and identify defects, showcasing their shape and contour.
  • Penetration: utilizes capillary phenomena and the application of fluorescent or colored dyes to examine surface defects in materials.
  • Eddy current: a technique that employs electromagnetic induction to detect and characterize defects in conductive materials by measuring changes in induced eddy currents.
  • Ultrasonic: a testing approach that utilizes ultrasonic waves to inspect the internal defects of metal components.

Defect detection technology based on machine learning

  • Input data (Images, Point Cloud)
  • Architecture of model (Classical ML, CNN, Recurrent DL, Auto Encoder)
  • Type of learning (supervised, unsupervised)

The complexity of 3D printed objects poses unique inspection challenges, and traditional methods may be inadequate, which means that specialized automated techniques are required. With advancements in AI methods, this type of VQC is gaining more attention. Typically, these approaches involve training algorithms on large datasets of annotated images or other data to enable automatic recognition and classification of visual defects. However, obtaining a large dataset with real data can be costly and impractical. As a result, this blog post proposes an approach similar to the research presented in the paper Geometrical defect detection for additive manufacturing with machine learning models [2], where synthetic data generation is used as a cost-effective alternative.

Solution overview

The main objective of this solution is to develop an architecture that can effectively learn from a sparse dataset and detect defects by inspecting the surface of the printed object each time a new layer is added. To address the challenge of acquiring a sufficient quantity of defect and anomaly data for accurate ML model training, the proposed approach leverages synthetic data generation. The controlled nature of the additive manufacturing process reduces the presence of unaccounted exogenous variables, making synthetic data a valuable resource for initial model training. In addition, we hypothesize that by deliberately inducing overfitting of the model on good examples, the model will become more accurate in predicting the presence of anomalies/defects. To achieve this, we generate a number of normal examples with introduced noise, in a ratio that matches the defect occurrence expected during the manufacturing process. For instance, if the fault ratio is 10 to 1, we generate 10 similar normal examples for every defect example. Hence, the pipeline for initial training consists of two modules: the synthetic generation module and the module for training anomaly detection models.

Once the model is trained, the inference results need to be supervised by a human operator or annotator to ensure accurate outcomes. This step ensures the reliability of the model’s predictions. At the same time, the synthetic generation process receives feedback from human annotation, enabling adjustments that reduce the disparity between the synthetically generated dataset and real examples.

Additionally, a post-processing module, leveraging an encoder-decoder architecture, could be incorporated immediately after the synthetic generation process and before the training module. This supplementary step would employ Deep GAN-type models, consisting of a generator and discriminator, to learn how to post-process the synthetic dataset, making it resemble real data as closely as possible. However, due to the scope of this research, the development of such a model is omitted.

Finally, this reference implementation is developed in the Python programming language, incorporating the following libraries: Pandas, NumPy, Bpy (the Blender package for Python), Open3D (a Python package for working with point clouds), scikit-learn, and PyTorch.

Figure 1: ML architecture, where the light boxes represent the implemented components in this research

Dataset pipeline

For this research we decided to use the STL mesh of a shoe sole with complex geometry (Figure 4), taken from Thingiverse.com [3], which resembles the use case of printing shoe parts for high-end products. This type of low-volume product is typically manufactured for sneakers or trainers that need to be light while still having special properties for absorbing impact shock. The full structure of the synthetic generation module is presented in the following figure.

Figure 2: Pipeline for synthetic generation of data

Before proceeding with data synthesis, we needed to decide which data recording process to mimic. Two options were considered: recording images/video with regular cameras and reconstructing 3D objects from the recorded images/video; or recording point clouds using depth-sensing cameras. Both approaches were explored, and after evaluation, it was determined that the latter approach, involving the recording of point clouds from depth-sensing cameras, yielded more promising results.

Figure 3: Two options for capturing input data

The process of synthetic generation begins with the normalization of the 3D CAD/STL model. This involves centering the model at the origin of the coordinate system and adjusting its dimensions to match those of the final printed object. Moreover, if the source mesh has a low vertex count, we also increase the number of faces and vertices. In the next phase, bisected models are generated to simulate the layer-by-layer printing process. The dimensions of these bisected models are calibrated based on input parameters specific to the additive manufacturing machine and its layer thickness. The output of this stage is a set of reference meshes, which serve as the starting point for generating both anomaly and normal examples. These reference meshes are also used for comparison during model training.
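As a rough illustration of this step, the following sketch shows how layer-by-layer bisection could be scripted with Bpy. The file names, object handling, and layer_height value are illustrative assumptions rather than the exact code of the reference implementation, and the import/export operators assume a Blender version with the STL add-on enabled.

```python
# Sketch: slice a normalized mesh into cumulative "printed" layers with Bpy.
# Paths, object names, and layer_height are illustrative assumptions.
import bpy

layer_height = 2.0  # assumed layer thickness of the simulated machine, in mm

bpy.ops.import_mesh.stl(filepath="sole.stl")
obj = bpy.context.selected_objects[0]

# Center the model at the origin and bake the transform into the mesh data.
bpy.ops.object.origin_set(type='ORIGIN_CENTER_OF_MASS', center='MEDIAN')
obj.location = (0.0, 0.0, 0.0)
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)

z_min = min(v.co.z for v in obj.data.vertices)
z_max = max(v.co.z for v in obj.data.vertices)

cut_z = z_min + layer_height
while cut_z < z_max:
    # Work on a copy so the full reference mesh is preserved.
    layer = obj.copy()
    layer.data = obj.data.copy()
    bpy.context.collection.objects.link(layer)
    bpy.context.view_layer.objects.active = layer

    # Bisect at the current height, discard everything above the plane,
    # and fill the cut so the layer has a closed top surface.
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.bisect(plane_co=(0.0, 0.0, cut_z), plane_no=(0.0, 0.0, 1.0),
                        clear_outer=True, use_fill=True)
    bpy.ops.object.mode_set(mode='OBJECT')

    bpy.ops.object.select_all(action='DESELECT')
    layer.select_set(True)
    bpy.ops.export_mesh.stl(filepath=f"reference_bisect_{cut_z:.1f}.stl",
                            use_selection=True)
    cut_z += layer_height
```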

Figure 4: Shoe sole model used in this research, and the process of bisecting the mesh that mimics the process of printing layer by layer

The second phase of the synthetic generation process involves converting the meshes from the complete 3D reference model to the hull of the model that would be captured by the recording equipment. To achieve this, cameras are initialized within the 3D scene. These cameras can be stationary and placed in multiple locations, or a video stream of a rotating object can be captured. For this research, we chose to mimic the position of four cameras around the object, as depicted in Figure 5. Once the cameras are generated, we cast rays from each pixel captured by the cameras and record which faces they hit. As a result, we obtain the list of faces that would be visible to the real recording equipment, and remove all other faces.
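The sketch below illustrates the visibility idea in a simplified form: instead of casting a ray per camera pixel, it casts one ray from each assumed camera position toward every face center and keeps the faces whose first hit is themselves. The camera positions and object name are assumptions, and the ray_cast signature assumes Blender 2.91 or newer.

```python
# Simplified visibility test: a face is kept if it is the first surface hit
# along the ray from at least one camera position to its center.
import bpy
from mathutils import Vector

scene = bpy.context.scene
depsgraph = bpy.context.evaluated_depsgraph_get()
obj = bpy.data.objects["sole_layer"]  # assumed name of the bisected layer object

# Four assumed camera positions around the object.
camera_positions = [Vector((300, 0, 150)), Vector((-300, 0, 150)),
                    Vector((0, 300, 150)), Vector((0, -300, 150))]

visible_faces = set()
for cam_pos in camera_positions:
    for poly in obj.data.polygons:
        target = obj.matrix_world @ poly.center
        direction = (target - cam_pos).normalized()
        hit, location, normal, face_index, hit_obj, _ = scene.ray_cast(
            depsgraph, cam_pos, direction)
        # Visible if the ray's first hit is exactly this face of this object.
        if hit and hit_obj.name == obj.name and face_index == poly.index:
            visible_faces.add(poly.index)

print(f"{len(visible_faces)} of {len(obj.data.polygons)} faces are visible")
```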

Figure 5: Cameras are positioned around the object (top), and rays are cast (bottom)

The next part of the pipeline focuses on generating anomalies on the hulls of the 3D reference meshes. In this process, we parameterize geometric defects on the surface of the layers. We use parameters such as the randomly selected location of the anomaly, the area of the anomaly with a standard deviation around its center, the strength of the anomaly, and the direction along the Z-axis (either pull or push). These anomalies are generated on the last layer of the mesh to mimic their occurrence at inference time, when the model will primarily be used to detect defects on the surface of the most recently printed layer. In a real setting, various types of anomalies are likely to occur, so we parameterize them based on real defect examples. The code is therefore implemented in a modular way, so that new anomaly types can easily be added and existing ones excluded.
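A minimal sketch of one such anomaly generator is shown below: a Gaussian bump of configurable strength and size, pushed or pulled along the Z-axis at a random location on the top layer. Parameter names and default values are illustrative assumptions.

```python
# Sketch: displace top-layer vertices with a randomly located Gaussian bump.
import numpy as np

def add_surface_anomaly(vertices, top_z, strength=2.0, size=5.0,
                        direction=1, rng=None):
    """vertices: (N, 3) array; top_z: height of the most recent layer;
    direction: +1 pushes material out, -1 pulls it in."""
    rng = rng or np.random.default_rng()

    # Only vertices on (or very near) the top surface are affected.
    top_mask = np.abs(vertices[:, 2] - top_z) < 1e-3
    top_pts = vertices[top_mask]

    # Random anomaly center chosen among the top-surface vertices.
    center = top_pts[rng.integers(len(top_pts)), :2]

    # Gaussian falloff around the center controls the anomaly area,
    # while `strength` controls the displacement amplitude along Z.
    d2 = np.sum((top_pts[:, :2] - center) ** 2, axis=1)
    displacement = direction * strength * np.exp(-d2 / (2.0 * size ** 2))

    out = vertices.copy()
    out[top_mask, 2] += displacement
    return out
```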

Figure 6: Process of generating anomalies

Figure 7: Rendered examples of meshes with generated anomalies

In the final step of the process, the pipeline converts 3D meshes into point clouds. To achieve this, the polygons are first converted into triangles, enabling the generation of random points on any face. The selection of points is based on the properties of vectors. The sum of two vectors $\overrightarrow{a}$ and $\overrightarrow{b}$ (representing the edges of a triangle) is a new vector $\overrightarrow{c}$ that represents the diagonal of the parallelogram constructed from $\overrightarrow{a}$ and $\overrightarrow{b}$. By scaling each vector by random numbers $u_{i}$ and $u_{j}$ between 0 and 1, we can construct any other vector within that parallelogram, where the endpoint of the new vector represents the point that we want to draw.

Another important property ensures that points are always drawn from the first triangle of the parallelogram, which corresponds to the surface of our mesh face. If the sum of the two random variables $u_{i}$ and $u_{j}$ is smaller than or equal to 1, the point lies inside the triangle of our face. Because the sum of $u_{i}$ and $u_{j}$ cannot exceed 2, whenever their sum is greater than 1, the sum of their complementary values $(1-u_{i})$ and $(1-u_{j})$ is at most 1, so replacing $u_{i}$ and $u_{j}$ with their complements reflects the point back inside the triangle.

\begin{aligned}
\overrightarrow{w} &= u_{i}\,\overrightarrow{a} + u_{j}\,\overrightarrow{b}, \qquad u_{i}, u_{j} \sim U[0,1]\\
u_{i} + u_{j} > 1 &\Longrightarrow (1-u_{i}) + (1-u_{j}) \le 1
\end{aligned}

Figure 8: How points are generated on mesh polygons

In the second part of the final stage, points are drawn uniformly from all the meshes. Since the surface area of triangles varies, it is important to ensure that a sufficient number of points are drawn from each triangle to avoid uneven density. To address this, we use the surface areas as sampling weights, which means that points are drawn more frequently from larger faces than from smaller ones.
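The following sketch combines both ideas, area-weighted triangle selection and the parallelogram construction with the reflection trick, using NumPy. It is a generic implementation of the technique described above rather than the exact code from the repository.

```python
# Sketch: sample points on a triangle mesh, weighted by triangle area.
import numpy as np

def sample_points_on_mesh(vertices, triangles, n_points, rng=None):
    """vertices: (V, 3) float array; triangles: (T, 3) integer index array."""
    rng = rng or np.random.default_rng()
    v0 = vertices[triangles[:, 0]]
    v1 = vertices[triangles[:, 1]]
    v2 = vertices[triangles[:, 2]]

    # Edge vectors a and b, and triangle areas used as sampling weights.
    a = v1 - v0
    b = v2 - v0
    areas = 0.5 * np.linalg.norm(np.cross(a, b), axis=1)
    tri_idx = rng.choice(len(triangles), size=n_points, p=areas / areas.sum())

    # Draw (u_i, u_j) uniformly; points falling in the "second" triangle of the
    # parallelogram (u_i + u_j > 1) are reflected back via their complements.
    u = rng.random((n_points, 2))
    outside = u.sum(axis=1) > 1.0
    u[outside] = 1.0 - u[outside]

    points = (v0[tri_idx]
              + u[:, :1] * a[tri_idx]
              + u[:, 1:] * b[tri_idx])
    return points
```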

As a result of this stage, we obtain point clouds for the reference, normal, and anomaly examples. Additionally, we introduce imperfections to account for potential inaccuracies in the surface or the recording equipment. This is achieved by introducing jittering to the positions of the anomaly and normal example points, effectively changing the points’ positions around their origin. Examples of point clouds are presented in the following figure:

Figure 9: Examples of generated point clouds

Implementation details

Finally, we can move on to implementing the training model, starting with its three layers: input, transformation, and output.

Model pipeline

The training model has three main layers: the input layer, the data transformation layer, and the output layer. The purpose of the transformation layer is to enrich the feature set, filter out noise, and emphasize important signals. The input data itself is sparse: each point cloud consists of a large number of rows, and each row contains only three features, the x, y, and z positions. Consequently, a sophisticated transformation layer is proposed to handle this type of data effectively.

Figure 10: ML architecture for detection of defects (anomalies) on 3D printed products

In more detail, there are two parallel processes within the transformation layer. One process involves estimating the propensity score for each point, while the other process focuses on encoding the mesh into a latent space representation. This transformation results in each mesh being represented by a long vector, which serves as the input for the output layer. In the output layer, point clouds are classified to determine if defects are present or not. Furthermore, visualizations can be obtained using the propensity model values, providing additional insights into the data.

Propensity score

This layer consists of three main parts: comparison of the inspected point cloud with the reference point cloud, evaluation of points based on their probability of belonging to the defective part of the object, and aggregation of the point cloud information into a single vector.

The comparison layer calculates the distance between each point in the inspected object and its closest corresponding point in the reference model. Prior to distance calculation, it is necessary to align the two point clouds. To achieve this, the pre-processing steps and algorithms suggested in the Open3D documentation are utilized. The alignment process involves global registration using the Random Sample Consensus (RANSAC) algorithm for robust estimation, followed by local refinement using the Point-to-Plane Iterative Closest Point (ICP) algorithm to enhance precision.

The basic idea of RANSAC is to randomly select a subset of the data points, then fit a model to this subset. This model is then used to classify the remaining points as either inliers (consistent with the model) or outliers (inconsistent with the model). The algorithm is repeated multiple times, and the model with the most inliers is selected as the best fit. On the other hand, the main idea behind ICP is to use the surface normals of the points in one cloud to estimate the planes that best fit those points. Then, for each point in the other cloud, the algorithm finds the closest point on the estimated plane and calculates the distance between the two points. The goal is to minimize the sum of the squared distances between the points in one cloud and their corresponding planes in the other cloud. By performing this alignment, the inspected and reference point clouds are brought into spatial correspondence, enabling accurate distance calculations for subsequent analysis and defect detection.
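A hedged sketch of this alignment step is shown below, following the global registration example from the Open3D documentation (RANSAC on FPFH features, refined with point-to-plane ICP) and finishing with the per-point distance computation used as a feature downstream. File names and the voxel size are assumptions, and the exact registration signatures may vary slightly between Open3D versions.

```python
# Sketch: align the inspected cloud to the reference, then compute per-point distances.
import open3d as o3d

voxel_size = 1.0
inspected = o3d.io.read_point_cloud("inspected.ply")   # assumed file names
reference = o3d.io.read_point_cloud("reference.ply")

def preprocess(pcd):
    # Downsample and compute FPFH features for global registration.
    down = pcd.voxel_down_sample(voxel_size)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 5, max_nn=100))
    return down, fpfh

src_down, src_fpfh = preprocess(inspected)
ref_down, ref_fpfh = preprocess(reference)

# Global registration with RANSAC on feature correspondences.
ransac = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src_down, ref_down, src_fpfh, ref_fpfh, True, voxel_size * 1.5,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
     o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel_size * 1.5)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Local refinement with point-to-plane ICP (normals are required on the full clouds).
inspected.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2, max_nn=30))
reference.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2, max_nn=30))
icp = o3d.pipelines.registration.registration_icp(
    inspected, reference, voxel_size * 0.5, ransac.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
inspected.transform(icp.transformation)

# Distance from each inspected point to its closest reference point.
distances = inspected.compute_point_cloud_distance(reference)
```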

Figure 11: Alignment of point clouds (left) and comparison of point cloud with reference mesh (right)

After enriching the dataset with the distance feature for each point, the data is fed into a Random Forest classifier to estimate the propensity score of each point belonging to the defective part. The dataset used for classification includes five columns: the x, y, and z coordinates, the distance to the reference point cloud, and the label indicating whether the point is normal or defective. It is important to note that the dataset is imbalanced: the majority of example meshes are labeled as normal, while the defect examples constitute a significantly smaller portion of the point clouds.

To address this class imbalance, we include weights for rebalancing the internal dataset during the training process. Additionally, we propose using the Precision-Recall Area Under the Curve (PR-AUC) for hyperparameter tuning, as it is a suitable evaluation metric for imbalanced datasets. Finally, all points are aggregated using the following statistical measures (a minimal sketch of this step follows the list below):

  • Distance from the inspected point to the closest reference point: mean, max, percentiles {60, 70, 80, 90, 95, 96, 97, 98, 99}
  • Probability that a point is an anomaly: mean, max, percentiles {60, 70, 80, 90, 95, 96, 97, 98, 99}
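A minimal sketch of the propensity layer is shown below, assuming a per-point DataFrame with columns x, y, z, dist_to_reference, and a binary label; the column names, estimator settings, and split strategy are illustrative assumptions.

```python
# Sketch: per-point propensity model with class rebalancing and PR-AUC evaluation,
# plus the aggregation of per-point values into one fixed-length vector per mesh.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score  # PR-AUC
from sklearn.model_selection import train_test_split

def train_propensity_model(points: pd.DataFrame):
    features = ["x", "y", "z", "dist_to_reference"]
    X_train, X_test, y_train, y_test = train_test_split(
        points[features], points["label"], test_size=0.2, stratify=points["label"])

    # class_weight="balanced" rebalances the heavily imbalanced per-point dataset.
    model = RandomForestClassifier(n_estimators=200, class_weight="balanced", n_jobs=-1)
    model.fit(X_train, y_train)

    pr_auc = average_precision_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"PR-AUC: {pr_auc:.3f}")
    return model

def aggregate_mesh(distances, propensities):
    """Collapse per-point values into the vector fed to the output layer."""
    pct = [60, 70, 80, 90, 95, 96, 97, 98, 99]

    def stats(values):
        return [np.mean(values), np.max(values), *np.percentile(values, pct)]

    return np.array(stats(distances) + stats(propensities))
```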

Encoding data

For the encoding step, we utilized the PointNet deep learning architecture, which has proven to be a powerful and flexible framework for processing unordered point clouds, and has achieved state-of-the-art results. PointNet takes a set of 3D points as input and generates a fixed-size feature vector as output, which can be utilized for various tasks such as classification and segmentation. The key concept behind PointNet is the application of permutation-invariant neural network operations, meaning they are agnostic to the order of the input points. For more details on how PointNet works, we suggest reading the original paper, PointNet: Deep learning on point sets for 3D classification and segmentation [4].

Before feeding the data into PointNet, a downsampling step is performed to reduce the dimensionality of the input vectors. In this research, we experimented with different dimensions, specifically 256 and 512. Our experiments indicated that using a dimension of 256 yielded better results, while using 512 showed potential benefits for capturing smaller anomalies. In the literature, Iterative Farthest Point Sampling (IFP) is suggested as a downsampling technique. IFP selectively reduces the number of points in a point cloud while preserving its salient geometric features. It achieves this by iteratively selecting points that are farthest from the previously selected points. IFP has been shown to better represent the mesh compared to uniformly sampling points from the cloud.

However, the current setup of IFP downsampling has a limitation when dealing with smaller anomalies. The technique may inadvertently skip over the area of the anomaly, especially when the anomaly is small. To address this issue, we propose a novel downsampling approach. Instead of uniformly sampling points or using IFP, we suggest drawing weighted points based on the propensity output from the previously discussed layer. This means that points with a higher propensity score for being a defect will have a greater chance of being selected. As a result, our model will be trained not to classify the 3D object itself, but rather to determine whether the object has defects or not in the latent space.
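The two downsampling strategies can be sketched as follows; this is a straightforward NumPy illustration of the idea, not the repository code.

```python
# Sketch: Iterative Farthest Point sampling vs. propensity-weighted sampling.
import numpy as np

def farthest_point_sampling(points, n_samples, rng=None):
    """Greedy IFP: repeatedly pick the point farthest from those already selected."""
    rng = rng or np.random.default_rng()
    selected = [rng.integers(len(points))]
    dist = np.full(len(points), np.inf)
    for _ in range(n_samples - 1):
        # Distance of every point to its nearest already-selected point.
        dist = np.minimum(dist, np.linalg.norm(points - points[selected[-1]], axis=1))
        selected.append(int(np.argmax(dist)))
    return points[selected]

def propensity_weighted_sampling(points, propensities, n_samples, rng=None):
    """Favor points the propensity model flags as likely defects."""
    rng = rng or np.random.default_rng()
    weights = propensities + 1e-6          # keep zero-propensity points selectable
    weights = weights / weights.sum()
    idx = rng.choice(len(points), size=n_samples, replace=False, p=weights)
    return points[idx]
```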

Finally, the PointNet architecture can be divided into two main components: the PointNet encoder and the PointNet classifier. For our purposes, we only needed the first part of the architecture, although we tested results for the complete model as well.
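For orientation, the sketch below shows a minimal PointNet-style encoder (a shared per-point MLP followed by max pooling). It deliberately omits the input and feature transform networks of the original paper, so it is a simplified illustration rather than the exact architecture used here.

```python
# Sketch: minimal PointNet-style encoder in PyTorch.
import torch
import torch.nn as nn

class PointNetEncoder(nn.Module):
    def __init__(self, embedding_dim=256):
        super().__init__()
        # Shared MLP applied to every point independently (implemented as 1x1 convs).
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, embedding_dim, 1), nn.BatchNorm1d(embedding_dim), nn.ReLU(),
        )

    def forward(self, x):
        # x: (batch, n_points, 3) -> (batch, 3, n_points) for Conv1d.
        x = self.mlp(x.transpose(1, 2))
        # Symmetric max pooling makes the embedding invariant to point order.
        return torch.max(x, dim=2).values

# Example: embed a batch of 32 downsampled clouds of 1024 points each.
encoder = PointNetEncoder(embedding_dim=256)
embedding = encoder(torch.randn(32, 1024, 3))   # -> shape (32, 256)
```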

Output layer

The final layer of the pipeline comprises a classifier model and a proposed method for visualizing the results. For the classification task, various machine learning (ML) or deep learning (DL) models can be utilized. In this research, we employed two variations: Random Forest and PointNet classifier.

To create the input vector for the final model, we combined the outputs from the propensity estimation and encoder pipeline, and this input configuration was found to be effective in our experiments. However, we also explored other setups, which are presented in the results section of this blog post. Finally, the output of the propensity layer is used one more time to visualize anomalies of the inspected object.
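A minimal sketch of this stacking step is shown below, using placeholder arrays shaped like the pipeline outputs (22 aggregated statistics plus a 256-dimensional embedding per mesh); names and sizes are illustrative assumptions.

```python
# Sketch: concatenate propensity statistics and PointNet embeddings per mesh,
# then train the final Random Forest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

def build_feature_matrix(aggregated_stats, embeddings):
    """One row per inspected mesh: aggregated stats followed by the embedding."""
    return np.hstack([aggregated_stats, embeddings])

# Placeholder data shaped like the real pipeline outputs:
# 22 statistics (2 x (mean, max, 9 percentiles)) + a 256-d embedding per mesh.
rng = np.random.default_rng(0)
stats, emb = rng.random((800, 22)), rng.random((800, 256))
labels = rng.integers(0, 2, size=800)

X = build_feature_matrix(stats, emb)
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", n_jobs=-1)
clf.fit(X[:600], labels[:600])
print(classification_report(labels[600:], clf.predict(X[600:])))
```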

Training models

The structure of the training model with experiments is shown in the diagram below. The model architecture tested in this research is shown on the left, while various experiment configurations are defined on the right side of the diagram. The training dataset includes two different ratios of defect vs normal: balanced and imbalanced. Moreover, the training pipeline is designed to simulate three different sizes of anomalies using the following parameters:

  • Large anomalies
    ◦ Strength of defect scale: 5
    ◦ Size of defect scale: 10
  • Medium anomalies
    ◦ Strength of defect scale: 2
    ◦ Size of defect scale: 5
  • Small anomalies
    ◦ Strength of defect scale: 1
    ◦ Size of defect scale: 2
Figure 12: Different training architectures and experiment setup

It’s important to note that as the size of the anomalies decreases, the dataset for the propensity model becomes more imbalanced. This means that fewer data points are influenced by the defective parts. Additionally, smaller defect sizes tend to resemble noise. Therefore, we incorporated several different types of information into the final classifier, such as embedding layers and anomaly probability.

The second step in the data generation process involved splitting the data. To avoid data leakage from one model to another downstream, and since the data generation process provides an unlimited number of examples, we created separate datasets with different seeds for the propensity model and the embedding/classification model.

Propensity model

For the propensity model, we generated examples with 10 different iterations per 10 different bisects, resulting in a total of 100 anomalies and 100 normal examples of the reference mesh. However, considering the large number of points (approximately 60 million), we sampled 20% of the points to use as the training dataset, amounting to around 7.5 million points. The test results are presented below:

| Model | Balance | Anom. | TN | FP | FN | TP | Precis. | Recall | F1 |
|---|---|---|---|---|---|---|---|---|---|
| RF | 50/50 | L | 1,527,756 | 714 | 16586 | 5241 | 0.24 | 0.88 | 0.37 |
| RF | 90/10 | L | 843,728 | 418 | 2917 | 607 | 0.17 | 0.59 | 0.26 |
| RF | 50/50 | M | 1,550,572 | 197 | 2907 | 1214 | 0.29 | 0.86 | 0.43 |
| RF | 90/10 | M | 860,395 | 128 | 607 | 311 | 0.34 | 0.71 | 0.45 |
| RF | 50/50 | S | 1,553,902 | 53 | 725 | 210 | 0.22 | 0.80 | 0.35 |
| RF | 90/10 | S | 861,372 | 27 | 27 | 15 | 0.36 | 0.36 | 0.35 |

Figure 13: Feature importance for random forest propensity model

The results demonstrate that the RF propensity model is capable of learning the presence of anomalies to a certain degree. However, when the size of anomalies is smaller and the data imbalance is greater, the precision decreases. This suggests a significant imbalance between normal and defective data points. Furthermore, the trained model indicates that the position on the Z-axis is the most crucial feature, which aligns with the expected behavior of the simulated process, where anomalies are generated in the highest layer of the mesh. Additionally, it is apparent that all other features are important and none of them are redundant. This implies that our model may be underfitting in general and cannot be used as a standalone model.

Embedding and downsampling models

For the embedding layer, which utilizes the PointNet model, we generated examples using 200 different iterations per 10 different bisects, resulting in a total of 4,000 anomaly and normal examples of the reference mesh. To create the embedding, we employed two distinct downsampling techniques: Iterative Farthest Point (IFP) and RF propensity (RF prop). Moreover, we found the most suitable hyperparameter values to be:

  • batch_size = 32
  • sample_rate = 1024
  • epochs = 10
  • learning_rate = 0.0001

For evaluating the test results, we used the classifier part of the PointNet model. We obtained the following results for the different downsampling methods and experiment setups:

| Downsampling | Balance | Anom. | TN | FP | FN | TP | Precis. | Recall | F1 |
|---|---|---|---|---|---|---|---|---|---|
| IFP | 50/50 | L | 300 | 100 | 190 | 210 | 0.68 | 0.53 | 0.59 |
| RF | 50/50 | L | 211 | 189 | 54 | 346 | 0.65 | 0.87 | 0.74 |
| IFP | 90/10 | L | 300 | 100 | 22 | 22 | 0.18 | 0.5 | 0.27 |
| RF | 90/10 | L | 211 | 189 | 3 | 41 | 0.18 | 0.93 | 0.3 |
| IFP | 50/50 | M | 320 | 80 | 204 | 196 | 0.71 | 0.49 | 0.58 |
| RF | 50/50 | M | 280 | 120 | 204 | 196 | 0.62 | 0.49 | 0.55 |
| IFP | 90/10 | M | 320 | 80 | 30 | 14 | 0.15 | 0.32 | 0.2 |
| RF | 90/10 | M | 280 | 120 | 18 | 26 | 0.18 | 0.59 | 0.27 |
| IFP | 50/50 | S | 243 | 157 | 154 | 246 | 0.61 | 0.62 | 0.61 |
| RF | 50/50 | S | 160 | 240 | 132 | 268 | 0.53 | 0.67 | 0.59 |
| IFP | 90/10 | S | 243 | 157 | 13 | 31 | 0.16 | 0.7 | 0.27 |
| RF | 90/10 | S | 160 | 240 | 18 | 26 | 0.1 | 0.59 | 0.17 |

The results do not provide clear evidence on which downsampling technique yields better results. Surprisingly, IFP demonstrates better performance for smaller samples. One possible explanation for these findings could be the smaller number of epochs, preventing the model from converging to a stable point where the reduction in the loss function becomes negligible. However, similar to the previous model, the metrics indicate that the model can learn to distinguish defects from normal samples. Nevertheless, as the dataset becomes more imbalanced and the defect size decreases, maintaining an acceptable level of detection becomes challenging.

Figure 14: Progress of learning during the first ten epochs of the PointNet model

Classification model

In the final step, we combined the two previous models and utilized their outputs to train the last layer of the architecture, which is the classification model. We employed the Random Forest algorithm once again and experimented with three different stacked upstream setups:

  • Random Forest propensity + Random Forest classifier (RF-P + RF-C)
  • Random Forest propensity + PointNet embedding with Iterative Farthest Point downsampling + Random Forest classifier (RF-P + PN-(IFP) + RF-C)
  • Random Forest propensity + PointNet embedding with Random Forest propensity score downsampling + Random Forest classifier (RF-P + PN-(RF-P) + RF-C)

The training and testing were conducted on a newly generated dataset, with an equal number of different instances per bisect as done for the PointNet model. The experiments were performed consistently across various combinations of different balancing techniques and anomaly sizes to draw conclusive insights on how the model should be treated in different cases. The results obtained from this experiment are as follows:

| Model | Balance | Anom. | TN | FP | FN | TP | Precis. | Recall | F1 |
|---|---|---|---|---|---|---|---|---|---|
| RF-P + RF-C | 50/50 | L | 399 | 1 | 9 | 391 | 1.00 | 0.98 | 0.99 |
| RF-P + PN-(IFP) + RF-C | 50/50 | L | 400 | 0 | 6 | 394 | 1.00 | 0.99 | 0.99 |
| RF-P + PN-(RF-P) + RF-C | 50/50 | L | 394 | 6 | 12 | 388 | 0.98 | 0.97 | 0.98 |
| RF-P + RF-C | 90/10 | L | 399 | 1 | 2 | 42 | 0.98 | 0.95 | 0.97 |
| RF-P + PN-(IFP) + RF-C | 90/10 | L | 400 | 0 | 1 | 43 | 1.00 | 0.98 | 0.99 |
| RF-P + PN-(RF-P) + RF-C | 90/10 | L | 394 | 6 | 2 | 42 | 0.88 | 0.95 | 0.91 |
| RF-P + RF-C | 50/50 | M | 378 | 22 | 49 | 351 | 0.94 | 0.88 | 0.91 |
| RF-P + PN-(IFP) + RF-C | 50/50 | M | 378 | 22 | 33 | 367 | 0.94 | 0.92 | 0.93 |
| RF-P + PN-(RF-P) + RF-C | 50/50 | M | 369 | 31 | 55 | 345 | 0.92 | 0.86 | 0.89 |
| RF-P + RF-C | 90/10 | M | 378 | 22 | 10 | 34 | 0.61 | 0.77 | 0.68 |
| RF-P + PN-(IFP) + RF-C | 90/10 | M | 378 | 22 | 5 | 39 | 0.64 | 0.89 | 0.74 |
| RF-P + PN-(RF-P) + RF-C | 90/10 | M | 369 | 31 | 8 | 36 | 0.54 | 0.82 | 0.65 |
| RF-P + RF-C | 50/50 | S | 326 | 74 | 131 | 269 | 0.78 | 0.67 | 0.72 |
| RF-P + PN-(IFP) + RF-C | 50/50 | S | 306 | 94 | 131 | 269 | 0.74 | 0.67 | 0.71 |
| RF-P + PN-(RF-P) + RF-C | 50/50 | S | 284 | 116 | 102 | 298 | 0.72 | 0.75 | 0.73 |
| RF-P + RF-C | 90/10 | S | 326 | 74 | 14 | 30 | 0.29 | 0.68 | 0.41 |
| RF-P + PN-(IFP) + RF-C | 90/10 | S | 306 | 94 | 17 | 27 | 0.22 | 0.61 | 0.33 |
| RF-P + PN-(RF-P) + RF-C | 90/10 | S | 284 | 116 | 9 | 35 | 0.23 | 0.80 | 0.36 |

The results presented in the table demonstrate that the suggested approach achieves an industrially acceptable F1 score of 0.99. This indicates that, without further tuning, the model is capable of identifying all anomalies except in one case where a defect was present but evaluated as negative. These results were obtained using the proposed ML architecture, where Iterative Farthest Point (IFP) is utilized for downsampling the embedding layer.

Furthermore, the results indicate good performance for medium-sized anomalies, while for smaller anomalies, additional tuning is necessary to improve accuracy. This suggests that in situations with higher data imbalance and smaller anomalies, generating larger datasets, conducting more iterations, and fine-tuning hyperparameters may be required to achieve more accurate models. Interestingly, the results show that the PointNet model does not enhance accuracy for smaller anomalies; in fact, it leads to worse results. As observed previously, this suggests that in this experimental setup, the model should be trained with more examples and epochs until the loss function converges to a stable value.

To provide a better understanding of how the model works, we have illustrated the feature importance for human-readable attributes (without PointNet vectors) in the accompanying figure. It is evident that points evaluated as highly likely to be anomalous contribute the most information to the downstream model. This highlights the importance of training the propensity model to be sensitive enough to distinguish high-defect points from normal ones. Furthermore, compared to the distance feature, it is clear that the propensity model provides highly valuable information to the downstream classification models.

Figure 15: Feature importance of random forest classifier model

Lastly, for visualization purposes, we utilized the output of the Random Forest propensity model. The red channel value of each point was calculated by multiplying its anomaly probability by 255, while the blue channel value was obtained by multiplying (1 – anomaly probability) by 255. Visualization could also be obtained by training PointNet for segmentation, but this was out of scope for this blog post. A minimal coloring sketch is shown below, and the resulting anomaly visualizations are shown in Figure 16.
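The sketch assumes Open3D for display; since Open3D stores colors as floats in [0, 1], the 0–255 channel values described above are divided back by 255 before assignment. The file name and propensity model interface in the usage comment are assumptions.

```python
# Sketch: color a point cloud by anomaly propensity (red = likely defect, blue = normal).
import numpy as np
import open3d as o3d

def color_by_propensity(pcd: o3d.geometry.PointCloud, anomaly_prob: np.ndarray):
    red = (anomaly_prob * 255) / 255.0            # red channel from anomaly probability
    blue = ((1.0 - anomaly_prob) * 255) / 255.0   # blue channel from (1 - probability)
    green = np.zeros_like(anomaly_prob)
    pcd.colors = o3d.utility.Vector3dVector(np.stack([red, green, blue], axis=1))
    return pcd

# Example usage (assumed file name and a propensity model exposing predict_proba):
# pcd = o3d.io.read_point_cloud("inspected.ply")
# probs = propensity_model.predict_proba(features)[:, 1]
# o3d.visualization.draw_geometries([color_by_propensity(pcd, probs)])
```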

Figure 16: Visualization of anomalies that used propensity score for illustrating defects

Conclusion

In conclusion, the main hypothesis of this study has been confirmed: the proposed pipeline can be utilized for training models for the verification and VQC of additive manufacturing. However, there are several important considerations to take into account. Firstly, it was observed that smaller anomalies require a larger amount of data and a more complex infrastructure to accurately detect them. This highlights the need for a carefully designed data collection strategy and a robust infrastructure for capturing subtle anomalies.

Additionally, it was found that the downsampling technique used in the PointNet model may need adjustments to improve its accuracy in detecting anomalies, especially smaller ones. Ensuring that downsampling does not skip small anomalies and adequately represents them is crucial for reliable detection. The implementation of a propensity model for classifying points individually has proven to enhance the performance of the tested infrastructure. This approach boosts the overall anomaly detection capabilities and contributes to more precise identification of anomalies within the point cloud data.

Furthermore, it is worth mentioning that post-processing of synthetic data, such as adding support geometry, removing cloud elements that do not belong to the product, and altering internal patterns, may be necessary. Fine-tuning the model based on this post-processed data can further improve its performance. Lastly, it is important to note that the proposed pipeline is best suited for low-volume production, since for the best performance the model should be overfitted to a specific product.

In summary, while the pipeline demonstrates promising results for VQC in additive manufacturing, it requires further refinement and adjustments to address challenges related to smaller anomalies, downsampling, post-processing, and scalability. With continued development and fine-tuning, this approach holds significant potential for enhancing the verification and quality control processes in additive manufacturing.

References

  1. Chen, Y., Peng, X., Kong, L., Dong, G., Remani, A., & Leach, R. (2021). Defect inspection technologies for additive manufacturing. International Journal of Extreme Manufacturing, 3, 022002. DOI: 10.1088/2631-7990/abe0d0
  2. Li, R., Jin, M., & Paquit, V. C. (2021). Geometrical defect detection for additive manufacturing with machine learning models. Materials & Design, 206, 109726.
  3. Leonel Yuan, Canada. Source: https://www.thingiverse.com/thing:2817975/files
  4. Qi, C. R., Su, H., Mo, K., & Guibas, L. J. (2017). PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 652-660).
