
An Adaptive Machine Vision System for Parts Assembly Inspection

Jun Sun, Qiao Sun, and Brian Surgenor

Abstract This paper presents an intelligent visual inspection methodology that addresses the need for improved adaptability of a visual inspection system for parts verification in assembly lines. The proposed system is able to adapt to changing inspection tasks and environmental conditions through an efficient online learning process, without excessive off-line retraining or retuning. The system consists of three major modules: region localization, defect detection, and online learning. An edge-based geometric pattern-matching technique is used to locate the region of verification that contains the subject of inspection within the acquired image. A principal component analysis technique is employed to implement the online learning and defect detection modules. Case studies using field data from a fastener assembly line are conducted to validate the proposed methodology.

Keywords Visual inspection system · Adaptability · Online learning · Defect detection · Principal component analysis · Parts assembly

14.1 Introduction

In mass-production manufacturing, the automation of product inspection processes by utilizing machine vision technology can improve product quality and increase productivity. Various automated visual inspection systems have been used for quality

J. Sun (✉) and Q. Sun

Department of Mechanical and Manufacturing Engineering, University of Calgary, Calgary, Alberta, Canada T2N 1N4

e-mail: jun.sun@ucalgary.ca

B. Surgenor

Department of Mechanical and Materials Engineering, Queen’s University, Kingston, Ontario, Canada K7L 3N6

S.-I. Ao et al. (eds.), Advances in Computational Algorithms and Data Analysis, Lecture Notes in Electrical Engineering 14,

© Springer Science+Business Media B.V. 2009

assurance in parts assembly lines. Instead of human inspectors, a visual inspection system can automatically perform parts verification tasks to assure that parts are properly installed and reject improper assemblies. However, most of the existing visual inspection systems are hand-engineered to provide an ad-hoc solution for inspecting a specific assembly part under specific environmental conditions [1–3].

Such a system does not work properly when changes take place in the assembly line. For instance:

1. Changing products in response to market demands. This requires changing the inspection algorithms of the existing system to deal with a new assembly part.

2. Changing environmental conditions, for example, lighting conditions, camera characteristics, and system fixturing after a certain period of operation. This may render the existing system obsolete and thus require adjusting the original inspection algorithms for the new environmental conditions.

Developing a new algorithm or adjusting the original inspection algorithm is not a trivial task; it is time consuming and expensive. Therefore, an ideal visual inspection system should be able to adapt quickly to changes in the assembly line.

For the past two decades, researchers have attempted to apply supervised machine learning-based strategies to improve the adaptability of visual inspection systems [4, 5]. Conventional approaches typically use pixel-based matching templates or feature-based matching patterns that are pre-defined manually through a trial-and-error procedure. In contrast, a learning-based approach allows the system to establish inspection functions based on the recognition patterns learned from the training samples. The role of human inspectors is to label the training samples based on the quality standard for a particular inspection task. As such, the system is flexible enough to be trained to handle different inspection problems under variable environmental conditions.

Machine learning can be conducted through off-line or online processes. In off-line learning, all training data are obtained in advance, and the system does not change the learned recognition knowledge (e.g., functions, models, or patterns) after the initial training phase. In online learning, the training data are presented incrementally during system operation and the system updates the learned knowledge as necessary.

Popular off-line learning techniques used in visual inspection systems include probabilistic methods, multi-layer perceptron (MLP) neural networks, adaptive neuro-fuzzy inference systems (ANFIS), and decision trees (e.g., C4.5). These learning techniques have been used to build the recognition and classification functions in a visual inspection system [4–9]. However, end users often raise the concern that the performance of an off-line learning based system relies heavily on the quality of the training data. In many situations it may be difficult or even impossible to collect sufficient representative samples, both defective and non-defective, for training the system over a limited period of time. To solve this problem, both Beck et al. [4] and Hata et al. [10] developed defect image generation units for surface inspection systems to increase the number of training samples. Defect images were generated by artificially manipulating the geometrical characteristics of defects, such as size, intensity, location, orientation, and edge sharpness. Evidently, this approach may not be practicable in assembly parts inspection due to the large variation and complexity of assembly parts.

Recently there has been an increasing interest in applying online learning techniques to the development of adaptive visual inspection systems. By learning the inspection patterns incrementally during operation, the system does not require an excessive off-line training process when it faces changing inspection tasks or environmental conditions. The first systematic study on such a system was published by Abramovich et al. [1]. They proposed a novel online part inspection method based on a developmental learning architecture that used the incremental hierarchical discriminant regression (IHDR) tree algorithm. The method was capable of adapting to the variation of parts and environmental properties incrementally by updating rather than reprogramming inspection algorithms. Although the aforementioned research has shown promising results in utilizing online learning techniques to realize adaptive visual inspection, there is still a need for more systematic studies supported by practical experiments. In particular, the following objectives led to this research:

1. Investigating potential online learning techniques for optimal feature extraction and effective recognition or classification

2. Improving the efficiency of an online learning process by minimizing human operator involvement in the supervised learning process

3. Developing an effective method to evaluate the sufficiency of the inspection function/model that is established through online learning

This paper presents an intelligent visual inspection system that addresses the need for improved adaptability of a visual inspection system for parts verification in assembly lines. The remainder of this paper is organized as follows. Section 14.2 describes the architecture of the adaptive visual inspection system. Sections 14.3–14.5 introduce the three major modules of the proposed system, i.e., region localization, defect detection, and online learning. Section 14.6 provides a case study to illustrate the performance of the proposed system. Section 14.7 summarizes this research.

14.2 Architecture of an Adaptive Visual Inspection System

In general, an adaptive visual inspection system can be designed in such a way that the system is capable of defect detection using a recognition model and updating the recognition model online if necessary. As illustrated in Fig. 14.1, the proposed adaptive visual inspection system consists of three major modules:

1. Region Localization Module

This module locates the region of interest (ROI) containing the installation assembly area within the acquired image. It then extracts the corresponding region of verification (ROV) containing the assembly part to be inspected.

Fig. 14.1 Architecture of an adaptive visual inspection system: an acquired image passes through region localization (ROI → ROV), defect detection (applying the recognition model to the ROV to produce the result), and online learning (updating the recognition model from selected training samples)

2. Defect Detection Module

This module checks whether the inspection case appears to be a defective assembly case, e.g., part missing or improper installation, based on the located ROV. A recognition model is used in the inspection process. In our system, the recognition model characterizes non-defective assembly cases; thus an inspection case is considered defective if it deviates from the recognition model.

3. Online Learning Module

This module builds and updates the recognition model with each newly arrived inspection case, if that case is selected as a training sample. Here, the training samples for the supervised learning are actual non-defective assembly cases. Evidently, it would be impracticable and inefficient to require human inspectors to label non-defective cases during system operation (i.e., online). In order to minimize the human inspector's involvement, the proposed system also makes use of the defect detection module to effectively select the training samples that require manual checking/labeling before updating the recognition model. The training strategy will be described in detail in Section 14.5.
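The sample selection idea can be sketched as follows. Here `is_outlier` and `human_label` are hypothetical callbacks standing in for the defect detection module and the human inspector; they are illustrative names, not functions from the paper's implementation.

```python
def select_training_samples(cases, is_outlier, human_label):
    """Sketch of the selection strategy: only cases rejected by the current
    recognition model are sent to a human inspector; cases the inspector
    confirms as non-defective (false rejects) become new training samples."""
    new_samples = []
    for rov in cases:
        if is_outlier(rov):                  # flagged by defect detection
            if human_label(rov) == "non-defective":
                new_samples.append(rov)      # false reject -> learn from it
        # accepted cases need no manual check, minimizing inspector effort
    return new_samples
```

Under this scheme the inspector only sees rejected cases, which is what keeps the online learning process efficient.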

14.3 Localization of ROV

Region localization is required so that the amount of data processing can be reduced by processing the region of verification (ROV) instead of the whole acquired image. In the proposed system, the region of interest (ROI) within an acquired image contains two sub-regions, as shown in Fig. 14.2.

1. Region of Verification (ROV): It must include the inspection subject, that is, the assembly part being inspected. Appearance verification of this region may indicate part missing or improper installation.

2. Region of Matching (ROM): This region contains features that are invariant across both non-defective and defective cases. Therefore, the appearance pattern of this region can be used as a matching reference/template to search for and locate the ROI and its corresponding ROV. The ROM provides efficient and effective identification of the ROV. The ROM can be defined manually during the system set-up and tuning stage.

Fig. 14.2 Defining ROI, ROV, and ROM (ROI = ROV + ROM), illustrated for cases A, B, and C

An edge-based pattern-matching technique is employed in the region localization module. Instead of comparing the pixels of the whole image, the edge-based technique compares edge pixels with the template. It offers several advantages over the pixel-to-pixel correlation method. For example, it offers reliable pattern identification when part of an object is obstructed, as long as about 60% of its edges remain visible. Since only edge information is used, this technique can rotate and scale edge data to find an object regardless of its orientation or size. In addition, this technique can provide good results with a greater tolerance of lighting variations. In this research, the region localization module is implemented using an edge-based geometric pattern-matching function (i.e., the Geometric Model Finder) in the Matrox Imaging Library (MIL version 8), a commercial software package provided by Matrox Imaging Inc.

More background on this function can be found in the Matrox MIL 8 User Guide (2005).
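To illustrate the general idea, not the proprietary Geometric Model Finder itself, the following sketch matches a binary edge template against an image by exhaustive translation search. The gradient-threshold edge detector and the overlap score are simplified stand-ins; a production matcher would also search over rotation and scale, as noted above.

```python
import numpy as np

def edge_map(img, thresh=50.0):
    """Crude edge detector: threshold on gradient magnitude (a stand-in
    for the Canny-style edges a geometric matcher would typically use)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def locate_rom(img, template_edges, thresh=50.0):
    """Slide the ROM edge template over the image and return the offset
    with the highest fraction of matched edge pixels (translation only)."""
    edges = edge_map(img, thresh)
    th, tw = template_edges.shape
    n_edges = max(int(template_edges.sum()), 1)
    best, best_pos = -1.0, (0, 0)
    for r in range(edges.shape[0] - th + 1):
        for c in range(edges.shape[1] - tw + 1):
            score = np.logical_and(edges[r:r + th, c:c + tw],
                                   template_edges).sum() / n_edges
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

Because only edge pixels are scored, partial occlusion of the object lowers the match fraction gracefully rather than destroying the correlation, which is the robustness property described above.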

14.4 Defect Detection using a Principal Component Analysis Technique

The system performs the defect detection of part assembly based on a recognition model that is built using a principal component analysis (PCA) technique. Principal component analysis involves a mathematical procedure that allows optimal representation of images with a reduced order of basis vectors called eigenpictures. The eigenpictures are generated from a set of training images. The projection of an image onto the subspace of eigenpictures is a more efficient representation of the image.

Many works on image recognition and reconstruction have adopted the idea of the PCA based image representation and decomposition.

Given a set of $N$ training images, $X = [x_1, x_2, \ldots, x_i, \ldots, x_N]$, each image consists of $w$ by $h$ pixels. In PCA, an image is represented by a vector of size $wh$: $x_i = [p_1, p_2, \ldots, p_i, \ldots, p_{wh}]^T$, where $p_i$ denotes the intensity value of pixel $i$.

The set of eigenpictures $U = [a_1, a_2, \ldots, a_i, \ldots, a_{wh}]$, with $a_i = [v_1, v_2, \ldots, v_{wh}]^T$, can be obtained by solving the following equation:

$$U^T (X - m)(X - m)^T U = \Lambda \qquad (14.1)$$

where the vector $m = [m_1, m_2, \ldots, m_{wh}]^T$ is the mean of the training set $X$, and $\Lambda$ is a diagonal matrix of eigenvalues: $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_{wh})$.

Corresponding to the $l$ largest eigenvalues, the $l$ major eigenpictures are selected to form a subspace of eigenpictures $U_l = [a_1, a_2, \ldots, a_l]$. By projecting onto the subspace of $U_l$, a newly acquired image $x$ can be represented by a set of projection coefficients $c = [c_1, c_2, \ldots, c_l]^T$:

$$c = U_l^T (x - m) \qquad (14.2)$$

These coefficients can be used to reconstruct the original image within a certain error tolerance. The image reconstructed from its projection coefficients in Eq. (14.2) can be represented as:

$$y = U_l c + m \qquad (14.3)$$

The image reconstruction error measures the difference between the original and the reconstructed images. For a newly acquired image $x$, the reconstruction error can be defined as the sum of the squared residuals between the original image $x$ and the reconstructed image $y$:

$$Q = (x - y)^T (x - y) \qquad (14.4)$$

The reconstruction errors can be used to detect abnormality or novelty in image recognition [11, 12]. In this research, this concept is adopted for defect detection as follows.
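Equations (14.1)–(14.4) can be sketched in a few lines of numpy. The SVD route used here to obtain the eigenpictures is a common choice but an assumption on our part, not necessarily the paper's implementation.

```python
import numpy as np

def build_pca_model(X, l):
    """Build the eigenpicture subspace of Eq. (14.1) via SVD.
    X: (wh, N) matrix whose columns are vectorized training images;
    l: number of major eigenpictures retained.
    Returns (U_l, m, eigenvalues)."""
    m = X.mean(axis=1, keepdims=True)                 # mean image
    U, s, _ = np.linalg.svd(X - m, full_matrices=False)
    eigvals = s**2 / X.shape[1]                       # covariance eigenvalues
    return U[:, :l], m, eigvals

def reconstruction_error(x, U_l, m):
    """Q statistic of Eq. (14.4): squared residual after projecting x
    onto the eigenpicture subspace (Eqs. 14.2-14.3)."""
    c = U_l.T @ (x - m.ravel())                       # Eq. (14.2)
    y = U_l @ c + m.ravel()                           # Eq. (14.3)
    r = x - y
    return float(r @ r)                               # Eq. (14.4)
```

A non-defective case close to the training set reconstructs with a small $Q$; a defective case lies outside the learned subspace and yields a large $Q$.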

Let $\theta_1$, $\theta_2$, and $\theta_3$ denote the summations of the first, second, and third powers of the eigenvalues going from $l+1$ to $K$, respectively. That is:

$$\theta_i = \sum_{j=l+1}^{K} \lambda_j^i, \quad i = 1, 2, 3 \qquad (14.5)$$

where $l$ and $K$ denote the number of major eigenvalues and the total number of non-zero eigenvalues, respectively.

Assuming that all training images are non-defective cases of part assembly, a Gaussian approximation for the distribution of the reconstruction error $Q$ can be represented as [11, 13]:

$$q = Q^{h_0} \sim N(\mu, \sigma^2), \quad h_0 = 1 - \frac{2\theta_1\theta_3}{3\theta_2^2}, \quad \mu = \theta_1^{h_0}\left[1 + \frac{\theta_2 h_0 (h_0 - 1)}{\theta_1^2}\right], \quad \sigma^2 = 2\theta_2 h_0^2 \theta_1^{2h_0 - 2} \qquad (14.6)$$

That is, $q$ is obtained from the normalizing transformation of $Q$ and follows a normal distribution with mean $\mu$ and standard deviation $\sigma$. Here $h_0$ is the so-called joint moment used in the transformation function.

In the defect detection module of this system, the PCA based recognition model is used to detect defective assembly cases. The recognition model is built on non-defective cases; thus an inspection case may be considered defective if it appears as an outlier of the recognition model at a confidence level set by the multiplier $\alpha$. For instance, a 99.74% confidence level translates to $\alpha = 3$ for a normal distribution.

A newly arrived case is considered defective if

$$q \notin [\mu - \alpha\sigma, \mu + \alpha\sigma] \qquad (14.7)$$

where $\mu - \alpha\sigma$ and $\mu + \alpha\sigma$ are the lower and upper thresholds of non-defective cases, respectively. Equation (14.7) can be used as the defect detection criterion.

14.5 Online Learning of PCA based Recognition Model

In the existing literature, the principal component analysis (PCA) technique is used in an off-line learning mode that requires all the training data to be available beforehand. It is unsuitable for applications that demand online updates to the recognition model. This research proposes an efficient online training algorithm that can build and update the recognition model as new training samples emerge.
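The interface of such an online-updatable model can be sketched as follows. For simplicity this version re-runs a full SVD over all samples seen so far; that is an assumption for illustration only and would be replaced by a true incremental update in the proposed algorithm.

```python
import numpy as np

class OnlinePCAModel:
    """Minimal sketch of an online-updatable PCA recognition model.
    Accepts training samples one at a time and rebuilds the eigenpicture
    subspace (here naively, via a full SVD over the accumulated buffer)."""

    def __init__(self, l):
        self.l = l                 # number of major eigenpictures to keep
        self.samples = []
        self.U_l = None
        self.m = None

    def add_sample(self, x):
        """Incorporate a newly labeled non-defective case."""
        self.samples.append(np.asarray(x, dtype=float))
        X = np.stack(self.samples, axis=1)                 # (wh, N)
        self.m = X.mean(axis=1)
        U, _, _ = np.linalg.svd(X - self.m[:, None], full_matrices=False)
        self.U_l = U[:, :min(self.l, U.shape[1])]

    def q_statistic(self, x):
        """Reconstruction error Q of Eq. (14.4) under the current model."""
        c = self.U_l.T @ (x - self.m)
        r = (x - self.m) - self.U_l @ c
        return float(r @ r)
```

The naive rebuild costs grow with the number of samples, which is why an incremental update rule matters in an online setting.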

A major challenge in performing supervised online learning is that all training samples need to be verified and labeled manually by human inspectors. The costly human involvement affects the efficiency of an online learning process. To address this issue, the following two learning strategies are proposed in this system, as illustrated in Fig. 14.3.

Fig. 14.3 Online learning strategies: over time, a model development phase is followed by a model execution phase; inspection cases pass through defect detection, rejected cases undergo result verification by a human inspector, and false rejected cases are fed to online learning to produce the updated recognition model