
Neural Network Applications for Cellular Manufacturing

4.3 A Taxonomy of Neural Network Applications for GT/CM

4.3.1 Pattern Classification Based on Design and Manufacturing Features

The application of neural networks based on design and manufacturing features can be placed within

the contexts of part family identification, engineering design, and process planning blocks shown in Figure 4.1. Based on a review of ANNs developed for these application areas, they may be classified further into four subcategories. These include the use of neural networks primarily to

1. Facilitate classification and coding activity
2. Retrieve existing designs based on features required for a new part
3. Form part families based on design and/or manufacturing features
4. Support GT-based design and process planning functions.

TABLE 4.1 Pattern Classification Based on Design and Manufacturing Features

Studies are categorized under supervised learning networks (back-propagation, Hopfield, interactive activation) and unsupervised learning networks (competitive learning, Kohonen's SOFM, ART1 and variants, fuzzy ART):

- Facilitate Classification and Coding: Kaparthi & Suresh [1991]
- Design Retrieval Systems: Kamarthi et al. [1990]; Venugopal & Narendran [1992]
- Part Family Formation: Kao & Moon [1990, 1991]; Moon & Roy [1992]; Chakraborty & Roy [1993]; Liao & Lee [1994]; Chung & Kusiak [1994]
- Support GT-Based Design Process: Kusiak & Lee [1996]

Traditionally, the identification of part families for group technology has been via a classification and coding system which, as stated earlier, has generally given way to more direct methods that analyze process plans and routings to identify part families. Neural network applications for GT/CM have undergone a similar evolution. Table 4.1 presents a classification of the literature based on the above four categories.

Table 4.1 also categorizes them under various supervised and unsupervised network categories. As seen in the table, most of the methods developed for this problem are based on supervised neural networks, especially the feedforward (back-propagation) network.

The work of Kaparthi and Suresh [1991] belongs to the first category. This study proposed a neural network system for shape-based classification and coding of rotational parts. Because classification and coding is a time-consuming and error-prone activity, a back-propagation network was designed to generate shape-based codes, for the Opitz coding system, directly from bitmaps of part drawings. The network is first trained, using selected part samples, to generate geometry-related codes of the Opitz coding system. The examples demonstrated pertained to rotational parts, but extension to

TABLE 4.2 Pattern Classification Based on Part–Machine/Tool Matrix Elements

Studies span supervised and unsupervised learning networks:

- Part–Machine Matrix Elements: Suresh, Slomp & Kaparthi [1999]
- Part–Tool Matrix Elements: Arizono et al. [1995]

prismatic parts was seen to be feasible. The network was viewed as an element of a computer-aided design (CAD) system, serving to facilitate design procedures in general, and to retrieve existing designs and foster standardization and variety reduction among parts.

Works such as Kamarthi et al. [1990] and Venugopal and Narendran [1992b] have addressed the problem of design retrieval. Kamarthi et al. [1990] used the feedforward neural network model trained with the back-propagation algorithm. It was shown that neural networks can be effectively utilized for design retrieval even in the presence of incomplete or inexact design data. Venugopal and Narendran [1992b] applied a Hopfield network to model design retrieval systems in terms of a human associative memory process. Test cases involved both rotational and nonrotational parts.

The third category of methods involves the use of neural networks for forming part families based on design and/or manufacturing features. Instead of resorting to the laborious part classification and coding activity, these methods are aimed at clustering directly from part features presented to the networks.

Almost all these methods have utilized the three-layer feedforward network, with part features forming the input. The network classifies the presented features into families and helps assign new parts to specific families. The basic mode of operation is as follows.

First, design features are identified to cover design attributes of all the parts. Features are design primitives or low-level designs, along with their attributes, qualifiers and restrictions which affect functionality or manufacturability. Features can be described by form (size and shape), precision (tolerances and finish) or material type. The feature vector can either be extracted from a CAD system or codified manually based on part features. Almost all the works have considered the feature vectors as binary-valued inputs (with an activation value of one if a specified feature is present and a value of zero otherwise). However, future implementations are expected to utilize continuous-valued inputs.

The neural network is constructed as shown in Figure 4.3. The processing units (neurons) in the input layer correspond to all the part features. The number of neurons in the input layer equals the number of features codified. The output layer neurons represent the part families identified. The number of neurons in the output layer equals the expected (or desired) number of families. The middle (hidden) layer provides a nonlinear mapping between the features and the part families. The number of neurons required for the middle layer is normally determined through trial and error. Neural networks are at times criticized for this arbitrariness, or absence of guidelines, in choosing the number of neurons in middle layers.

FIGURE 4.3 Feedforward network: input layer (feature-vector patterns), middle (hidden) layer, and output layer (part families 1, 2, 3).

The input binary-valued vector is multiplied by the connection weights w_ij, and all the weighted inputs are summed and processed by an activation function f_i(net_pi). The output value of the activation function becomes the input for a neuron in the next layer. For more details on the basic operation of three-layer feedforward networks and the back-propagation learning rule, the reader is referred to the standard works of Wasserman [1989] and McClelland and Rumelhart [1988].

The net input, net_pi, to a neuron i, from a feature pattern p, is calculated as

net_pi = Σ_j w_ij a_pj    (4.1)

a_pi = f_i(net_pi) = 1/[1 + exp(−net_pi)]    (4.2)

where a_pi is the activation value of processing unit i for pattern p. This is applied for processing units in the middle layer. The net input is thus a weighted sum of the activation values of the connected input units (plus a bias value, if included). The connection weights are assigned randomly in the beginning and are modified using the equation

Δw_ij = ε δ_pi a_pj    (4.3)

These activation values are in turn used to calculate the net inputs and activation values of the processing units in the output layer using Equations 4.1 and 4.2. Next, the activation values of the output units are compared with desired target values during training. The discrepancy between the two is propagated backwards using

δ_pi = (t_pi − a_pi) f′_i(net_pi)    (4.4)

For the middle layer, the following equation is used to compute the discrepancy:

δ_pi = f′_i(net_pi) Σ_k δ_pk w_ki    (4.5)

With these discrepancies, the weights are adjusted using Equation 4.3. Based on the above procedure, Kao and Moon [1991] presented a four-phase approach for forming part families, involving (1) a seeding phase, (2) a mapping phase, (3) a training phase, and (4) an assigning phase. In the seeding phase, a few very distinct parts are chosen from the part domain to identify basic families. In the mapping phase, these parts are coded based on their features. The network is trained in the training phase utilizing the back-propagation rule. In the assigning phase, the network compares a presented part with those with which it was trained. If the new part does not belong to any assigned family, a new family is identified.
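A minimal sketch of the three-layer feedforward network and the back-propagation rule of Equations 4.1 through 4.5 follows. The toy part data, layer sizes, and learning rate are illustrative assumptions, not values from any of the studies cited:

```python
import math
import random

random.seed(0)

def sigmoid(net):                       # Eq. (4.2)
    return 1.0 / (1.0 + math.exp(-net))

class ThreeLayerNet:
    """Didactic three-layer feedforward net trained with
    the back-propagation rule of Eqs. (4.1)-(4.5)."""

    def __init__(self, n_in, n_hid, n_out, eps=0.5):
        self.eps = eps                  # learning rate (epsilon in Eq. 4.3)
        self.w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hid)]
        self.w2 = [[random.uniform(-0.5, 0.5) for _ in range(n_hid)]
                   for _ in range(n_out)]

    def forward(self, x):
        # Eq. (4.1): weighted sum of connected activations, then Eq. (4.2)
        self.hid = [sigmoid(sum(w * a for w, a in zip(row, x))) for row in self.w1]
        self.out = [sigmoid(sum(w * a for w, a in zip(row, self.hid))) for row in self.w2]
        return self.out

    def train(self, x, target):
        self.forward(x)
        # Eq. (4.4): output-layer discrepancy; f'(net) = a(1 - a) for the sigmoid
        d_out = [(t - a) * a * (1 - a) for t, a in zip(target, self.out)]
        # Eq. (4.5): middle-layer discrepancy, propagated back through w2
        d_hid = [h * (1 - h) * sum(d_out[k] * self.w2[k][i]
                                   for k in range(len(d_out)))
                 for i, h in enumerate(self.hid)]
        # Eq. (4.3): weight adjustment dw = eps * delta * a
        for k, row in enumerate(self.w2):
            for i in range(len(row)):
                row[i] += self.eps * d_out[k] * self.hid[i]
        for i, row in enumerate(self.w1):
            for j in range(len(row)):
                row[j] += self.eps * d_hid[i] * x[j]

# Hypothetical training set: 4 binary part features -> 2 part families
data = [([1, 1, 0, 0], [1, 0]), ([1, 0, 0, 0], [1, 0]),
        ([0, 0, 1, 1], [0, 1]), ([0, 1, 1, 1], [0, 1])]
net = ThreeLayerNet(4, 3, 2)
for _ in range(2000):
    for x, t in data:
        net.train(x, t)
```

After training, presenting a feature vector activates the output neuron of the family it most resembles, which is the "assigning phase" behavior described above.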

Moon and Roy [1992] developed a feature-based solid modelling scheme for part representations.

Again, part features are input to a three-layer feedforward network and classified into part families by the output layer. The training proceeds along conventional lines using predetermined samples of parts and their features. The network can then be used to classify new parts. The new part needs to be represented as a solid model, and the feature-extraction module presents the features to the input layer of the neural network. The system was found to result in fast and consistent responses. This basic procedure is followed in other works shown in Table 4.1.

A fourth application area of neural networks is to support engineering functions more generally.

Several researchers have approached the design process in fundamental terms as a mapping activity, from a function space, to a structure space and eventually to a physical space. The design activity is based on an associative memory paradigm for which neural networks are ideally suited. Works such as Coyne and Postmus [1990] belong to this type of application, which falls somewhat outside the realm of group technology. Similarly, neural networks are beginning to be applied in the context of computer-aided process planning (CAPP) in fundamental terms. The reader is referred to Zhang and Huang [1995] for a review of these closely related works.

A promising new application of neural networks is to utilize them to support GT-based design activity, within a concurrent engineering framework. A related aim is design for manufacturability (DFM), to ensure that new designs can be produced with ease and low cost, using existing manufacturing resources of a firm.

The procedure developed by Kusiak and Lee [1996] represents a promising new approach in this context. It utilizes a back-propagation network to design components for cellular manufacturing, keeping a concurrent engineering framework in mind. It utilizes two middle layers, as shown in Figure 4.4.

The inputs for the network include design features of a component, extracted from a CAD system.

After passing through the input layer, the feature vectors are fed into the first hidden layer, referred to as the feature family layer. The second hidden layer corresponds to manufacturing resources. The output layer yields desired features to fully process the part within a manufacturing cell. Thus the system provides immediate feedback for the designer regarding producibility and potential manufacturing problems encountered with a proposed new design.

The procedure consists of three phases: (1) formation of feature families, (2) identification of machine resources (cells), and (3) determination of a set of features required.

The number of neurons in the input layer corresponds to n, the total number of features derived for parts from a CAD system. The inputs are in the form of a binary n-dimensional vector. The number of neurons in the first hidden layer (feature family level) equals the number of feature families. Features of all existing parts are collected and a feature–part incidence matrix is formed. The hidden layer neurons, representing feature families, are connected only to the related input neurons that represent the various sets of features.
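The feature–part incidence matrix mentioned above can be sketched as follows; the part and feature names are hypothetical:

```python
# Sketch: building a binary feature-part incidence matrix.
# Rows are features, columns are parts; entry is 1 when the part has the feature.
parts = {
    "P1": {"hole", "slot"},
    "P2": {"hole", "thread"},
    "P3": {"pocket", "chamfer"},
}
features = sorted({f for fs in parts.values() for f in fs})
part_ids = sorted(parts)

incidence = [[1 if f in parts[p] else 0 for p in part_ids] for f in features]
```

Grouping rows with similar columns in such a matrix is one way to identify the feature families that the first hidden layer represents.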

FIGURE 4.4 ANN for design of components for GT/CM: input layer (feature-vector patterns, feature level), first hidden layer (feature family level), second hidden layer (manufacturing system level), and output layer (desired features, feature level). (From Kusiak, A. and Lee, H., 1996, Int. J. Prod. Res., 34(7): 1777–1790. With permission.)

The number of neurons in the second hidden layer (manufacturing system level) equals the number of machine resources to process the feature families. The selection of manufacturing resources (which may be machines, tools, fixtures) is made based on the features. This may be performed in two phases:

first, an operation (e.g., drilling) may be selected based on a feature, followed by selection of a machine resource to perform the operation. Thus, for a feature family, a set of machines and tools is identified and grouped together into a machine cell.

The number of neurons in the output layer (feature level) equals the number of features, as in the input layer. Output layer neurons are connected to the second-hidden-layer (machine) neurons only when the corresponding features can be produced on those machines.

Given this network structure, the following sequence of operation takes place:

1. The binary feature vector for a part, extracted from the CAD system, is input to the first layer.

2. A feature family is identified by computing

S_j = Σ_i w_ij x_i    (4.6)

y_j = max(S_1, …, S_p)    (4.7)

where x_i is the input to neuron i, S_j is the net input to neuron j, w_ij is the connection weight between an input and a hidden layer neuron, and y_j is the output value of the activation function.

3. Next, machine resources to process the selected feature family are identified by computing

S_k = Σ_j w_kj y_j    (4.8)

y_k = f_a(S_k) = 1/[1 + exp(−(S_k + Φ_k))]    (4.9)

where Φ_k is the activation threshold of the sigmoidal transfer function and w_kj is the connection weight between first- and second-hidden-layer neurons.

4. Next, features that can be processed by the current manufacturing system are identified by computing

S_l = Σ_k w_lk y_k    (4.10)

y_l = f_a(S_l) = 1/[1 + exp(−(S_l + Φ_l))]    (4.11)

Thus, the network processes the input features and generates desired features as an output, based on current manufacturing resources and their capabilities, fostering design for manufacturability and concurrent engineering.

It is very likely that further development of neural networks for design and process planning functions will be along these lines.

4.3.2 Pattern Classification Based on Part–Machine Incidence Matrices