
5.4 Properties of Inspection Policies

The main goal of the research described in this paper is to develop an efficient inspection procedure for a production process whose quality is observed only indirectly. When classical SPC tools, such as control charts, are used, two main questions have to be answered. First, how should the chosen control chart be designed? Second, how well does this chart detect process deterioration? In the problem considered in this paper we have to add a third question, about the influence of the quality of the classifiers used on the effectiveness of process inspection.

Process Inspection by Attributes Using Predicted Data 131

In classical SPC the effectiveness of a control chart is measured by its Average Run Length (ARL), defined as the average number of samples taken between the occurrence of deterioration and the alarm, or as the average number of samples taken between consecutive alarms when the process is under control. When SPC procedures are used to analyze the results of 100 % inspection, we have to use the Average Time to Signal (ATS) characteristic instead. ATS is defined in this case as the average number of items produced between the occurrence of deterioration and the alarm, or as the average number of items produced between consecutive alarms when the process is under control.
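The ATS0 of a chart can be estimated by straightforward Monte Carlo simulation. The sketch below is only an illustration of the definition, not the simulation code used in the paper; it assumes items are independent Bernoulli trials and a one-sided p-chart that alarms when the count of nonconforming items in a segment exceeds a hypothetical limit `ucl_count`.

```python
import random

def simulate_ats0(p, ucl_count, n, n_runs=200, seed=1):
    """Estimate ATS0 (average items produced between false alarms)
    of a one-sided Shewhart p-chart on a stable process.

    p         -- true in-control fraction of nonconforming items
    ucl_count -- alarm when the nonconforming count in a sample of n
                 exceeds this limit (illustrative parameter)
    n         -- sample (segment) size
    """
    rng = random.Random(seed)
    total_items = 0
    for _ in range(n_runs):
        items = 0
        while True:
            items += n
            defects = sum(rng.random() < p for _ in range(n))
            if defects > ucl_count:
                break  # (false) alarm on this stable process
        total_items += items
    return total_items / n_runs
```

The estimate is simply the average number of items produced before the alarm, averaged over independent runs.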

When the quality of produced items is evaluated by attributes (either conforming or nonconforming), Shewhart p-charts with one-sided (upper) control limits are usually used. Charts with two-sided control limits are used only when we also want to detect improvements of the process (e.g. when the input raw material is changed to a new and supposedly better one). In our experiments we have considered both types of control charts. However, for reasons explained in the previous section (actual deterioration of a process can be signaled as its "improvement"—see the case of the X4 predictor) we have decided to suggest the usage of two-sided control charts.
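For reference, two-sided k-sigma limits of a Shewhart p-chart follow from the standard normal approximation to the binomial distribution. A minimal sketch, assuming the textbook formula p̂ ± k·sqrt(p̂(1−p̂)/n) with the lower limit clipped at zero:

```python
import math

def p_chart_limits(p_hat, n, k=3.0):
    """Two-sided k-sigma Shewhart p-chart limits computed from an
    estimated fraction nonconforming p_hat and sample size n."""
    sigma = math.sqrt(p_hat * (1.0 - p_hat) / n)
    lcl = max(0.0, p_hat - k * sigma)
    ucl = min(1.0, p_hat + k * sigma)
    return lcl, ucl

print(p_chart_limits(0.2, 100))  # -> (0.08, 0.32)
```

With a one-sided chart only the upper limit would be active, which is exactly what hides "improvement-like" deterioration such as the X4 case.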

Let us consider the application of the Shewhart control chart to process inspection. First, let us consider the case when the inspected process is under control.

In Table 10 we present the values of ATS (usually denoted ATS0) for different sizes of samples (segments of the process). The parameters of the control chart have been calculated using the fraction of nonconforming items estimated from a sample of 1000 elements taken from a stable process. In the column labeled "Actual" we present the values of ATS of the chart for the actual, but not observed, values of the quality characteristic.

Similar values for the MAV control chart are presented in Table 11.

The results presented in Tables 10 and 11 are strikingly different, but this difference is not difficult to explain. In the case of the Shewhart control chart, decisions are taken after observing a sample of n elements. Thus, a value of, e.g., ATS0 = 30000 for a sample size of 100 means that on average 300 samples are evaluated before the alarm (this is the value of ARL!). In the case of the MAV control chart the decision is taken after each produced item, so the value of ATS is the same as the value of ARL. It means that the "waiting time" in terms of the number of decisions taken is much larger for the MAV chart than for the Shewhart chart, but the relationship between the respective values of the average time to signal (ATS) is just the opposite.
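The arithmetic relating the two characteristics fits in one line; the helper name below is ours, for illustration only:

```python
def arl_from_ats(ats, items_per_decision):
    """Average run length (decisions to signal) implied by an ATS
    expressed in produced items."""
    return ats / items_per_decision

# Shewhart chart with samples of n = 100: ATS0 = 30000 items
# corresponds to ARL = 300 samples.
print(arl_from_ats(30000, 100))  # -> 300.0
# MAV chart: a decision is taken after every item, so ATS = ARL.
print(arl_from_ats(30000, 1))    # -> 30000.0
```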

Note that in the case of a stable inspected process each alarm is a false one.

Therefore, we should prefer larger sample sizes n in order to make these false alarms less frequent. This can be achieved by widening the distance between the control limits on the chart. Unfortunately, such a change may be detrimental if we want to detect the deterioration of the inspected process as quickly as possible.

Table 10 Average time to signal ATS0—Shewhart chart

Sample size  Actual  RegBin  LDA    C4.5
100          31119   34055   32883  31526
200          48194   50710   50338  49556
300          65289   73189   71630  64340
400          75432   77902   78753  70359
500          93990   92629   92638  86232

Table 11 Average time to signal ATS0—MAV chart

Sample size  Actual  RegBin  LDA    C4.5
100          4158    4700    4091   4201
200          6593    7317    6314   6742
300          8715    9596    8431   8486
400          10828   11004   10650  10037
500          12221   11513   12521  11761

Table 12 Average time to signal ATS (shifted process)—Shewhart chart

Sample size  Actual  RegBin  LDA    C4.5
100          3359    39669   27618  2935
200          3204    50710   37381  4592
300          4008    63318   40791  5699
400          3140    60572   43864  6781
500          2835    68963   45039  11445

Table 13 Average time to signal ATS (shifted process)—MAV chart

Sample size  Actual  RegBin  LDA   C4.5
100          638     5247    3837  501
200          815     6349    4879  562
300          782     8746    5382  729
400          905     9864    6430  657
500          744     10234   6284  2030
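This trade-off can be made concrete with the binomial alarm probability per sample. The sketch below is our own illustration, not the model of the paper; `ats_for_limit` and the numeric values are hypothetical:

```python
from math import comb

def binom_tail(n, p, c):
    """P(X > c) for X ~ Binomial(n, p)."""
    return sum(comb(n, x) * p**x * (1.0 - p)**(n - x)
               for x in range(c + 1, n + 1))

def ats_for_limit(n, p, c):
    """ATS in items of a one-sided p-chart that alarms when the
    nonconforming count in a sample of n exceeds c: n / P(alarm)."""
    q = binom_tail(n, p, c)
    return float('inf') if q == 0.0 else n / q

# Widening the limit (larger c) lengthens the in-control ATS0 ...
assert ats_for_limit(100, 0.20, 34) > ats_for_limit(100, 0.20, 30)
# ... but also delays the signal after an upward shift of p.
assert ats_for_limit(100, 0.24, 34) > ats_for_limit(100, 0.24, 30)
```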

In Tables 12 and 13 we present the values of ATS when the expected value of the explanatory variable (predictor) X3 is shifted downwards by 0.5σ. From the second column of Table 9 we see that this shift results in an increase of the fraction nonconforming by 20 %. This is a really severe deterioration of the process and it should be detected as quickly as possible.

From the analysis of the simulation results presented in Tables 10, 11, 12 and 13 we see that the inspection is effective only when the decision tree classifier C4.5 is used for predicting the quality of inspected items. When the LDA classifier is used, the inspection process allows deterioration to be detected, but with visibly lower efficiency. The binary regression classifier RegBin is completely ineffective in the considered case. From the analysis of Tables 4, 5, 6, 7 and 8 we see that in this case the decision tree classifier C4.5, in comparison to its competitors, is characterized by a larger value of Sensitivity and smaller values of Precision and Specificity. The same holds when we compare the LDA and RegBin classifiers. This behavior of the classifiers can be explained by noting that high sensitivity combined with low precision and specificity describes the situation in which the observed percentage of nonconforming items is larger than the actual one. Therefore, in the case of process deterioration the probability of an alarm increases, and the value of ATS decreases.
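This mechanism can be written down directly: given a classifier's sensitivity and specificity, the expected observed fraction of items classified as nonconforming follows from the law of total probability. A minimal sketch (the function name and the numbers are illustrative, not taken from the paper's experiments):

```python
def observed_fraction(p, sensitivity, specificity):
    """Expected fraction of items classified as nonconforming when
    the actual fraction nonconforming is p: true positives on the
    nonconforming items plus false positives on the conforming ones."""
    return sensitivity * p + (1.0 - specificity) * (1.0 - p)

# High sensitivity with low specificity inflates the observed fraction:
print(round(observed_fraction(0.20, 0.95, 0.90), 4))  # -> 0.27
```

Here the observed fraction (0.27) exceeds the actual one (0.20), which is exactly the inflation described above.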

One should note, however, that in the case of a stable (under control) process the observed value of the fraction of nonconforming items is also larger than the actual one. This phenomenon results in somewhat misleading information about the actual process level, but it does not affect the probability of a false alarm (and the ATS0 value), as the control limits are designed on the basis of the observed, not the actual, values of the fraction of nonconforming items.

All the results described in this paper represent averages calculated with respect to different sets of classifiers. From the more detailed results presented in [5] for inspection based on the Shewhart p-chart, one can find that, depending on the instance of the training set, alarms may be triggered when the actual impact of shifts in explanatory variables on actual quality is negligible, and, vice versa, may not be triggered when they are needed. This behavior strongly depends upon the type of classifier and its parameters estimated from training data. Moreover, in this paper we assumed that alarms are triggered by crossing either the lower or the upper control limit. When only the upper control limit of the control chart is active, the respective values of ATS are much larger, especially in the case of no shift or when the shift in the explanatory variable has a small effect on the quality variable of interest. The situation is even worse when the deterioration of the process is accompanied by a lowering of the observed fraction of nonconforming items, as is the case for the upward shift of the explanatory variable X4. Such deterioration may never be noticed using the considered statistical methods.

6 Conclusions

The results presented in this paper add important information to that already given in [5]. However, this information is still of a very preliminary character, as the results from the simulation experiments represent only one particular model of a process. They confirm the findings presented in [5] that in the case of non-normal distributions of quality characteristics and non-linear dependencies between observable (explanatory) and not directly observable (only predicted!) quality characteristics of processes, inspection procedures based on control charts may not be effective.

The most popular classifiers that are used for prediction purposes may not perform well, and their performance is difficult to predict in advance. Further research is needed with the aim of finding ensembles of classifiers that can be more effective than single classifiers in detecting process deterioration. Such ensembles have to be robust to changes of the model of the data used in their design.

References

1. Basseville M, Nikiforov IV (1993) Detection of abrupt changes: theory and applications. Prentice-Hall, Englewood Cliffs

2. Embrechts P, Lindskog F, McNeil A (2003) Modelling dependence with copulas and applications to risk management. In: Rachev S (ed) Handbook of heavy tailed distributions in finance, Chapter 8. Elsevier, Amsterdam, pp 329–384

3. Hastie T, Tibshirani R, Friedman J (2008) The elements of statistical learning: data mining, inference, and prediction, 2nd edn. Springer, New York

4. Hryniewicz O, Karpiński J (2014) Prediction of reliability—pitfalls of using Pearson's correlation. Eksploatacja i Niezawodnosc—Maintenance Reliab 16:472–483

5. Hryniewicz O (2015) SPC of processes with predicted data—application of the data mining methodology. In: Knoth S, Schmid W (eds) Frontiers in statistical quality control—12. Physica Verlag, Heidelberg, pp 219–235

6. Montgomery DC (2011) Introduction to statistical quality control, 6th edn. Wiley, New York
7. Nelsen RB (2006) An introduction to copulas, 2nd edn. Springer, New York

8. Noorsana R, Saghaei A, Amiri A (2011) Statistical analysis of profile monitoring. Wiley, Hoboken

9. Owen DN, Su YH (1977) Screening based on normal variables. Technometrics 19:65–68
10. Quinlan JR (1993) C4.5: programs for machine learning. Morgan Kaufmann, Los Altos
11. Witten IH, Frank E, Hall MA (2011) Data mining: practical machine learning tools and techniques, 3rd edn. Elsevier, Amsterdam

12. Woodall WH, Spitzner DJ, Montgomery DC, Gupta S (2004) Using control charts to monitor process and product profiles. J Qual Techn 36:309–320

13. Wang YT, Huwang L (2012) On the monitoring of simple linear Berkson profiles. Qual Rel Engin Int 28:949–965

14. Xu L, Wang S, Peng Y et al (2012) The monitoring of linear profiles with a GLR control chart. J Qual Techn 44:348–362

15. Yamada M, Kimura A, Naya F, Sawada H (2013) Change-point detection with feature selection in high-dimensional time-series data. In: Proceedings of the 23rd International Joint Conference on Artificial Intelligence, Beijing, pp 1827–1833