
1.2 Image Classification for Painter Recognition


1.2.1 Artists Identification through history

Painting authentication, the task of determining whether or not a given artwork was painted by a specific painter, was originally developed to cope with art forgery, a business that has been active for thousands of years. Traditionally, image authentication techniques for forgery detection relied principally on the discerning abilities of experts, who deduced the authenticity of a painting from their knowledge of an artist's well-known work. Over time, these manual authentication techniques have been greatly enhanced by exploiting characteristics beyond what the human eye can discern, using technologies such as spectrometry, chemical analysis, X-ray and infrared imaging [2].

Figure 1.1: Classification of paintings based on their painters

In recent years, with the rapid advancement in the digital acquisition, editing and production of paintings and artworks, automated painting analysis and painter recognition has become an important task, not only for forgery detection but also for object retrieval and for archiving and retrieving artworks. With vast digital collections now available on the internet as well as in libraries and museums, painter recognition can also provide crucial information such as authorship. This artist-based classification makes it possible to build indexes for retrieving and organizing painting collections, to identify unknown paintings, and to gain new insights into the artistic style of given artists from their works [3].

With the availability of recent high-resolution digital technology, the existing capabilities of art analysis and authorship detection are being further enhanced by new statistical and image processing techniques, by which an artist's style can be described using mathematical tools applied to high-resolution digitized versions of paintings and artworks. Figure 1.1 shows examples of digitized paintings classified into 5 classes based on their painters.

Figure 1.2: Image classification process

In fact, since calligraphy and signatures have long been used as a distinctive mark of an individual, it is evident that every person has his or her own particular way of moving the hand while painting or writing. Therefore, every painter can normally be identified from his own way of striking the painting board with the brush, leaving personal patterns that can be detected by applying computer vision and pattern recognition techniques to high-resolution images of paintings [4].

In this context, the painter recognition task can be formulated as an image classification problem: deciding which artist painted a given painting based on the analysis of a set of hidden descriptors. Given as input a set of painting images by various artists (with multiple paintings per painter), the purpose is to automatically extract and analyze these descriptors in order to associate a given painting with the corresponding artist.

Usually, this classification process is carried out through a typical sequence of stages, as shown in figure 1.2.
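As a rough illustration of these stages, the sketch below runs a toy version of the whole pipeline: extract a simple global descriptor from each painting image, build one centroid per painter in feature space, and assign a new painting to the nearest centroid. The descriptor and the nearest-centroid rule are illustrative assumptions, not the methods used later in this work, and the data are synthetic.

```python
import numpy as np

def extract_features(image):
    """Toy global descriptor: intensity mean, std and an 8-bin histogram."""
    hist, _ = np.histogram(image, bins=8, range=(0, 256), density=True)
    return np.concatenate(([image.mean(), image.std()], hist))

def train_centroids(images, labels):
    """One centroid per painter in the feature space."""
    feats = np.array([extract_features(im) for im in images])
    return {p: feats[np.array(labels) == p].mean(axis=0) for p in set(labels)}

def classify(image, centroids):
    """Assign the painting to the painter with the nearest centroid."""
    f = extract_features(image)
    return min(centroids, key=lambda p: np.linalg.norm(f - centroids[p]))

# Synthetic data: dark vs bright "paintings" standing in for two painters
rng = np.random.default_rng(0)
dark = [rng.integers(0, 100, (32, 32)) for _ in range(5)]
bright = [rng.integers(150, 256, (32, 32)) for _ in range(5)]
centroids = train_centroids(dark + bright, ["A"] * 5 + ["B"] * 5)
print(classify(rng.integers(0, 100, (32, 32)), centroids))  # -> "A"
```

Real systems replace each of these toy steps with the richer descriptors and trainable classifiers discussed in the following subsections.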

1.2.2 Feature extraction

In general, a digitized version of a painting can be represented as an RGB image that contains a large quantity of visual data with various complex relationships, called image features, such as statistical descriptors or spatial features. In order to discover these hidden relationships in a large amount of image data, many feature extraction techniques can be applied as a fundamental step in any content-based classification problem [5].

In fact, this reduces the high dimensionality of the visual data to a low-dimensional representation that can be easily manipulated for image understanding. As a result, each painting image is represented as a feature vector whose dimension equals the number of chosen features, allowing a transition from the image space to the feature space. Features extracted from the image may be either global or local.
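The transition from image space to feature space can be made concrete with a minimal sketch: an H x W x 3 RGB image (tens of thousands of dimensions) is mapped to a 6-dimensional vector of per-channel statistics. The choice of per-channel mean and standard deviation is an assumption made only for illustration.

```python
import numpy as np

def global_feature_vector(rgb):
    """Map an H x W x 3 RGB image to a 6-dimensional feature vector:
    the mean and standard deviation of each colour channel."""
    pixels = rgb.reshape(-1, 3).astype(float)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

painting = np.random.default_rng(1).integers(0, 256, (64, 48, 3))
v = global_feature_vector(painting)
print(v.shape)  # (6,) -- 64*48*3 image dimensions collapse to 6 features
```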

Global features usually describe the image as a whole in order to characterize the entire object. They include shape descriptors, contour representations, and texture features. Some examples of global features are Invariant Moments, Shape Matrices and the Histogram of Oriented Gradients (HOG) (shown in figure 1.3).
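To give an idea of what HOG measures, the sketch below computes a heavily simplified version: a single orientation histogram over the whole image, weighted by gradient magnitude. The full HOG descriptor of figure 1.3 additionally divides the image into cells and applies block normalization, which is omitted here for brevity.

```python
import numpy as np

def simple_hog(gray, bins=9):
    """Simplified HOG: one magnitude-weighted histogram of gradient
    orientations over the whole image (no cells, no block normalization)."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned, in [0, 180)
    hist, _ = np.histogram(angle, bins=bins, range=(0, 180), weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist

ramp = np.tile(np.arange(8.0), (8, 1))   # brightens from left to right
print(simple_hog(ramp).argmax())         # bin 0: horizontal gradient (0 deg)
print(simple_hog(ramp.T).argmax())       # bin 4: vertical gradient (90 deg)
```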

Figure 1.3: Histogram of Oriented Gradient example [6]

On the other hand, local descriptors focus on image patches considered as key points in the image. The Scale-Invariant Feature Transform (SIFT), Local Binary Patterns (LBP, shown in figure 1.4), Speeded-Up Robust Features (SURF), Maximally Stable Extremal Regions (MSER) and the Fast Retina Keypoint (FREAK) are some examples of local features.
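The LBP transform of figure 1.4 can be sketched in a few lines: each interior pixel is replaced by an 8-bit code obtained by thresholding its 8 neighbours against the centre value, and the normalized histogram of these codes serves as a texture descriptor. This is the basic 3x3 variant only; the neighbour ordering chosen below is one common convention.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: each interior pixel becomes an 8-bit code built by
    thresholding its 8 neighbours against the centre value."""
    g = gray.astype(float)
    # 8 neighbours in clockwise order, starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = g.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = g[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(gray):
    """Texture descriptor: normalized histogram of the 256 LBP codes."""
    hist, _ = np.histogram(lbp_image(gray), bins=256, range=(0, 256),
                           density=True)
    return hist

flat = np.full((8, 8), 7.0)
print(lbp_image(flat)[0, 0])  # 255: on a flat patch every neighbour >= centre
```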


Figure 1.4: Input image (left) processed by LBP (right) [7]

In general, for lower-level applications such as classification and object detection, it is more suitable to rely on global features, whereas for higher-level applications such as object recognition, local features are usually preferred. In some image processing tasks, combining global and local features can improve recognition accuracy, though at the cost of additional computational overhead.
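One simple way to realize such a combination is to concatenate a global and a local descriptor into a single feature vector. The sketch below normalizes each part before concatenating, so that neither descriptor dominates distance computations; both input vectors are hypothetical placeholders.

```python
import numpy as np

def combined_descriptor(global_vec, local_vec):
    """Concatenate global and local descriptors into one feature vector.
    L2-scaling each part first keeps one from dominating distances."""
    def scale(v):
        norm = np.linalg.norm(v)
        return v / norm if norm > 0 else v
    return np.concatenate([scale(global_vec), scale(local_vec)])

g = np.array([120.0, 35.0])           # e.g. an intensity mean and std
l = np.array([0.1, 0.4, 0.3, 0.2])    # e.g. a small LBP histogram
combined = combined_descriptor(g, l)
print(combined.shape)  # (6,) -- cost grows with the size of both parts
```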

1.2.3 Trainable classifiers

Once the feature vector has been extracted, the image processing flowchart reaches the classification step. Generally, the objective of a classification protocol is to assign an input pattern to a particular class based on its feature vector. The extracted vector maps the input data into a new representation space, called the feature space, where the elements of the data set become more easily separable, as shown in figure 1.5.
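Classification in this feature space can be illustrated with a minimal k-Nearest Neighbor rule (one of the classifiers listed later in this section): a new painting is assigned the majority label among its k closest training vectors. The 2-D feature space and the painter names below are purely hypothetical.

```python
import numpy as np

def knn_classify(x, train_feats, train_labels, k=3):
    """Assign x the majority label among its k nearest feature vectors."""
    dists = np.linalg.norm(train_feats - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical 2-D feature space with two well-separated painter classes
feats = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],
                  [0.90, 0.80], [0.80, 0.90], [0.85, 0.85]])
labels = ["Monet", "Monet", "Monet", "Picasso", "Picasso", "Picasso"]
print(knn_classify(np.array([0.12, 0.18]), feats, labels))  # -> "Monet"
```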

The computer vision literature presents a multitude of classifiers for solving the pattern classification problem. The problem's complexity depends basically on how much the feature vectors of patterns within the same class vary, compared to the difference between the feature values of patterns belonging to different classes. Hence, the accuracy obtained with a specific classifier depends meaningfully on the employed data set, and reaching the best possible performance on a specific pattern recognition task does not depend solely on finding the best-performing single classifier.

In practice, there are many cases where no classifier, used individually, can reach an acceptable classification accuracy.

In such cases it is often better to combine the results of a set of classifiers in order to reach a more accurate decision. That is to say, since each classifier has its own way of operating well on a set of input feature vectors, combining a variety of classifiers can, under appropriate assumptions, lead to better generalization performance than any single trainable classifier.
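The simplest such combination is majority voting, sketched below with three hypothetical weak classifiers that each threshold a different feature. This only illustrates the voting mechanism; the classifiers themselves stand in for the trainable models discussed in this section.

```python
import numpy as np

def majority_vote(classifiers, x):
    """Combine individual decisions by simple majority voting."""
    votes = [clf(x) for clf in classifiers]
    values, counts = np.unique(votes, return_counts=True)
    return values[counts.argmax()]

# Three hypothetical weak classifiers, each thresholding a different feature
clf_a = lambda x: "A" if x[0] < 0.5 else "B"
clf_b = lambda x: "A" if x[1] < 0.5 else "B"
clf_c = lambda x: "A" if x.mean() < 0.5 else "B"
# clf_b disagrees, but the majority decision is still "A"
print(majority_vote([clf_a, clf_b, clf_c], np.array([0.2, 0.7])))  # -> "A"
```

The combined vote can be correct even when an individual classifier errs, which is the intuition behind the generalization gain mentioned above.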

Figure 1.5: Data representation in input space and in the features space [8]

Given their variety, computer vision tasks such as object detection, pattern recognition and identification are no longer restricted to testing a single approach on the studied case and application; instead, different approaches are compared, each of which may combine a multitude of previously employed methods. In this context, among the well-known supervised and unsupervised classifiers in the image domain, we can mention the Bayesian classifier [9], Decision Trees [9], Parzen Windows [9], k-Nearest Neighbors [9], Maximum Likelihood classification [10], Support Vector Machines [10] and the family of neural networks such as the Multi-Layer Perceptron and Recurrent or Feed-Forward Neural Networks [9].

