Source Camera Model Identification Using Features from Contaminated Sensor Noise


Amel TUAMA (2,3), Frederic COMBY (2,3), Marc CHAUMONT (1,2,3)

1 NÎMES UNIVERSITY, F-30021 Nîmes Cedex 1, France

2 MONTPELLIER UNIVERSITY, UMR5506-LIRMM, Montpellier, France

3 CNRS, UMR5506-LIRMM, F-34392 Montpellier Cedex 5, France

Abstract. This paper presents a new approach to camera model identification. It is based on the noise residual extracted from an image by a wavelet-based denoising filter, used within a machine learning framework. We refer to this noise residual as the polluted PRNU (POL-PRNU), because it contains a PRNU signal contaminated by other types of noise, such as traces of the image content. Our proposition consists of extracting high-order statistics from the POL-PRNU by computing a co-occurrence matrix. Additionally, we enrich the feature set with features related to CFA demosaicing artifacts. These two feature sets feed a classifier that performs camera model identification. The experimental results illustrate that machine learning techniques with discriminant features are efficient for camera model identification purposes.

Keywords: Camera Model Identification, POL-PRNU, CFA, Co-occurrence Matrix, Feature Extraction, Rich Models.

1 Introduction

Source camera identification is one of the major interests in image forensics. It is the process of deciding which camera has been used to capture a particular image. Establishing the origin of digital media obtained through an imaging device matters whenever digital content is presented and used as evidence in court. The general structure of a digital camera consists of a lens system, filters, a color filter array (CFA), an imaging sensor, and a digital image processor. The sensor is an array of rows and columns of photo-diode elements, or pixels. To produce a color image, a color filter array (CFA) is placed in front of the sensor so that each pixel records the light intensity of a single color only.

An interpolation algorithm is then used to generate the missing color values from adjacent pixels. All these elements can be exploited to extract features that identify a camera device.

There are two families of methods for camera identification. The first is based on producing a model, for example a PRNU, and then computing the correlation between a given image and the model of a specified camera. The second is based on feature extraction within a machine learning approach.


From the first family of camera identification methods, a reliable approach for identifying the source camera based on sensor pattern noise was proposed by Lukas et al. [1]. Due to imperfections in the sensor manufacturing process, the Photo-Response Non-Uniformity (PRNU) is a major source of pattern noise. This makes the PRNU a natural feature for uniquely identifying sensors.

Choi et al. [2] proposed using lens radial distortion as a fingerprint to identify the source camera model. Each camera model exhibits a unique radial distortion pattern that helps to identify it.

Dirik et al. [3] proposed device identification from sensor dust in digital single-lens reflex (DSLR) cameras. This approach exploits the dust particles attracted to the sensor: when the interchangeable lens is removed, a dust pattern forms in front of the imaging sensor. Sensor dust patterns appear as artifacts in the captured images and are used to identify the camera device.

On the other hand, the second family of camera identification methods relies on feature extraction and machine learning. Bayram et al. [4] explored the CFA interpolation process to determine the correlation structure present in each color band, which can be used for image classification. The main assumption is that the interpolation algorithm and the design of the CFA filter pattern of each manufacturer (or even each camera model) differ somewhat from the others, resulting in distinguishable correlation structures in the captured images.

Kharrazi et al. [5] identified a set of 34 image features that can be used to classify a camera model: color features, Image Quality Metrics (IQM), and wavelet-domain statistics. Celiktutan et al. [6] used a set of binary similarity measures and a set of Image Quality Metrics to identify the source cell-phone.

Our approach mixes the two families of methods, since we use a polluted PRNU in a machine learning framework. The polluted PRNU, which we call the POL-PRNU, is the sensor noise plus some residual linked to the content of the image. In our approach, extracting the polluted PRNU from a single image gives an easy way to extract statistics from an image (co-occurrences and color features computed on the polluted PRNU). Indeed, the set of images used to train the classifier will be slightly scattered, thus limiting the overfitting effect. Additionally, we propose to use a bigger set of features (compared to classical machine learning approaches) in order to better describe the statistics.

This paper is structured as follows. Section 2 explains the classical approach to compute PRNU. Section 3 presents all the details of our approach, from POL-PRNU extraction to the features computed from co-occurrences and CFA interpolation. In section 4, we describe the experiments, the results, and the database used for experiments. Finally, we conclude in Section 5.

2 Preliminaries

A camera sensor consists of a large number of photo detectors, called pixels, which convert photons to electrons. Each pixel in a digital camera's sensor records the amount of incident light that strikes it. Slight imperfections in manufacturing introduce small amounts of noise in the recorded image.

Fig. 1. Sample image and its residual noise

This noise is spatially varying and consistent over time, and can therefore be used for forensic purposes. It has a stochastic nature and is unique to each sensor. This makes it an ideal candidate for forensic applications such as camera identification [7].

Generally, most PRNU-based image forensic techniques extract the residual noise from an image by subtracting the denoised version of the image from the image itself, as in equation (1):

N = I - F(I),    (1)

where I is the image, F is a denoising filter, and F(I) is the denoised image.

A wavelet-based denoising filter is recommended and used in most cases because it leaves the fewest traces of the scene in the residual [7].
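To make equation (1) concrete, here is a minimal Python sketch assuming NumPy and PyWavelets. The paper follows the wavelet-domain Wiener filter of [1]; this stand-in applies a simplified per-coefficient Wiener-style shrinkage, and the noise parameter sigma0, the wavelet choice, and the decomposition depth are illustrative assumptions rather than values from the paper.

```python
# Sketch of residual-noise extraction, N = I - F(I) (equation (1)).
import numpy as np
import pywt

def denoise_channel(img, sigma0=3.0, wavelet="db8", levels=4):
    """Return a denoised copy of one color channel."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=levels)
    out = [coeffs[0]]                      # keep the approximation band
    for detail in coeffs[1:]:
        shrunk = []
        for band in detail:                # horizontal, vertical, diagonal
            # Per-coefficient energy as a (crude) local variance estimate;
            # a faithful implementation averages over local windows [1].
            local_var = np.maximum(band ** 2 - sigma0 ** 2, 0.0)
            shrunk.append(band * local_var / (local_var + sigma0 ** 2))
        out.append(tuple(shrunk))
    rec = pywt.waverec2(out, wavelet)
    return rec[: img.shape[0], : img.shape[1]]   # drop padding, if any

def residual_noise(img):
    """Residual noise N = I - F(I) for one channel."""
    return img.astype(np.float64) - denoise_channel(img)
```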

In order to extract the PRNU of a camera, multiple images are averaged. At least 50 images are used to calculate the reference pattern K_c [7] of a known camera C, as in equation (2):

K_c = \frac{\sum_{i=1}^{n} N_i I_i}{\sum_{i=1}^{n} I_i^2}.    (2)
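Equation (2) translates directly into NumPy; this sketch assumes the residual_noise helper from the previous sketch and an iterable of equally sized single-channel images from the same camera.

```python
# Reference-pattern estimate K_c = sum_i(N_i * I_i) / sum_i(I_i^2),
# accumulated over n >= 50 images from a known camera C [7].
import numpy as np

def reference_pattern(images):
    """images: iterable of 2-D arrays (one channel, same camera)."""
    num = 0.0
    den = 0.0
    for img in images:
        img = img.astype(np.float64)
        num = num + residual_noise(img) * img
        den = den + img ** 2
    return num / den
```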

A common approach to perform a comparison is to compute the Normalized Cross-Correlation, which measures the similarity between the reference pattern K_c and the estimated noise N of an image under test whose source is unknown [7]. The Normalized Cross-Correlation is defined as:

\rho(N, K_c) = \frac{(N - \bar{N}) \cdot (K_c - \bar{K}_c)}{\|N - \bar{N}\| \, \|K_c - \bar{K}_c\|},    (3)

where \bar{N} and \bar{K}_c are the means of N and K_c, respectively.
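Equation (3) is a few lines of NumPy; this sketch assumes N and K_c are arrays of identical shape. In the classical pipeline, the test image is attributed to camera C when this correlation exceeds a decision threshold.

```python
# Normalized cross-correlation of equation (3).
import numpy as np

def ncc(n, kc):
    a = n - n.mean()
    b = kc - kc.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))
```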

By applying equation (1) to an image I, we obtain the residual noise. The residual noise is a sum of several noise components, one of which is the sensor pattern noise (PRNU). Other types of noise, such as traces of the image content, pollute the PRNU and are also part of the residual noise given in equation (1). Figure 1 shows an image and its residual noise, which still contains some clearly visible parts of the scene.

In this paper, we consider only the residual noise and call it the polluted PRNU (POL-PRNU). This POL-PRNU is then used for extracting discriminant features. In a machine learning framework, the polluted PRNU is beneficial for the learning process: the images of a given camera fill the feature space better, and the resulting cloud in the feature space is more spread out. Finally, this paper shows that extracting high-dimensional features achieves very good results even when the learning database is small.

3 Proposed method

A camera model identification approach based on machine learning classifies cameras using discriminant features extracted from images. In our approach, we extract the features directly from what we call the POL-PRNU. The scheme presented in Figure 2 shows the functional diagram of our proposal. The image is decomposed into its three color channels (r, g, b), considering only the central 1024x1024 pixel block. The POL-PRNU of the image is obtained by subtracting from the original image its filtered version, produced by a wavelet denoising filter. Two sets of features are then extracted from the POL-PRNU for classification. The following two sub-sections describe the theoretical aspects of the major parts of our approach.

[Figure 2: functional diagram of the proposal. Image I → crop middle of image → extract the three color channels (Ir, Ig, Ib) → denoising by a wavelet-based denoising filter → POL-PRNU → apply zero mean to rows and columns of each channel → recompose the color linear pattern 0.3·Lr + 0.6·Lg + 0.1·Lb → feature extraction (first set of features: co-occurrence matrix; second set of features: CFA interpolation) → classification: training and testing with LIBSVM → results.]

Fig. 2. The proposed system framework


3.1 POL-PRNU Extraction

First of all, after decomposing the image into its three color channels, the central 1024x1024 block is extracted. Using a small block of the original image reduces the computational complexity and speeds up the matching process. In [10], the authors show that the false-positive rate (FPR) decreases as the image block grows, reaching its minimum for a block size of 1024x1024 pixels.
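As a small illustration of this step, the central crop can be written as follows (the helper name is ours):

```python
# Central 1024x1024 crop applied before POL-PRNU extraction.
def center_crop(img, size=1024):
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]
```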

Our POL-PRNU N is extracted by subtracting the denoised version of the image from the image I itself [1], as in equation (1). For the denoising process, a wavelet-based denoising filter F is used, based on a Wiener filtering of each wavelet sub-band of each channel, as in [1].

In order to suppress the artifacts introduced by color interpolation and JPEG compression, a periodic component of the pattern noise, called the linear pattern L, is extracted by subtracting the average row (respectively average column) from each row (respectively column) of N, for each color channel separately [7]. This yields three linear patterns, one per color channel, noted L_r for the red channel, L_g for the green channel, and L_b for the blue channel.

Finally, the three linear patterns are combined into one pattern, noted L, using the RGB-to-gray-scale conversion formula of equation (4). Extracting the features from the recombined linear pattern is more reliable because the three linear patterns are highly correlated and provide compact information for the classifier [7]:

L = 0.3 L_r + 0.6 L_g + 0.1 L_b.    (4)
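A minimal sketch of the zero-mean and recombination steps, assuming per-channel residuals n_r, n_g, n_b produced as in section 2:

```python
# Linear pattern: zero-mean the rows and columns of each channel
# residual [7], then recombine with the weights of equation (4).
import numpy as np

def linear_pattern(residual):
    lp = residual - residual.mean(axis=0, keepdims=True)  # average row
    lp = lp - lp.mean(axis=1, keepdims=True)              # average column
    return lp

def combined_linear_pattern(n_r, n_g, n_b):
    l_r, l_g, l_b = (linear_pattern(n) for n in (n_r, n_g, n_b))
    return 0.3 * l_r + 0.6 * l_g + 0.1 * l_b              # equation (4)
```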

3.2 Description of Features

Co-occurrences matrix. The promising aspects of the rich models approach [11] can be adapted to extract co-occurrences from a POL-PRNU image. Rich models can provide a good model for forensic applications, especially forgery detection and localization [12, 13].

Indeed, co-occurrences are a very good way to describe the statistics of data with neighborhood relations, which is the case for POL-PRNU images. Calculating the co-occurrences of the POL-PRNU reduces the dimension and gives a good representation of the statistical properties of the fingerprint. The co-occurrence feature vector consists of joint probability distributions of neighboring residual samples; in our case, the residual is the POL-PRNU image described in section 3.1. We use four-dimensional co-occurrence matrices formed from groups of four horizontally and vertically adjacent residual samples after quantization and truncation:

R = trunc_T(round(L / q)),    (5)

where trunc_T truncates values to the range {-T, ..., T}, round(x) gives the nearest integer to x, L is the linear pattern of the POL-PRNU given in equation (4), and q ∈ {1, 1.5, 2} is the quantization step.


The final co-occurrence matrix is constructed from horizontal and vertical co-occurrences of four consecutive values of R from equation (5). The horizontal co-occurrence matrix C_d^h is computed as:

C_d^h = \frac{1}{Z} \left| \{ (i,j) \;|\; R_{i,j} = d_1, R_{i,j+1} = d_2, R_{i,j+2} = d_3, R_{i,j+3} = d_4 \} \right|,    (6)

where Z is a normalization factor, R_{i,j} is the coefficient of the matrix R at position (i,j) ∈ {1, ..., n}^2, and d = (d_1, ..., d_4) ∈ {-T, ..., T}^4 with T = 2.

Equivalently, we can compute the vertical co-occurrences matrix.
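A hedged sketch of equations (5) and (6): quantize and truncate the linear pattern, then histogram every horizontal run of four samples; the index-encoding trick is an implementation choice of ours, not the paper's.

```python
# Horizontal 4-sample co-occurrence matrix of equations (5) and (6).
import numpy as np

def cooccurrence_h(L, q=1.0, T=2):
    # R = trunc_T(round(L / q)): values clipped to {-T, ..., T}.
    R = np.clip(np.rint(L / q), -T, T).astype(np.int64)
    base = 2 * T + 1
    # Encode each 4-tuple (d1, d2, d3, d4) as one index in [0, base**4).
    idx = ((R[:, :-3] + T) * base ** 3 + (R[:, 1:-2] + T) * base ** 2
           + (R[:, 2:-1] + T) * base + (R[:, 3:] + T))
    counts = np.bincount(idx.ravel(), minlength=base ** 4)
    return counts / counts.sum()       # the 1/Z normalization of eq. (6)

# Vertical co-occurrences: cooccurrence_h(L.T, q, T).
```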

Color Dependencies. The underlying assumption is that the CFA interpolation algorithms leave correlations across adjacent pixels of an image. In digital cameras, the color filter array is placed in front of the sensor to produce the color image. The CFA is usually periodic and forms a certain pattern. The missing color components are interpolated from existing neighboring color components. The CFA pattern and the interpolation method are important characteristics of the camera model and can be used in the camera identification process [4].

In this section, we describe the features extracted from L_r, L_g, and L_b by computing local dependencies, or periodicity, among neighboring samples. The normalized cross-correlation is computed between the estimated linear patterns of the POL-PRNU color channels and their shifted versions, as in [14], for each color channel pair (C_1, C_2), C_1, C_2 ∈ {L_r, L_g, L_b}, and each shift Δ_1 ∈ {0, ..., 3}, Δ_2 ∈ {0, ..., 3}. The normalized cross-correlation between two matrices is defined as:

\rho(C_1, C_2, \Delta) = \frac{\sum_{i,j} (C_{1\,i,j} - \bar{C}_1)(C_{2\,i-\Delta_1,\,j-\Delta_2} - \bar{C}_2)}{\sqrt{\sum_{i,j} (C_{1\,i,j} - \bar{C}_1)^2 \, \sum_{i,j} (C_{2\,i-\Delta_1,\,j-\Delta_2} - \bar{C}_2)^2}},    (7)

where ρ is the normalized cross-correlation, Δ = [Δ_1, Δ_2]^T is the 2D shift, and \bar{C}_1 and \bar{C}_2 are the sample means of C_1 and C_2, respectively.

This step yields 96 features: the six combinations of color channels times the 4×4 grid of shifts (Δ_1, Δ_2).
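The 96 features can be sketched as follows, pairing the channels with replacement (6 pairs) over the 16 shifts; handling the shift via overlapping sub-arrays is our assumption about boundary treatment.

```python
# CFA color-dependency features of equation (7): shifted normalized
# cross-correlation for the six channel pairs and shifts in {0..3}^2.
import itertools
import numpy as np

def shifted_ncc(c1, c2, d1, d2):
    h, w = c1.shape
    a = c1[d1:, d2:]                   # C1 over the overlap region
    b = c2[: h - d1, : w - d2]         # C2 shifted by (d1, d2)
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def color_dependency_features(l_r, l_g, l_b):
    chans = (l_r, l_g, l_b)
    feats = [shifted_ncc(chans[i], chans[j], d1, d2)
             for i, j in itertools.combinations_with_replacement(range(3), 2)
             for d1 in range(4) for d2 in range(4)]
    return np.asarray(feats)           # 6 pairs x 16 shifts = 96 features
```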

3.3 Classification

A Support Vector Machine (SVM) constructs a hyperplane, or a set of hyperplanes, in a high-dimensional space, which can be used for classification. The effectiveness of the SVM depends on the choice of the kernel function and of the kernel's parameters [18].

Using a kernel function provides a simple way to separate the classes. The radial basis function (RBF) kernel, which is commonly used, maps samples into a higher-dimensional space and can handle the case where the relation between class labels and attributes is nonlinear.

Projecting into high-dimensional spaces can be problematic due to the so-called curse of dimensionality: as the number of variables under consideration increases, the number of possible solutions grows exponentially. As a result, the boundary between the classes becomes very specific to the examples of the training data set. The classifier therefore has to handle the overfitting problem as well as the curse of dimensionality [15].

In our case, the training and testing sets contain 100 instances per camera model, and the number of features is 10,860, which is much larger than the number of instances. We thus have to carry out the learning process with a small database and a high dimensionality, so overfitting and the curse of dimensionality may occur.

Fortunately, when the SVM is tuned with a cross-validation procedure, the cost parameter that controls the over/under-fitting phenomenon is set to a value that better copes with the curse of dimensionality and can thus prevent overfitting. A cross-validation procedure splits the original training data into training subsets. More precisely, v-fold cross-validation divides the training set into v subsets of equal size; v-1 subsets are used for training and the remaining subset is held out for testing.
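As an illustration of this tuning procedure, here is a sketch using scikit-learn's cross-validated grid search (the paper uses the LIBSVM tools directly; the fold count is an assumption):

```python
# v-fold cross-validated grid search over (C, gamma) for an RBF SVM,
# with the exponent ranges recommended in [18].
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def tune_svm(X_train, y_train, folds=5):
    grid = {
        "C": 2.0 ** np.arange(-5, 16),      # 2^-5 ... 2^15
        "gamma": 2.0 ** np.arange(-15, 4),  # 2^-15 ... 2^3
    }
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=folds)
    search.fit(X_train, y_train)
    return search.best_estimator_, search.best_params_
```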

4 Experimental results

4.1 Data Acquisition

The Dresden Image Database is designed to fill the needs of digital image forensics applications by providing a useful resource for investigating camera-based image forensic methods [16]. It provides 16,000 authentic full-resolution natural images in JPEG format and 1,500 uncompressed raw images. It covers different camera settings, environments, and specific scenes, facilitating rigorous analyses of manufacturer-, model-, or device-dependent characteristics and their relation to other influencing factors. In our experiments, 14 different camera models were used, as outlined in Table 1. For each camera model, a set of 100 images for training and another set of 100 images for testing were randomly selected from the Dresden database, resulting in 1400 training images and 1400 testing images.

4.2 Experimental Protocol

Since each color channel is denoised separately, an image is first decomposed into its three color channels (R, G, B). When image blocks are used in forensic investigation, it is recommended to take them from the image center before the POL-PRNU extraction stage, as this reduces the false-positive rate [10]. The images from the training and testing sets are therefore cropped to their 1024x1024 central block. The essential step is to extract the POL-PRNU from all images by applying the wavelet denoising filter to the original image and then subtracting the denoised image from the original, as explained in section 3.1.

Two sets of features, described in section 3.2, are extracted from the linear pattern of the POL-PRNU of each image.


Abbreviation   Brand        Model           Resolution
(A1)           Agfa Photo   DC-733s         3072x2304
(A2)           Agfa Photo   DC-830i         3264x2448
(A3)           Agfa Photo   Sensor 530s     4032x3024
(C1)           Canon        Ixus 55         2592x1944
(F1)           Fujifilm     FinePix J50     3264x2448
(K1)           Kodak        M1063           3664x2748
(N1)           Nikon        D200 Lens A/B   3872x2592
(O1)           Olympus      M1050SW         3648x2736
(Pa1)          Panasonic    DMC-FZ50        3648x2736
(Pr1)          Praktica     DCZ 5.9         2560x1920
(Sa1)          Samsung      L74wide         3072x2304
(Sa2)          Samsung      NV15            3648x2736
(So1)          Sony         DSC-H50         3456x2592
(So2)          Sony         DSC-W170        3648x2736

Table 1. Models used from the Dresden database.

The first set is the co-occurrence matrix, which consists of 10,764 features describing statistical relationships among neighboring pixels. The second set consists of 96 features obtained from the normalized cross-correlation between the POL-PRNU color channels and their shifted versions, which capture the CFA interpolation dependencies among neighboring pixels. This gives a total of 10,860 features.

For the feature normalization step, we used min-max scaling for both the training and testing sets. The features are rescaled to the range [0, 1], which prevents attributes with larger numeric ranges from dominating those with smaller ranges. For the classification, the LIBSVM package was used [17] with the radial basis function (RBF) kernel and a v-fold cross-validation scheme. Although the SVM is a binary classification model, the LIBSVM package performs multi-class classification using the one-versus-rest (OVR) approach.
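A minimal sketch of this scaling step; fitting the ranges on the training set only and reusing them on the test set is standard practice and our assumption here.

```python
# Min-max scaling of every feature to [0, 1].
import numpy as np

def minmax_fit_transform(X_train, X_test, eps=1e-12):
    lo = X_train.min(axis=0)
    span = np.maximum(X_train.max(axis=0) - lo, eps)  # avoid divide-by-zero
    return (X_train - lo) / span, (X_test - lo) / span
```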

We used the kernel parameter γ = 2^-7 and cost parameter C = 4096 for the SVM after a grid search over γ ∈ {2^3, 2^2, 2^1, ..., 2^-15} and C ∈ {2^15, 2^14, 2^13, ..., 2^-5}, as recommended in [18].

The training and testing sets consisted of 100 images each for each camera model.

The method was implemented on a Core i7 processor with 16 gigabytes of memory. As for the computational cost, feature extraction took a few seconds per image, while training took 30 minutes.

Filler et al. [14] proposed a method for camera model identification that classifies camera models using a set of features. We implemented this method for comparison purposes on the same set of images from the Dresden database. The features proposed in [14] comprise statistical moments, the cross-correlation between color channels, block covariance, and the cross-correlation of the linear pattern. The images were cropped to 1024x1024. We did not perform the feature-space reduction step.


Camera Model   A1     A2     A3     C1     F1     K1     N1     O1     Pa1    Pr1    Sa1    Sa2    So1    So2
A1             96.93  -      1.0    -      -      -      -      -      1.32   -      -      -      -      -
A2             1.53   97.92  -      -      -      -      -      -      -      -      -      -      -      -
A3             -      -      98.93  -      -      -      -      -      -      -      -      -      -      -
C1             -      -      -      99.57  -      -      -      -      -      -      -      -      -      -
F1             -      -      -      -      98.57  -      1.33   -      -      -      -      -      -      -
K1             -      1.29   -      -      -      98.21  -      -      -      -      -      -      -      -
N1             -      -      -      -      -      -      99.07  -      -      -      -      -      -      -
O1             -      -      -      -      -      -      -      98.93  -      -      -      -      -      -
Pa1            -      -      -      -      -      -      -      -      99     -      -      -      -      -
Pr1            -      -      -      1.37   -      -      -      -      -      97.79  -      -      -      -
Sa1            -      -      -      -      -      -      -      -      -      -      99.91  -      -      -
Sa2            -      -      -      -      -      -      -      -      -      -      2.20   97.57  -      -
So1            1.51   -      1.01   -      -      -      -      -      -      -      -      -      93     4.36
So2            -      -      -      -      -      -      -      -      -      2.83   -      -      3.3    93.94

Table 2. Confusion matrix of the proposed method for the fourteen camera models; the symbol '-' denotes values below 1%.

4.3 Results and Discussion

We compare our method against three other experiments.

In the first experiment, we take only the first set of features of the proposed method (the co-occurrence matrix). This experiment yields an identification accuracy of 96.91%, which demonstrates the potential of the statistical features represented by the co-occurrence matrix.

The second experiment uses only the CFA interpolation features of our proposed method. It yields an accuracy of 86.93%. This is acceptable but not sufficient, and remains below the result of the first experiment, where the co-occurrences are computed on the POL-PRNU.

The method presented in [14] was tested under similar conditions, as discussed in section 4.2. It achieved only 88.23% average identification accuracy. This lower result may be explained by its use of first-order statistics.

Returning to our method, we combined the two proposed feature sets. By combining the co-occurrences computed on the linear pattern from equation (4) with the CFA features, we gain almost 1% in accuracy. The co-occurrences computed on the linear pattern are the most important features, and the addition of the CFA features further improves the performance. Our method achieves an average accuracy of 97.81%. Table 2 shows that some models are identified with very high accuracy; for example, the Samsung L74wide and the Canon Ixus 55 achieve 99.91% and 99.57%, respectively. Most of the other camera models behave similarly, except the Sony DSC-H50 and the Sony DSC-W170, which achieve the lowest rates of 93% and 93.94%, respectively, possibly because the PRNU structures of these two cameras are very similar.

Table 3 shows all the mentioned comparisons with their accuracy rates.

Finally, we conclude that our method consistently outperforms the compared method. This is due to the strength of the descriptive co-occurrence features and the additional, complementary features derived from the CFA interpolation characteristics.

Camera identification method    Result (%)
CFA                             86.93
Co-occurrences                  96.91
Compared method in [14]         88.23
Proposed method                 97.81

Table 3. Overall average identification rates for all the tested algorithms.

5 Conclusion

This paper proposes an algorithm for identifying camera sources that combines techniques based on sensor pattern noise and machine learning. The algorithm extracts two sets of features from the POL-PRNU noise residual. The first set is the co-occurrence matrix; the second set captures the color dependencies through the normalized cross-correlation of the three color channels and their shifted versions. These feature sets serve as input to an SVM classifier. The effectiveness of the method for source camera model identification was tested on a set of images from the Dresden database.

The results illustrate the efficiency of the proposed method, which achieves an identification rate of 97.81%. Compared to Filler's method [14], which only achieves 88.23% on the same data set, we increase the identification rate by 9.58 percentage points.

One problem with PRNU-correlation-based methods is their weak detection rate when geometrical transformations, such as cropping or scaling, have been applied. Direct detection then fails because of the desynchronization introduced by the additional distortion [9].

Our future work includes enhancing the proposed method toward better classification accuracy by improving the feature set and addressing the problem of geometrical transformations. Adding an unknown class, to handle models that are not in the training set, is another perspective.

References

1. J. Lukas, J. Fridrich, and M. Goljan, "Digital camera identification from sensor pattern noise," IEEE Transactions on Information Forensics and Security, vol. 1, no. 2, pp. 205–214, June 2006.

2. S. Choi, E. Y. Lam, and K. K. Y. Wong, "Source camera identification using footprints from lens aberration," in Proc. SPIE, vol. 6069, San Jose, CA, 2006, pp. 60690J-1–60690J-8.

3. A. E. Dirik, H. T. Sencar, and N. Memon, "Source camera identification based on sensor dust characteristics," in IEEE Workshop on Signal Processing Applications for Public Security and Forensics (SAFE '07), Washington, USA, April 2007.

4. S. Bayram, H. T. Sencar, and N. Memon, "Improvements on source camera model identification based on CFA interpolation," in International Conference on Digital Forensics, Orlando, FL, 2006.

5. M. Kharrazi, H. T. Sencar, and N. Memon, "Blind source camera identification," in IEEE International Conference on Image Processing (ICIP '04), Oct. 2004, vol. 1, pp. 709–712.

6. O. Celiktutan, B. Sankur, and I. Avcibas, "Blind identification of source cell-phone model," IEEE Transactions on Information Forensics and Security, vol. 3, no. 3, pp. 553–566, 2008.

7. J. Fridrich, "Digital image forensics using sensor noise," IEEE Signal Processing Magazine, vol. 26, no. 2, pp. 26–37, 2009.

8. M. Goljan and J. Fridrich, "Estimation of lens distortion correction from single images," in Proc. SPIE, Electronic Imaging, Media Watermarking, Security, and Forensics, San Francisco, CA, February 2014.

9. M. Goljan and J. Fridrich, "Camera identification from cropped and scaled images," in Proc. SPIE, Electronic Imaging, Forensics, Security, Steganography, and Watermarking of Multimedia Contents X, San Jose, CA, January 2008.

10. C.-T. Li and R. Satta, "Empirical investigation into the correlation between vignetting effect and the quality of sensor pattern noise," IET Computer Vision, vol. 6, no. 6, pp. 560–566, November 2012.

11. J. Fridrich and J. Kodovsky, "Rich models for steganalysis of digital images," IEEE Transactions on Information Forensics and Security, vol. 7, no. 3, pp. 868–882, June 2012.

12. X. Qiu, H. Li, W. Luo, and J. Huang, "A universal image forensic strategy based on steganalytic model," in Proceedings of the 2nd ACM Workshop on Information Hiding and Multimedia Security (IH&MMSec), Salzburg, Austria, 2014, pp. 165–170.

13. D. Cozzolino, D. Gragnaniello, and L. Verdoliva, "Image forgery detection through residual-based local descriptors and block-matching," in IEEE International Conference on Image Processing (ICIP), Paris, France, Oct. 2014, pp. 5297–5301.

14. T. Filler, J. Fridrich, and M. Goljan, "Using sensor pattern noise for camera model identification," in Proc. 15th IEEE International Conference on Image Processing (ICIP), San Diego, CA, October 2008, pp. 1296–1299.

15. Y. Bengio, O. Delalleau, and N. Le Roux, "The curse of dimensionality for local kernel machines," Tech. Rep. 1258, Département d'informatique et recherche opérationnelle, Université de Montréal, 2005.

16. T. Gloe and R. Böhme, "The Dresden Image Database for benchmarking digital image forensics," in Proceedings of the ACM Symposium on Applied Computing (SAC '10), New York, NY, USA, 2010, pp. 1584–1590.

17. C.-C. Chang and C.-J. Lin, "LIBSVM: A library for support vector machines," ACM Transactions on Intelligent Systems and Technology (TIST), vol. 2, no. 3, pp. 27:1–27:27, April 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.

18. C.-W. Hsu, C.-C. Chang, and C.-J. Lin, "A practical guide to support vector classification," Tech. Rep., Department of Computer Science, National Taiwan University, 2003. Available at http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf.
