This paper proposes a new definition of watermarking security which is more in line with the cryptographic viewpoint. To this end, we derive the effective key length of a watermarking system from the probability of guessing a key equivalent to the original key. The effective key length is then computed for two zero-bit watermarking schemes based on normalized correlation by estimating the region of equivalent keys. We show that the security of these schemes is related to the distribution of the watermarked contents inside the detection region and is not antagonistic to robustness. We conclude the paper by showing that the key length of the system used for the BOWS-2 international contest was indeed equal to 128 bits.
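The notion above admits a simple illustration: if p is the probability that a randomly drawn key falls in the region of keys equivalent to the secret one, the effective key length is −log2(p) bits. A minimal sketch (the function name and example probabilities are ours, not the paper's estimator):

```python
import math

def effective_key_length(p_equiv: float) -> float:
    """Effective key length in bits, given the probability p_equiv that a
    randomly drawn key is equivalent to the original secret key.
    Illustrative helper only; the paper estimates p_equiv for specific
    normalized-correlation schemes."""
    return -math.log2(p_equiv)

# A guessing probability of 2^-128 corresponds to a 128-bit effective key
# length, matching the BOWS-2 figure quoted above.
print(effective_key_length(2.0 ** -128))  # 128.0
```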
the distortion point of view. This new method uses results from transportation theory: it consists in computing the optimal way to match the distribution of host contents to a distribution of marked contents (given the secret key) by minimizing the global squared Euclidean distance. Section 2 recalls the basics of spread-spectrum watermarking schemes, and more particularly of the NW modulation. The link between security in the WOA framework and distributions of marked contents in the secret subspace is also presented. Section 3 details how we use results from transportation theory to minimize the distortion. Finally, Section 4 presents experiments on 2000 Gaussian signals to quantify the performance of this new method.
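The matching idea can be sketched in a toy 1-D setting, where optimal transport under squared Euclidean cost reduces to matching the sorted sequences; names are ours, and the paper's setting is the multidimensional secret subspace, not this scalar toy:

```python
def optimal_matching_1d(hosts, marked):
    """Optimal transport between two equal-size point sets under squared
    Euclidean cost: in 1-D, matching the sorted sequences is optimal.
    A toy sketch of the matching step only; the paper matches whole
    distributions in the secret subspace."""
    h_idx = sorted(range(len(hosts)), key=lambda i: hosts[i])
    m_idx = sorted(range(len(marked)), key=lambda j: marked[j])
    pairs = list(zip(h_idx, m_idx))
    cost = sum((hosts[i] - marked[j]) ** 2 for i, j in pairs)
    return pairs, cost

# Sorted matching beats the naive identity matching for these points.
hosts = [3.0, 1.0]
marked = [0.0, 2.0]
pairs, cost = optimal_matching_1d(hosts, marked)
print(pairs, cost)  # [(1, 0), (0, 1)] 2.0
```

The identity matching would cost (3−0)² + (1−2)² = 10, so sorting reduces the global distortion, which is the point of the transportation step.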
IRCCyN lab. Polytech’Nantes, Rue Ch. Pauc, BP 50609, 44306 Nantes, France
A tremendous amount of digital multimedia data is broadcast daily over the internet. Since digital data can be duplicated very quickly and easily, intellectual property right protection techniques have become important; they first appeared about fifty years ago (see  for an extended review). Digital watermarking was born. Since its inception, many watermarking techniques have appeared, in all possible transformed spaces. However, an important gap in the watermarking literature concerns human visual system models. Several watermarking techniques based on Human Visual System (HVS) models were designed in the late 1990s. Due to weak robustness results, especially against geometrical distortions, interest in such studies declined. In this paper, we intend to take advantage of recent advances in HVS models and watermarking techniques to revisit this issue. We will demonstrate that HVS-based watermarking algorithms can resist many attacks, including geometrical distortions. The perceptual model used here takes into account advanced features of the HVS identified from psychophysics experiments conducted in our laboratory. This model has been successfully applied in quality assessment and image coding schemes [39,31]. In this paper, the human visual system model is used to create a perceptual mask in order to optimize the watermark strength. The optimal watermark obtained satisfies both invisibility and robustness requirements. Contrary to most watermarking schemes using advanced perceptual masks, and in order to best thwart the de-synchronization problem induced by geometrical distortions, we propose here a Fourier-domain embedding and detection technique optimizing the amplitude of the watermark. Finally, the robustness of the resulting scheme is assessed against all attacks provided by the Stirmark benchmark.
This work proposes a new digital rights management technique using an advanced human visual system model that is able to resist various kinds of attacks, including many geometrical distortions.
The information leakage is given in bits by the mutual information I(K; O^{N_o}), and the equivocation h_e(N_o) ≜ H(K | O^{N_o}) measures how this leakage decreases the initial lack of information: h_e(N_o) = H(K) − I(K; O^{N_o}). The equivocation is a non-increasing function. For most watermarking schemes, the information leakage is not null, and as the adversary keeps on observing, the equivocation decreases down to 0 (discrete r.v.) or −∞ (continuous r.v.). This means that the adversary has collected enough observations to uniquely identify the secret key k.
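For discrete keys and observations, the equivocation can be computed directly as the conditional entropy of a joint distribution. A toy sketch (the joint tables below are invented for illustration; the paper's setting also covers continuous observations):

```python
import math
from collections import defaultdict

def equivocation(joint):
    """H(K | O) in bits for a joint distribution {(k, o): prob}.
    A toy discrete illustration of the equivocation h_e."""
    p_o = defaultdict(float)
    for (_, o), p in joint.items():
        p_o[o] += p
    h = 0.0
    for (_, o), p in joint.items():
        if p > 0:
            h -= p * math.log2(p / p_o[o])
    return h

# An observation that fully reveals the key: equivocation drops to 0.
print(equivocation({(0, 0): 0.5, (1, 1): 0.5}))  # 0.0
# An uninformative observation leaves all of H(K) = 1 bit unknown.
print(equivocation({(k, o): 0.25 for k in (0, 1) for o in (0, 1)}))  # 1.0
```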
To build a copy protection system for consumer electronic devices, we are looking for a technique that can embed in an original content a signal commonly called a watermark. Compliant devices such as players or recorders are able to detect the presence of this watermark. In this particular case, its presence means that the content is protected and thus illegal to copy. This embedded watermark must not be perceptible. Watermarking schemes have been developed for several years for audio, video, and still images.
In the watermark detection scenario, also known as zero-bit watermarking, a watermark carrying no hidden message is inserted in a piece of content. The watermark detector checks for the presence of this particular weak signal in received contents. The article looks at this problem from a classical detection theory point of view, but with side information available at the embedding side. This means that the watermark signal is a function of the host content. Our study is twofold. The first step is to design the best embedding function for a given detection function, and the best detection function for a given embedding function. This yields two conditions, which are mixed into one ‘fundamental’ partial differential equation. It appears that many famous watermarking schemes are indeed solutions to this ‘fundamental’ equation. This study thus gives birth to a constructive framework unifying solutions so far perceived as very different.
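As a concrete instance of a detection function, the classical zero-bit detector based on normalized correlation can be sketched as follows; the carrier, threshold, and strength values are illustrative placeholders, not solutions of the paper's ‘fundamental’ equation:

```python
import math

def nc_detect(received, carrier, threshold=0.8):
    """Zero-bit detection by normalized correlation: declare the watermark
    present when the received signal is sufficiently aligned with the
    secret carrier. A generic sketch of the classical detector."""
    dot = sum(r * c for r, c in zip(received, carrier))
    norm = (math.sqrt(sum(r * r for r in received))
            * math.sqrt(sum(c * c for c in carrier)))
    return dot / norm >= threshold

carrier = [1.0, -1.0, 1.0, -1.0]   # secret key (illustrative)
host = [0.3, 0.2, -0.1, 0.4]       # unmarked content
marked = [h + 0.8 * c for h, c in zip(host, carrier)]
print(nc_detect(host, carrier))    # False
print(nc_detect(marked, carrier))  # True
```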
Watermarking techniques can also be classified according to the domain in which the watermark is embedded, i.e., the spatial domain or the transform domain. While spatial-domain techniques have the lowest complexity and a high payload, they cannot withstand image compression and other common image processing attacks [Potdar2005]. Transform-domain watermarking schemes like those based on the discrete Fourier transform (DFT) [Pun2006; Solachidis2001], the discrete cosine transform (DCT) [Hernandez2000; Chu2003] and the discrete wavelet transform (DWT) [Barni2001; Lu2012] typically provide higher image imperceptibility and are much more robust to image manipulations. However, the DWT has been used more frequently in digital image watermarking due to its time/frequency decomposition characteristics, which resemble the theoretical models of the human visual system [Barni2001].
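The transform-domain idea can be sketched with a one-level Haar DWT and additive embedding in the detail subband. This is a 1-D toy under our own names and strength value; real schemes such as [Barni2001] operate on 2-D image subbands with perceptual weighting:

```python
def haar_dwt_1d(x):
    """One-level 1-D Haar DWT: returns (approximation, detail) subbands."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def haar_idwt_1d(a, d):
    """Inverse of haar_dwt_1d (perfect reconstruction)."""
    x = []
    for ai, di in zip(a, d):
        x += [ai + di, ai - di]
    return x

def embed(x, watermark, alpha=0.1):
    """Additive embedding in the detail subband; alpha is an illustrative
    embedding strength, not a value from any cited scheme."""
    a, d = haar_dwt_1d(x)
    d = [di + alpha * wi for di, wi in zip(d, watermark)]
    return haar_idwt_1d(a, d)

x = [1.0, 2.0, 3.0, 4.0]
print(embed(x, [1.0, -1.0]))  # slightly perturbed copy of x
```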
This paper presents yet another attempt towards robust and secure watermarking. Some recent works have looked at this issue by first designing new watermarking schemes with a security-oriented point of view, and then evaluating their robustness compared to state-of-the-art but insecure techniques. Our approach is, on the contrary, to start from a very robust watermarking technique and to propose changes in order to strengthen its security level. These changes include the introduction of a security criterion, an embedding process implemented as the maximization of a robustness metric under perceptual and security constraints, and watermark detection cast as an a contrario decision test.
161, rue Ada, 34392 Montpellier cedex 05, France contact: firstname.lastname@example.org
Hyper-Cube watermarking has shown high potential for high-rate robust watermarking. In this paper, we carry on the study and evaluation of this quantization-based approach. We especially focus on the use of Trellis Coded Quantization (TCQ) and its impact on the Hyper-Cube performance. First, we recall the TCQ operating principle and propose adapted quantizers. Second, we analyze the integration of the TCQ module in a cascade of two coders (resp. two decoders). Finally, we experimentally compare the proposed approach with state-of-the-art high-rate watermarking schemes. The obtained results show that our Multi-Hyper-Cube scheme consistently provides good average performance.
IRISA – Université Rennes 1, Rennes, France
Abstract. Watermarking techniques are used to help identify copies of publicly released information. They consist in applying a slight and secret modification to the data before its release, in a way that should be robust, i.e., remain recognizable even in (reasonably) modified copies of the data. In this paper, we present new results about the robustness of watermarking schemes against arbitrary attackers, and the formalization of those results in Coq. We used the Alea library, which formalizes probability theory and models probabilistic programs using a simple monadic translation. This work illustrates the strengths and particularities of the induced style of reasoning about probabilistic programs. Our technique for proving robustness is adapted from methods commonly used for cryptographic protocols, and we discuss its relevance to the field of watermarking.
Fig. 2. Data structure.
where N is the number of IMFs and r_N denotes the final residual. The IMFs are nearly orthogonal to each other, and all have nearly zero means. The number of extrema decreases when going from one mode to the next, and the whole decomposition is guaranteed to be completed with a finite number of modes. The IMFs are fully described by their local extrema and thus can be recovered using these extrema. Low-frequency components such as higher-order IMFs are signal dominated, and thus their alteration can lead to degradation of the signal. As a result, these modes can be considered good locations for watermark placement. Some preliminary results have appeared recently showing the interest of EMD for audio watermarking. In one approach, the EMD is combined with Pulse Code Modulation (PCM) and the watermark is inserted in the final residual of the subbands in the transform domain. This method assumes that the mean value of the PCM audio signal may no longer be zero. As stated by the authors, the method is not robust to attacks such as band-pass filtering and cropping, and no comparison to watermarking schemes reported recently in the literature is presented. Another strategy associates the EMD with the Hilbert transform and embeds the watermark into the IMF containing the highest energy. However, why the IMF carrying the highest amount of energy is the best candidate mode to hide the watermark has not been addressed. Further, in practice the IMF with the highest energy can be a high-frequency mode and thus not robust to attacks.
This CPA is called BNSA (Blind Newton Sensitivity Attack) by its inventors [31, 32]. Its main advantage is that no assumption at all is needed with respect to the shape of the decoding region. Experimental simulations show that the algorithm quickly converges with the gradient option; around M = 10 iterations are needed. This makes the Hessian estimation not worth it at all. The final sensitive content is of very good quality, although some differences exist depending on the watermarking scheme. Some techniques are more robust than others against the BNSA, in the sense that the final attacked content is more degraded. The researchers suspect that some watermarking schemes have bigger detection areas (for a given probability of false alarm) or more efficient embedders, so that the watermarked feature vector f_1
4.2. Discussion on detecting attacks
In this section we discuss the performance of this scheme in detecting modifications of the model. The modifications mentioned in this section include adding noise, adding/deleting faces, inserting/removing vertices, remeshing the model, renumbering vertices, and translating/rotating/scaling the model. These modifications are common in practical use and are widely used in the former literature to test the performance of watermarking schemes [10, 27, 29, 34, 37]. As discussed later, the scheme proposed in this paper can detect these kinds of modifications on the models. We need to point out that this scheme is not immune to certain incidental data operations. These operations, such as renumbering vertices, do not destroy the integrity of the model; translating/rotating/scaling the model also does not break its integrity, and users may still use the model if it is transformed. Ideally, a fragile watermarking scheme would provide the convenience of filtering out these operations. However, our scheme can only view them as attacks. Further work is needed to improve the scheme.
bit-rate. This technique is robust against slight shifts, moderate downscaling, and re-compression. To keep the watermark artifacts invisible and to ensure robustness, fidelity and robustness filters were used. In , the authors
present an algorithm for watermarking intra frames without any drift. They propose to exploit several paired coefficients of a 4 × 4 DCT block to accumulate the embedding-induced distortion. The directions of intra-frame prediction are used to avert the distortion drift. The proposed algorithm has high em-
If dim(X) ≥ 2, the main difference with the case of curves is the presence of obstructions: given X_n, they are
– Obstruction to the extension of a vector bundle on X_{n−1} to a vector bundle on X_n.
– Obstruction to the extension of X_n to a primitive multiple scheme of multiplicity n + 1.
We will see that these obstructions depend on the vanishing of elements in cohomology groups H^2(X, E), where E is a suitable vector bundle on X. Hence if dim(X) = 1 the obstructions disappear, and it is always possible to extend vector bundles or primitive multiple schemes. This is why one can obtain many primitive multiple curves.
IRCCyN-IVC, Polytech’Nantes, rue Ch. Pauc, 44306, Nantes, FRANCE
This work is motivated by the limitations of statistical quality metrics in assessing the quality of images distorted in distinct frequency ranges. Common quality metrics, which have basically been designed and tested for various kinds of global distortions such as image coding, may not be efficient for watermarking applications, where the distortions might be restricted to a very narrow portion of the frequency spectrum. We hereby propose an objective quality metric whose performance does not depend on the distortion frequency range, while remaining simplified in opposition to the complex HVS-based quality metrics recently made available. The proposed algorithm is generic (not designed for a particular distortion), and exploits the contrast sensitivity function (CSF) along with an adapted Minkowski error pooling. The results show a high correlation between the proposed objective metric and the mean opinion score (MOS). A comparison with relevant existing objective quality metrics is provided.
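The two ingredients named above can be sketched as follows; the CSF shape, its peak frequency, and the Minkowski exponent beta are illustrative placeholders, not the calibrated values of the proposed metric:

```python
import math

def csf_weight(f: float, peak: float = 4.0) -> float:
    """Toy band-pass contrast sensitivity weight, peaking at `peak`
    cycles/degree (a common simplification of measured CSF curves)."""
    return (f / peak) * math.exp(1.0 - f / peak)

def minkowski_pooling(errors, beta: float = 4.0) -> float:
    """Minkowski error pooling: collapses per-band errors into one score.
    Larger beta emphasizes the worst-distorted bands."""
    n = len(errors)
    return (sum(abs(e) ** beta for e in errors) / n) ** (1.0 / beta)

# Weight per-band distortions by contrast sensitivity before pooling,
# so errors near the CSF peak dominate the final score.
bands = [1.0, 4.0, 16.0]   # spatial frequencies, cycles/degree
errs = [0.2, 0.2, 0.2]     # equal per-band distortion
score = minkowski_pooling([csf_weight(f) * e for f, e in zip(bands, errs)])
print(round(score, 3))
```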
Figure 2: Illustration of the decision frontier of a binary classifier (without loss of generality for the method in higher dimensions). The initially trained model frontier is represented as a line, while the tweaked frontier appears dashed. Instead of (a) relying on trivial points that would not discriminate classifiers when querying the remote neural network, or (b) fine-tuning (i.e., watermarking) the model using those trivial points, which would significantly degrade model accuracy, the stitching algorithm first (c) identifies specific data points near the decision frontier (both adversaries and false adversaries that are all close to the frontier), and then (d) fine-tunes the classifier to include the adversaries (8 of them here, bar-free letters), resulting in a loyal watermarked model and a key size of |K| = 12 (the 4 remaining are the false adversaries, depicted as letters with bars). This process resembles “stitching” around data points, inspiring the name of our proposed algorithm.
2. ENHANCED QUALITY METRIC
The previously proposed CPA metric omitted the contrast masking property of the HVS for simplicity. Contrast masking is an important feature when modeling HVS behavior: it captures how the visibility of a distortion depends on the local frequency content of the image. In brief, a high-frequency distortion inevitably has a much lower impact on visibility when added to a high-frequency image area than to lower-frequency ones. Contrast masking is thus particularly important in a watermarking framework, where a noise-like watermark could be embedded uniformly over the image without regard to the image's masking capabilities. In HVS-based OQMs, contrast masking is exploited by a threshold elevation step after a perceptual sub-band decomposition, which is computationally expensive. So, in this work the perceptual sub-band decomposition is replaced with a simple block-based frequency decomposition.
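A minimal sketch of this replacement: estimate each block's masking capability from the AC energy of a block DCT. This is a 1-D toy (the metric uses 2-D image blocks), and the plain energy criterion is our simplification of threshold elevation:

```python
import math

def dct_1d(x):
    """Type-II DCT of a block: the frequency analysis behind the simple
    block-based decomposition (1-D for brevity)."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) for k in range(n)]

def masking_capability(block):
    """Toy masking estimate: energy of the non-DC (AC) coefficients.
    Busy, high-frequency blocks mask distortions better than flat ones."""
    coeffs = dct_1d(block)
    return sum(c * c for c in coeffs[1:])

flat = [10.0] * 8        # uniform area: little masking
busy = [10.0, 0.0] * 4   # high-frequency texture: strong masking
print(masking_capability(flat) < masking_capability(busy))  # True
```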
The rational Chow ring A^*(S^{[n]}, Q) of the Hilbert scheme S^{[n]} parametrising the length-n zero-dimensional subschemes of a toric surface S can be described with the help of equivariant techniques. In this paper, we explain the general method and illustrate it through many examples. In the last section, we present results on the intersection theory of graded Hilbert schemes.
A strategy to tackle this problem is to consider watermarking as a stochastic optimization problem and apply EC to this end (optimization of embedding parameters).
The main reason for the use of EC in the optimization of watermarking systems is that the objective functions are usually noisy (multi-modal). Since EC techniques are based on populations of candidate solutions, the optimization algorithm is less likely to get stuck in a local optimum. Moreover, due to the modular nature of a watermarking system (with numerous different techniques for each module), the use of EC provides flexibility to the optimization process, since it does not require gradient information about the function under consideration (Parsopoulos and Vrahatis, 2002). There are many methods based on this strategy in the literature (Table 1.2). Actually, the majority of the intelligent watermarking methods are based on this strategy. Regarding the number of objective functions employed, there are two main optimization strategies: one consisting of the use of a single objective function (e.g. fidelity), known as a Single Objective Optimization Problem (SOOP), and another consisting of the combination of many objective functions, known as a Multi Objective Optimization Problem (MOOP). With respect to the GA or PSO algorithms employed to deal with a MOOP, there are two strategies. One consists of aggregating many objective functions into one through a weighted sum, and then using classical GA and PSO; the other consists of handling many conflicting objectives during optimization, which is the case of Multi Objective GA (MOGA) and Multi Objective PSO (MOPSO).
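The weighted-sum aggregation strategy can be sketched as follows; the fidelity/robustness models and the random-search loop are invented stand-ins for real quality metrics and a real GA/PSO population loop:

```python
import random

def weighted_sum_fitness(objectives, weights):
    """Aggregate several objectives (e.g. fidelity, robustness) into one
    scalar via a weighted sum, as in the first MOOP strategy above."""
    return sum(w * f for w, f in zip(weights, objectives))

def evaluate(params):
    """Hypothetical fidelity/robustness models for an embedding controlled
    by a single strength parameter in [0, 1]."""
    strength = params[0]
    fidelity = 1.0 - strength                  # stronger mark, lower fidelity
    robustness = 1.0 - (1.0 - strength) ** 2   # stronger mark, more robust
    return [fidelity, robustness]

def random_search(weights, iters=2000, seed=0):
    """Minimal stand-in for a GA/PSO loop: sample candidate parameters
    and keep the best aggregated fitness."""
    rng = random.Random(seed)
    best, best_fit = None, float("-inf")
    for _ in range(iters):
        params = [rng.random()]
        fit = weighted_sum_fitness(evaluate(params), weights)
        if fit > best_fit:
            best, best_fit = params, fit
    return best, best_fit

# Equal weights: the search settles on a strength trading off both goals.
best, fit = random_search(weights=[0.5, 0.5])
print(round(best[0], 2), round(fit, 3))
```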