Other studies have tried to measure the magnitude of the ripple effect since the 1990s. Koubi and Lhommeau (2007) estimated the spillover effects of the minimum wage increases induced by the 35-hour workweek law (see above) during the period 2000–2005. They analyzed the wage scales (i.e. BMWs) of a representative sample of firms with 10 employees or more. In the short run (3 months), they find some spillover up to a wage level equivalent to twice the SMIC. The elasticity is particularly high for wages just above the SMIC: a 1 per cent SMIC increase induces a 0.38 per cent increase for wages between 1 and 1.1 SMIC, a 0.19 per cent increase for wages between 1.1 and 1.2 SMIC, and still a 0.07 per cent increase for wages between 1.4 and 1.5 SMIC. A year after the SMIC increases, the estimate of the spillover depends much more on the specification, in particular on whether firm fixed effects are introduced. The elasticity for wages between 1 and 1.1 SMIC ranges from 0.47 to 1.19, and decreases when climbing the wage scale, to a range from 0.31 to 0.49 for wages between 1.4 and 1.5 SMIC. Note that, as the dependent variable is the base wage, the study does not allow one to analyze whether substitution may have taken place between the base wage and other elements of compensation (premiums, bonuses, etc.). Using the same methodology and database, Goarant and Muller (2011) replicated the study, adding the 2006–2009 period, and found similar results: exactly the same short-term elasticity (0.38) for wages between 1 and 1.1 SMIC, but lower elasticities for higher wages. Differentiating by industry, one interesting result is that the spillover effect, both in magnitude and in diffusion up the wage scale, is higher in industries with a higher share of low-wage workers.
Concerning the timing of the spillover, which can be measured by introducing time lags, they found that it is concentrated in the first two quarters following the minimum wage increase for wages between 1 and 1.1 SMIC, while higher wages increase later. This lag implies that wage compression is higher in the short run (two quarters) than in the medium run (four quarters).
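As a back-of-the-envelope illustration of the short-run elasticities above, the sketch below applies the band-specific point estimates quoted in the text to a hypothetical SMIC increase. The wage bands and the carrying-forward of each elasticity across unreported intermediate bands are simplifying assumptions of this sketch, not features of the study.

```python
def spillover_increase(wage_ratio, smic_increase_pct, schedule=None):
    """Induced base-wage increase (in %) for a wage expressed as a multiple
    of the SMIC, given a SMIC increase (in %)."""
    if schedule is None:
        # (lower bound of band in SMIC multiples, short-run elasticity),
        # point estimates quoted in the text; elasticities for intermediate
        # bands are not reported there, so each value is carried forward.
        schedule = [(1.0, 0.38), (1.1, 0.19), (1.4, 0.07)]
    elasticity = 0.0
    for lower, e in schedule:
        if wage_ratio >= lower:
            elasticity = e
    return elasticity * smic_increase_pct

# A 2% SMIC increase for a worker paid 1.05 SMIC: 0.38 * 2 = 0.76% raise.
print(spillover_increase(1.05, 2.0))  # 0.76
```

The schedule argument makes it easy to swap in the one-year-horizon elasticities, which the text reports as ranges rather than point estimates.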
Background. More than 330 food additives (e.g. artificial sweeteners, emulsifiers, dyes) are authorized in Europe, with great variability of use across food products. Objective. The objective of this study was to investigate the distribution and co-occurrence of food additives in a large-scale database of foods and beverages available on the French market. Design. The open-access crowdsourced Open Food Facts database (https://world.openfoodfacts.org/) was used to retrieve the composition of food and beverage products commonly marketed on the French market (n = 126,556), based on the ingredients list. Clustering of food additive variables was used to determine groups of additives frequently co-occurring in food products. The clusters were confirmed by network analysis, using the eLasso method. Results. Overall, 53.8% of food products contained at least 1 food additive and 11.3% at least 5. Food categories most likely to contain food additives (in more than 85% of food items) were artificially sweetened beverages, ice creams, industrial sandwiches, biscuits and cakes. The most frequently used food additives were citric acid, lecithins and modified starches (>10,000 products each). Some food additives with suspected health effects were also among the top 50: sodium nitrite, potassium nitrate, carrageenan, monosodium glutamate, sulfite ammonia caramel, acesulfame K, sucralose, (di/tri/poly)phosphates, mono- and diglycerides of fatty acids, potassium sorbate, cochineal, potassium metabisulphite, sodium alginate, and bixin (>800 food products each). We identified 6 clusters of food additives frequently co-occurring in food products. Conclusions. Food additives are widespread in industrial French products, and several clusters of additives frequently co-occurring in food products were identified.
These results pave the way for future etiological studies merging composition data with food consumption data to investigate their association with chronic disease risk, in particular potential ‘cocktail effects’.
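A minimal sketch of the co-occurrence counting that underlies such a clustering step: tally how often pairs of additives appear together across products. The additive codes and the three-product mini-database below are invented for illustration; the actual study then applied clustering and eLasso network analysis on the resulting structure, which is not reproduced here.

```python
from itertools import combinations
from collections import Counter

# Hypothetical products, each described by a set of E-number additive codes.
products = [
    {"E330", "E322", "E1404"},  # e.g. citric acid, lecithins, modified starch
    {"E330", "E322"},
    {"E330", "E950"},           # e.g. citric acid, acesulfame K
]

# Count each unordered pair of additives once per product it appears in.
cooccurrence = Counter()
for additives in products:
    for pair in combinations(sorted(additives), 2):
        cooccurrence[pair] += 1

print(cooccurrence[("E322", "E330")])  # 2: lecithins and citric acid co-occur twice
```

Pairs sharing many products would then form the dense regions that a clustering or network method groups into co-occurrence clusters.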
We propose here a case study of the French serial, or soap opera, Plus Belle la Vie. We consider the soap opera a particular case of serial, as the episodes are linked to each other, featuring family-type intrigues, romances and moral conflicts. The specificity of these series is that they are generally broadcast on a daily basis, during access prime time hours. This choice is mostly due to the high number of observations (more than 1,300 episodes), and to the fact that its broadcasting was uninterrupted for the whole period. This series gives a good example of habitual effects, as defined by equation 3.6, as most of the audience is live, and the broadcasting pattern each day at a precise hour establishes an appointment for consumers. Table 3.3 presents the evolution of the live audience (actual and differentiated) over the period. We observe peaks in the audience, which we interpret as season-finale effects. Negative peaks can be interpreted as rebroadcasts of old episodes, independent of the current narrative. We reject the presence of a unit root in the data using an augmented Dickey-Fuller test.
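The unit-root check mentioned above can be illustrated with a plain (non-augmented) Dickey-Fuller regression on a toy series; the test actually used in the text is the augmented version with lagged differences, typically run via an econometrics package. This stdlib-only sketch regresses the first difference on the lagged level plus a constant and returns the t-statistic on the lagged level, to be compared with Dickey-Fuller critical values (about -2.9 at the 5% level with a constant).

```python
import math
import random

def dickey_fuller_t(y):
    """t-statistic on rho in the regression dy_t = c + rho * y_{t-1} + e_t."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    x = y[:-1]
    n = len(dy)
    mx, my = sum(x) / n, sum(dy) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (di - my) for xi, di in zip(x, dy))
    rho = sxy / sxx
    resid = [di - (my + rho * (xi - mx)) for xi, di in zip(x, dy)]
    s2 = sum(e * e for e in resid) / (n - 2)  # residual variance
    return rho / math.sqrt(s2 / sxx)

# A stationary AR(1) series should yield a large negative statistic,
# i.e. a rejection of the unit-root hypothesis.
random.seed(0)
y = [0.0]
for _ in range(199):
    y.append(0.5 * y[-1] + random.gauss(0, 1))
print(dickey_fuller_t(y))
```

A series with a unit root (e.g. a random walk) would instead give a statistic close to zero, and the null would not be rejected.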
must be chosen with particular care, and exogeneity assumptions can lead to different results. In a different way, Heitmueller (2006) controls for participation and sector selection, but in a cross-sectional analysis.
To overcome these potential biases, Disney and Gosling (2003) use the natural experiment provided by the UK privatization programme of the 1990s, and show that their results are robust to self-selection. Bargain and Melly (2008) use panel data to control for both sector choice and individual fixed effects, and compare the quantiles of both distributions. Raising close but distinct issues, Bell, Elliott, and Scott (2005) exploit the mobility between the two sectors and study the wage incentives to change sectors, identifying the wage premium after a job change. Other studies focus on the link between the wage distribution and mobility. Postel-Vinay and Turon (2007) and Cappellari (2002) focus on earnings dynamics and lifetime values of employment in both sectors. They argue that the public and private sectors differ not only in their log wage distributions but also in their income mobility. They conclude, for the UK and Italy respectively, that the life cycle of earnings matters in the private sector whereas it does not in the public sector.
EVIDENCE FROM THE FRENCH CASE
Abstract. In this study, we provide the first empirical analysis of the theoretical framework of Ferguson and Shockley (2003) on the French stock market for the period from July 1984 to June 2001. The objective is to study the market, SMB, HML and leverage factors in explaining cross-sectional returns. Indeed, Ferguson and Shockley (2003) argue that the CAPM does not work because empirical studies use an equity-only proxy for the true market portfolio and ignore debt claims. Book-to-market and size, variables which are correlated with leverage, will then appear to explain returns. Our main result is that the leverage factor does not subsume the SMB and HML factors. In cross-sectional regressions, only the size premium is statistically significant and helps explain returns. In time-series regressions, the three factors (SMB, HML and leverage), together with the market portfolio, perform well. This result suggests that the leverage portfolio brings an additional improvement to the model. Nevertheless, it does not subsume the SMB and HML factors in the French case.
4- Empirical study on French buyback programs (1998-2005)
a. Data description
Since the purpose of our study is to examine the reality of share repurchase programs in France and their impact on companies’ share prices, we have decided to conduct this analysis over the period from September 1998, when buybacks were generally authorized by French regulation, to December 2005. We have chosen as our sample the constituents of the CAC 40 Index, the 40 largest French market capitalizations. Since this index was significantly changed by the two recent IPOs of the French public utility companies Gaz de France (July 2005) and EDF (November 2005), we have considered them non-significant, as both companies had no quotation history and had not yet launched any buyback program. Consequently, we have adjusted our sample and replaced these two recently introduced companies by their predecessors in the CAC 40 Index, the media company TF1 and the retailer Casino Guichard-Perrachon (aka Casino). Our sample is thus the list of the CAC 40 Index as of January 2005 (listed in Appendix 2, or in Appendix 3.a ranked by market values as of 3 January 2005), which enables us to study share buybacks announced by large companies from all main sectors of the economy. Among these forty companies, four were not listed in 1998, but were listed later, following an IPO (the French bank Crédit Agricole and the utility company Veolia Environnement, ex. Vivendi Environnement) or a merger (the European aeronautics leader EADS, and the bank Dexia Belgium). For coherence purposes, we have chosen to keep the same sample over the entire period, even if data are partially unavailable for these four companies, considering that a switch with other companies would introduce a more important bias and reduce the coherence of the sample. We will, however, take into account the changing number of companies in the sample in our statistical analysis.
Keywords: conditional beta, market risk premium, ARCH models

1. Introduction
At the beginning of the 1980s, a phenomenon known as the size effect was observed: small-cap securities generate, on average, a greater risk-adjusted return than large caps. Banz (1981) and Reinganum (1981) were the first researchers to study the influence of market capitalisation on security returns. They demonstrated that small-cap securities generated greater returns than large-capitalisation ones, and attributed this overperformance of small caps to the remuneration of an additional risk factor. In France, this effect was observed by Hamon and Jacquillat (1992); according to them, however, the size effect is not observable outside of the year-end transition period.
al., 2005). At low levels of market volume, greater liquidity reduces excess volatility.
However, after a certain point, the confusion caused by speculation creates a positive relationship between liquidity and excess volatility.
Since theoretical predictions are ambiguous, it is important to examine the impact of the FTT empirically. In this paper, we study the introduction, in 2012, of a 0.2 percent tax on daily acquisitions of French equity securities. We are interested in measuring the impact of this STT on market quality, as captured by market liquidity and volatility. Our contribution to the existing literature is twofold. First, we believe that our study provides a rigorous investigation of causality between the STT and market quality. This is possible due to the unique design of the French STT. As the tax is levied only on large French firms, all of them listed on Euronext, this provides two control groups: smaller French firms and foreign firms also listed on Euronext. Hence, we can rely on a difference-in-differences methodology to isolate the impact of the tax from other economic or regulatory developments during the analyzed period. Although some earlier studies follow this approach, their control groups are not fully convincing because their stocks are traded in a completely different institutional environment, such as a foreign or over-the-counter market (Umlauf, 1993; Pomeranets and Weaver, 2012). It is important to note that the French STT is virtually the only tax in the world that has affected
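The difference-in-differences logic described above can be sketched in two lines: the treatment effect is the change in the treated group minus the change in the control group. The group labels and spread figures below are purely illustrative, not results from the paper.

```python
def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """DiD estimate: change in the treated group net of the control change."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical average bid-ask spreads (in basis points) for taxed large
# French stocks vs. an untaxed control group, before and after the 2012 STT:
print(diff_in_diff(10.0, 12.5, 9.0, 9.5))  # 2.0 bps attributable to the tax
```

In practice this comparison is run as a regression with firm and time effects, so that standard errors and additional controls can be handled properly; the arithmetic above is only the identification idea.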
A promising way to measure financial education more satisfactorily is to calculate scores from a large number of questions. The drawback of this method is that it is very expensive in terms of the number of questions required. For example, Arrondel and Masson (2017) use such a scoring method for measuring individual preferences for saving (risk aversion, time discounting, and altruism). These summary, ordinal scores are computed on the basis of over one hundred questions covering a wide range of economic and social areas, such as consumption, leisure, investments, work, family, health and retirement. These questions are often concrete or related to everyday life or plans, and are relatively easy to answer; others are more abstract, and pertain to responses to fictional scenarios or lottery choices. Arrondel and Masson (2014) show that these scores have better statistical properties than traditional measures of preferences (scales or lotteries). In the same vein, Nicolini (2019) uses the 50 survey questions of his questionnaire to compare financial literacy levels in Europe, but all the questions used belong to the financial domain and are therefore difficult to use to build an exogenous explanatory score for studying financial behaviour (a problem of reverse causality).
In parallel to this literature on fuel retail price trajectories, we find a large empirical literature that has investigated the determinants of price levels and their dispersion. In particular, this literature has examined whether fuel retail prices are associated with market power and spatial competition. Using data at the level of the city or state, the first studies on this issue typically estimate reduced-form price equations to analyse how market structure and concentration may affect competitive pricing behavior, and thus price levels. Using average monthly retail prices across eleven Canadian cities over the 1991-1997 period, Sen (2003) finds that local retail market concentration, measured by the Herfindahl-Hirschman index, is positively and significantly associated with gasoline retail prices. However, results are not statistically significant when market concentration is proxied by the density of stations. In another study on the same gasoline market, Sen (2005) shows that this positive relationship between concentration and average retail prices is particularly evident when the market share of small and independent gasoline retailers is relatively high. Sen (2005) explains this result by the fact that an increase in market concentration among smaller firms reduces the market concentration among vertically integrated firms, and thus results in lower average retail prices. Similar results are obtained by Hastings (2004) using data from the San Diego and Los Angeles metropolitan areas.
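The concentration measure used in these studies is straightforward to compute. The sketch below evaluates the Herfindahl-Hirschman index from hypothetical station-level market shares in one city; the shares are invented for illustration.

```python
def hhi(shares):
    """Herfindahl-Hirschman index on the 0-10,000 scale, from market
    shares expressed as fractions summing to 1."""
    return sum((100 * s) ** 2 for s in shares)

# Four stations holding 40%, 30%, 20% and 10% of local sales:
print(hhi([0.4, 0.3, 0.2, 0.1]))  # 1600 + 900 + 400 + 100 = 3000
```

A monopoly gives the maximum of 10,000, while many small equal-sized stations drive the index toward zero, which is why the index serves as a summary of local market concentration.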
main explanation could be the difference in the type of records used for the validation population, but information was not fully detailed in Saatchi et al. (2013) to confirm this hypothesis. In our Charolais study, the validation population concerned only animals without offspring records, whose DEBV had a low reliability. The simple correlation between DEBV and GEBV was therefore a very strong underestimation of the correlation between true breeding value and GEBV. The only study that allows a fair comparison with our results is that of Saatchi et al. (2011) on American Angus, because their American Angus population has both an effective size and a training population size close to those of our French Charolais population. Their large training population of 2,500 Angus bulls, with average DEBV reliabilities of 0.8 and 0.7 for birth and weaning weights, respectively, is to be compared with our 2,000 Charolais bulls, with average reliabilities of 0.6 and 0.5 for birth and weaning weights, respectively. Saatchi et al. (2011) assessed accuracy as the correlation between DEBV and GEBV divided by the square root of heritability. Transforming their results into simple correlations between DEBV and GEBV, they reported accuracies of 0.33 for birth weight and 0.25 for weaning weight under a BayesC model with validation on the youngest animals, which are greater than for Charolais, with 0.25 and 0.21 for birth and weaning weights, respectively.
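The transformation applied above is a one-line rescaling: a reported accuracy defined as corr(DEBV, GEBV) / sqrt(h2) is converted back to the simple correlation by multiplying by sqrt(h2). The numeric example uses an illustrative heritability, not a value from either paper.

```python
import math

def to_simple_correlation(reported_accuracy, heritability):
    """Undo the sqrt(h2) scaling of Saatchi et al. (2011)-style accuracies,
    recovering the plain correlation between DEBV and GEBV."""
    return reported_accuracy * math.sqrt(heritability)

# e.g. a reported accuracy of 0.55 under an assumed heritability of 0.36:
print(round(to_simple_correlation(0.55, 0.36), 2))  # 0.33
```

Putting both studies on the simple-correlation scale is what makes the Angus and Charolais figures directly comparable.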
Since the investments required to deploy optical access infrastructures are huge and apparently cannot be met easily in the near future, it is important to identify the most profitable deployment strategies and to assess the respective influence of various factors such as the selected technology, take-up rate and geographical data. The present study used simple techno-economic tools to assess various deployment strategies. The comparison of all the technologies and associated business models has been performed on the basis of three main metrics: the Discounted Payback Period (DPP), i.e. the time required to recover the cost of an investment using discounted cash flows; the Net Present Value (NPV), i.e. the value of an investment in a particular year, taking into consideration expected revenues minus the size of the initial investments; and the Internal Rate of Return (IRR), i.e. the discount rate which makes the NPV equal to zero.
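The three metrics just defined can be sketched in a few lines. The cash-flow profile below (a year-0 investment followed by revenues) is hypothetical, and the IRR is found by simple bisection under the assumption of a single sign change in the cash flows.

```python
def npv(rate, cash_flows):
    """Net present value of cash flows indexed by year, at a given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Discount rate making the NPV zero, by bisection; assumes the NPV
    is decreasing in the rate (one sign change in the cash flows)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def discounted_payback(rate, cash_flows):
    """First year in which the cumulative discounted cash flow turns positive."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf / (1 + rate) ** t
        if cumulative >= 0:
            return t
    return None  # investment never recovered over the horizon

flows = [-100.0, 30.0, 40.0, 50.0, 60.0]  # illustrative deployment profile
print(npv(0.08, flows))
print(irr(flows))
print(discounted_payback(0.08, flows))  # 3
```

For this profile, an 8% discount rate gives a positive NPV, the IRR lands just below 25%, and the discounted investment is recovered in year 3.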
We measure disclosure quality using the annual report prizes awarded by AGEFI and Euronext. First, we carried out a multiple correspondence analysis (MCA) after recoding the quantitative variables. The MCA allows us to study and highlight on the mapping not only the strong values of the variables but also the weak ones. It therefore permitted us to better characterise the two groups of firms in the sample, namely those with good disclosure quality and those with poor disclosure quality. The result of the MCA shows that firms with poor disclosure quality are characterised by high ownership concentration in the hands of families, a low proportion of outside directors on the board, little presence of institutional investors in the capital, no executive stock option plans and the presence of dual-class shares. On the other hand, we find that firms with good disclosure are not controlled by families and are characterised by a high proportion of outside directors on the board, ownership dispersion, and a significant presence of institutional investors in the capital. Second, using a binary logit regression, we find a negative association between ownership concentration and disclosure quality. One explanation for this relationship is that under high ownership concentration, controlling shareholders are less reliant on minority shareholders and may expropriate them; they therefore have fewer incentives to disclose information and prefer to retain it. The results also show a negative association between family control and disclosure quality. This is consistent with the assumption that family-controlled firms have little incentive to disclose information to the public because these families hold many of the senior staff positions; the demand for information in such companies is therefore relatively low, because the major investors already have that information.
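A negative logit coefficient of the kind reported above is conveniently read as an odds ratio: exponentiating the coefficient gives the multiplicative change in the odds of good disclosure per unit increase in the regressor. The coefficient value used below is hypothetical, not an estimate from this study.

```python
import math

def odds_ratio(coef):
    """Odds ratio implied by a binary logit coefficient."""
    return math.exp(coef)

# A hypothetical coefficient of -0.7 on ownership concentration would mean
# the odds of good disclosure are roughly halved per unit increase:
print(round(odds_ratio(-0.7), 2))  # 0.5
```

This is the standard interpretation device for logit output; the sign of the coefficient alone already gives the direction of the association discussed in the text.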
Finally, we find that disclosure quality is negatively associated with the presence of shares carrying double voting rights.
Our contributions are as follows. First, understanding why firms invest in disclosure transparency is useful not only for preparers and users of accounting information but also for regulators (Meek, Roberts and Gray, 1995). Family-controlled firms or firms with high capital concentration are less transparent than their counterparts: should regulatory authorities impose more disclosure requirements on them? Second, most tests of the relationship between ownership structure and disclosure quality have been conducted using US or UK firms, which may not face the same agency problems as French firms. La Porta et al. (1998) argue that the lack of strong legal protection and other external governance mechanisms in many continental countries (France, Germany, Belgium, etc.) increases the severity of agency problems between controlling insiders and outsiders. Using a sample of French listed firms, mostly controlled by families or individuals, our study extends previous research by looking at disclosure transparency in a context of weaker law enforcement and a lack of investor protection.
To our knowledge, this paper is the first attempt to model the impact of the NOME law on retail competition. Although we study a reform to be implemented in the French electricity market, we believe our analysis sheds light on the ongoing debate regarding the real extent of retail competition in electricity markets (see Defeuilly (2009) for an overview). Indeed, how to enhance competition at the downstream level remains a concern, as many papers point out. For instance, Joskow and Tirole (2006) show that retail competition has attractive welfare properties only if real-time consumption can be accurately measured. Green (2003) finds that an electricity retailer facing competition will be limited in its ability to pass on the costs of long-term contracts, should the spot price fall below the price in those contracts. This creates a distortion, as the optimal level of contracting for retailers is not attained. Von der Fehr and Hansen (2010) find that retailers exert market power by exploiting the reluctance of some customers to switch suppliers in Norway. More broadly, our paper contributes to the analysis of market power mitigation measures such as horizontal divestiture or capacity release programmes (Weigt et al., 2009).
life-cycle hypothesis, postulates that individuals adopt forward-looking, time-consistent (not
contradictory over time) behaviour and consume in accordance with their preferences, albeit constrained by their total resources over their entire lifetimes (Modigliani & Brumberg, 1954). Individuals use assets, as a reserve of deferred consumption, to smooth their consumption over the life cycle in keeping with their income profile (permanent income). It is also possible to study the optimal composition of these assets over time (Merton, 1969). This basic model’s initial message has moreover been enriched by considering other savings motives: precautionary savings to provide for future contingencies, especially income shocks (Kimball, 1993); and a bequest motive in terms of transferring an inheritance to offspring (Arrondel & Masson, 2006). This standard theory posits, at least implicitly, that individuals have knowledge of certain financial principles needed to make their decisions, in particular to determine their constraints, such as discounting, inflation and the calculation of interest, and that they have a certain amount of information on the financial and economic environment. Psychological economics research programmes on information, financial literacy and cognitive ability tend to show that this is not the case (Lusardi & Mitchell, 2014).
This is supported by the results from the Independent Traveller Survey (2007). Participants considered themselves travellers (46%) rather than tourists (23%). The majority of respondents (over 70%) stated that they had “non-leisure” purposes for their trip, such as exploring places and cultures, studying or working abroad, volunteering or learning a language. Over 80% of the young travellers rated exploring other cultures, increasing one’s knowledge and experiencing everyday life as important or very important motives for their trip; interacting with local people followed closely at 76%. Two motivational statements that saw a considerable difference between the surveys carried out in 2002 and 2007 were “learning more about myself”, with an increase of 12%, and “helping people and making a positive contribution” (+17%). From the research carried out by the WYSE Travel Confederation and ATLAS (2007), more than 80% of young travellers believed that their trip had influenced their lifestyle, with an overall majority stating that they were now travelling in a more “responsible manner”, being more conscious of the impact of their travel behaviour and giving more thought to ethical issues such as social justice and poverty. These attributes are linked to the Global Code of Ethics for Tourism and are applied by an array of researchers when defining and measuring responsible tourism. The above results are also supported by the more recent New Horizons III study (2013), carried out by the WYSE Travel Confederation in 2012. The results showed that travel is still a form of education and a way to gain knowledge that is required in a highly competitive recruitment marketplace (David Chapman, Director General of WYSETC, 2013).
[Insert table 4 here]
4. Size, Book-to-Market and Borrowing Ratio Sorted Portfolios

4.1. Database and Methodology. The aim of this sub-section is to present the methodology used to construct the leverage factor for our sample. In their empirical investigation, Ferguson and Shockley (2003) propose two measures to capture the missed beta risk. The first portfolio is based on the debt-to-equity ratio and is associated with relative leverage. The second, based on Altman’s Z-score, is used to express relative distress. As the authors mention, this distinction between relative leverage and relative distress is important. Nevertheless, our database enables us to construct only the leverage portfolio. Of our 636 stocks, only 341 have data on their borrowing ratio. Because of Datastream data limitations, the period covered is only from July 1984 to June 2001 (204 months).
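A schematic of the sort behind such a leverage factor: rank stocks by their debt-to-equity ratio and take the return spread between the high-leverage and low-leverage groups. The tercile breakpoints, equal weighting, tickers and numbers below are illustrative assumptions of this sketch; the paper's exact portfolio construction may differ.

```python
def leverage_factor_return(stocks):
    """stocks: list of (debt_to_equity, monthly_return) pairs. Returns the
    equal-weighted high-minus-low tercile return spread."""
    ranked = sorted(stocks, key=lambda s: s[0])  # ascending leverage
    k = len(ranked) // 3
    low = [r for _, r in ranked[:k]]    # least-levered tercile
    high = [r for _, r in ranked[-k:]]  # most-levered tercile
    return sum(high) / len(high) - sum(low) / len(low)

# Six hypothetical stocks: (debt-to-equity, monthly return)
sample = [(0.2, 0.010), (0.5, 0.015), (0.8, 0.012),
          (1.1, 0.020), (2.0, 0.030), (3.5, 0.040)]
print(leverage_factor_return(sample))  # 0.035 - 0.0125 = 0.0225
```

Repeating this each month over July 1984 to June 2001 would yield the leverage factor series used alongside the market, SMB and HML factors in the regressions.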
The significance and signs of the association with the profitability and size variables remain unchanged (p < 0.01). This shows that the levels of « profits » and « size » positively influence the probability that firms become or remain payers.
The coefficient of the principal independent variable, the « market-to-book ratio », is positive but less significant (0.2205, p < 0.05) in the full model, as in all the other tested models. However, the coefficient of the variable « ∆Debts » is highly significant in the full model, unlike in models (2) and (P2), where it was not. The association is negative, which means that the level of debt negatively affects the probability of dividend payment.