
https://lib.uliege.be https://matheo.uliege.be

Research Thesis

Author: Labasse, Sophie
Supervisor(s): Jurion, Bernard

Faculty: HEC-Ecole de gestion de l'Université de Liège

Degree: Master in Economics, general orientation, with a specialized focus in Economic Analysis and Policy

Academic year: 2019-2020

URI/URL: http://hdl.handle.net/2268.2/8889

Notice to users:

All documents placed in open access on the MatheO site are protected by copyright. In accordance with the principles stated by the "Budapest Open Access Initiative" (BOAI, 2002), users of the site may read, download, copy, transmit, print, search or link to the full text of these documents, crawl them for indexing, use them as data for software, or use them for any other lawful purpose (or any purpose provided for by copyright regulations). Any use of the documents for commercial purposes is strictly forbidden.

Furthermore, users undertake to respect the moral rights of the author, mainly the right to the integrity of the work and the right of attribution, in any use they make of the document. Thus, for example, when reproducing a document in part or in full, users shall cite the sources completely, as mentioned above. Any use not explicitly authorized above (such as, for example, modifying the document or summarizing it) requires the prior and express permission of the authors or their successors in title.


Public expenditure efficiency:

the case of education in the European Union

Supervisor
Bernard Jurion

Readers
Sergio Perelman
Dominique Lafontaine

Sophie Labasse

Master’s Degree Thesis in Economics, specialization in Economic Analysis and Policy

Academic year 2019/2020 - University of Liège


Acknowledgements

I would like to express my deep gratitude to Professor Bernard Jurion, my thesis supervisor, for his patient guidance, persistent encouragement and useful critiques of this research thesis.

I would like to express my very great appreciation to Professor Sergio Perelman for his advice, methodological help with data analysis and constructive recommendations.

I would also like to thank Professor Dominique Lafontaine for her enthusiastic encouragement about this research question.

My grateful thanks are also extended to Mr. Jerome Schoenmackers, who helped me solve data and econometrics issues, to Mr. Barnabé Walheer for his help in starting the data analysis and to Mrs. Valérie Quittre for her support in retrieving the PISA data.

Finally, I wish to thank my family for their support and encouragement throughout my studies.


Executive Summary

This thesis aims to analyze public expenditure efficiency in the education area at macro level and to explain cross-country differences in the European Union.

This work first estimates European Member States' public expenditure efficiency in education using Data Envelopment Analysis. In particular, it focuses on the pre-primary, primary, lower secondary and upper secondary education levels. In detail, it estimates four distinct efficiency frontiers that compare the resources employed by governments in education, Expenditure per student / GDP per capita, with four possible measures of educational performance, namely the PISA mean score 2018, early leavers from education, foreign language skills, and the employment rate of recent graduates.

Overall, the top spenders per student in relation to their economic potential do not necessarily correspond to the top performers, and vice versa. This means that countries are not equally efficient in the European Union. We find that there is scope for many countries to reach better levels of efficiency. In particular, countries can improve their efficiency either by reducing their public spending without diminishing the results, or by improving their performance in education without spending additional resources.

Moreover, the countries identified as the most efficient can set guidelines for reducing inefficiencies.

Next, we compare the countries in terms of efficiency and performance using aggregated scores. This method separates the countries that are efficient "just" because they spend little, and the countries that spend too much to reach high performance, from the countries that achieve high efficiency and high performance at the same time.

The last chapter seeks to identify macroeconomic variables that could explain the efficiency variations across countries that stand out in the previous results. The variables selected focus on three areas: the education system, teachers' salaries and other macroeconomic variables. The variables of interest will be analyzed in turn, and the global efficiency scores will be regressed on them using Ordinary Least Squares (OLS) estimation in order to estimate their effect on the efficiency of education public expenditure.

The resulting model produces statistically significant results, confirming part of the intuitive analysis. Nevertheless, some organizational aspects promoted by the European Union are found to have a negative impact on efficiency in terms of education public expenditure.

The overall conclusion is that each additional unit of expenditure should be compensated by a greater increase in performance. When this is not the case, efficiency can be improved by working on spending or on performance. For this purpose, each country has to conduct an in-depth analysis of its possibilities to decide which direction to take.

All in all, in order to reach the European Union's objectives in education while complying with budgetary regulations, Member States strongly need to ensure the efficiency of their public expenditure on education.1

1This work contains 19 842 words from the introduction to the conclusion.


Table of contents

1 Introduction 1

2 Literature review 2

3 Measuring efficiency 4

3.1 Methodology . . . 4

3.2 Application to education public expenditure in the European Union . . . . 5

3.2.1 Input measurement . . . 7

3.2.2 Output measurement . . . 9

3.3 Shortcomings . . . 13

3.4 Data preliminary analysis . . . 15

3.4.1 Public expenditure in the European Union . . . 15

3.4.2 Performance indicators . . . 17

3.4.3 Preliminary analysis conclusion . . . 21

3.5 Results . . . 21

3.5.1 DEA models . . . 21

3.5.2 Efficiency and performance . . . 28

4 Explaining cross-country differences in efficiency 31
4.1 Variables of interest and methodology . . . 31

4.2 Data analysis . . . 33

4.3 Regressions . . . 40

5 Conclusion 45
A Annexes 51
A.1 Constant Return to Scale . . . 51

A.2 Detailed description of the subjects assessed in PISA 2018 . . . 52

A.3 Public expenditure by function COFOG . . . 53

A.4 Public expenditure data : construction of the input variable . . . 54

A.5 PISA scores and short-term performance change . . . 55

A.6 Performance . . . 56

A.7 Performance VS Efficiency . . . 57

A.8 Risk of poverty and social exclusion indicator definitions . . . 58


List of Figures

1 DEA VRS frontier . . . 4

2 Evolution of total public expenditure in % of GDP in the EU 28 aggregate . . . 15
3 Functions' share in total public expenditure for the EU 28 aggregate . . . 15

4 Input variable Expenditure per student / GDP per capita detailed: 2000-2017 mean . . . 16

5 PISA Mean score 2018 Classification . . . 17

6 Global performance score πg . . . 20

7 DEA efficiency frontiers 1 and 2 . . . 22

8 DEA efficiency frontiers 3 and 4 . . . 23

9 Efficiency global scores θ_g, input and output oriented . . . 29

10 Efficiency VS Performance . . . 30

11 DEA VRS and CRS frontier . . . 51

List of Tables

1 PISA mean scores and short-term performance change from 2015 to 2018 . . . 18
2 Performance for other outcomes: 2013-2017 average . . . 19

3 Output variables: summary statistics . . . 21

4 DEA efficiency scores and rank : input oriented . . . 24

5 DEA efficiency scores and rank: output oriented . . . 24

6 Variables’ definition . . . 33

7 Data for the variables of interest . . . 34

8 Regressions with 28 observations . . . 43

9 Regressions with 140 observations . . . 44

10 EU 28 aggregate’s public expenditure by function in % of GDP . . . 53

11 EU 28 aggregate’s share of each function in total public expenditure . . . . 53

12 Public expenditure : 2000-2017 mean . . . 54

13 PISA scores 2015, 2018, detailed by subject and short-term performance change between 2015 and 2018 . . . 55

14 Performance indicators, normalized performance indicators and global performance score π_g: 2013-2017 average . . . 56

15 Performance VS Efficiency . . . 57


List of Acronyms

COFOG Classification of the Functions of Government
CRS Constant Return to Scale
DEA Data Envelopment Analysis
DMU Decision Making Unit
DWL Dead Weight Loss
EACEA Education, Audiovisual and Culture Executive Agency
EMU European Monetary Union
ESA European System of Accounts
EU European Union
FDH Free Disposal Hull
GDP Gross Domestic Product
ISCED International Standard Classification of Education
MCF Marginal Cost of Public Funds
OECD Organisation for Economic Co-operation and Development
OLS Ordinary Least Squares
PISA Programme for International Student Assessment
PPP Purchasing Power Parity
SFA Stochastic Frontier Analysis
VRS Variable Return to Scale


1 Introduction

The analysis of public expenditure efficiency compares the resources employed by the general government with the achievement of public service objectives, i.e. performance. Efficiency is thus achieved when the maximum output (performance) is produced with a given amount of input (resources). Similarly, efficiency is achieved when the minimum possible resources are used to produce a given output (performance).

The need to analyze public expenditure efficiency is twofold. First, it is important to control the increase in expenditure assiduously in the interest of budgetary discipline. Issues related to the sustainability of public finances, such as demographic trends (ageing populations) and globalization (mobile taxpayers), limit the scope for countries to increase public spending. In particular, EU Member States are required to comply with the Stability and Growth Pact. Second, the public sector is required to improve the public services delivered, which are highly correlated with the performance of the country's economy. In fact, more productive public services are key to improving welfare and potential growth.

Overall, it is important to ensure that resources are used as efficiently as possible in view of the growing pressure on public finances and the need to sustain growth.

Intensive search for efficiency in all government functions is essential, and this work looks at education in particular. Economic theory suggests that education is a source of human capital and economic growth, as productivity gains require an increasingly skilled workforce. In fact, education quantity (educational enrollment and attainment) as well as education quality are significant determinants of income and GDP growth. Also, as part of the Europe 2020 Strategy for smart, sustainable and inclusive growth set in 2010, the European Union draws our attention to the performance of education systems, which will make it possible to achieve a knowledge-based economy and assert European competitiveness (European Commission, 2010; Roth and Thum, 2010). In addition, education is one of the most important services delivered by governments in the EU, representing around 10% of Member States' public expenditure.

The efficiency of education public expenditure is hereafter measured by estimating efficiency frontiers with the non-parametric Data Envelopment Analysis (DEA) method. It compares the resources used to obtain certain outputs in education, from pre-primary to secondary level, in the European Union Member States. The particularity of this work lies in the estimation of distinct efficiency frontiers for each selected output indicator.

Then, this work attempts to explain cross-country differences in efficiency scores with environmental and organizational variables in an econometric analysis.

This thesis aims to analyze public expenditure efficiency in the education area at the macro level. First, after reviewing the related literature, it presents the theoretical framework used to measure efficiency, its application to education public expenditure and the possible shortcomings. Then, it presents a preliminary analysis based on hard data, the results of the DEA estimation and the comparison between efficiency and performance.

Finally, it intends to explain cross-country differences in efficiency scores by means of additional variables.


2 Literature review

International comparisons analyzing the efficiency of public expenditure through the estimation of efficiency frontiers are mainly made at country level.

In this framework, an alternative non-parametric approach to DEA, Free Disposal Hull (FDH), which differs slightly in its assumptions, and parametric methods such as Stochastic Frontier Analysis (SFA) are also used for efficiency analysis purposes. However, since this paper applies a non-parametric approach, the scope of the literature review is restricted to non-parametric and mixed methods.

First of all, some papers assess aggregate public expenditure efficiency, such as Afonso et al. (2005) and Afonso et al. (2010), the former with a sample of 23 industrialised countries and the latter with a sample of new EU Member States and emerging markets. They compute efficiency scores with FDH, combining many indicators of performance. Their main finding is diminishing marginal returns to public expenditure.

However, the vast majority of the literature on cross-country expenditure efficiency focuses on specific areas such as education and health care, among others, which represent a large part of every country's budget. In addition, Mandl et al. (2008) show that a "function-by-function approach" is more promising than aggregated versions, allowing for better-identified determinants and better-specified models.

Herrera and Pang (2005) applied both DEA and FDH to education and health sectors for 140 countries between 1996 and 2002. Also, they tried to explain cross-country differences in efficiency scores through a Tobit model. Afonso and Aubyn (2005) similarly used and compared DEA and FDH methods in the same sectors in a sample of OECD countries, concluding that DEA and FDH results are broadly comparable.

Cornille et al. (2017) and Eugène (2008) studied public expenditure efficiency in 3 or 4 public areas using similar methods with a focus on Belgium, which spends more than the EU average.

More recently, Dutu and Sicari (2016) used DEA and quantified efficiency improvements over time at the OECD level.

At EU level, Mandl et al. (2008) analyzed the efficiency and effectiveness of public expenditure in education and R&D areas and compared the results of 4 authors2 about the efficiency of education spending.

In its report on Public Finances in EMU, the European Commission (2016b) estimated efficiency scores separately for different education performance indicators using DEA.

In terms of education-centred papers, we can cite Afonso and Aubyn (2006), who studied the efficiency of secondary education by applying a two-stage procedure, regressing DEA scores on independent variables in a Tobit model, for a sample of OECD countries. They showed that inefficiency was related to GDP per capita and to adults' educational attainment. Other studies applying DEA and SFA to education expenditure include Sutherland et al. (2007) on primary and secondary education and Aubyn et al. (2009) on tertiary education efficiency.

2Clements (2002); Afonso and Aubyn (2006); Gonand et al. (2007) and Mattina/Gunnarson (2007).


One notes that some papers also analyze cross-country education public expenditure efficiency using micro data at school level such as Agasisti and Zoido (2015) for secondary schools in 30 industrialised countries or Cordero et al. (2017) for primary schools in a sample of EU countries3.

Overall, most studies show that important inefficiencies are at work.

3 Without focusing on public spending, education efficiency is analyzed much more precisely at the micro level, i.e. at the student or school level.


3 Measuring efficiency

3.1 Methodology

Data Envelopment Analysis originates from Farrell (1957) and was further developed by Charnes et al. in 1978. We review hereafter the intuitive basics of this approach4. DEA is a non-parametric linear programming method for evaluating the efficiency of a set of peer units, called Decision Making Units (DMUs), that convert inputs into outputs.

As it requires very few assumptions, DEA has been used since its introduction to evaluate the efficiency of a wide variety of DMUs, such as industries, hospitals, firms, countries, etc. In view of the following cross-country analysis of public expenditure efficiency, the DMUs here are countries.

Figure 1: DEA VRS frontier

Source: Illustration based on Ji and Lee (2010)

In practice, this method envelops the observed input-output vectors in order to estimate a "best practice frontier". This frontier connects the countries with the best input-output combinations to each other.

The shape of the efficiency frontier depends on the assumption made about returns to scale. Constant returns to scale (CRS)5 models assume that all units operate at their optimal scale, which is a rather strong assumption when talking about countries. This is why the following analysis assumes variable returns to scale (VRS), identifying the units that define the frontier from the country spending the fewest resources to the country obtaining the best results. This approach assumes convexity.

Accordingly, DEA estimates efficiency scores (called θ hereafter) by measuring countries' position relative to the frontier, considering the countries situated on the frontier as efficient. This measure is therefore entirely relative to the supposed efficiency of the countries situated on the frontier. In their Handbook on DEA, Cooper et al. (2011) define relative efficiency as follows: "A DMU is fully efficient if and only if the performance of other DMU does not show that some of its inputs or outputs can be improved without worsening some of its other inputs or outputs".

4For a more in-depth and analytical review of DEA, see for example Thanassoulis (2001) or Boussofiane et al. (1991).

5See Annex A.1


In addition, the DEA approach can be input- or output-oriented. Input-oriented models aim to minimize the inputs employed while keeping the level of outputs unchanged, whereas output-oriented models try to maximize outputs without requiring more inputs. The input-oriented efficiency score of a country A (Figure 1) is defined as the ratio of the frontier's optimal input to the country's actual input: θ_i = Input_I / Input_A. The output-oriented efficiency of country A is defined as the ratio of the country's actual output to the frontier's optimal output: θ_o = Output_A / Output_O.

Therefore, efficient countries get a θ score of 1 and countries inside the frontier get scores between 0 and 1, reflecting how close they are to the frontier in terms of inputs or outputs. Potential efficiency gains are then defined as 1 − θ. In Figure 1, countries B, C and D are on the frontier and thus classified as efficient relative to the sample. A is below the frontier. It has two ways to reach the frontier through efficiency gains: an input reduction of 1 − θ_i or an output increase of 1 − θ_o. Between these two extremes, there exist combinations of potential gains in terms of both reducing costs and improving performance.
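To make the linear programme behind these scores concrete, the sketch below sets up the input-oriented, variable-returns-to-scale DEA model for the single-input single-output case used later in this work. It is only an illustrative implementation under the assumptions stated in the comments; the data values are hypothetical placeholders, not the figures analyzed in this thesis.

# A minimal sketch (Python) of the input-oriented, variable-returns-to-scale DEA model
# described above, for the single-input single-output case. The data at the bottom are
# hypothetical placeholders, not the actual Eurostat or PISA figures.
import numpy as np
from scipy.optimize import linprog

def dea_vrs_input(x, y, o):
    """Input-oriented VRS efficiency score theta for DMU o.
    x, y: 1-D arrays with the single input and single output of all DMUs.
    Decision variables of the linear programme: [theta, lambda_1, ..., lambda_n]."""
    n = len(x)
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta
    A_ub = np.vstack([np.r_[-x[o], x],             # sum_j lambda_j x_j <= theta * x_o
                      np.r_[0.0, -y]])             # sum_j lambda_j y_j >= y_o
    b_ub = np.array([0.0, -y[o]])
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)   # sum_j lambda_j = 1 (VRS convexity)
    b_eq = np.array([1.0])
    bounds = [(None, None)] + [(0.0, None)] * n    # theta free, lambdas non-negative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[0]

# Illustrative data: input = expenditure per student / GDP per capita, output = PISA mean score.
x = np.array([0.22, 0.18, 0.25, 0.20])
y = np.array([500.0, 510.0, 495.0, 520.0])
print([round(dea_vrs_input(x, y, o), 3) for o in range(len(x))])  # efficient DMUs score 1

The output-oriented score can be obtained analogously by maximising an output expansion factor φ under the same convexity constraint and taking θ_o = 1/φ.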

Non-parametric approaches such as DEA have important advantages: they require few assumptions and allow multiple inputs and outputs to be considered simultaneously. However, the principal limitation of this method is the relative nature of the efficiency measure, which means that the results depend heavily on the quality of the sample, the data and the treatment of outliers.

3.2 Application to education public expenditure in the European Union

This work aims to analyze public expenditure efficiency in the education area in the European Union.

The units studied are the Member States of the European Union. EU Member States are required to follow common standards, rules and directives. Their admission is conditional on compliance with certain conditions for membership, best known as the Copenhagen Criteria, which set a series of requirements in terms of institutional stability, democracy, human rights, competition law and market economy, and the ability to join the political, economic and monetary union (European Commission, 2016a). Additional rules, part of the Stabilisation and Association Process, have been set for the accession of the Western Balkans. We can therefore assume a certain degree of homogeneity that allows a meaningful cross-country analysis. This analysis will also serve as an intuitive tool to measure the degree of convergence in terms of education within the "Union". Moreover, the European Union can count on a statistical office, Eurostat, which provides statistics enabling comparisons between Member States. This work considers data up to 2017 and accordingly analyzes the 28 Member States at that time, including the United Kingdom.

Also for the sake of homogeneity and the consistency of the cross-country analysis, the public expenditure efficiency analyzed will focus on the following education levels: pre-primary, primary, lower secondary and upper secondary, corresponding to levels 0, 1, 2 and 3 of the latest International Standard Classification of Education (ISCED 2011)6.

6 The International Standard Classification of Education (ISCED) is part of the United Nations International Family of Economic and Social Classifications. The purpose of these classifications is to make international data comparable. For education programmes, levels and related fields, ISCED is the reference classification. ISCED 2011 is a revision of ISCED 1997 (UNESCO, 2012).


In fact, the funding of these education levels is predominantly public (96.31% on average) throughout the European Union, and these levels cover most countries' compulsory education. In most Member States, compulsory education begins at the primary level. However, attending at least the last year of pre-primary is also compulsory in 13 Member States7 (Eurydice, 2017a). One should also note that there is considerably more disparity in tertiary education systems, in terms of funding or enrollment for instance.

In the DEA framework, public expenditure is said to be efficient if a target output is produced while spending the fewest resources possible, or if the input allocated to the provision of education produces the highest possible output. Indeed, the public sector must ensure that it allocates its resources efficiently in order to reduce public expenditure or to improve the quality of public services, or even to pursue both objectives simultaneously.

Fiscal consolidation policies have flourished in Europe since the crisis. Indeed, public spending led to high public debt from 2008 onwards, and EU Member States are subject to the Stability and Growth Pact, which limits the public deficit to no more than 3% of GDP and the public debt to no more than 60% of GDP. Moreover, the current problems linked to the ageing population in Europe or the growing mobility of taxpayers are additional risks for the sustainability of public finances. Within this framework, public expenditure must be reduced or at least controlled with the utmost precaution. While consolidation policies reduce domestic production in the short term, they can have a positive effect in the long term according to Nautet and Van Meensel (2011). In this context, it is recognized that reducing expenditure has more favourable effects than increasing taxation (Jurion, 2019).

The public sector is also required to improve the public services delivered, which are highly correlated with the performance of the country's economy. In fact, more productive public services such as education are key to improving welfare and potential growth. According to economic theory8, education is an important source of economic growth through several channels. It increases individual earnings, addresses distributional issues, and raises human capital and living standards. In detail, education quantity (educational enrollment and attainment) as well as education quality are required to improve productive capacity and enhance GDP growth. The Europe 2020 strategy puts the objective of improving education systems in Europe at the centre, in order to achieve a knowledge-based economy and to assert European competitiveness (European Commission, 2010; Roth and Thum, 2010). Moreover, education is one of the most important services delivered by governments in the EU, as affirmed by Afonso and Aubyn (2006), representing around 10% of Member States' public expenditure.

The following analysis will look at both the input orientation and the output orientation. In concrete terms, it will determine which countries achieve given results while spending less than others, and similar attention will be given to determining which countries obtain the best results for a given level of spending.


7Austria, Bulgaria, Czechia, Greece, Croatia, Cyprus, Latvia, Lithuania, Luxembourg, Hungary, the Netherlands, Poland and Finland.

8See Krueger and Lindahl (2001) and Hanushek and Wößmann (2007) for example.


Both orientations can be found in the literature, sometimes simultaneously. On the one hand, some authors9 consider that spending is something governments can control much more easily than education outcomes. The possible efficiency gains would therefore be obtained by reducing the resources allocated to education while keeping the results fixed.

On the other hand, other papers10 focus on output improvement, partly out of interest but also because educational expenditure is said to be productive, and several studies agree that this type of expenditure should not be reduced. Indeed, even in the framework of budgetary consolidation, in view of the contribution of education to the country's economy, it would be difficult to make savings in this area (Dembour, 2014). A reduction in education expenditure can have serious negative consequences on economic growth as well as on income equality and public investment. The focus should therefore be on improving performance in order to improve efficiency.

3.2.1 Input measurement

In general terms, we can define an input as what enters the production process of a certain output. In our case, inputs are the resources the government employs to deliver education as a public service.

Inputs can be measured in terms of physical units (number of teachers, average class size, availability of computers, number of teaching hours, etc.) or in monetary terms, as suggested by Eugène (2008). Pestieau (2009) explains the two production processes at work: (1) from financial spending to physical inputs and (2) from physical inputs to outputs. In this work, we choose to shortcut from spending to final outputs in order to assess the efficiency of public spending.

Among the literature, one can find two alternatives to compute education expenditure in monetary terms:

1. Expenditure as a share of GDP: Education expenditure as a share of GDP can reflect the effort a country puts into the provision of education relative to its economic potential. It also makes comparisons easier by removing the problem of units. However, economic potential clearly does not account for the effective number of recipients of education services. It can therefore lead to approximate results.

2. Expenditure per student: Education expenditure per student can be seen as more appropriate and precise for assessing expenditure efficiency, because this approach accounts for the demographic bias by taking into account the real number of education recipients in the country. However, education expenditure is mainly composed of teachers' salaries, which can vary greatly between Member States, in direct link with the economy's potential. This measure does not take these variations across countries into account. Some authors therefore use PPP11-adjusted expenditure per student. However, it does not fit here because it varies much more than our output indicators, which leads to misleading results in the DEA model.

9European Commission (2016b) for example.

10Afonso and Aubyn (2006); Mandl et al. (2008) for example.

11Purchasing Power Parity


In view of the options mentioned above, this analysis will take the following ratio as its input variable:

Expenditure per student / GDP per capita

First, it preserves the precision of the expenditure per student measure.

In addition, this measure accounts for disparities in teachers' salaries across countries by expressing expenditure in terms of GDP per capita12, which is a valid approximation of the average salary level13. In fact, education expenditure is mainly composed of teachers' salaries, among other costs, which are directly related to the potential of the economy. As shown by the Balassa-Samuelson effect, labour productivity tends to be higher in high-income countries, which consequently leads to higher wages and higher relative prices for non-tradable goods, mainly services (Herrera and Pang, 2005; Sutherland et al., 2007). For education, which is a labour-intensive activity, making international comparisons without taking wage disparities into account would make little sense, as countries where teachers are better paid would appear inefficient.

Accordingly, once price level differences are removed, this variable isolates the remaining differences in teachers' salary expenditure. In fact, if we assume that teachers perform equally well across the EU, the remaining wage disparities across countries can also reflect differences with regard to the average income of the rest of the population, to teachers' required qualifications and to the availability of other resources in schools, such as computers and classrooms (Sutherland et al., 2007).

Finally, relating expenditure to a measure of GDP reflects the effort a country makes for its education in relation to its economic potential, in addition to facilitating comparisons by removing the problem of units.

According to Afonso et al. (2010), it must be taken into account that this expenditure is financed by taxes when analyzing the efficiency of public expenditure. Therefore, the efficiency of the tax system would also play a role in the efficiency of public spending and the budgetary cost could underestimate the inefficiency.

The total cost would include the additional cost to society of raising an extra euro to finance government spending, including the dead weight loss14 (DWL) and administration and compliance costs. The Marginal Cost of Public Funds (MCF) theory suggests that the cost of increasing government spending is greater when government spending is already high (Jousten, 2018). In fact, the principal source of inefficiency in the taxation system, the DWL, increases more than proportionately with taxes. In addition, it increases with the size of the government.

The optimal level of spending and the optimal allocation of resources should thus be determined. This work will not, however, address this issue, assuming that taxation systems are equally efficient across the European Union. This issue could be the subject of a full analysis and can be put on the research agenda.

12 An alternative to the more widely used PPP-adjusted expenditure per student.

13 Under the assumption of a perfectly equal income distribution, GDP per capita can be interpreted as the average salary level.

14"Dead weight loss (or excess burden) of taxation is the individual loss in utility from paying taxes that exceeds the direct loss in utility originating from the tax payment itself" (Jousten, 2019).


Moreover, public expenditure efficiency obviously also depends on regulatory policies and on equity and distributive matters, which will not be covered here.

In practice, the input variable is constructed as follows: education expenditure is retrieved from the national accounts breaking down government expenditure by function (Classification of the Functions of Government, COFOG). The expenditure is in current euros. The sum of educational expenditure for ISCED levels 0 to 3 was then divided by the number of students for the corresponding levels of education. The ratio Expenditure/student was finally divided by GDP per capita, also in current euros15.

Finally, the mean over 2000-2017 was taken in order to account for the time lag. Indeed, we assume that education output depends on the education students receive throughout their whole schooling. In addition, an additional effort by the public authorities in the field of education does not directly affect the results, but sometimes takes several years to materialize.
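As an illustration of this construction, the following sketch assembles the ratio from three hypothetical yearly extracts (COFOG education expenditure for ISCED 0-3 in current euros, student numbers for the same levels, and GDP per capita). The file and column names are assumptions made for the example, not the actual Eurostat series identifiers.

# A minimal sketch (Python/pandas) of the input variable construction described above.
# File and column names are hypothetical.
import pandas as pd

exp = pd.read_csv("cofog_education_isced0_3.csv")   # columns: country, year, expenditure_eur
stu = pd.read_csv("students_isced0_3.csv")          # columns: country, year, students
gdp = pd.read_csv("gdp_per_capita.csv")             # columns: country, year, gdp_per_capita_eur

df = exp.merge(stu, on=["country", "year"]).merge(gdp, on=["country", "year"])
df = df[df["year"].between(2000, 2017)]

# Expenditure per student, then expressed relative to GDP per capita.
df["input_ratio"] = (df["expenditure_eur"] / df["students"]) / df["gdp_per_capita_eur"]

# 2000-2017 mean per country, to account for the time lag between spending and outcomes.
input_variable = df.groupby("country")["input_ratio"].mean()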

One last remark on this input measure: we only account for public expenditure. Indeed, some papers analyzing the efficiency of education take into account private expenditure, arguing that the source of expenditure cannot be distinguished in the results analyzed.

However, the reader should bear in mind that this work analyzes, above all, the efficiency of public expenditure. Although, in the context of education, it can be assumed that the results are affected by private household expenditure, in the first part of this work this effect is assumed to be negligible, given that the education levels chosen are financed by public funds at an average level of 96.31% in the Member States of the European Union.

3.2.2 Output measurement

Assessing the value of public services delivered by the government is not straightforward.

Indeed, public services are non-market by definition. They are provided for free or at a price that is not economically significant, for example, a nominal fee.

According to the national accounts framework (ESA 2010), the output of public services is measured as the sum of production costs, including intermediate costs, compensation of employees and consumption of fixed capital, plus other taxes (Eurostat, 2013). However, approaches based on the measurement of inputs, in addition to being unsuitable here for obvious methodological reasons, do not measure possible productivity gains nor reflect the qualitative differences across education systems, making international comparisons meaningless according to Schreyer (2010). With regard to education in particular, measures of the volume of output are now recommended and these methods are increasingly used. For example, the Eurostat (2016) Handbook states that the education output measure should reflect the sum of individual benefits and proposes to measure it as the sum of teaching received per student. In addition, in order to adjust this measure for the quality of education, the Handbook suggests taking education outcomes as quality adjustments.

A conceptual difference between outputs and outcomes needs to be clarified. In the simplest version, outputs refer to quantity criteria. In education, this can be student enrollment, educational attainment, the quantity of learning hours, etc. Outcomes, by contrast, reflect quality, i.e. the extent to which the objectives pursued are attained, such as learning achievements, test scores, employability, preparation for further studies, etc. (Sutherland et al., 2007; Mandl et al., 2008)

15 Data in euro are derived from transmitted national currency series using historic exchange rates and are suitable for comparison and aggregation (Eurostat, 2020).

Nonetheless, what is implemented for the purpose of national accounting is not suitable for this policy analysis. In order to assess education public expenditure efficiency and follow the intuitive approach presented above, which links education to growth and welfare objectives, relying only on output measures is too limited. This work will thus head towards a more comprehensive approach, better reflecting the educational contribution to human capital, by taking outcomes into account.

In order to measure education outcomes, relevant performance indicators16 are identified and explained below. Four performance indicators have been chosen in view of what is used in the literature and the availability of harmonized data.

The four selected indicators to assess education expenditure outcomes are: the PISA mean score 2018, the share of early leavers from education and training, the foreign languages skills and the employment rate of recent graduates.

DEA framework

Most of the literature starts from the premise that the objectives pursued by public expenditure on education are multiple and substitutable. This assumption is challenged here.

Indeed, in DEA analyses the vast majority of authors integrate several outputs into a composite performance indicator: firstly, because integrating several outputs with a single input would cause major bias in the results; secondly, because the use of a composite indicator allows trade-offs and complementarities between the different indicators. Indeed, the more indicators are included at the same time, the more countries appear efficient. Countries can thus appear in their best light and be classified as efficient when they perform well in only one of the indicators, or in the indicator that weighs the most.

Nonetheless, in the following analysis it is assumed that public expenditure pursues one objective, namely performance. However, what determines education performance is not unanimously agreed upon and it can be considered from different angles. Hence, this work will test the DEA model with each indicator individually. To be clear, 4 different models will be estimated through DEA, relating education Expenditure per student / GDP per capita to each of the 4 outputs, i.e. performance indicators. As a result, 4 efficiency frontiers and 4 efficiency scores will be estimated for each country in the sample: θ_PISA, θ_Early leavers, θ_Foreign languages and θ_Employment.

Hence, this approach identifies countries' public expenditure efficiency in education with four possible measures of performance, in 4 separate single-input single-output estimations. It allows decision makers to identify in which performance area they are less efficient and which measure of performance they want to prioritize.
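Under the same illustrative assumptions as the earlier sketches, the four single-input single-output frontiers could be estimated by calling the DEA routine once per indicator. The DataFrame named outputs (one column per indicator, indexed by country) is hypothetical, and indicators where a lower value is better, such as early leavers, are assumed to have been transformed beforehand so that "more is better" holds.

# One DEA frontier per performance indicator (sketch; reuses dea_vrs_input and
# input_variable from the previous sketches, and a hypothetical `outputs` DataFrame).
import pandas as pd

indicators = ["pisa_2018", "early_leavers_inv", "foreign_languages", "employment_recent_grads"]
scores = pd.DataFrame(index=outputs.index, columns=indicators, dtype=float)

x = input_variable.loc[outputs.index].to_numpy()     # same input for all four frontiers
for ind in indicators:
    y = outputs[ind].to_numpy()
    scores[ind] = [dea_vrs_input(x, y, o) for o in range(len(x))]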

16 Please note that, in the rest of the analysis, the following words will be used interchangeably: output, outcome, performance and results. The reason is simple: our performance indicators, used to measure outcomes, are in fact the outputs in the framework of our DEA model.


It is only later, when countries' efficiency is compared for each indicator, that we will construct a global efficiency score that will allow us to rank countries and make comparisons between efficiency and performance.

We define θ_g as the global efficiency score, where the PISA efficiency score θ_PISA accounts for 50% and the other 50% is divided equally between the other three indicators' efficiency scores: θ_Early leavers, θ_Foreign languages and θ_Employment. The simple reason for this weighting is that the PISA mean score 2018 measures three skills (reading, mathematics and science), while the other indicators each measure a single aspect.

θ_g = (3 × θ_PISA + θ_Early leavers + θ_Foreign languages + θ_Employment) / 6

In this more aggregated view, this work will attempt to cluster efficient countries in relation to their performance. In this case, we will compare the global efficiency score θ_g with a global performance score π_g constructed as follows: after being standardised (see Box 1), the performance indicators are aggregated in the same way as the efficiency scores (50% PISA, 50% other indicators). This comparison will highlight how efficiency relates to performance and help identify the countries in the sample that are both efficient and effective.

π_g = (3 × SI_PISA + SI_Early leavers + SI_Foreign languages + SI_Employment) / 6

where SI denotes the standardised performance indicator for each outcome indicator in a given country.

Box 1 : Standardisation

In this case, standardisation is used to aggregate indicator values expressed in different units. The method employed is the following:

SI_j = (I_j − Ī) / σ_I

where:

SI_j is the standardised performance indicator for country j.

I_j is the performance indicator of country j before standardisation (namely its PISA mean score 2018, its share of early leavers, its average number of languages learned at ISCED 2 or its employment rate of recent graduates).

Ī is the arithmetic mean of the different countries' performance for this indicator.

σ_I is the standard deviation of the different countries' performance for this indicator.

Source: Eugène (2008)
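A short sketch of this aggregation step is given below, reusing the hypothetical scores and outputs DataFrames from the previous sketches; the only substantive choices are the 3-1-1-1 weights and the z-standardisation defined in Box 1.

# Global efficiency score theta_g and global performance score pi_g (sketch).
weights = {"pisa_2018": 3, "early_leavers_inv": 1,
           "foreign_languages": 1, "employment_recent_grads": 1}

theta_g = sum(scores[k] * w for k, w in weights.items()) / 6

standardised = (outputs - outputs.mean()) / outputs.std()   # SI_j = (I_j - mean) / std dev
pi_g = sum(standardised[k] * w for k, w in weights.items()) / 6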


Indicators in detail

1. PISA mean score 2018.

The Programme for International Student Assessment (PISA) is a survey organized by the Organisation for Economic Co-operation and Development (OECD) every 3 years since 2000 with the aim of testing the skills of 15-year-old students in reading, science and mathematics17.

First of all, it is an internationally harmonized measure of students' competences that is particularly representative. Indeed, sampling proceeds in two stages. The programme first selects, in each country, 150 schools based on criteria such as location and level of education. Thereafter, about 42 15-year-old students are selected per school. In 2018, the test was conducted in the 37 OECD member countries and in 42 partner countries, with each country assessing between 4,000 and 8,000 students. The very good representativeness of this study therefore makes it a forward-looking tool for national and international policy analysis (Schleicher, 2019).

Furthermore, PISA emphasizes that the test does not simply assess students’ performance in specific areas, but evaluates "the extent to which students have acquired the knowledge and skills required for a full participation in economic and social life" (OECD, 2019b).

This concept is defined as literacy, i.e. the ability of students to apply their theoretical knowledge in everyday life and in different contexts. Measurement therefore includes not only the assessment of academic knowledge in certain subjects, but also the ability to mobilize this knowledge outside school, as well as the willingness to learn and learning strategies. In addition, the PISA test places the assessment of theoretical knowledge in a broader context, paying attention to the importance of environmental factors. It identifies exogenous factors that can affect students' performance, such as socio-economic conditions, cultural and family background, and the national education system.

In the following analysis, the simple average of each country's 2018 scores in reading, mathematics and science is taken, named hereafter the PISA mean score 2018. This measure is relevant here and, according to Sutherland et al. (2007), equally weighted scores account for approximately all the variance. Data are retrieved from the OECD PISA database.

2. Early leavers from education and training.

This indicator is measured as a percentage of the population aged 18 to 24.

In detail, it measures the share of the 18-24 year-old population with at most lower secondary education who have not attended any further education or training programme during the 4 weeks preceding the survey. It is based on a 2013-2017 average and data are retrieved from the Eurostat database.

Reducing the proportion of early school leavers to less than 10% is one of the targets of the Europe 2020 strategy. According to the European Union, upper secondary school is nowadays the minimum education level to be attained by a European citizen. This indicator measures the extent to which students actually complete their secondary education. The European Union justifies this intuition by arguing that completing upper secondary level significantly reduces the risk of poverty and social exclusion. In addition, attaining this level facilitates entry into the labour market and allows access to tertiary education.

17 See Annex A.2 for detailed definitions.

3. Foreign languages skills.

This indicator is measured by the average number of foreign languages learned at ISCED level 2 (lower secondary).

Foreign languages are one of the subjects taught in secondary school, besides mathematics, science and reading, which are already assessed through PISA. In addition, learning foreign languages is part of the construction of a "European Education Area", alongside similar education performance objectives such as learning mobility. Moreover, economists have found positive earnings and employment returns to foreign language learning, and its benefits for European integration are not negligible (European Commission, 2019).

The measure is taken at ISCED level 2 to be more consistent when comparing performance in this indicator with performance in the other subjects assessed by PISA. It is based on a 2013-2017 average and data are retrieved from the Eurostat database.

4. Employment rate of recent graduates.

This indicator measures the employment rate of 20-34 year-olds who have attained at least upper secondary education and completed their highest level of education 1, 2 or 3 years before the survey.

Again within the Europe 2020 framework, the importance of education for employability and easy access to the labour market is emphasized. The target for 2020 is to reach an employment rate of recent graduates of 80% in the European Union. To this end, this indicator provides information on the transition between education and the labour market. In addition, it reflects the match between school programmes and labour market requirements, assessing whether education fits the economy's actual needs.

One notes here that what is measured is the employment rate of recent graduates having completed upper secondary school or a higher level. In fact, we assume that this measure reflects the performance of all the previous education levels (ISCED 0, 1, 2 and 3) as a whole, and the extent to which education up to upper secondary level allows students to pursue tertiary education or to be employed. Indeed, the employment rate of recent graduates obviously also depends on the pursuit and quality of further education, such as tertiary education, which is not covered here. However, we can assume that the pursuit and completion of tertiary education is conditional on compulsory education of good quality. The indicator is based on a 2013-2017 average and data are retrieved from the Eurostat database.

3.3 Shortcomings

Non-parametric models such as DEA obviously present some weaknesses that have to be taken into account.

Sensitivity: As said earlier, the relative nature of DEA efficiency score estimation makes non-parametric models strongly sensitive to measurement errors, statistical noise and outliers (Sutherland et al., 2007). However, measurement errors are likely to be less pronounced in a homogeneous sample such as the EU Member States.

Omitted variable bias: Small-sample bias resulting from the omission of relevant countries with best practices could underestimate inefficiency. This issue worsens when the number of inputs or outputs increases, which is not the case in our single-input single-output DEA model. However, bias can also arise from the omission of an important output factor. Conversely, this would overestimate the inefficiency scores of the countries that perform better in the omitted performance indicator. The issue is that measuring such factors is often impossible, for example when they relate to non-cognitive variables or when they differ across countries. It is evident that this work is a simplification of reality, measuring only a few parameters separately to analyze the complex reality of education public expenditure efficiency.

Input measurement: DEA models can lead to misleading results when input variables vary much more than the output variables they are compared with. However, the use of the variable Expenditure per student / GDP per capita controls the extreme variations observed in Expenditure per student compared with the small degree of variation observed in our output indicators.

Time lag: The results of the policies put in place take years to materialize. As a result, today's education reflects the various reforms implemented in recent years. This is why the 2000-2017 average expenditure is taken into account. The time lag is also a problem on the output side: education performance today may rely on the continuous effort made in the different performance indicators. We took the average of the output indicators over 5 years in order to smooth and simplify international comparisons.

Conclusions: Public expenditure efficiency analysis using cross-country data is less consistent when discussing overall spending, according to Mandl et al. (2008). This is why our model deals with a specific area rather than with overall spending. An analysis focused on a specific expenditure makes it possible to formulate concrete policy recommendations.

Weighting: The choice was made to estimate four different efficiency frontiers so as not to allow the trade-offs and complementarities implied by the use of a composite indicator, which could affect the efficiency measure of public expenditure. In this sense, efficiency scores are not affected by any weighting choice. It is important to note that global scores are constructed, in this chapter, for the efficiency-performance comparison in order to cluster and identify countries reaching high efficiency and high performance at the same time. Obviously, the weightings used to construct the global scores are a matter of subjective choice. However, the global efficiency score is not meant to be interpreted per se. Instead, it is discussed only in comparison with the global performance score, which is aggregated in the same way. The aggregation choice thus affects both the efficiency and the performance score equally, allowing the comparison. In this sense, as neutrally as possible, it was chosen to aggregate the different outcome indicators with equal weights and to take the PISA mean score instead of the scores in reading, math and science separately, in order to account for approximately all the variance. Moreover, in the literature it is not uncommon for PISA scores to account for 50% of a composite indicator alongside other indicators, which is another rationale for this weighting choice.


3.4 Data preliminary analysis

3.4.1 Public expenditure in the European Union

Overall, European Member States'18 total public expenditure represented 45.7% of GDP in 2017. Over the years, one can observe a fairly stable evolution until 2007, when spending represented 44.6% of GDP, followed by a rise to 50% in 2009 as a result of the economic and financial crisis (Figure 2). In fact, the 2008 financial crisis compelled European governments to intervene in order to avoid an economic recession caused by the fall in domestic production. To do so, governments chose to increase public spending to support economic sectors in difficulty, particularly the banking sector. In addition, governments had to bear the burden of the debt. From 2011 onwards, most European Member States managed to reverse the trend.

Over the 2000-2017 period, the biggest spenders have been France, Denmark, Finland, Belgium, Sweden and Austria, with average total public expenditure above 50% of GDP. On the other hand, the smallest spenders as a share of GDP over the same period have been Lithuania and Romania, with public expenditure below 37% of GDP.

Figure 2: Evolution of total public expenditure in % of GDP in the EU 28 aggregate

Source : Eurostat Database (See Annex A.3)

Figure 3: Functions’ share in total public expenditure for the EU 28 aggregate

Source : Calculations based on Eurostat Data (See Annex A.3)

18 The EU 28 aggregate is constructed by aggregating national data. When expressed as a ratio, the EU 28 aggregate is the population-weighted arithmetic mean of national ratios (Eurostat, 2020).


When looking at the share of each function in total public expenditure (Figure 3), one can see that the largest functions are Social protection, General public services, Health and Education. From 2000 to 2017, one observes an increase in Social protection spending of nearly 3 percentage points, an increase in Health spending of 2 percentage points and a decrease in General public services spending of around 3 percentage points.

For its part, Education spending accounted for 10.07% of EU 28 public expenditure in 2017. It has diminished a little since 2000 but remains substantial. In its Education and Training Monitor 2019, the European Commission notes that public investment in education has hardly taken demographic trends into account over time. Given current European Union demographics, namely the overall decline of the school-aged population, the stable trend in education spending over time results, de facto, in an increase in expenditure per student. The need to monitor the efficiency of education public expenditure is therefore quite clear. When education expenditure is limited to the education levels of interest, namely pre-primary, primary and secondary, it accounts for 7.22% of EU Member States' total public expenditure out of the 10.07% attributed to education as a whole in 2017.

Figure 4: Input variable Expenditure per student / GDP per capita detailed: 2000-2017 mean

Source : Calculations based on Eurostat Data (See Annex A.4)

As presented above, the input variable selected for the DEA model is Expenditure per student / GDP per capita. Expenditure per student is limited to the pre-primary to secondary levels for the rest of the analysis. In Figure 4, countries are sorted from the highest to the lowest level of Expenditure per student / GDP per capita on the left, while the right-hand part details the levels of Expenditure per student (€) and GDP per capita (€) separately.

In terms of expenditure per student in euro, one can almost cluster expenditure per student by European region: Nordic countries spend the most and Eastern19 countries the least, while Western and Mediterranean countries share the middle of the list.

19 Eastern countries here refer to Central, Eastern and Baltic European countries.

This is because expenditure per student is highly correlated with living standards, i.e. GDP per capita. However, taking the ratio between the two measures removes the differences in GDP per capita between countries and measures each country's effort in relation to its economic potential. Therefore, one can see that, as a share of GDP per capita, the countries that spend the most per student are Greece, Slovenia and Portugal.

In particular, one can see that Greece presents the highest ratio. In fact, Greece spends nearly €100 more per student than Germany while having a GDP per capita almost half as large. This trend is also seen in other Mediterranean countries, except Spain, and in some Eastern countries such as Slovenia, Latvia and Estonia, which present ratios higher than the EU average. Belgium spends 19.82% of GDP per capita per student, which is higher than the Netherlands and Luxembourg and just behind France.

Yet Luxembourg and Finland, which present high expenditure per student in euro, are situated near the EU average in terms of Expenditure per student / GDP per capita. This is because of their particularly high GDP per capita. Finally, the bottom of the list is composed of the Anglo-Saxon countries, Romania and Slovakia.

3.4.2 Performance indicators

Figure 5: PISA mean score 2018 classification
Source: Calculations based on OECD PISA Data (See Annex A.5)

When looking at the first output, the PISA mean score 2018, one can see that Estonia, Finland and Poland present the best scores, meaning that their students show the strongest ability to apply their knowledge in reading, mathematics and science to a variety of situations.

We observe that Belgium performs well overall but that there are wide disparities between its Communities: the Flemish Community ranks 4th, while the French and German Communities stand in the middle of the classification. PISA scores are broken down by Community for Belgium, which is far from being the case for all the data used here; that is why the three Belgian Communities are reported in addition to the countries of the sample. The EU and OECD averages are also added to provide context for the results.

The OECD average differs from the EU 28 average because it only takes into account the 37 OECD Member countries. Five European Union Member States are OECD Partner countries rather than Members: Bulgaria, Croatia, Cyprus, Malta and Romania. Conversely, 14 OECD Members lie outside the EU: Japan, Korea, Canada, New Zealand, Switzerland, Australia, the United States, Norway, Israel, Turkey, Chile, Mexico, Colombia and Iceland. The EU Member States that are OECD Partners rather than Members are among the least performing countries, along with Greece, Slovakia and Luxembourg.

Besides the 2018 mean scores, the short-term changes between the 2015 and 2018 mean scores are striking for some countries (Table 1). Overall, deteriorations clearly outnumber improvements. Bulgaria, for instance, shows an overall decrease of 9 points, with every subject score falling markedly: reading (-12), math (-5), science (-10). Germany's scores also dropped in every subject. Croatia, Finland, Italy and Luxembourg's mean scores decreased by 10, 6, 8 and 7 points respectively, owing to falls in reading and science. Greece, the Netherlands and Slovenia's mean scores diminished mainly because of falling reading scores, while Spain's decrease is mainly due to its science score. On the positive side, Poland improved in every subject (+6 in reading, +11 in math, +10 in science), which enabled it to reach the top 3 of the classification in 2018. Sweden and Slovakia also improved in every subject, each gaining 7 points in mean score. Beyond these large variations, the other countries show more stable scores. Belgium's mean score, for instance, decreased by 3 points as a result of the following variations: -6 in reading, +1 in math and -3 in science.
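As a quick consistency check, and assuming, as the figures above suggest, that the overall mean score is the unweighted average of the three subject scores, the overall change is simply the average of the three subject changes. For Belgium, for example,
\[
\frac{(-6) + 1 + (-3)}{3} \;\approx\; -2.7,
\]
which rounds to the 3-point decrease reported in Table 1; the same holds for Bulgaria ($(-12 - 5 - 10)/3 = -9$) and Poland ($(6 + 11 + 10)/3 = 9$).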

Table 1: PISA mean scores and short-term performance change from 2015 to 2018

Country              Mean score 2018   Performance change
OECD                 488.358           -1
EU 28                484.772           -4
Austria              491.038           -1
Belgium              499.903           -3
Bulgaria             426.653           -10
Croatia              471.852           -10
Cyprus               438.016           -2
Czechia              495.493           +5
Denmark              501.055           -3
Estonia              525.513           +1
Finland              516.422           -6
France               493.664           -2
Germany              500.437           -8
Greece               453.472           -5
Hungary              479.327           +5
Ireland              504.608           -4
Italy                476.962           -8
Latvia               487.359           +1
Lithuania            479.711           +4
Luxembourg           476.725           -7
Malta                458.850           -4
Netherlands          502.466           -5
Poland               512.846           +9
Portugal             491.989           -5
Romania              427.794           -4
Slovakia             469.399           +7
Slovenia             503.750           -6
Spain                482.322           -9
Sweden               502.539           +7
United Kingdom       503.455           +4
Flemish Community    509.818           -6
French Community     494.077           -2
German Community     486.474           -6

Source: Calculations based on OECD PISA Data

(See Annex A.5 for detailed scores and performance change per subject)


Table 2: Performance for other outcomes: 2013-2017 average

Country        1 - Early leavers (%)   Rank   Avg. foreign languages   Rank   Employment rate (%)   Rank
EU 28          88.92                   -      1.67                     -      77.38                 -
Austria        92.78                   9      1.1                      25     88.16                 4
Belgium        90.28                   18     1.28                     23     80.14                 12
Bulgaria       86.94                   23     1.2                      24     71.48                 22
Croatia        96.8                    1      1.58                     18     63.42                 26
Cyprus         92.56                   11     1.9                      9      68.94                 24
Czechia        93.92                   5      1.58                     19     84.1                  8
Denmark        92.08                   13     1.82                     11     82.82                 9
Estonia        88.88                   20     2                        5      79.34                 13
Finland        91.18                   15     2.2                      2      77.34                 17
France         90.92                   16     1.6                      17     74.34                 20
Germany        90.04                   19     1.3                      22     90.22                 2
Greece         92.16                   12     1.94                     8      46.14                 28
Hungary        88.04                   22     1                        26     80.56                 11
Ireland        93.36                   6      1                        27     78.82                 14
Italy          85.14                   24     1.96                     7      50.02                 27
Latvia         90.64                   17     1.7                      16     78.68                 15
Lithuania      94.42                   4      1.8                      13     80.92                 10
Luxembourg     93.14                   7      2.56                     1      84.3                  7
Malta          80.24                   27     2.14                     3      94.24                 1
Netherlands    91.74                   14     2.04                     4      88.18                 3
Poland         94.7                    3      1.8                      14     77.7                  16
Portugal       84.68                   25     1.84                     10     72.78                 21
Romania        81.78                   26     2                        6      69.36                 23
Slovakia       92.66                   10     1.78                     15     75.86                 18
Slovenia       95.5                    2      1.44                     21     74.74                 19
Spain          79.44                   28     1.48                     20     66.02                 25
Sweden         92.82                   8      1.82                     12     86.16                 5
UK             88.64                   21     0.96                     28     84.76                 6

Source: Eurostat Database (See Annex A.6). Countries in red: OECD Partners, not Members.

Table 2 presents countries' performance in the other output variables selected.

One must note that the Early leavers variable is presented here as 1 - Early leavers, i.e. the share of 18-24 year-olds who did not leave education and training before completing upper secondary education. Since the output-oriented DEA model aims to maximize the output variables, the variable must be expressed so that higher values reflect better performance, hence the use of 1 - Early leavers.
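Written out, with both quantities expressed in percent (the output symbol below is only shorthand introduced here):
\[
\text{Output}^{\text{EL}}_{i} \;=\; 100 \;-\; \text{Early leavers}_{i},
\]
so that a higher value of this output corresponds to a larger share of 18-24 year-olds completing upper secondary education, which is what the output-oriented model seeks to maximize.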

Concerning this output, Croatia, Slovenia and Poland present the lowest shares (around 5%) of early leavers from education and training in the sample, while around 20% of 18-24 year-olds leave education before completing upper secondary in Malta, Romania and Spain. Correspondingly, the EU 2020 target of lowering the share of early leavers below 10% was not achieved, over the 2013-2017 average, by 9 Member States: Bulgaria, Estonia, Hungary, Italy, Portugal and the United Kingdom, in addition to the above-mentioned Malta, Romania and Spain.
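This count can be verified directly from Table 2. The short Python sketch below, with the completion shares copied from the table, is only an illustrative check (not the code used in the thesis): it flags the Member States whose 1 - Early leavers share falls below 90%, i.e. whose early leavers rate exceeds the 10% target.

# Illustrative check of the EU 2020 early-leavers benchmark against Table 2.
# The table reports 1 - Early leavers in percent (2013-2017 averages), so the
# target (< 10% early leavers) is met whenever that share is at least 90%.
completion_share = {
    "Austria": 92.78, "Belgium": 90.28, "Bulgaria": 86.94, "Croatia": 96.8,
    "Cyprus": 92.56, "Czechia": 93.92, "Denmark": 92.08, "Estonia": 88.88,
    "Finland": 91.18, "France": 90.92, "Germany": 90.04, "Greece": 92.16,
    "Hungary": 88.04, "Ireland": 93.36, "Italy": 85.14, "Latvia": 90.64,
    "Lithuania": 94.42, "Luxembourg": 93.14, "Malta": 80.24,
    "Netherlands": 91.74, "Poland": 94.7, "Portugal": 84.68, "Romania": 81.78,
    "Slovakia": 92.66, "Slovenia": 95.5, "Spain": 79.44, "Sweden": 92.82,
    "United Kingdom": 88.64,
}

# Member States that did not reach the target over the 2013-2017 average.
missed_target = sorted(country for country, share in completion_share.items()
                       if share < 90)
print(len(missed_target), missed_target)
# 9 ['Bulgaria', 'Estonia', 'Hungary', 'Italy', 'Malta', 'Portugal',
#    'Romania', 'Spain', 'United Kingdom']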

Looking at the average number of foreign languages learned in lower secondary, the top performers are Luxembourg, Finland and Malta. Luxembourg and Malta justify their
