Annex 3.A2. Details of the modelling tools used and impacts of modelling assumptions

This annex provides background information on the mathematical tools used in this study, as well as the most relevant numerical simplifications adopted to make the problem numerically tractable with current calculation tools.

The first part of this annex provides a more complete description of the GenX (Optimal Electricity Generation Expansion) model that has been used to perform the quantitative analysis in this study. The second part describes the most significant assumptions, implicit or explicit, and discusses their potential qualitative impact on the results.

GenX simulation tool

GenX is a power system simulation model developed by researchers at the Institute for Data, Systems, and Society of the Massachusetts Institute of Technology (MIT). GenX focuses on the operation and planning of electrical power systems and is used in a wide range of planning and operational situations. These include long-term generation and transmission expansion planning, and short-term operational simulations. It determines investment decisions on electricity resource assets that, if operated optimally, can fulfil the electricity load of a particular system at minimum cost, subject to defined operational constraints such as ramping and cycling. By changing certain parameters, the tool can also model the effect of different energy policies, such as carbon prices, carbon emission targets, renewable standards, network tariffs, subsidies, and other policy or regulatory decisions on the equilibrium capacity mix. Like other similar tools for power system analysis, GenX makes the implicit assumption of perfect market competition and risk-neutral agents.

From a centralised planning perspective, this model can help to determine the future investments that will be needed to supply future electricity demand at minimum cost. In the context of liberalised markets, the model can be used by regulators for indicative electric power system planning in order to establish a long-term vision of where efficient markets with increasing penetration of low-carbon generation, storage and demand-side resources would lead.

The model has been designed to carry out the following types of analyses:

1. the optimal expansion plan (centralised utility or decision maker);

2. the optimal investments (independent power producers);

3. the economic feasibility and the economic impact of new technologies (e.g. storage, demand-side management [DSM], distributed energy resources [DERs], VRE, advanced nuclear);

4. the equilibrium effect of any given policy (carbon caps, carbon taxes, renewables standards).

Model description

The GenX model was developed at MIT to improve classical methods by incorporating operational flexibility, inter-temporality and network representation in power system analyses. At the same time, the model development has been motivated by the need to expand from electricity generation capacity expansion to electricity resources capacity expansion, including options such as DERs, combined heat and power systems, demand-side resources and energy storage as well as new technology designs.

GenX uses mathematical optimisation techniques such as linear programming (LP) and mixed-integer programming (MIP) to solve for optimal investment and operational decisions. LP covers the case in which the objective function f(x) is linear and the constraints are specified using only linear equalities and inequalities. MIP covers linear programmes in which some or all variables are constrained to take integer values, which makes the problem considerably more difficult to solve than a regular LP.
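The distinction can be illustrated with a deliberately small capacity-choice problem, formulated below in Python with the PuLP library, first as an LP relaxation and then as a MIP with whole-unit build decisions. The technologies, unit size, costs and the minimum-build requirement are purely illustrative assumptions, not values from GenX or from this study.

```python
# Toy illustration of LP vs MIP (not the GenX formulation). All numbers are
# hypothetical: two plant types in 300 MW units, a 500 MW peak load.
import pulp

def solve(integer_units: bool) -> dict:
    prob = pulp.LpProblem("toy_capacity_choice", pulp.LpMinimize)
    cat = "Integer" if integer_units else "Continuous"
    # Number of 300 MW units of each technology to build
    n_ccgt = pulp.LpVariable("n_ccgt", lowBound=0, cat=cat)
    n_ocgt = pulp.LpVariable("n_ocgt", lowBound=0, cat=cat)
    # Objective: annualised cost per unit (illustrative, EUR/yr)
    prob += 30e6 * n_ccgt + 18e6 * n_ocgt
    # Installed capacity must cover a 500 MW peak load
    prob += 300 * n_ccgt + 300 * n_ocgt >= 500
    # An illustrative minimum-build requirement for CCGT
    prob += n_ccgt >= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {"ccgt": n_ccgt.value(), "ocgt": n_ocgt.value(),
            "cost": pulp.value(prob.objective)}

print("LP relaxation:", solve(integer_units=False))  # fractional units allowed
print("MIP          :", solve(integer_units=True))   # whole units only
```

The LP relaxation builds a fractional number of units and its cost is a lower bound on the MIP cost; requiring integer builds raises the optimal cost and makes the problem harder to solve.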

The GenX formulation tractably incorporates the operational details of thermal units and unit commitment constraints into capacity planning optimisation on a multi-zonal, multi-level framework, subject to renewable mandates and CO2 emissions constraints, while also allowing interactions between the electricity and heat markets.

The formulation uses representations of technological systems that are currently under development, generation unit clustering, and transmission and distribution (T&D) power flow approximations to tractably co-optimise seven interlinked power system decision layers:

• capacity expansion planning;

• optimal generation dispatch;

• T&D power flows;

• T&D expansion;

• operating reserves requirements;

• clustered unit commitment operations;

• interactions between electricity and heat markets.

This formulation makes it possible to model the impacts of operational flexibility on capacity planning in a single, monolithic optimisation problem that would otherwise have to be solved in separate stages (Palmintier, 2013; Sisternes, 2014). At the same time, it is possible to model network interactions as well as heat-electricity market synergies within regions.

Formally, the model can be divided into two components: a first component where electricity resource building decisions are made (capacity expansion); and a second component incorporating the operational decisions associated with the different electricity resources that have been built in the first stage (unit commitment and economic dispatch); see Figure 50. The particularity of GenX is that its cost function includes not only capital cost and variable operating costs, but also the costs of a more intense cycling regime, subject to an array of technical constraints that guarantee the technical feasibility of the modelled system.
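What "co-optimising" investment and operation in a single problem means can be sketched in simplified form as follows: capacity and hourly dispatch variables share one objective and one set of constraints, so operational limitations feed back directly into the build decision. All numbers below are illustrative assumptions; the actual GenX formulation adds many further features (cycling costs, reserves, networks, policy constraints).

```python
# Minimal sketch (assumed numbers, not the GenX formulation) of co-optimising
# capacity expansion and hourly dispatch in a single linear programme.
import pulp

hours = range(4)                          # four illustrative representative hours
weight = 8760 / len(hours)                # each sample hour stands for part of a year
load = [60, 80, 100, 70]                  # MW, hypothetical demand
solar_cf = [0.0, 0.5, 0.8, 0.2]           # per-unit solar availability

prob = pulp.LpProblem("capacity_and_dispatch", pulp.LpMinimize)
cap_gas = pulp.LpVariable("cap_gas", lowBound=0)      # MW of gas capacity built
cap_sol = pulp.LpVariable("cap_solar", lowBound=0)    # MW of solar capacity built
gen_gas = [pulp.LpVariable(f"gas_{t}", lowBound=0) for t in hours]
gen_sol = [pulp.LpVariable(f"sol_{t}", lowBound=0) for t in hours]

# Annualised capital costs (EUR/MW-yr, illustrative) plus weighted fuel costs (EUR/MWh)
prob += 90_000 * cap_gas + 60_000 * cap_sol + \
        weight * pulp.lpSum(50 * g for g in gen_gas)

for t in hours:
    prob += gen_gas[t] + gen_sol[t] == load[t]        # hourly energy balance
    prob += gen_gas[t] <= cap_gas                     # thermal output limited by build
    prob += gen_sol[t] <= solar_cf[t] * cap_sol       # solar limited by resource

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("Gas built  :", round(cap_gas.value(), 1), "MW")
print("Solar built:", round(cap_sol.value(), 1), "MW")
```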

Figure 50. Schematic representation of the GenX model

(Figure: a long-term/investment layer covering capacity and network expansion and a short-term/operation layer covering unit commitment and dispatch, solved jointly as capacity expansion with clustered UC constraints [MILP].)

Available centralised generation resources include: combined cycle gas turbines, open cycle gas combustion turbines, pulverised coal, nuclear, wind, solar PV and hydroelectric resources. Other possible centralised generation resources include geothermal, biomass, solar thermal, pumped hydro storage and possible thermal storage for solar thermal or nuclear units. Eligible DERs include: solar PV, electrochemical storage, thermal storage, flexible demands and batteries. Capacities can be imposed exogenously on the system (brownfield development) or determined endogenously by the model (greenfield approach).

The capacity of all generating units and DERs is represented as a continuous decision variable, except for large thermal units, which may be represented as integer plant clusters by region if desired (Palmintier, 2013). Units of incremental capacity for all DERs, large-scale wind and solar, and open cycle gas turbines (OCGTs) are all small enough that this abstraction is minor, while larger thermal units can be represented as integer clusters if the discrete nature (or lumpiness) of these investment decisions is considered important. Operational decisions for generating units and DERs are continuous decisions, with the exception of cycling decisions for large thermal units, which can be represented as either continuous decisions or integer decisions (e.g. how many units within each cluster of similar plants to turn on or off) as desired. Integer clustering of similar plants entails the simplifying assumption that all plants within a cluster are identical and that all committed units within a cluster are operating at the same power output level. Treating commitment decisions as continuous variables further relaxes the problem and allows commitment of fractions of a plant. Both options introduce modest approximation errors but significantly improve computational performance, enabling greater detail in other features, such as network complexity. Since on/off decisions for individual DERs and even OCGTs are fast and occur in small increments, representing them as continuous decisions is also a minor approximation.
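The clustering idea can be sketched as follows: a single integer variable per hour counts how many identical units of a cluster are online, and cluster generation is bounded between the minimum stable load and the rated output of the committed units. All parameters below (unit size, minimum load fraction, residual load, costs) are hypothetical, and GenX's actual unit-commitment constraints are richer (start-up costs, ramping, minimum up/down times).

```python
# Hedged sketch of integer clustering: all units in a cluster are assumed identical,
# and one integer variable per hour counts how many are online.
import pulp

hours = range(6)
residual_load = [250, 400, 900, 1100, 700, 300]    # MW after VRE, hypothetical
n_units, unit_cap, min_frac = 3, 400, 0.4          # cluster of three 400 MW units

prob = pulp.LpProblem("clustered_uc", pulp.LpMinimize)
online = [pulp.LpVariable(f"online_{t}", lowBound=0, upBound=n_units, cat="Integer")
          for t in hours]
gen = [pulp.LpVariable(f"gen_{t}", lowBound=0) for t in hours]
ocgt = [pulp.LpVariable(f"ocgt_{t}", lowBound=0) for t in hours]   # flexible backstop

for t in hours:
    prob += gen[t] + ocgt[t] == residual_load[t]          # energy balance
    prob += gen[t] <= unit_cap * online[t]                # max output of online units
    prob += gen[t] >= min_frac * unit_cap * online[t]     # min stable load of online units

# Cheap cluster energy, expensive OCGT energy, plus a no-load cost per online unit
prob += pulp.lpSum(25 * gen[t] + 90 * ocgt[t] + 1_000 * online[t] for t in hours)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(v.value()) for v in online])   # committed units per hour
```

Relaxing `online` to a continuous variable (fractions of a plant committed) turns the problem back into an LP, at the cost of the modest approximation discussed above.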

Capacity investment and operational decisions are indexed across each node or region in the system, enabling the model to select the optimal location of capacity investments and operations in each location. Thus the model balances the different economies of scale at different voltage levels on the one hand, with the differential impacts or benefits of location at different regions or voltage levels on the other – a key advantage over other models.

Power flows between regions and voltage levels are modelled as simple transmission flows. Maximum power flows across these interfaces capture key network constraints. Losses are a function of power flows between voltage levels or regions, implemented as a piecewise linear approximation of quadratic resistive losses. Distribution network reinforcement costs associated with changes in peak power injections or withdrawals at each node are represented as linear or piecewise linear functions parameterised by experiments and optimal power flow modelling.
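As a small illustration of the loss treatment, the snippet below approximates quadratic resistive losses on an interface by linear interpolation between a few breakpoints. The loss coefficient, interface rating and number of segments are assumptions chosen only to show the shape of the approximation, not parameters used in the study.

```python
# Piecewise linear approximation of quadratic resistive losses (assumed parameters).
import numpy as np

r = 5e-5                                    # illustrative loss coefficient (MW lost per MW^2)
f_max = 1000.0                              # MW, maximum interface flow
breakpoints = np.linspace(0.0, f_max, 5)    # four linear segments
loss_at_bp = r * breakpoints**2             # exact quadratic losses at the breakpoints

def piecewise_losses(flow: float) -> float:
    """Linear interpolation between breakpoints approximates r * flow**2."""
    return float(np.interp(flow, breakpoints, loss_at_bp))

for flow in (100.0, 400.0, 900.0):
    exact = r * flow**2
    print(f"flow={flow:6.0f} MW  exact={exact:6.2f} MW  piecewise={piecewise_losses(flow):6.2f} MW")
```

Because the loss curve is convex, the chord-based approximation slightly overestimates losses within each segment; adding more segments reduces this error at the cost of more variables in the optimisation.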

Reserve requirements are modelled as day-ahead commitments of capacity to regulation and spinning/non-spinning contingency reserves, to capture the commitment of capacity necessary to robustly resolve short-term uncertainty in load and renewable energy forecasts and power plant or transmission network failures.

The time interval evaluated in this methodology is one year, divided into one-hour periods and representing a future year (e.g. 2050). In that sense, the formulation is static because its objective is not to determine when investments should take place over time, but rather to produce a snapshot of the minimum-cost generation capacity mix under some pre-specified future conditions.

The dimensionality challenge

Capacity expansion analysis presents a dimensionality challenge due to the exponentially increasing number of decision variables as the time, operational detail and network representations of the model are extended. Figure 21 presents the GenX simulation domain, showing the different levels of detail available for each modelling dimension. These range from, in the simplest case, a single node with economic dispatch, no inter-temporality considerations and time blocks only, to a full network representation with AC power flow calculations, unit commitment and reserves considered independently for each power plant, in a multi-year context.

It is worth noting that not all features can be turned on at the same time. Computational limitations entail trade-offs along each dimension, so more detail in one area typically means greater abstraction in others. The configurable nature of the model allows the selection of the most relevant features for each specific project.

Description of some calculation assumptions and their impact on results

This section provides additional details on some assumptions used in this study to make the problem numerically tractable with current calculation tools: i) risk-neutrality and perfect market competition; ii) modelling of the T&D grid; iii) perfect forecast of future demand and VRE generation; and iv) representation of a single year.

Risk-neutral agent – perfect market competition

Annualised investment costs for all technologies considered in the present study are calculated using a common real discount rate of 7%. This value can be considered a good proxy for the cost of capital of generating companies in OECD countries. The same discount rate is used for all generating and storage technologies; this implicitly assumes that the level of risk is the same for all investments in generation, is constant throughout the lifetime of a generator and does not change with the scenario analysed. Such an approach, also referred to as a “risk-neutral agent” assumption, is commonly adopted in most analyses of the electricity system found in the research literature.
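For reference, converting an overnight cost into an annualised investment cost at a 7% real discount rate follows the standard capital recovery factor (annuity) formula. The overnight cost and lifetime in the sketch below are illustrative placeholders rather than values used in the study.

```python
# Worked illustration of annualising an overnight cost at a 7% real discount rate.
def capital_recovery_factor(rate: float, years: int) -> float:
    """Annuity factor converting an upfront cost into equal annual payments."""
    return rate / (1.0 - (1.0 + rate) ** -years)

overnight_cost = 4500.0   # USD/kW, hypothetical
lifetime = 60             # years, hypothetical economic lifetime
rate = 0.07               # common real discount rate used in the study

annualised = overnight_cost * capital_recovery_factor(rate, lifetime)
print(f"Annualised investment cost: {annualised:.0f} USD/kW per year")
```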

However, this discount rate captures only some of the elements considered by generating companies in the investment process. When assessing the financial feasibility of a project, the expected cash flows should be discounted using an appropriate rate that takes into account not only the cost of capital for the company undertaking the project (the weighted average cost of capital [WACC], for example) but also the level of risk of the specific project and its correlation with the company’s existing assets and liabilities. This specific risk depends on the technology and, within a given technology, changes considerably with each phase of the project. For instance, there is little doubt that a nuclear power project faces a much higher financial risk than a CCGT in the construction phase, owing to the significant uncertainties on the overall overnight costs and on the duration of the construction. By contrast, the good operational track record of most NPPs, combined with lower marginal costs, means that market revenues from an NPP are less volatile than those of a CCGT, thus implying a lower risk for this technology during the operational phase.

As seen in the results, the level and volatility of electricity prices, and consequently the market risk for all generating technologies, change significantly depending on the scenario analysed. However, the discount rate has not been adjusted to reflect such changes in the level of financial risk. Also, no attempt was made to evaluate how different policies aimed at curbing carbon emissions may shift risk from the electric power producer to other entities, thereby affecting the discount rate that should be applied to each generation technology.

For example, granting a specific technology a fixed price for its electricity generation (e.g. by means of a long-term contract, perhaps obtained competitively through auctions) would significantly lower its market risk, and this should thus be reflected in a lower discount rate.

Finally, decisions on investments and power plant dispatch are modelled assuming perfect market competition, without considering possible market manipulation from different actors.

Modelling transmission and distribution grids

The single-node approach, often referred to as a “copper plate approach”, is commonly used in economic analysis and optimisation of power systems. Transmission and distribution grids are not modelled, implicitly assuming that electricity can be transferred from generators to customers without physical limitations, bottlenecks or losses.

The present study refines this approach by considering two separate regions, which are linked by an interconnection of a given net transfer capacity. Power exchanges between the two regions are limited by the maximum capacity of the interconnection, and no transmission losses are considered between the two regions. Each region is represented as a single node, without internal transmission constraints or losses; a copper plate approach is thus taken within each region.
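A minimal two-zone dispatch sketch of this treatment is given below: each zone is balanced as a copper plate, and the only network constraint is the bound on the signed flow over the interconnection. Loads, capacities, variable costs and the net transfer capacity value are invented for illustration.

```python
# Hedged two-zone sketch: copper plate within each zone, an NTC limit between them.
import pulp

ntc = 500.0                                     # MW net transfer capacity (illustrative)
load = {"A": 1200.0, "B": 800.0}                # MW demand per zone
cap = {"A": 900.0, "B": 1500.0}                 # MW installed generation per zone
var_cost = {"A": 30.0, "B": 60.0}               # EUR/MWh variable cost per zone

prob = pulp.LpProblem("two_zone_dispatch", pulp.LpMinimize)
gen = {z: pulp.LpVariable(f"gen_{z}", lowBound=0, upBound=cap[z]) for z in load}
flow_ab = pulp.LpVariable("flow_A_to_B", lowBound=-ntc, upBound=ntc)   # signed flow

prob += pulp.lpSum(var_cost[z] * gen[z] for z in load)     # minimise dispatch cost
prob += gen["A"] - flow_ab == load["A"]                    # zone A balance (lossless link)
prob += gen["B"] + flow_ab == load["B"]                    # zone B balance (lossless link)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({z: gen[z].value() for z in load}, "flow A->B:", flow_ab.value())
```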

Taking into account the geographical distribution of both generation and load within each region would have added another constraint into the optimisation process and thus have led to higher generation costs in all scenarios. The level of these constraints and the additional costs are likely to be higher for the scenarios featuring high shares of VRE for the following two reasons:

1. electricity is more likely to be transported over longer distances, as the locations that maximise VRE generation are not necessarily close to the load centres. Transmission losses are therefore more significant in scenarios with high levels of VRE resources;

2. the geographical concentration of VRE resources and the fact that their generation tends to be highly correlated within the same geographical areas mean that the risk of congestion in the transmission grid increases with VRE penetration levels.1

Perfect forecast

The optimisation of the generation mix and the operational dispatch of all resources are based on perfect foresight of the future load, of the future generation levels of VRE, and of the operation of all other power units. Commitment of all generation resources and the charging/discharging patterns of storage capabilities are therefore optimised with full knowledge of future conditions and provide the maximal value for the system. This is clearly different from real-world experience, where operational decisions are made under uncertainty and with limited knowledge of the future, which inevitably leads to non-optimal choices and to a sub-optimal use of resources – in particular for storage plants. Considering this effect, and thus modelling choices and optimisation strategies taken under uncertainty, would certainly increase generation costs in all scenarios; most likely, the scenarios characterised by higher uncertainties on the residual demand and by a larger use of storage capabilities, such as those with a larger share of VRE, would see the largest cost increases.

Only one year – no stochastic representation of inter-annual variability

The generation mix has been optimised based on the data collected for a single year, 2015; the data used includes the level and shape of electricity demand, the realised generation or load factors from renewable resources such as wind, solar PV and hydroelectric run-of-the-river resources, water inflows for hydroelectric reservoirs, etc. A different set of input data, derived from a different year for example, would lead to different outcomes in terms of optimal generation mix as well as in generation costs. A more robust assessment of the optimal generation mix would have required the analysis of hundreds of representative scenarios encompassing several years of different demand patterns, and for each year dozens of different scenarios representing the stochastic variability from variable resources. This kind of analysis, however, would be incompatible with the limits of current calculation tools as well as with the available resources for this study.

These limitations should be kept in mind when interpreting the outcomes presented here.

Considering a larger sample of years in terms of load and different regimes for variable resources would have led to a different optimal generation mix and to an increase in generation costs for all scenarios. The resulting generation mix would be more robust, i.e. capable of satisfying the demand over a wider range of possible situations, but not optimal for each specific year. Clearly, the difference in the overall generation structure and the cost increase are likely to be more significant for the scenarios featuring prominent levels of VRE, whose generation may vary substantially across different calendar years. However, the simpler approach taken for this study still captures most of the relevant phenomena associated with the integration of variable resources and allows for a consistent comparison of different scenarios.

1. The needs in terms of grid reinforcement associated with VRE deployment are mostly captured by grid costs, i.e. by the cost of building and operating a larger and more complex transmission and distribution grid. However, there is also an impact on the generation structure, which should be captured and integrated in the profile costs.

A quantitative estimate of these effects has been made by Nagl et al. (2013). The study compares the optimal generation mix and the total generation costs of a system optimised under a deterministic or a stochastic representation of the RES generation pattern. The authors conclude that, compared to a deterministic analysis, taking into account the stochastic availability of RES resources increases the total costs of generation and, symmetrically, reduces the value of electricity generated by RES. The increase in total generation costs is almost linear until the penetration level of renewables reaches 70%, after which it becomes more significant.2

2. A cost increase of 1.6 EUR/MWh (i.e. about 2.2% of the total generation cost) is reported for a RES penetration level of 50%. At an 80% RES penetration level, the cost difference is estimated at 3.6 EUR/MWh, or 4% of the total generation costs, and it reaches 14.2 EUR/MWh, i.e. 12.3% of the generation costs, when a 95% target of RES generation is achieved.