3. Updates of nutrition guidelines and food governance policies
Most national nutritional guidelines remain incompatible with the 2°C objective (Behrens et al. 2017). The United States' and Australia's nutritional guidelines are particularly GHG-intensive, as their overall adoption would increase GHG emissions compared to the 2050 projection (Ritchie, Reay, and Higgins 2018). Four countries (Brazil, Sweden, Qatar, and Germany) have nevertheless introduced sustainability objectives into their official recommendations (Fischer et al. 2016). In its new nutritional recommendations, France has also integrated environmental issues, encouraging consumers to eat local, seasonal and, if possible, organic food. Meat reduction is not explicitly associated with an environmental objective, but replacing meat with pulses (dried legumes) is recommended (Santé Publique France 2019). In France, local authorities are progressively taking up food issues and introducing GHG reduction targets. In 2017, for example, the agglomeration community Bordeaux-métropole was the first in France to have a sustainable food governance advisory council. Other communities are following this initiative, such as Montpellier, Nantes, Greater Lyon, and the city of Paris, which has unveiled its sustainable food strategy and is preparing the creation of its own food governance body.
To conduct the bibliographic research, we used the keywords outlined in Table 2 and applied them to several academic journal databases (WOS, Scopus, Econlit, Food Science, Elsevier, Sage, Wiley, Springer, Cairn, HAL, Open Edition, MDPI), as well as to Google Scholar. A total of 157 publications are included in this review, 14 of which address topics indirectly related to market-based SFSCs or to sustainability (i.e., publications discussing topics such as new indicators of wealth, the organic movement, community gardens, and participatory guarantee systems, among others). The remaining 143 publications are described in Table 3 and deal primarily with either SFSC conception and characterisation/methodology (61 publications) or SFSC sustainability, including governance issues within and around SFSCs (82 publications). We focused on recent papers, one-third of which were published during or after 2018. We searched for articles from 2000 onwards, considering that the concept of SFSCs began to emerge around that time. The large majority of the reviewed publications use a qualitative case-study approach (78 publications), mainly based on interviews and/or participant observation. Cases vary both in their number (from 1 to more than 100) and in the selected unit of analysis. Generally, cases were defined as one or more of the following: SFSC products (e.g., milk, apple food chain), SFSC actors (e.g., farmers, consumers, policy-makers), SFSC initiatives (e.g., farmers' markets, CSA partnerships), or SFSC areas (e.g., regions, territories, cities). Other publications comprise quantitative surveys (with large samples), theoretical articles, literature reviews, expert reports, entire books or book chapters, and policy briefs. SFSCs have predominantly been studied from a social science perspective, while very few studies have been published in the biotechnical sciences or in multidisciplinary work combining both social and biotechnical sciences.
developed by Rosenzweig & Wolpin (1993) concerning the dynamic investment in the asset “bullock”.
Self-assessed food insecurity:
Instead of looking at actual changes in consumption, some authors addressed the issue of self-reported food insecurity. After all, in many areas of the social sciences, the perception that people have of a phenomenon can often be as important as its actually observed consequences. Such a change in perspective can be found, for instance, in Headey (2011), who uses the Gallup World Poll and finds that fewer people actually reported food insecurity in the world in 2007/2008 than in 2005/2006. The detrimental impact of the high food prices was partially offset by the high economic growth rates observed throughout the developing world: large and populous countries managed to limit the price hikes domestically and also benefitted from dynamic growth. Similarly, Verpoorten et al. (2013) analyzed the evolution of self-reported food insecurity in 18 Sub-Saharan African countries between 2005 and 2008 using the Afrobarometer data. The study concluded that although higher average prices were associated with an increased incidence of reported food insecurity over the previous 12 months, they were also associated with a lower probability of reporting a severe food insecurity status. Self-reported food insecurity variables can also be used as explanatory variables: Akter and Basher (2014), for example, found that a self-assessed dummy for having experienced acute food shortage in 2007 significantly lowered household expenditure growth between 2007 and 2010 in Bangladesh.
example, sees GDP as an imperfect indicator because improvements in the quality of life, which do not show up in material consumption, do not increase GDP. To take other determinants of well-being into account, various kinds of "quality of life" indices have been developed in recent years. The most influential indicator is the Human Development Index (HDI), introduced by the United Nations Development Programme in its first annual Human Development Report in 1990. The ideas of Nobel Prize winner Amartya Sen were influential in the development of this indicator. The HDI combines normalized measures of life expectancy, knowledge (literacy and educational attainment) and living standard (GDP per capita in PPP US$). The use of Purchasing Power Parity (PPP) accounts for countries' different price levels and converts the data into a common currency. PPP can be a better indicator of living standards, especially for less-developed countries, because it compensates for the weakness of local currencies in world markets. Yet, as even the HDI's information value is limited, further research is being done on measuring "quality of life", or even "happiness", which is assumed to correspond to the freedom to make personal choices. Today, the new indices are becoming tools for judging the true wealth of nations, but due to limited data availability, especially over time, they complement rather than replace GDP as a measure of a country's well-being. The major advantages of using GDP per capita as an indicator of the standard of living are that it is measured frequently and widely. Furthermore, the technical definitions used within GDP are relatively consistent between countries.
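The aggregation described above can be made concrete with a small sketch. The code below follows the classic (pre-2010) HDI construction, in which each dimension index is min-max normalized against fixed goalposts and the three indices are averaged; the goalpost values and input figures are illustrative assumptions, not official UNDP parameters.

```python
import math

# Illustrative sketch of the classic HDI construction: min-max normalize
# each dimension, then take the arithmetic mean of the three indices.
# Goalposts and inputs below are assumed values for illustration only.

def normalize(value, lo, hi):
    """Min-max normalization of a value onto [0, 1]."""
    return (value - lo) / (hi - lo)

def hdi(life_expectancy, education_index, gdp_pc_ppp,
        life_goalposts=(25.0, 85.0), gdp_goalposts=(100.0, 40000.0)):
    """Arithmetic mean of three normalized dimension indices.
    Income is logged, reflecting diminishing returns to income."""
    life_idx = normalize(life_expectancy, *life_goalposts)
    gdp_idx = normalize(math.log(gdp_pc_ppp),
                        math.log(gdp_goalposts[0]),
                        math.log(gdp_goalposts[1]))
    return (life_idx + education_index + gdp_idx) / 3.0

# A hypothetical country: 70-year life expectancy, education index 0.8,
# GDP per capita of 10,000 PPP US$:
print(round(hdi(70.0, 0.8, 10000.0), 3))  # → 0.773
```

Note how the log transform compresses income differences at the top of the scale, which is one reason HDI rankings diverge from pure GDP-per-capita rankings.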
buildings located in northern Pakistan may need more heating. However, there are cities, such as Quetta in Balochistan, where both heating and cooling are required due to high seasonal variations and extreme temperatures. Household energy consumption in Pakistan has risen mainly because of the growing number of users and the expansion of the electricity network into many areas. Advances in technology and rising income levels tend to increase ownership of electric and electronic household appliances, which also increases a household's energy consumption. The Government of Pakistan has made some efforts to reduce household electricity consumption; one of them was a campaign of free distribution of Compact Fluorescent Lamps (CFLs) to replace incandescent bulbs (IBs). The prices of CFLs were also reduced to expedite their adoption in the domestic sector. Using fans for ventilation is very common across the country. Pakistan produces around 8 million fans annually, and the fan manufacturing sector employs thousands of workers. There are 250 fan manufacturers in Pakistan, and 99 percent of them are located in the province of Punjab. In recent years, the public and private sectors, together with the World Bank, have worked to introduce and promote energy-efficient fans. The Pakistan Energy Label (PEL) for the fan industry was created as a certification for energy-saving fans, and by 2017, 13 manufacturers had started to produce energy-labelled fans. With 60 million fans installed in Punjab, replacing them with energy-saving models that consume 40 percent less energy offers the potential to save a large amount of energy.
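The scale of the savings claim above can be checked with back-of-the-envelope arithmetic. The fan count (60 million) and the 40 percent saving are taken from the text; the per-fan wattage and annual operating hours below are assumptions for illustration, not measured values.

```python
# Rough estimate of the savings potential from replacing Punjab's installed
# fan stock with energy-efficient models.
# Assumed: a conventional ceiling fan draws ~100 W and runs ~8 h/day for
# ~240 days a year. Both figures are illustrative assumptions.

FANS_INSTALLED = 60_000_000   # fans installed in Punjab (from the text)
CONVENTIONAL_WATTS = 100      # assumed draw of a conventional fan (W)
SAVINGS_SHARE = 0.40          # efficient fans use 40% less energy (from the text)
HOURS_PER_YEAR = 8 * 240      # assumed annual operating hours

annual_use_gwh = FANS_INSTALLED * CONVENTIONAL_WATTS * HOURS_PER_YEAR / 1e9
annual_savings_gwh = annual_use_gwh * SAVINGS_SHARE

print(f"Current fan consumption: {annual_use_gwh:,.0f} GWh/year")   # 11,520 GWh
print(f"Potential savings:       {annual_savings_gwh:,.0f} GWh/year")  # 4,608 GWh
```

Even under these rough assumptions, the implied saving is several terawatt-hours per year, which indicates why the fan-replacement program was considered worthwhile.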
The evolution of this regime has led to a corporate-environmental food regime (Friedmann 2016), driven by multinational firms and developing within a neoliberal, media-driven form of capitalism (Allaire and Daviron, 2008). This new regime developed from the end of the 1970s and was characterized by the withdrawal of States, the financialization of agri-food markets, and a political economy of standards driven by multinational firms' strategies, partially integrating some environmental issues (Loconto and Fouilleux, 2014; Fouilleux and Goulet, 2012). It was also characterized by the growing importance of ICTs and the promotion of new biotechnologies, in a world more fragmented in terms of political control (with the emergence of new countries in the global food market, such as Brazil and China). The 2008 crisis showed the limits of this food regime and triggered many debates on agri-food models and the possible ways in which the international food regime might evolve: 1) the pursuit of the corporate-environmental regime, strengthening the private management of commodities (partially framed by public safeguards set up on a global scale), the development of "global value chains" targeted at various quality standards, and a new agrarian capitalism; 2) the emergence of a civic food regime around international negotiations between governments, firms, the scientific community and social movements, recognizing the diversity of agri-food models, their possible contributions to public goods, the legitimacy of national or regional food sovereignty, and the importance of agro-ecology and local know-how; 3) the "return" to a more fragmented international regulation that gives a role back to the State (bilateral agreements between States, negotiations between States and multinational firms, protectionism, etc.), with a diversity of trajectories and combinations of agri-food models according to national policies (and the hegemony of China?).
When we simulate a model structured to eliminate complexities of the economic system, using timing and a discount rate similar to those assumed by the IPCC, we find that our net present value cost of achieving a 2°C target is identical to the IPCC's median estimate (2.2%). Notably, this is more than twice the cost estimated in the Stern Review. This may reflect, in part, that the Stern Review is now 10 years old, and the studies it reviewed likely assumed a global policy starting in 2010 or earlier. By our estimate, had a global policy started in 2010, the cost would have been 8% lower than in our base policy runs with a 2020 start. Unfortunately, while international negotiations have made progress in securing abatement commitments from countries, those commitments are insufficient to drive emissions down, which we find an optimal policy would do immediately upon implementation. Hence, if we are able to eventually get on an optimal path of emissions abatement toward the 2°C target, it will likely not be until 2030 or later, further increasing the costs. Our estimate is that delaying the start to 2030 would increase the cost by another 14%.
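The intuition behind delay raising the net present value cost can be sketched with a toy calculation: postponing the start means the same cumulative abatement must be achieved over fewer years, and because abatement cost is convex in annual effort, the steeper path costs more even after discounting. All figures below (discount rate, cost convexity, cumulative target) are illustrative assumptions, not the paper's model parameters.

```python
# Toy illustration: delaying a climate policy raises its NPV cost when the
# cumulative abatement target is fixed and annual cost is convex in effort.
# Discount rate, quadratic cost form, and target size are assumptions.

def npv_cost(start_year, end_year=2100, cumulative_target=1000.0,
             discount_rate=0.01, base_year=2020):
    """NPV of abatement costs when annual cost is quadratic in annual effort."""
    years = end_year - start_year
    annual_abatement = cumulative_target / years   # spread the target evenly
    annual_cost = annual_abatement ** 2            # convex (quadratic) cost
    return sum(annual_cost / (1 + discount_rate) ** (start_year + t - base_year)
               for t in range(years))

c2020 = npv_cost(2020)
c2030 = npv_cost(2030)
print(f"Delay from 2020 to 2030 raises NPV cost by "
      f"{100 * (c2030 / c2020 - 1):.0f}%")
```

With these toy numbers the delay penalty comes out at roughly 8%; the direction of the effect, not the exact magnitude, is the point of the sketch.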
point to the various threats that these deals can pose to the environment, to local food security and to traditional livelihoods.
Official policies have been vital in incentivising what has been referred to as the 'biofuel boom'. The European Union, the United States and other countries have set targets for greater use of biofuels in transport, whilst offering financial incentives and tax exemptions to those involved in 'clean' energy. Although the motives behind such policies are arguably well-intentioned, biofuel projects often compete with food production, thereby increasing local food insecurity, and can lead to serious human rights violations, including displacement. Although most of these projects claim to use unoccupied or marginal land, empirical research shows that in reality these lands are often inhabited, forested, used for grazing or utilised as a communal resource.
Section 3. Action plans - Comparison and identification of predominant trends. In a
report by the WHO, seven main areas were identified: 1) dementia as a public health priority; 2) dementia awareness and friendliness; 3) dementia risk reduction; 4) dementia diagnosis, treatment, care and support; 5) support for dementia carers; 6) information systems for dementia; and 7) dementia research and innovation (World Health Organization, 2017). These areas transcend action plans for AD and other major NCDs, and are thus broadly convergent, although the WHO recommends that governments operationalize them in concrete measures adapted to their political, sociosanitary, population and territorial realities. We observed that measures to improve the early phases of the care and service trajectory, such as improved diagnosis, are the focus of the action plans. Diagnosis, however, usually comes too late, constituting a major obstacle to the implementation of follow-up adapted to people living with the repercussions of these diseases. This explains why diagnostic measures are almost universally promoted in public policies, under various conceptual arrangements. However, several action plans have not given the same importance to the development of care and services following diagnosis, and this can generate feelings of helplessness. A holistic approach to the needs of people living with AD and NCDs requires public policies to reflect the same intensity across all of the main areas. This can be achieved using the concepts of dementia-capable, dementia-friendly and dementia-positive. These shared concepts are useful in designing functional components, in transforming the physical and social environment, and in recognizing that people with Alzheimer's disease deserve to live a fulfilling life. This approach is key for people living with these diseases, and their loved ones, to fully exercise their remaining abilities and live with dignity.
There are many mortality studies worldwide. Medina-Ramón and Schwartz (2007) conducted a crossover study of the effects of cold and hot temperatures and ozone concentration on mortality and population adaptation in 50 US cities during 1989-2000. The daily average temperature in the covered cities ranged from 18.5°C to 32.1°C during the hot months. The study found that heat-related mortality varied from city to city, with the largest effect observed in cities with milder summers (daily average temperature around 19.8°C at the 25th percentile). Populations living in cities in cold climate zones were found to be fully acclimatized to cold temperatures, but not to heat. Acclimatization was attributed to the ubiquitous use of central heating systems in cold climate regions, compared with the less widespread use of air conditioning in warm or hot climate regions. Ma et al. (2014) and Li et al. (2016) analysed mortality-temperature data for large Chinese cities in various climate regions (i.e. hot, temperate, cold). The mortality-temperature relation was found to follow U, V, W, J or inverted-J shapes, and varied from city to city as expected. The strongest effect of temperature on mortality appeared within two to three days and thereafter decayed quickly within a week; the higher the daily temperature, the shorter the time lag. The optimum daily average temperature at which mortality was lowest varied from city to city, with a city-average value of 21°C. Basu (2009), Hajat and Kosatky (2010) and Song et al. (2017) reviewed the literature on mortality-temperature data in America, Europe and Asia. Air pollution and the vulnerable
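The "optimum temperature" reported in these studies is the minimum of the U-shaped mortality-temperature curve. A minimal sketch of how such an optimum can be estimated is to fit a quadratic and read off its vertex; the data points below are synthetic, constructed so that the minimum sits near the 21°C city-average value cited above, and are not taken from any of the reviewed studies.

```python
import numpy as np

# Synthetic daily-mean temperatures (°C) and a synthetic U-shaped
# mortality response with its minimum placed at 21 °C.
temps = np.array([0, 5, 10, 15, 20, 25, 30, 35], dtype=float)
mortality = 0.02 * (temps - 21.0) ** 2 + 10.0

# Fit mortality = a*T^2 + b*T + c and locate the vertex of the parabola,
# i.e. the minimum-mortality temperature.
a, b, c = np.polyfit(temps, mortality, deg=2)
optimum = -b / (2 * a)
print(f"Estimated minimum-mortality temperature: {optimum:.1f} °C")  # 21.0 °C
```

In practice the reviewed studies use more flexible distributed-lag non-linear models rather than a simple quadratic, but the idea of locating the curve's minimum is the same.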
Design Principles: Literature Review, Analysis, and Future Directions
Design principles are created to codify and formalize design knowledge so that innovative, archival practices may be communicated and used to advance design science and solve future design problems, especially the pinnacle, wicked, and grand-challenge problems that face the world and cross-cutting markets. Principles are part of a family of knowledge explication, which also includes guidelines, heuristics, rules of thumb, and strategic constructs. Definitions of a range of explications are explored from a number of seminal papers. Based on this analysis, the authors pose formalized definitions for the three most prevalent terms in the literature—principles, guidelines, and heuristics—and draw more definitive distinctions between the terms. Current research methods and practices with design principles are categorized and characterized. We further explore research methodologies, validation approaches, semantic principle composition through computational analysis, and a proposed formal approach to articulating principles. In analyzing the methodology for discovering, deriving, formulating, and validating design principles, the goal is to understand and advance the theoretical basis of design, the foundations of new tools and techniques, and the complex systems of the future. Suggestions for the future of design principles research methodology for added rigor and repeatability are proposed. [DOI: 10.1115/1.4034105]
Introduction
Mycobacterium tuberculosis is a rare cause of prosthetic joint infection (PJI), as most countries with a high prevalence of tuberculosis have limited access to prosthetic arthroplasty. Moreover, as the diagnosis relies on specific tests not routinely performed for PJI, a significant proportion of M. tuberculosis PJI cases probably remain undiagnosed [1,2]. Tuberculosis PJI may result from the hematogenous spread of an extra-articular focus of active tuberculosis, or from the local reactivation of latent tuberculosis, even in patients without a previously known history of tuberculosis. M. tuberculosis PJI may also arise from active tuberculosis of the native joint, mimicking osteoarthritis and undiagnosed at the time of arthroplasty. Given the limited number of cases reported to date, the optimal strategy for the management of M. tuberculosis PJI remains controversial, as clinical practice guidelines for tuberculosis, and for PJI, provide no specific recommendation for M. tuberculosis PJI. In particular, whether the prosthetic joint needs to be removed is unclear. This question is important since M. tuberculosis PJI frequently occurs in elderly patients with poor general condition and high surgical risk [3,7]. To better characterize M. tuberculosis PJI, we report our own experience of 13 consecutive cases and performed a literature review, with a focus on management and outcome.
As for evaluating the impact of informal learning, the economic literature comes up against a major difficulty inherent to the elusive nature of training (Brown, 1990). An essential aspect of on-the-job training analysis therefore concerns the individual impacts of the different processes of human capital accumulation in the workplace. Why is this important? It seems that formal and informal training could not only affect the same category of workers differently, but also that each could play a very specific role in employers' training strategies: formal and informal training can be either substitute or complementary schemes. Moreover, one question remains: do formal and informal learning provide the same kind of economic outcomes for both the workers and the organisations that implement them? Finally, from the few attempts at measuring informal training provided by firms (mostly in the United States), it appears quite clearly that not taking into account the informal component of training leads to a huge under-estimation of the total amount of training provided to workers by the productive system. For these purposes, indirect measures of training are also very helpful. For instance, research by Nordman (2000) and Destré and Nordman (2002), using matched employer-employee data from France, Mauritius, Morocco and Tunisia, has provided a means of measuring the effects of informal training on earnings. Using this approach, these studies were able to distinguish the relative contributions of informal learning from observation and from experience (learning-by-doing).
controls for the same stimuli, but also for non-inducing stimuli as similar as possible to graphemes, and with a minimum of hypotheses concerning the localization of effects.
Jäncke 2012: EEG, 11 colored-hearing synesthetes and 11 controls, auditory stimuli. Jäncke et al. (2012) recorded the EEG signals of 11 colored-hearing synesthetes and 11 controls during a passive MMN (mismatch negativity) task. Subjects were instructed to watch a silent movie while ignoring tones. The standard tone was a piano tone A (440 Hz), presented 60% of the time. Deviants were either close to the standard (438 Hz: slightly mistuned A; 422 Hz: mistuned G#; 416 Hz: G#, a one-semitone deviant), in order to elicit similar colors for synesthetes, or further away (264 Hz: piano tone C, a nine-semitone deviant). Each deviant occurred 10% of the time. Significant MMNs at around 150 ms were recorded for all deviants and both groups. The amplitude of the MMN was, however, larger in synesthetes for the two largest deviant tones [one semitone, t(20) = 3.9, p < 0.001; nine semitones, t(20) = 2.726, p < 0.01], suggesting that the larger response reflected the synesthetic color being processed preattentively. LORETA source reconstruction suggested the possible involvement of visual areas in synesthetes. The authors were, however, aware that their 32-electrode system did not allow them to draw firm conclusions concerning intracerebral source localization. One limitation of this study, besides its relatively weak power, is the absence of a measure of the MMN for control stimuli with no synesthetic quality, so we cannot rule out the possibility that this particular group of synesthetes simply had a stronger MMN for stronger deviants, irrespective of synesthesia. Moreover, tone deviance did not match the differences between synesthetic colors. While synesthetes "reported clear, distinct color sensations" when hearing tones A and C, a statistically more reliable difference was obtained for tone G#, but not for the mistuned G#, even though five of the 11 synesthetes perceived identical colors for these two tones.
Moreover, inspection of their Supplementary Table revealed that the approximate numbers of colors different from the standard (depending, of course, on the exact rendering of RGB values) for the four deviants were, respectively, 2, 10, 9, and 11 (with larger distances in color space specifically for the nine-semitone difference). This protocol nevertheless seems promising for detecting early correlates of synesthetic colors (if any), provided synesthetes and tones are carefully chosen so as to fully dissociate tone deviance from synesthetic color deviance.
In developing countries, the attributable environmental fractions were 33% (6-65%) for men and 25% (6-37%) for women (WHO, 2006). It was estimated that environmental factors account for 31% of the global disease burden of lung cancer and 30% (6-55%) of the disease burden in developed countries, for both men and women (WHO, 2006). Other studies assessing the environmental attributable fraction of cancer have reported lower estimates. Health Canada estimates that only 10-15% of cancers are linked to the environment (Boyd & Genius, 2008). Other studies place the environmental fraction in Canada at 5-15%; however, this reflects a narrower definition of environmental risk factors than the WHO's (Boyd & Genius, 2008). It was estimated that 15.6% of the worldwide incidence of cancer in 1990 could be attributed to in-
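Attributable fractions of this kind are conventionally computed with Levin's population attributable fraction (PAF) formula, which combines exposure prevalence and relative risk. The sketch below applies that standard formula to hypothetical numbers; the prevalence and relative risk values are illustrative and are not drawn from the WHO or Health Canada reports cited above.

```python
# Sketch of Levin's population attributable fraction (PAF), the standard
# formula behind estimates such as "X% of cancer is attributable to
# environmental factors". Inputs below are hypothetical.

def attributable_fraction(prevalence, relative_risk):
    """Levin's formula: PAF = p(RR - 1) / (p(RR - 1) + 1)."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (excess + 1.0)

# e.g. 30% of a population exposed to a factor that doubles risk (RR = 2):
paf = attributable_fraction(0.30, 2.0)
print(f"{100 * paf:.0f}% of cases attributable to the exposure")  # 23%
```

Differences in how "exposure" is defined (as in the WHO-versus-Health-Canada comparison above) change the prevalence term, which is one mechanical reason the published fractions diverge.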
We evaluated these ALATs and showed that (1) researchers worked on improving the efficiency of ALATs by adopting diverse algorithms, with distributed architectures seeming most promising; (2) parameter tuning for large-scale log data is challenging and requires major effort and time from software engineers, so researchers should consider techniques for automatic and dynamic parameter tuning; (3) due to confidentiality issues, log datasets are rare in the community, while all existing unsupervised ALATs depend on these datasets for training, so we recommend that researchers investigate new ALATs that do not rely on training data; (4) practitioners must make compromises when selecting an ALAT because no single ALAT satisfies all quality aspects, even if online ALATs (e.g., Spell, Drain) or ALATs based on heuristic clustering approaches and implementing a parallelization mechanism (e.g., POP, LogMine) satisfy most combinations of quality aspects; (5) supervised ALATs based on Natural Language Processing (NLP) techniques are accurate if the models are trained on large amounts of data, and researchers should build and share their logs to benefit the research community.
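The heuristic-clustering idea behind tools such as Drain and LogMine can be illustrated with a toy abstraction step: group messages that share a token count, then mask positions whose tokens vary across the group with a wildcard to obtain a template. This is a deliberately minimal sketch of the principle, not a reimplementation of any of the evaluated tools.

```python
from collections import defaultdict

# Toy log-template extraction: cluster raw log lines by token count, then
# keep tokens that are constant within a cluster and mask the rest as "<*>".
def extract_templates(log_lines):
    groups = defaultdict(list)
    for line in log_lines:
        tokens = line.split()
        groups[len(tokens)].append(tokens)  # crude clustering by length
    templates = []
    for token_lists in groups.values():
        # A token survives if identical across the whole group; otherwise
        # it is treated as a variable field and replaced by a wildcard.
        template = [cols[0] if len(set(cols)) == 1 else "<*>"
                    for cols in zip(*token_lists)]
        templates.append(" ".join(template))
    return templates

logs = [
    "Connection from 10.0.0.1 closed",
    "Connection from 10.0.0.7 closed",
    "Disk usage at 91 percent on /dev/sda1",
]
print(extract_templates(logs))
# → ['Connection from <*> closed', 'Disk usage at 91 percent on /dev/sda1']
```

Real ALATs add similarity thresholds and tree- or cluster-based indexing precisely because grouping by token count alone would conflate unrelated events of the same length, which is one of the parameter-tuning burdens noted in point (2).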
C3H3. The presence of oxygen can also reduce auto-ignition since the C-O bond energy is lower than the C-H bond energy found in conventional diesel. DME has approximately 66% of the energy content of diesel fuel by mass, and about 50% by volume.
The air/fuel ratio of DME at stoichiometric conditions is approximately 9, versus 14.6 for diesel, meaning that complete combustion of 1 kg of DME requires less air than that of 1 kg of diesel fuel. However, more than 1 kg of DME is required to provide the same amount of energy as 1 kg of diesel. DME has a much higher, and wider, flammability range in air (i.e. the volume of fuel, expressed as a percentage in an air mixture at standard conditions, within which ignition may occur) than the three hydrocarbon fuels (gasoline, diesel and propane), but one very similar to that of natural gas. DME is sulfur-free, whereas even ultra-low sulfur diesel (ULSD) contains some sulfur. Most #1 and #2 pump diesel fuels have cetane numbers between 40 and 45, and many biodiesels have CNs greater than 50. DME has a cetane number between 55 and 60, which makes it very suitable for a diesel-cycle engine. This reduces engine knocking and engine noise compared with engines powered by conventional diesel, and also helps to provide a more complete combustion process with less wasted fuel, particularly at engine start-up or when in-cylinder temperatures cool off. Fuels such as propane and natural gas have high octane numbers but cetane numbers below 10, making them impractical for dedicated use in a diesel-cycle engine unless they are combined with at least some diesel as an ignition source. DME in the liquid state has low viscosity and low lubricity, two properties that strongly affect the maximum achievable injection pressure in a fuel injection system: the low viscosity allows it to readily pass through narrow passages, while the lack of lubricity can accelerate the wear of surfaces moving relative to each other, such as the feed pump, the high-pressure injection pump, and the injector nozzles. Due to these viscosity and lubrication characteristics, fuel additives are mandatory to improve viscosity and lubricity and make DME a viable fuel for on-road engines.
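The air-requirement comparison above can be made explicit with a quick calculation: per unit of delivered energy, how much DME is needed and how much air does it consume relative to diesel? The figures used are those quoted in the text (stoichiometric A/F of ~9 for DME versus 14.6 for diesel; DME carrying ~66% of diesel's energy by mass).

```python
# Quick check of the air-requirement comparison: DME needs less air per kg,
# but more kg of DME are needed per unit of energy. All inputs are the
# values quoted in the surrounding text.

AF_DME = 9.0          # kg air per kg DME at stoichiometry
AF_DIESEL = 14.6      # kg air per kg diesel at stoichiometry
ENERGY_RATIO = 0.66   # DME energy content relative to diesel, by mass

dme_per_diesel_kg = 1.0 / ENERGY_RATIO       # kg DME per diesel-equivalent kg
air_dme = AF_DME * dme_per_diesel_kg         # kg air per diesel-equivalent energy

print(f"DME needed to match 1 kg of diesel: {dme_per_diesel_kg:.2f} kg")
print(f"Air for that DME: {air_dme:.1f} kg vs {AF_DIESEL:.1f} kg for diesel")
```

So even on an energy-equivalent basis, DME requires slightly less air (about 13.6 kg vs 14.6 kg), while the fuel mass delivered must be roughly 50% greater, which has implications for tank sizing and injection quantities.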
It is relevant to keep in mind that the literature review was undertaken within the context of an evaluation of technology cluster initiatives, and that NRC's program activities were designed to support the development of some of these factors. Thus, each factor was linked to an appropriate core evaluation issue or question. In this manner, assessment of the status of each growth factor included in the framework presented here, and of the role of the NRC cluster initiatives in this change, would help to determine the initiatives' performance against the program's expected outcomes as indicated in its logic model. The intent of this exercise was to facilitate the evaluation's objective judgment on the influence, if applicable, of NRC on the presence of these factors (e.g. development of a skilled workforce or development of a specialized research infrastructure), or to assess the broader context in which NRC's cluster initiatives evolve (e.g. presence of an anchor organization or government support), in order to determine the growth potential of each of the clusters, which were determined to be at various stages of development.