The discrete element method is recognized as a powerful tool for studying granular materials. The first generation of discrete element models idealized particles as discs in 2-D and as spheres in 3-D (Cundall and Strack 1978; 1979). Later, polygons were used to improve particle shape idealisation. However, polygon elements are computationally demanding: Ting et al. (1989) reported an increase in execution time of at least one order of magnitude for polygons compared to circular or disc-shaped particles. More recently, researchers have pursued the use of clusters of particles. This approach requires little modification of the contact detection schemes usually used with circular shapes. However, the composition of analyzed samples in terms of the percentage of each particle cluster type is chosen arbitrarily, which is usually unrepresentative of the actual particle size distribution of granular materials. This paper presents the use of digital images to provide a real packing configuration for samples and then uses the cluster concept to improve particle modeling for use in discrete element analyses.
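As a minimal illustration of why clustered particles reuse the machinery of circular elements, the sketch below (illustrative names and data, not taken from any of the cited codes) tests contact between two clumps of discs by applying the standard disc-disc overlap test pairwise.

```python
import numpy as np

def disc_contact(c1, r1, c2, r2):
    """Return (overlap, normal) for two discs; overlap > 0 means contact."""
    d = np.asarray(c2, dtype=float) - np.asarray(c1, dtype=float)
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    normal = d / dist if dist > 0 else np.array([1.0, 0.0])
    return overlap, normal

def clump_contacts(clump_a, clump_b):
    """Each clump is a list of (center, radius) discs. Contact detection
    between clumps reduces to the usual disc-disc test applied pairwise,
    which is why clustered particles need little change to a
    circular-particle code."""
    contacts = []
    for ca, ra in clump_a:
        for cb, rb in clump_b:
            overlap, normal = disc_contact(ca, ra, cb, rb)
            if overlap > 0.0:
                contacts.append((overlap, normal))
    return contacts

# Example: two two-disc clumps whose nearest discs just overlap.
a = [((0.0, 0.0), 1.0), ((1.5, 0.0), 1.0)]
b = [((3.4, 0.0), 1.0), ((4.9, 0.0), 1.0)]
print(clump_contacts(a, b))  # one contact, overlap 0.1
```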
Figure 10. (a) Shear experiment, (b) comparison between gauge response and measurements by a digital image correlation technique (l = 64 pixels and δ = 32 pixels).
Figure 11a shows the displacement field on the surface of the specimen just before failure. One can note the good symmetry of the displacement field about the two loading directions. The strain maps (Figs. 11b-c) show heterogeneities, a first indication that the material is not homogeneous at the scale of the measurements (of the order of 2-3 mm). It is worth remembering that the uncertainties related to the correlation technique are negligible for strain levels greater than 10⁻³. Therefore, it can be stated that the strain field fluctuations are mainly due to material imperfections.
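For readers unfamiliar with how such maps are obtained, the following minimal sketch (a generic small-strain computation by numerical differentiation, not the specific correlation software used here) derives strain components from measured displacement fields.

```python
import numpy as np

def small_strain_maps(u, v, pixel_size=1.0):
    """Small-strain components from measured displacement fields.
    u, v: 2D arrays of x- and y-displacements (one value per subset);
    pixel_size: physical size of one grid step.
    Returns (eps_xx, eps_yy, eps_xy)."""
    du_dy, du_dx = np.gradient(u, pixel_size)  # axis 0 = y, axis 1 = x
    dv_dy, dv_dx = np.gradient(v, pixel_size)
    eps_xx = du_dx
    eps_yy = dv_dy
    eps_xy = 0.5 * (du_dy + dv_dx)
    return eps_xx, eps_yy, eps_xy

# Example: a uniform x-stretch of 1e-3 is recovered exactly.
y, x = np.mgrid[0:64, 0:64].astype(float)
u = 1e-3 * x                  # x-displacement grows linearly in x
v = np.zeros_like(u)
exx, eyy, exy = small_strain_maps(u, v)
print(exx.mean(), eyy.mean(), exy.mean())  # ~1e-3, 0, 0
```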
In terms of willingness to pay, the majority of survey respondents indicated they were willing to pay less than 5,000 FCFA/ha (8.60 USD/ha) for any given drone-based service such as those mentioned above. For example, the mean willingness to pay for identification of pest attacks by drone was 3,345 FCFA/ha (5.75 USD/ha), compared to 8,510 FCFA/ha (14.64 USD/ha) for the same service delivered by human labor. The same was true for fertilizer spreading, with farmers willing to pay 2,597 FCFA/ha (4.47 USD/ha) for the drone-based service compared to 7,500 FCFA/ha (12.90 USD/ha) for the traditional human-powered service. This lower willingness to pay for the drone-based services could be due in part to the farmers' perception of the drone-based work as requiring less manpower and fewer resources, and thus an assumption that it should cost less than paying laborers to complete the same tasks. For pesticide applications, the amounts were similar, with farmers indicating they were willing to pay 3,391 FCFA/ha (5.83 USD/ha) for the drone-based service and 3,218 FCFA/ha (5.54 USD/ha) for a traditional human-powered service. The willingness to pay slightly more for drone-based pesticide applications in this instance could be due to farmers realizing that precision-based applications mean less overall pesticide used and less money spent on inputs, as opposed to uniform applications using the traditional method. Overall, the results of the study showed that Beninese rice farmers had a positive perception of drone usage in agricultural crop management, including for spatial crop quality control, identification of nutrient needs, and precision application of pesticides and fertilizers. However, smallholder farmer participants indicated that they expect to pay less for drone-based services than they traditionally pay for manpower-based services. As such, farmers were found to value awareness campaigns, reduced costs, and permanent availability of the drone-based technologies, and these principles should be applied in order to strengthen adoption and uptake rates.
on the quality of interconnections (Crémer et al, 2000; Bjorkegren, 2019); and (2) on the importance of market failures, transaction costs and infrastructural deficits (Aker, 2017). In this regard, the leapfrogging potential of mobile telephony in Africa has been unparalleled. When the distance between people is immense due to missing road and wireline infrastructures, and where market and state failures in the provision of public services are profound, mobile technology is the easiest way for Africans to connect to each other, reduce information asymmetries, and lower transaction costs. As a result, digital initiatives based on mobile phones have multiplied in the region, allowing many Africans to access basic public services such as education, health, or financial services. Digital technologies have proven instrumental in addressing market failures in public service delivery, but their potential for scaling up may be hampered by the large and multidimensional digital divide in SSA, characterized by the low penetration of digital technologies in remote areas and among the poorest and most vulnerable segments of African societies. Our analysis stresses that the potential of digital technologies will be fully unleashed only if policymakers are able to address persistent obstacles to ICT access that have long remained structural handicaps in African economies: allowing affordable access to energy, extending the landline backbone infrastructure and the mobile Internet network, improving educational attainment, and reducing gender inequalities. As it stands, the low penetration of the Internet and related technologies in (West) African countries precludes large-scale and more sophisticated usages of digital technologies, particularly usages based on the Internet, artificial intelligence, cloud computing, the Internet of things, or big data. Innovations based on these technologies offer promising perspectives for public service provision and development in Africa, but their burgeoning nature and limited scale does not transfers (Aker et al, 2016b), subsidized education
This document is part of a series of reports produced by MIT CITE. Launched at the Massachusetts Institute of Technology (MIT) in 2012 with a consortium of MIT partners, CITE was the first-ever program dedicated to developing methods for product evaluation in global development. Located at MIT D-Lab since 2017, CITE is led by an interdisciplinary team and has expanded its research focus to include studies that explore the barriers to, and enablers of, effective innovation processes and technology adoption; the outcomes of capacity building programs and technology interventions; and the contexts in which technologies and innovation processes operate. This includes a portfolio of research studies on digital financial services programs, capacity for local innovation, internet of things for agriculture, inclusive systems innovation, fairness in machine learning, and evaporative cooling technologies. CITE also develops the capacity of researchers to conduct evaluations by providing resources and tools on its methods.
In some cases, supply-side respondents and SHFs aligned closely in their responses to the mirror surveys, such as in their assessments of self-efficacy (see Figure 25 in Section 3.3.4). In other cases, however, responses diverged. Figure 14 shows that, among SHFs, the dominant reason for not owning a bank account is a (perceived) lack of money. Relative to SHFs, supply-side respondents overstate SHFs' understanding of how FS work and what they cost, the distance of banks from members, people's lack of trust in banks, and the usefulness of a bank account. Supply-side actors also underestimated how long it took SHFs to start using DFS after they heard about it: 54% of supply-side respondents thought it took SHFs less than a year, while 65% of SHFs self-reported taking more than one year to adopt from when they first heard about it. This was especially pronounced in the Saloum region, where 69% of SHFs in the study said it took more than a year to adopt DFS from the time that they first heard about it (versus 59% of SHFs in the study from the Senegal River Valley). The implication is that supply-side actors may be undervaluing the amount of time and resources it takes to convert a DFS non-user to a user.
found words (Loiseau et al., 2015). In Figure 1, for instance, if both players found 'mia' ('my', fem. sing.), the player who realises that its masculine ('mio') and plural feminine ('mie') forms are also in the grid will have an advantage over the opponent. Even if a player finds a form by luck, the competitive nature of this version provides an incentive to infer the category of the word in order to check whether the grid contains other forms. Games are played one-on-one (asynchronously) in three sets. While the lexical nature of the game is mainly addressed through the existence of a personal lexicon for each learner, called a 'wordbox', stemming mechanisms are at the core of the rules.
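As a toy illustration of this stemming mechanic (the inflection table and function names below are hypothetical, not the game's actual implementation), one can check which other inflected forms of a found word the grid also contains:

```python
# Hypothetical inflection table; a real game would query a
# morphological lexicon (stemmer) for the target language.
INFLECTIONS = {
    "mia": ["mio", "mie", "miei"],   # Italian possessive 'my'
}

def related_forms_in_grid(found_word, grid_words):
    """After a player finds a word, list the other inflected forms of
    the same lemma that the grid also contains -- the incentive the
    rules create for reasoning about a word's grammatical category."""
    return [w for w in INFLECTIONS.get(found_word, []) if w in grid_words]

print(related_forms_in_grid("mia", {"mia", "mio", "mie", "cane"}))
# ['mio', 'mie']
```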
• Questions about use of digital health: participants were asked whether they had a mobile phone (2 items: yes, no), a wearable device (2 items: yes, no), a mobile health app (2 items: yes, no), and, only for those reporting having a mobile health app, its frequency of use (3 items: often, occasionally, never) and name or topic (open-ended item). On the basis of a list, participants were asked about health topics they had searched for on the internet in the last 12 months (15 items: sleep, physical activity, nutrition, sexuality, contraception, pregnancy and maternity, alcohol risks, risks concerning tobacco and e-cigarettes, cannabis and other synthetic drugs, stress, anxiety or depression, skin problems, vaccinations, environment and health risks, pain, and illnesses), why they had looked for Web-based health-related information per health topic (3 items: for yourself concerning a specific disease or medical problem which might affect you, out of curiosity, for your studies), and their main source of health information (7 items: forums, general health websites, YouTube, social networks such as Facebook and Twitter, institutional or official websites, blogs, and Wikipedia). They were also asked to rate the trustworthiness of each of these sources (3 items: credible, neither credible nor noncredible, and noncredible) and whether, since the beginning of their university studies, they had already looked online for a health professional or service (2 items: yes, no).
Fig. 10. L6, deviance analysis between the original mesh and a smoothed one.
Interesting details then emerged in this deviance analysis. Some figures became more recognizable and could be found carved on several standing stones. One of them, with a triangular shape, is an axe, replicating the exact form of a polished axe. The turning light technique (which consists of taking a large series of photos with oblique light oriented at a different angle for each view) revealed a very interesting detail that could be isolated and measured. A tiny hole in the figure of an axe, previously barely visible, looked, in size and shape, very similar to those observable on real axes (Pétrequin et al., 2012).
Université de Nantes, CNRS, LINA, UMR6241, Polytech, 44306 Nantes, France
Abstract. In a world where activities, goals and available software are rapidly changing, users must constantly adapt. In this position paper, we discuss how digital skills differ from traditional skills due to their highly dynamic nature, both in the tools used and the tasks to be carried out. We advocate the need both for interdisciplinary theory to conceptualize digital skill development, and for longitudinal, large-scale and trace-based methods to observe such phenomena. We illustrate how digital tools could better support users in the development of skills, highlighting how traces of interaction could be leveraged within reflective and skill-sharing tools.
cate this rich diversity of specialised knowledge within an interpretative framework that promotes understanding? Indeed, with increasing specialisation, how can we communicate with one another? Synthesis has become more difficult, and students and the general public have complained of getting lost in a wealth of detail, of struggling to understand the context of the individual
Here, I would like to thank the people without whom this Ph.D. project would have been endless. First of all, I would like to thank Prof. Philippe Matherat for giving me the chance to do a PhD under his supervision. I would also like to thank him for his confidence in my choices and his continuous encouragement, which helped me throughout the three years of my research work. I am very grateful to Prof. Fernando Silveira for his continuous guidance during the last two years. It was a pleasure to work with him, and I learned a lot from his experience and expertise. My gratitude also goes to Prof. Yves Mathieu for his help in mastering CAD tools. Next, I owe special thanks to Tarik Graba, who took time to correct parts of this dissertation. Many thanks go to all the members of the Communications and Electronics Department at Telecom ParisTech, and to the administrative staff for their kindness and assistance. A thought goes also to all my friends in and outside Telecom ParisTech. My gratitude also goes to the professors who accepted to be part of my examination committee.
Fig. 3. Text input speed according to the word length during the second session
In Fig. 2, we can see that DUCK is more efficient than VODKA for words of four or more characters. Moreover, the text input speed is higher for long words than in the first experiment [5]. By analyzing the input time with DUCK, we can see that the time required to validate a word in the deduction list is only 2.85 s on average, whereas it was 3.6 s in the first study. This confirms that the changes made to the interaction with the list are beneficial to the validation time, and thus allow the user to enter words faster.
(2016) “interface methods”, who propose to account for the discrepancies between new digital methods, classic social research and social reality.
In the introduction, we stated that during the twentieth century hermeneutics became a second-degree reflection on the specificity of human beings as interpreting animals. In digital hermeneutics, one might distinguish between three uses of this perspective. First, several authors, including the aforementioned Hubert Dreyfus, used this idea to stress the intrinsic difference between humans and digital machines (AI). According to him, there is an essential difference between human beings and computers: "[t]he human world, then, is prestructured in terms of human purposes and concerns in such a way that what counts as an object or is significant about an object already is a function of, or embodies, that concern. This cannot be matched by a computer, for a computer can only deal with already determinate objects […]" (Dreyfus 1972, 173). Human beings have goals, which are realized on the basis of a system of values and emotional states that are usually not explicit. Machines, instead, have ends, which are realized (Dreyfus is referring here to symbolic AI) according to a predefined list of specific criteria. Even in more recent publications, he has insisted on this intrinsic difference between humans and machines, denouncing for instance the insufficiency of the attempts to build a "Heideggerian AI". The fact is that, for him, we would need "a model of our particular way of being embedded and embodied […]. That is, we would have to include in our program a model of a body very much like ours […]" (Dreyfus 2007, 1160). It is interesting to notice how Dreyfus' Heideggerian radicalism is in this context more radical than Heidegger himself. He refuses, for instance, the notion of as-structure, which plays an important role in Being and Time
• Permanent faults, which are due to irreversible changes in circuit structure, occur and remain stable until a repair is undertaken. These faults mainly arise from defects or from variation in the manufacturing process. Permanent faults induced by the fabrication process are usually well eliminated by appropriate design and manufacturing methods and can thus be ignored during usage time [29]. Sometimes, however, permanent faults also occur during the use of the device, for example through aging effects leading to device wear-out.
• Transient faults, or soft errors (SEs), are caused by temporary environmental conditions such as alpha particles and neutrons, electrostatic discharge, thermal noise, crosstalk, etc. They occur for a short period of time and then disappear; in other words, transient faults are independent one-time errors. A transient fault can cause a component to malfunction without damaging the component itself. In memory cells, transient faults are referred to as single-event upsets (SEUs), while in logic circuits they are called single-event transients (SETs). Their random occurrence and complex modeling make transient faults a distinct new challenge; a minimal bit-flip model of an SEU is sketched after this list.
• Intermittent faults, which are induced by variations in the manufacturing procedure.
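The following minimal sketch (an illustrative bit-flip model, not a circuit-level simulation) captures the defining property of a transient fault: it corrupts stored state once, and rewriting the state removes it.

```python
import random

def inject_seu(word, width=32):
    """Model a single-event upset (SEU): flip one random bit of a
    stored word, as a transient fault in a memory cell."""
    bit = random.randrange(width)
    return word ^ (1 << bit)

def run_with_transient_fault(value):
    """A transient fault corrupts the stored value once but does not
    damage the storage itself: rewriting restores correctness."""
    corrupted = inject_seu(value)
    assert corrupted != value          # one-time error observed
    rewritten = value                  # a rewrite removes the fault
    return corrupted, rewritten

print(run_with_transient_fault(0xDEADBEEF))
```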
The reliability of integrated circuits has thus become a key consideration in nanoscale designs, as a consequence of many factors associated with technology scaling, such as manufacturing precision limitations, device parametric variations, supply voltage reduction, higher operating frequencies, and power dissipation concerns. These problems are a serious threat to the continuous evolution of the integrated circuits industry. Many techniques exist to improve reliability or to counteract its reduction in integrated circuits but, generally, these techniques reduce the gains achieved by scaling, and there will be a point where scaling becomes meaningless. The difficulty in determining this point is the complexity of reliability evaluation, which leads to the use of probabilistic and stochastic methods. The reliability of a circuit depends on a great many variables, and the growing complexity of the circuits themselves does not make the task easier.
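As a hedged illustration of why probabilistic methods enter the picture, the sketch below estimates by Monte Carlo the reliability of a toy chain of independent gates. A closed form, (1 − p)ⁿ, exists for this trivial case; simulation is what remains once masking and fault interactions make closed forms intractable.

```python
import random

def gate_ok(p_fail):
    """One gate works with probability 1 - p_fail."""
    return random.random() >= p_fail

def circuit_reliability(p_fail, n_gates, trials=100_000):
    """Monte Carlo estimate of the probability that a chain of
    n_gates gates all work, each failing independently with
    probability p_fail. Analytically (1 - p_fail)**n_gates here."""
    ok = sum(all(gate_ok(p_fail) for _ in range(n_gates))
             for _ in range(trials))
    return ok / trials

print(circuit_reliability(1e-3, 100))   # ~ (0.999)**100 ≈ 0.905
```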
There are some significant advantages to an example-based strategy. In a sense, the entirety of a terrain's geological history is encoded in an input DEM, leading to highly realistic outcomes. Furthermore, it is possible to leverage recent advances in texture synthesis and machine learning to provide interactive performance and effective user control. While such approaches can be fast, realistic and controllable, there are some specific limitations. Data-driven approaches such as these stand or fall on the quality of the input data. Any artefacts or errors resulting from capturing real-world elevations will likely appear (or possibly be magnified) in the output. Sampling resolution is also limited to that of the source scans. Publicly available sources are currently mostly in the range of a few meters per pixel. The SRTM program covers nearly the whole earth at 30 m resolution, while the National Elevation Dataset of USGS covers almost all the United States of America at 10 m. In rare cases, data can be found as fine as 0.5-1 m per pixel, especially in easily floodable or densely urbanised areas (e.g., the AHN program in the Netherlands), but such data are, unfortunately, not available for steeper regions. However, this situation is likely to improve over time.
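To make the data-driven idea concrete, here is a deliberately toy sketch of exemplar-based synthesis (not any of the surveyed methods): each new tile is the candidate patch from the input DEM whose overlap best matches what has already been synthesized, which is also why input artefacts propagate into the output.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_patch(dem, size):
    """Cut a random size x size patch from the exemplar DEM."""
    i = rng.integers(0, dem.shape[0] - size)
    j = rng.integers(0, dem.shape[1] - size)
    return dem[i:i + size, j:j + size]

def synthesize_row(dem, n_tiles=6, tile=32, overlap=8, candidates=200):
    """Toy exemplar-based synthesis of one strip of terrain: each new
    tile is the candidate whose left edge best matches (L2) the right
    edge of what is already synthesized. Any artefact present in the
    input is copied into the output too."""
    step = tile - overlap
    out = np.empty((tile, tile + step * (n_tiles - 1)))
    out[:, :tile] = random_patch(dem, tile)
    for t in range(1, n_tiles):
        x = t * step
        target = out[:, x:x + overlap]            # existing overlap region
        best, best_err = None, np.inf
        for _ in range(candidates):
            cand = random_patch(dem, tile)
            err = np.sum((cand[:, :overlap] - target) ** 2)
            if err < best_err:
                best, best_err = cand, err
        out[:, x + overlap:x + tile] = best[:, overlap:]
    return out

# Example with a synthetic 'DEM' standing in for real elevation data.
dem = np.cumsum(rng.standard_normal((256, 256)), axis=1)
strip = synthesize_row(dem)
print(strip.shape)  # (32, 152)
```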
We capture the problem of valuing and selling data sets to buyers who interact downstream within the general framework of auctions of digital, or freely replicable, goods. We study the resulting single-item and multi-item mechanism design problems in the presence of additively separable, negative allocative externalities among bidders. Two settings of bidders' private types are considered, in which bidders either know the externalities that others exert on them or know the externalities that they exert on others. We obtain forms of the welfare-maximizing (efficient) and revenue-maximizing (optimal) auctions of single digital goods in both settings and highlight how the information structure affects the resulting mechanisms. We find that in all cases, the resulting allocation rules are deterministic single-threshold functions for each bidder. For auctions of multiple digital goods, we assume that bidders have independent, additive valuations over items and study the first setting of privately known incoming externalities. We show that the welfare-maximizing mechanism decomposes into multiple efficient single-item auctions using the Vickrey-Clarke-Groves mechanism. Under revenue maximization, we show that selling items separately via optimal single-item auctions yields a guaranteed fraction of the optimal multi-item auction revenue. This allows us to construct approximately revenue-maximizing multi-item mechanisms using the aforementioned optimal single-item mechanisms.
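A minimal sketch of the structural result on allocation rules follows; the thresholds below are placeholder numbers, since how they are computed from the reported externalities is specific to each setting in the paper.

```python
def threshold_allocation(values, thresholds):
    """Deterministic single-threshold allocation of a freely
    replicable (digital) good: bidder i receives a copy iff her
    reported value meets her personal threshold. The derivation of
    the thresholds from externalities is abstracted away here."""
    return [v >= t for v, t in zip(values, thresholds)]

# Placeholder numbers purely for illustration.
print(threshold_allocation([5.0, 2.0, 7.5], [3.0, 4.0, 6.0]))
# [True, False, True]
```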
In our problem, we have to determine whether the point u belongs to the convex hull of chord(A), and an easy solution is to use the simplex method. The points used in the algorithm are, except at the beginning, the ones that maximize linear forms, and we can notice that, given a linear form ϕ and a finite set A, a point maximizing ϕ over chord(A) is the difference x − x′ between a point x maximizing ϕ over A and a point x′ minimizing ϕ over A. This means that we do not need to compute the whole set chord(A) to obtain its maximum under a linear form: with the simplex method, we never have to compute the chords' set chord(A) explicitly. This makes the algorithm very simple and easy to make incremental.
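This observation translates into a few lines of code: to maximize a linear form ϕ over chord(A) = {x − x′ : x, x′ ∈ A}, it suffices to take the difference of the maximizer and minimizer of ϕ over A (a sketch with illustrative data).

```python
import numpy as np

def max_over_chords(points, phi):
    """Maximize the linear form phi over chord(A) = {x - x' : x, x' in A}
    without enumerating chord(A): the maximizing chord is x_max - x_min,
    where x_max maximizes phi over A and x_min minimizes it."""
    vals = points @ phi
    chord = points[np.argmax(vals)] - points[np.argmin(vals)]
    return chord, vals.max() - vals.min()

A = np.array([[0, 0], [3, 1], [1, 4], [2, 2]], dtype=float)
phi = np.array([1.0, 0.5])
print(max_over_chords(A, phi))  # chord (3,1) - (0,0) = (3,1), value 3.5
```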
The paper is organized as follows. In Section 2, after recalling some definitions related to 2D blurred segments, we study the problem of adding (or removing) a point to (or from) a 2D blurred segment of width ν in the case of general discrete curves. Then we propose an extension to noisy curves of the notion of maximal segment of a discrete curve. An algorithm to determine all maximal blurred segments of a 2D discrete curve is given in Section 3. In Section 4, after recalling the definition of the curvature estimator adapted to 2D noisy curves, an algorithm for determining the curvature at each point of a discrete curve is proposed. In this section, we also present how to extend these ideas to 3D space. The next sections propose curvature and torsion estimators for 3D curves. The last section gives experiments and comparisons with Mokhtarian's and Lewiner's methods.