Web services composition has been a key issue in service science and has been heavily investigated from both industrial and academic perspectives [ter Beek 2007][Yuan 2007]. Three main approaches have been defined in the service composition research community to deal with service composition from an end-user perspective: (i) manual, (ii) automatic, and (iii) semi-automatic. In the first approach, the user composes services by writing entire programs, without any automated assistance. Such a program has to embed Web service invocations (calls) according to the composition logic, so end-users are required to have strong programming skills, which makes this approach very limited. The goal of the second approach is to automatically build composite services that match the end-user's request, which is assumed to be expressed through a user-friendly interface or even computed automatically from information gathered from the user's context. This approach has to handle indecision problems [Balbiani 2006], which ultimately require involving the end-user in the composition task. This leads to the semi-automatic approach, the third and last one, which aims at providing end-users with an enhanced service composition environment. Such an environment supports the automated processing of some composition tasks while the end-user remains involved to a greater or lesser degree. Semi-automatic composition thus comes as an alternative approach focusing on particular issues, for instance the difficulty of selecting a relevant service among the many available ones of the same category.
Our authoring tool provides the end-user with the capability of reusing existing content, and not just as a base for augmentations: it also allows him to extract content from external Web pages, usually third-party ones, to be injected into a new context. For example, he can take actors' profiles at IMDb as target Web pages and augment them with a carousel of related YouTube trailers when the device is in landscape orientation. Content extraction is the responsibility of a common component, available to every builder, named External Content Extractor. This component is instantiated in the privileged context of the browser extension, which makes it possible to append extra behaviour to any Web page, enabling user interactions for selecting the DOM elements of interest. It also makes it feasible to manipulate every DOM element in order to obtain its positioning data (in the DOM, e.g. its XPath) and to dynamically consume its content from external contexts (other Web pages that do not share the same domain name).
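The positional XPath mentioned above can be computed by walking from the selected element up to the document root, counting same-tag siblings along the way. The following is a minimal sketch of that idea; nodes are plain dicts (with hypothetical `tag`/`parent`/`children` keys) so it runs outside a browser, whereas the extractor itself would walk real DOM nodes:

```python
def xpath_of(node):
    """Positional XPath of a node, built by walking up to the root."""
    parent = node["parent"]
    if parent is None:
        return "/" + node["tag"]
    same_tag = [c for c in parent["children"] if c["tag"] == node["tag"]]
    position = same_tag.index(node) + 1  # XPath positions are 1-based
    return f"{xpath_of(parent)}/{node['tag']}[{position}]"

def make(tag, parent=None):
    """Create a mock node and attach it to its parent."""
    node = {"tag": tag, "parent": parent, "children": []}
    if parent is not None:
        parent["children"].append(node)
    return node

# Mock tree: html > body > (div, div > span)
html = make("html")
body = make("body", html)
div1 = make("div", body)
div2 = make("div", body)
span = make("span", div2)

print(xpath_of(span))  # /html/body[1]/div[2]/span[1]
```

Recording this path (rather than a reference to the live node) is what lets the extractor re-locate the element later, when the content is consumed from another page.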
4.3 Consequences of the participation of end-users
Consequences of the participation of end-users fall into two areas: the quality of the architectural design and the relationships between stakeholders.
Firstly, from the architect's point of view, some of the decisions taken by end-users on the communal areas, while entirely democratic, were not optimal with respect to either aesthetic or energy criteria. Secondly, the relationships between stakeholders were strongly affected by both the involvement of end-users and the introduction of a new actor: the user manager (UM). The UM was supposed to help and facilitate the dialogue between end-users and the design team. This turned out to be a satisfactory solution for collective design issues, but more of an impediment for individual ones. His position as an intermediary requires a high capacity for interpretation, and it appears that a more direct dialogue between designers and end-users would be preferable when designing flats. The design of communal spaces is the fruit of a democratic consensus among all end-users; the message is therefore much more explicit and shared than during the individual flat design sessions. Naturally, end-users are much more demanding in the design of their own flat, and have high expectations that can be difficult to express in a single meeting. The role of the UM seems interesting and consistent with Gould's (1988) recommendations, but his attributions should be rethought for the design of individual spaces.

The high level of involvement of end-users is coherent with participatory design but has a strong impact on the role of designers. While end-users have a lot of power in decision-making, it appeared that designers were not aware of the limits of their own involvement: to what extent were they supposed to accept end-user decisions? If end-users are responsible for most of the non-technical design issues, are the architects only in charge of the technical ones? This radical redefinition of responsibilities in decision-making appears to be one of the main issues in this pilot project and could be perceived as the main explanation for the delays observed in the project.
Université Côte d'Azur, Inria, France email@example.com
Abstract—Video streaming is without doubt the most requested Internet service and the main source of pressure on the Internet infrastructure. At the same time, users are no longer satisfied by the Internet's best-effort service; instead, they expect a seamless, high-quality service from the network. As a result, Internet Service Providers (ISPs) engineer their traffic so as to improve their end-users' experience and avoid economic losses. Content providers, from their side, and to protect customers' privacy, have shifted towards end-to-end encryption (e.g., TLS/SSL). Video streaming relies on the Dynamic Adaptive Streaming over HTTP protocol (DASH), which takes into consideration the underlying network conditions (e.g., delay, loss rate, and throughput) and the viewport capacity (e.g., screen resolution) to improve the experience of the end-user within the limits of the available network resources. In this work, we propose an experimental framework able to infer fine-grained video flow information, such as chunk sizes, from encrypted YouTube video traces. We also present a novel technique to separate video and audio chunks in encrypted traces based on Gaussian Mixture Models (GMM). Then, we leverage our dataset to train models able to predict the viewport class (either SD or HD) per video session with 92% average accuracy and 85% F1-score. Predicting the exact viewport resolution is also possible but shows a lower accuracy than the viewport class.
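The audio/video separation idea can be sketched with a from-scratch two-component EM fit over chunk sizes: audio chunks are typically much smaller than video chunks, so the size distribution is bimodal. The abstract does not detail the actual GMM setup, so the chunk-size figures, initialisation, and one-dimensional model below are illustrative assumptions:

```python
import math
import random

def gauss_pdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_gmm2(xs, iters=100):
    """EM for a two-component 1-D Gaussian mixture; returns (weights, means, variances)."""
    n = len(xs)
    s = sorted(xs)
    lo, hi = s[: n // 2], s[n // 2:]
    # Crude initialisation: split the sorted sample at the median
    mu = [sum(lo) / len(lo), sum(hi) / len(hi)]
    var = [max(1.0, sum((x - mu[0]) ** 2 for x in lo) / len(lo)),
           max(1.0, sum((x - mu[1]) ** 2 for x in hi) / len(hi))]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each chunk
        resp = []
        for x in xs:
            p = [w[k] * gauss_pdf(x, mu[k], var[k]) for k in (0, 1)]
            tot = (p[0] + p[1]) or 1e-300
            resp.append((p[0] / tot, p[1] / tot))
        # M-step: re-estimate weights, means, and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1.0, sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk)
    return w, mu, var

def label(x, w, mu, var):
    """Assign a chunk to 'audio' (smaller-mean component) or 'video'."""
    small = 0 if mu[0] < mu[1] else 1
    p = [w[k] * gauss_pdf(x, mu[k], var[k]) for k in (0, 1)]
    return "audio" if p[small] >= p[1 - small] else "video"

# Synthetic trace: audio chunks around 120 KB, video chunks around 900 KB
random.seed(7)
chunks = [random.gauss(120e3, 15e3) for _ in range(300)] + \
         [random.gauss(900e3, 120e3) for _ in range(300)]
w, mu, var = fit_gmm2(chunks)
print(sorted(round(m) for m in mu))  # one mean near 120000, one near 900000
```

Once fitted, each encrypted chunk can be labelled by its most responsible component, which is the separation step the abstract refers to.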
From the end-users' perspective, the service creation process must be intuitive and self-explanatory: no software development skills should be required to develop a new service. End-users cannot be expected to understand current service technologies such as WSDL and SOAP. We witness an exponentially growing number of services on the Web, which leaves the end-user confused about which service to use in his SCE [10-12]. These factors make intuitiveness a key success factor for a service creation platform. One proposed solution is to develop a semantic service creation assistant: for example, whenever the user picks an existing service, the system should be able to suggest a set of syntactically or semantically related services that can be connected to it (inter-service communication); or the user can locate a specific service from a description of all (or part of) its functionality expressed as a near-natural-language request (NL composition).
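One simple way such an assistant could suggest connectable services is to index services by their input/output types and propose, for the picked service, those whose inputs overlap its outputs. The service names and the flat type model below are purely illustrative assumptions, not part of any described platform:

```python
# Toy service registry: each service declares input and output types
SERVICES = {
    "GeocodeAddress":    {"inputs": {"Address"},     "outputs": {"Coordinates"}},
    "WeatherAt":         {"inputs": {"Coordinates"}, "outputs": {"Forecast"}},
    "NearbyRestaurants": {"inputs": {"Coordinates"}, "outputs": {"RestaurantList"}},
    "TranslateText":     {"inputs": {"Text"},        "outputs": {"Text"}},
}

def suggest_successors(picked):
    """Services whose inputs can consume the picked service's outputs."""
    outs = SERVICES[picked]["outputs"]
    return sorted(
        name for name, sig in SERVICES.items()
        if name != picked and sig["inputs"] & outs
    )

print(suggest_successors("GeocodeAddress"))  # ['NearbyRestaurants', 'WeatherAt']
```

A semantic variant would replace the exact type match with ontology-based subsumption, so that, say, a `StreetAddress` output also satisfies an `Address` input.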
The need for domain-specific approaches has been gradually recognized, as illustrated by the domain of mashups. Mashups are tools for building web pages or web information portals that aggregate information from different web sites and web services. An early EUD system for mashups called Marmite introduced a visual dataflow language, which was evaluated in an informal user study (Wong & Hong, 2007). Results were mixed, in that only half of the users succeeded in building the required applications. A subsequent EUD system called Cocoa Buzz revealed that there exists a spectrum between end-user programming and end-user software configuration; Eagan and Stasko introduced an approach that addresses this spectrum, enabling complex configuration to be done by instantiating predefined domain abstractions using menus (Eagan & Stasko, 2008). A more recent proposal simplified EUD for mashups by narrowing the target even further: it proposed EUD dedicated to mashups for the domain of scientific conferences, aimed at researchers (Soi, Daniel, & Casati, 2014).
Conclusion and open issues
Ambient systems, with their dynamics and unpredictability, do not allow software to be designed and implemented following traditional development cycles. End-user involvement enables the on-the-fly creation of software adapted to the situation and to the user's preferences and skills. Nevertheless, this user must be supported and helped in the development task. Hence, an intelligent system builds relevant applications, which the user has not explicitly asked for nor expected, and makes them emerge. This report describes an original approach and its framework for presenting emerging applications to the user in an intelligible and manipulable way. It shows how MDE techniques, with the definition of dedicated languages, help to provide the user with personalized views of applications, as well as the tools that allow them to be modified or even built from scratch. Thus, MDE supports both the controlled construction of applications and the production of descriptive material. Besides, by comparing and transforming models, feedback is captured and provided to the intelligent system to feed its learning process. Therefore, our original MDE-based approach puts the end-user "in the loop" by giving her/him direct access to the handling of internal application models.
End-user storytelling with a CIDOC CRM-based semantic wiki
Vincent Ribaud (reviewed by Patrick Le Bœuf)
This paper presents the current state of an experiment intended to use the CIDOC CRM as a knowledge representation language. STEM freshers freely form groups of 2 to 4 members and choose a theme; each group has to model, structure, write and present a story within a web-hosted semantic wiki. The main part of the CIDOC CRM is used as an ontological core on which students hang the classes and properties of the domain related to the story. The hypothesis is that, once the entry ticket has been paid, the CRM guides the end-user in a fairly natural manner for reading - and writing - the story. The intermediary assessment of the wikis allowed us to detect confusion between the immaterial work and the (physical) realisation of the work, as well as difficulty in adopting event-centred modelling. Final assessment results are satisfactory but could be improved. Some groups did not acquire modelling abilities - although this is a central issue in a semantic web course. Results also indicate that the scope of the course (semantic web) is somewhat too ambitious. This experiment was performed in order to attract students to computer science studies, but it did not produce the expected results. It did, however, succeed in arousing student interest, and it may contribute to the dissemination of ontologies and to making the CIDOC CRM widespread.
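Event-centred CRM modelling means that statements hang off events, and that the immaterial work is kept distinct from its physical realisation, exactly the distinction students confused. The following sketch shows the shape of such a model with invented story content; triples are plain tuples here, whereas the wiki would store them as semantic annotations:

```python
# CRM-style triples (invented example): a creation event links the author,
# the immaterial work, and - via a separate carrier - its physical realisation.
triples = [
    ("Creation_of_Moby_Dick", "rdf:type", "crm:E65_Creation"),
    ("Creation_of_Moby_Dick", "crm:P14_carried_out_by", "Herman_Melville"),
    ("Creation_of_Moby_Dick", "crm:P94_has_created", "Moby_Dick_Work"),
    ("Moby_Dick_Work", "rdf:type", "crm:E73_Information_Object"),   # immaterial work
    ("First_Edition_1851", "rdf:type", "crm:E24_Physical_Thing"),   # realisation
    ("First_Edition_1851", "crm:P128_carries", "Moby_Dick_Work"),
]

def objects(subject, predicate):
    """All objects of triples matching (subject, predicate, ?)."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("Creation_of_Moby_Dick", "crm:P14_carried_out_by"))  # ['Herman_Melville']
```

Reading the story then amounts to following properties out of events, which is the "fairly natural" navigation the hypothesis refers to.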
Keywords: End-User Development, Web Augmentation, Web Adaptation,
Nowadays, many applications which would formerly have been designed for the desktop, such as calendars, travel reservation systems, purchasing systems, library card catalogs, map viewers or even games, have made the transition to the Web, largely successfully. Many Web sites are created every day to help users find information and/or to provide the services they need. However, there are cases where, rather than a new Web site, what users need is to combine information or services that are already available but scattered across the WWW. Some examples follow: (1) users who want additional links on a Web page to improve navigation (for example, to create a personalized menu that gathers multiple personal interests in one location); (2) users who need to integrate contents from diverse Web sites (for example, to include a Google map into a Web page that originally shows addresses only as flat text) in order to identify the distance from their own location more easily; or (3) users who simply want to remove content from Web pages (such as contact details they consider irrelevant) to improve reading and selection performance, as identified by Hick's law. Because these needs might be perceived as idiosyncratic, volatile (being short-lived or occasional) or dissenting from the interests of the Web site, they might well not be considered (or even known) by Web developers. This is because Web sites are, by definition, designed for the masses, and at design time only a few users are available.
On the other hand, there are many benchmarks for performance evaluation from a system perspective, like the Berlin SPARQL Benchmark (BSBM) for evaluating SPARQL query engines, but they do not take the end-user perspective into account.
In this paper, we present a benchmark for semantic data (graphical) user interfaces, with a set of user tasks to be completed and metrics to measure the performance of the analyzed interfaces at different levels of granularity. We provide a benchmark not just for semantic-web data exploration, but for structured data more generally. This also makes it possible to compare tools available in more mature domains like relational databases. It is well known that semantic web data can be squeezed into a traditional relational (SQL) database, and vice versa. Since the GUI systems we consider are aimed at end users, they generally isolate the user from the details of the underlying storage mechanism or query engine. Thus, these interfaces can in theory operate over either type of data (modulo some simple-matter-of-programming data transformations). We also hope to further motivate research in semantic data exploration that goes beyond what is possible with other, less rich data models.
This deliverable describes the M24 release of the End-user applications for knowledge practices software v2.0.0. It covers the technical development performed until M24 (January 2008) within WP6, according to the Description of Work 2.1 and the D6.4 M21 specification of end-user applications.
basis of the discussion of downscaling skill to meet these end-user needs in section 6.
[94] Any validation method ultimately relies upon the quality and quantity of observational data. Typical quality problems are inhomogeneities, outliers, and biases due to wind-induced undercatch (i.e., precipitation is underestimated by the rain gage because a nonnegligible amount of rain is blown over the gage). Inhomogeneities may induce spurious trends [e.g., Yang et al., 2006], increase uncertainty, and may potentially weaken predictor/predictand relationships. Estimates of extreme events are particularly sensitive to outliers and inhomogeneities. For an appropriate signal-to-noise ratio, sufficiently long time series are needed, in particular to reliably estimate extremes and infer trends. The validation of how natural variability is represented is limited by the length of observational records, typically a few decades. Furthermore, a sparse rain gage network limits the possibility of validation or may even render it impossible. For this reason, high-resolution data sets have been developed in some regions [e.g., Haylock et al., 2008]. For an impression of the global rain gage network, see Figure 4. Data are particularly sparse in the high latitudes, deserts, central Asian mountain ranges, and large parts of South America.
First, a preliminary research paper was accepted at the IEEE-ISWM Mensura conference in 2014, where we were able to make an initial application of Bautista's performance measurement framework to test data taken from data center logs. With this first laboratory experiment, we were able to measure the time behavior of production servers during a specific time frame. These early findings suggested that the sub-steps presented in section 1.5 would be required to complete this research: 1) we will need to further study whether the end-user experience can be related directly to the LLDM measures; 2) the base (low-level) measures captured in the logs will have to be mapped to the framework's performance characteristics, which is the intended topic of the next paper; and finally 3) the measures will be validated using the validation method presented in this report. It is important to note that there were significant difficulties with the use of the proposed framework, specifically the challenge of mapping the base measures onto the quality characteristics defined by the author. A set of improvements to the framework is being discussed and will be part of another research paper.
collaborative tailoring via customization files and email sharing an effective mechanism to foster a culture of end-user tailoring.
Web search engines return a result page with a list of Web documents (URIs) matching the search criteria. Results are usually presented as a list of page titles and one or two sentences taken from the content. Recent advances in Web search interfaces provide, for a predefined set of element types such as movies, recipes, and addresses, rich snippets that help the user recognize relevant features of each result element (e.g., movies playing soon or user opinions for a given movie). In some cases, these rich snippets include the very data the user is looking for. They are possible because Web site creators include structured data in Web pages that computers can recognize and interpret, and that can be used to create applications. Viewing the Web as a repository of structured, interconnected data that can be queried is the ultimate goal of the Semantic Web. However, end users do not always have the means to exploit or add information to the Semantic Web. The tools presented here let end-users without Web development training extract and use structured data from any Web site.
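One common carrier of such page-embedded structured data is JSON-LD inside `<script type="application/ld+json">` tags. The sketch below (the sample page and movie record are invented) shows how little machinery is needed to pull it out of a page:

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collects every JSON-LD block found in an HTML page."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buf = ""
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf += data  # script bodies may arrive in several chunks

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self.items.append(json.loads(self._buf))
            self._buf, self._in_jsonld = "", False

page = """<html><head>
<script type="application/ld+json">
{"@type": "Movie", "name": "Moby Dick", "aggregateRating": {"ratingValue": 7.8}}
</script>
</head><body>A review site...</body></html>"""

extractor = JsonLdExtractor()
extractor.feed(page)
print(extractor.items[0]["name"])  # Moby Dick
```

The same annotations that power a search engine's rich snippets are thus directly reusable by end-user tools, which is the opportunity the paragraph above describes.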
The transition of personal information management (PIM) tools off the desktop to the Web presents an opportunity to augment these tools with capabilities provided by the wealth of real-time information readily available. In this paper, we describe a next-generation personal information assistance engine that lets end-users delegate to it various simple context- and activity-reactive tasks and reminders. Our system, Atomate, treats RSS/ATOM feeds from social networking and life-tracking sites as sensor streams, integrating information from such feeds into a simple unified RDF world model representing people, places and things and their time-varying states and activities. Combined with other information sources on the web, including the user's online calendar, web-based e-mail client, news feeds and messaging services, Atomate can be made to automatically carry out a variety of simple tasks for the user, ranging from context-aware filtering and messaging, to sharing and social coordination actions. Atomate's open architecture and world model easily accommodate new information sources and actions via the addition of feeds and web services. To make routine use of the system easy for non-programmers, Atomate provides a constrained-input natural language interface (CNLI) for behavior specification, and a direct-manipulation interface for inspecting and updating its world model.
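Behaviors of this kind can be pictured as condition-action rules evaluated over a world model whose state is updated by feed events. The toy engine below is not Atomate's implementation, just a sketch of the pattern with invented entities and a single rule:

```python
# World model: entities with time-varying properties (illustrative)
world = {"alice": {"type": "Person", "location": "office"}}
fired = []

def remind(msg):
    """Stand-in action; a real system would send a message or notification."""
    fired.append(msg)

# "When Alice arrives at the gym, remind her to message Bob"
rules = [
    (lambda w: w["alice"]["location"] == "gym",
     lambda: remind("Alice: message Bob")),
]

def update(entity, prop, value):
    """Apply a sensed update (e.g., from a feed), then re-evaluate every rule."""
    world[entity][prop] = value
    for cond, action in rules:
        if cond(world):
            action()

update("alice", "location", "gym")
print(fired)  # ['Alice: message Bob']
```

A production engine would additionally track which rules already fired for a given state change, so a reminder is not repeated on every subsequent update.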
mechanism for paginating the results. We propose to abstract all these UI components (DOM elements) by wrapping them with objects that together form a Search Service. These Search Services can then emulate the user's behavior and retrieve the corresponding results for a particular query specification. To provide an API-like mechanism, results are interpreted not just as DOM elements but also as abstractions of the underlying domain objects. For example, if the search engine being abstracted is DBLP, results may be wrapped by the Paper domain object, which could be populated with domain properties such as title or authors. Moreover, this object may also have properties whose values are taken from another DOM (obtained from another URL), such as a bibtex property; we explain this in Sect. 3.3. All these concepts are materialized at the bottom of Fig. 6. As we will show later, it is convenient to provide a semantic layer on top of the search results, because it allows the creation of visualizers that go beyond presenting the raw results.
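The shape of such a wrapper can be sketched as follows. All names are illustrative, and a canned result table stands in for the page: a real Search Service would fill the site's search box, submit the form, and scrape the result nodes before lifting them into domain objects.

```python
class Paper:
    """Domain object lifted from raw DBLP result nodes (illustrative)."""
    def __init__(self, title, authors):
        self.title = title
        self.authors = authors

class DblpSearchService:
    """Emulates the user's interaction with the search UI behind a query() API."""
    # Canned results replacing the scraped DOM nodes in this sketch
    _RESULTS = {
        "end-user development": [
            {"title": "A Framework for Web Augmentation", "authors": ["A. Author"]},
        ],
    }

    def query(self, terms):
        """Return results as domain objects rather than DOM elements."""
        return [Paper(r["title"], r["authors"])
                for r in self._RESULTS.get(terms, [])]

papers = DblpSearchService().query("end-user development")
print(papers[0].title)  # A Framework for Web Augmentation
```

Because callers see `Paper` objects rather than DOM nodes, visualizers can be written against the domain vocabulary, which is the point of the semantic layer mentioned above.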