Tabletop tangible maps and diagrams for visually impaired users

In this part, we provided an overview of the main theories and characteristics of (tabletop) TUIs. In particular, we introduced the MCRit model [305], which emphasizes the fact that TUIs are built on a digital model that is rendered using both intangible and tangible representations. We also described the taxonomy proposed by Holmquist et al. [102] for the different roles of tangible objects: containers, tools and tokens. TUIs have been studied from various perspectives, and over the years a number of important characteristics have been identified. We presented some of them, along with the benefits of TUIs for collaborative, learning and spatial applications.

We also described the most widespread technologies for implementing tabletop TUIs. Object tracking can be achieved with internal or external sensing: although internal sensing is very promising, systems based on external sensing are the most common, probably due to their simplicity. The most common approach is to place a camera below the tabletop and track fiducials attached underneath the tangible objects. For finger tracking, a number of solutions exist and can be broadly classified into camera-based and electric-based technologies. Along with traditional approaches (a camera placed below a surface illuminated with infrared LEDs), newer technologies such as infrared frames and capacitive foils are now available on the market and provide an easy way to implement multitouch surfaces.

Finally, we described the field of actuated tabletop TUIs: actuation can be used to preserve consistency between digital and tangible information, but it can also make TUIs more dynamic, and therefore compensate for their limited scalability.
We discussed two main approaches for implementing actuated tabletop TUIs (electromagnetic surfaces and mobile-based interfaces), and highlighted the fact that systems based on robots are particularly promising, notably in the context of the development of Swarm User Interfaces. Having described the properties and implementation technologies of tabletop TUIs, we address in the next and last part of this chapter the design of usable interaction techniques, tangible objects and feedback for tabletop tangible maps and diagrams for visually impaired users.
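The external-sensing pipeline summarized above (a camera under the table reporting fiducial detections) amounts to keeping the digital model in sync with per-frame detections. The sketch below is a minimal illustration under assumed interfaces, not any cited system's implementation; real trackers such as reacTIVision typically deliver comparable (id, x, y, angle) events over the TUIO protocol:

```python
from dataclasses import dataclass

@dataclass
class TangibleObject:
    fiducial_id: int
    x: float = 0.0
    y: float = 0.0
    angle: float = 0.0
    on_table: bool = False

class FiducialRegistry:
    """Mirrors the digital model of a tabletop TUI: each fiducial ID is
    bound to one tangible object whose pose is refreshed every frame."""

    def __init__(self, known_ids):
        self.objects = {i: TangibleObject(i) for i in known_ids}

    def update(self, detections):
        """detections: iterable of (fiducial_id, x, y, angle) for one
        camera frame. Objects absent from the frame are marked off-table."""
        seen = set()
        for fid, x, y, angle in detections:
            if fid in self.objects:
                obj = self.objects[fid]
                obj.x, obj.y, obj.angle = x, y, angle
                obj.on_table = True
                seen.add(fid)
        for fid, obj in self.objects.items():
            if fid not in seen:
                obj.on_table = False

# One object placed on the table, one still off it.
registry = FiducialRegistry(known_ids=[1, 2])
registry.update([(1, 0.25, 0.75, 90.0)])
```

Marking unseen objects as off-table is one way to preserve consistency between the tangible and digital states when a user lifts an object.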

Tangible Reels: Construction and Exploration of Tangible Maps by Visually Impaired Users

RELATED WORK ON INTERACTIVE MAPS

To alleviate the aforementioned issues, different approaches relying on new technologies have been used. Zeng and Weber [31] classified the different types of interactive maps into four categories, depending on the device and interaction used. Virtual acoustic maps use verbal and non-verbal audio output to render geographical data. For example, Zhao et al. [33] presented thematic maps explored with a keyboard or a tablet and producing string pitch and spatial sound. Virtual tactile (or haptic) maps most often rely on a force-feedback device. For instance, SeaTouch [26] enables visually impaired users to explore a maritime environment relying on haptic feedback, sonification and speech synthesis. BATS [20] and HapticRiaMaps [11] are other examples of virtual tactile maps using force-feedback devices. TouchOver map [22] provides visually impaired users with a basic overview of a map layout displayed on a mobile device through vibratory and vocal feedback. Kane et al. [13] described complementary non-visual interaction techniques that allow finding a target on large touch screens. Audio-tactile maps consist of a raised-line paper map placed over a touch-sensitive screen that provides audio descriptions of tactile elements when they are touched (see [19] and [30]). In contrast to virtual acoustic and tactile maps, these maps provide multiple points of contact (potentially all the fingers), and have proved usable for learning spatial configurations [2]. Finally, Braille maps displayed on refreshable displays are a promising approach to creating interactive maps. Zeng et al. [32] used a BrailleDis 9000 tablet device consisting of a matrix of 120×60 pins that can be moved up or down. Their prototype allowed visually impaired users to explore, annotate and zoom in or out. Similarly, Schmitz and Ertl [23] used a HyperBraille to present different types of maps representing buildings or large outdoor areas. The main drawback of the virtual maps

Designing spatial tangible interfaces for visually impaired users: why and how?

• Ease of editing. All the prototypes aim to allow editing and manipulation of the data, but the papers do not indicate to what extent the content is dynamic and can be modified "on the fly". For example, in TPath, the authors indicate that the user could download a map online and then build part of that map. However, the time needed to reconstruct the map, as well as the usability of the system, was not evaluated. In TReels, the time needed to place an object (24 seconds on average) and to reconstruct a map was measured. The difficulty of updating a tangible representation has already been identified as one of the limitations of tangible interfaces: Shaer et al. [34] speak of a scalability problem, whereas Edge et al. [10] use the term bulkiness.

Automatic derivation of on-demand tactile maps for visually impaired people: first experiments and research agenda

2 Related Work

2.1 On Demand Mapping for Visually Impaired People

The usual first step of map design is to understand the needs of the map users. In this specific case, there are two types of users for on-demand tactile maps: the transcribers or mobility instructors who usually create tactile maps (and teach visually impaired people how to use them) and who could benefit from automated processes [7], and of course the visually impaired people who use the maps to navigate in and understand geographic space. Each visually impaired person has specific needs for a tactile map [7], which underscores the need for on-demand mapping. Some transcribers even separate the map into two or three different map objects in order to provide all the required information (e.g. roads and POIs in one, relief in the other) [7]. Therefore, increasing automation and reducing the time spent on map design would be greatly helpful. When transcribers and mobility instructors are interviewed, spacing is clearly ranked as one of the main variables for tactile map design [10], which also pleads for more automated cartography. As interactive interfaces broaden into multi-modal interfaces [11], automated processing will become mandatory, just as multi-scale web mapping is accelerating the need for automated topographic cartography. Finally, as many printing methods exist, it is necessary to know whether visually impaired people prefer some techniques over others. Past experiments showed that rough paper and microcapsule paper are preferred among classical techniques (the others being braillon, smooth paper, rough plastic, smooth plastic, and aluminium), and also perform better on search tasks [12]. The preference is based on the feeling that it is easier to move one's fingers across an object in relief.
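The layer-splitting practice mentioned above, where transcribers separate a map into two or three map objects, lends itself to automation with a simple tag-based filter. The sketch below is illustrative only: the layer names and tag predicates are assumptions loosely modelled on OpenStreetMap-style tags, not an established transcription standard:

```python
# Hypothetical feature records in the spirit of OpenStreetMap tags:
# each feature is a dict with a "tags" mapping, e.g. {"highway": "primary"}.
TACTILE_LAYERS = {
    "roads":     lambda tags: "highway" in tags,
    "buildings": lambda tags: "building" in tags,
    "obstacles": lambda tags: tags.get("barrier") is not None,
}

def split_into_layers(features):
    """Sort geographic features into separate tactile layers, one per
    map object a transcriber would print. A feature may land in several
    layers if several predicates match."""
    layers = {name: [] for name in TACTILE_LAYERS}
    for feat in features:
        for name, keep in TACTILE_LAYERS.items():
            if keep(feat["tags"]):
                layers[name].append(feat)
    return layers

features = [
    {"id": 1, "tags": {"highway": "residential", "name": "Rue Victor Hugo"}},
    {"id": 2, "tags": {"building": "yes"}},
    {"id": 3, "tags": {"barrier": "bollard"}},
]
layers = split_into_layers(features)
```

Each resulting layer could then be rendered and printed as its own tactile sheet, keeping the spacing between symbols manageable.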

Personal Shopping Assistance and Navigator System for Visually Impaired People

in unfamiliar environments are crucial for increasing the autonomy of visually impaired people. We present a hardware/software platform, based on Android OS, integrating outdoor/indoor navigation, visual context awareness and Augmented Reality (AR) to demonstrate how visually impaired people could be aided by new technologies to perform ordinary activities, such as going to a mall to buy a specific product. Indoor navigation, in particular, poses significant challenges. The major problem is that the signals used by outdoor locating technologies (e.g. GPS) are often inadequate in this environment. Some available solutions exploit the Earth's magnetic field [9], using the magnetometers available in smartphones and relying on maps of magnetic fields; other solutions rely on the identification of nearby WiFi access points [28], [10], but do not provide sufficient accuracy to discriminate between individual rooms in a building. An alternative approach is Pedestrian Dead Reckoning (PDR) [24], [21]: an inertial navigation system that uses step information to estimate the position of the pedestrian. The proposed system integrates a PDR-based system with computer vision algorithms to help people be independent both indoors and outdoors. The implemented Pedestrian Dead Reckoning uses advanced map-matching algorithms to refine the user's trajectory as well as her/his orientation, resulting in a significant enhancement of the final estimated position. Computer vision nowadays provides several methods for the recognition of specific objects; users with visual disabilities could benefit from this technology but, unfortunately, success often relies on the user being able to point the camera toward the object of interest, which must cover the major part of the image. If the object is embedded in a complex scene and its apparent size is only a few pixels, the available recognition apps usually fail.

The proposed visual target detectors can detect and localize an object of interest inside a complex scene, and also estimate its distance from the user, which is particularly helpful for guiding a visually impaired person.
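The step-and-heading update at the core of PDR can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes step events and compass headings have already been extracted from the inertial sensors, and ignores map matching:

```python
import math

def dead_reckon(start, step_length, headings_deg):
    """Pedestrian Dead Reckoning: advance one stride per detected step
    along the current compass heading (0 deg = north, clockwise).
    Returns the full estimated path, starting at `start`."""
    x, y = start
    path = [(x, y)]
    for h in headings_deg:
        rad = math.radians(h)
        x += step_length * math.sin(rad)  # east component of the stride
        y += step_length * math.cos(rad)  # north component of the stride
        path.append((x, y))
    return path

# Four 0.7 m steps heading east, then four heading north.
path = dead_reckon((0.0, 0.0), 0.7, [90, 90, 90, 90, 0, 0, 0, 0])
```

Because each stride's error accumulates, real systems correct the raw path with map matching, as the excerpt above describes.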

Map design for visually impaired people: past, present, and future research

Further development

Participatory design is an iterative process (ISO - International Organization for Standardization, 2010), and users' assessment of a prototype provides the keys to revising the design of the interactive map prototype in order to improve usability. One aspect worth considering relates to the strategies used by blind users to read maps. Despite several studies in experimental psychology, the specific nature of these exploratory modes and their relation to performance level in spatial cognition remains obscure (Thinus-Blanc & Gaunet, 1997). Addressing these issues would be important for the design of accessible user interfaces. In this perspective, we developed Kintouch, a prototype that tracks finger movements by integrating data from the Microsoft Kinect camera and a multi-touch table (Brock, Lebaz, et al., 2012). It registers the location of hands and digits during the exploration of a tactile map or image and can thus help analyze haptic exploration strategies much more easily than classical video observation. Our short-term objective is to use these observations in order to adapt interaction techniques and thus make the prototype even more accessible and usable.
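One way to combine the two sensor streams Kintouch integrates (touch points from the table, hand and finger positions from the Kinect) is nearest-neighbour matching in the table plane. The sketch below is hypothetical and not the published Kintouch algorithm; the distance threshold and coordinate units are assumptions:

```python
import math

def label_touches(touch_points, fingertips, max_dist=40.0):
    """Assign each 2-D touch point the hand label of the nearest Kinect
    fingertip estimate, provided it lies within max_dist (same units as
    the coordinates, e.g. millimetres in the table plane).
    Touches with no fingertip nearby are labelled None."""
    labels = []
    for tx, ty in touch_points:
        best, best_d = None, max_dist
        for fx, fy, hand in fingertips:
            d = math.hypot(tx - fx, ty - fy)
            if d <= best_d:
                best, best_d = hand, d
        labels.append(best)
    return labels

fingertips = [(100.0, 100.0, "left"), (300.0, 120.0, "right")]
labels = label_touches([(105.0, 98.0), (290.0, 130.0), (600.0, 600.0)],
                       fingertips)
```

Labelling each contact with a hand (and potentially a finger) is what makes it possible to reconstruct exploration strategies rather than just a cloud of anonymous touch points.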

Kin'touch: understanding how visually impaired people explore tactile maps

To conclude, we have shown that it is possible to combine multi-touch and Kinect sensing to better capture users' hand motions on a surface. We applied this system to a map exploration program for visually impaired users. Beyond the exploration of tactile maps, our combined system offers an interesting and novel apparatus for learning how visually impaired users read a variety of pictures with their sense of touch. In addition to providing future guidelines for teaching efficient picture reading to visually impaired people and for designing interaction techniques, our system might also be used as a training device itself. It might assist visually impaired people in learning how to scan pictures successfully by providing online corrective feedback during manual exploration (e.g., auditory feedback could help users modulate their finger movements to optimal

BotMap: Non-Visual Panning and Zooming with an Actuated Tabletop Tangible Interface

well as ambient displays. In this study, we used robots to display physical and dynamic maps: each robot represents a landmark and can move to a new position whenever the digital map is updated.

2.3 Panning and Zooming Interfaces

According to Hornbaek et al. [21], “panning changes the area of the information space that is visible, and zooming changes the scale at which the information space is viewed.” The visible area of the canvas is often referred to as the view, and it is displayed inside the viewport [27]. For panning, two conceptual models can be used [2]: users can either move the canvas directly (e.g., using “grab/touch and drag” techniques) or move the viewport over the canvas (by using navigation buttons or by moving a field-of-view box). Panning can be continuous (in which case the user can move the canvas or the viewport in any direction) or constrained/discrete (in which case the user can move the canvas or the viewport a predefined distance in a predefined set of directions). For zooming, sighted users can usually select a scale [27] by moving a slider along a vertical or horizontal axis, by pressing zoom-in or zoom-out buttons, or by using the mouse wheel. Two main types of zoom exist [3, 21]: geometric (all elements are always displayed, whatever the scale, but the size of the icons depends on the chosen scale) and semantic (different elements are displayed at different scales; for example, the names of buildings or rivers only appear beyond a certain scale). Semantic zooming is the more common in online maps. As an example, OpenStreetMap provides both discrete (with on-screen buttons and key presses) and continuous (with the mouse) panning and zooming functions, and relies on semantic zooming. Users can zoom in and out in order to explore continents, countries, wide areas, or villages. In OpenStreetMap, there are 20 zoom levels, corresponding to 20 predefined scales. When panning and zooming, users can experience “desert fog” if the part of the map displayed does not contain any elements, and they may feel lost or disorientated [29].
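The discrete pan-and-zoom model described above can be captured in a small viewport class. This sketch is illustrative only: the canvas size, the pan step fraction and the clamping policy are assumptions, not values taken from any of the cited systems; only the 20 doubling zoom levels echo the OpenStreetMap convention:

```python
class Viewport:
    """Discrete pan & zoom over a map canvas: 20 zoom levels, each
    level doubling the scale, and constrained panning that moves the
    viewport a predefined fraction of the visible area."""

    def __init__(self, canvas_size=360.0, levels=20):
        self.canvas = canvas_size       # map width in abstract units
        self.levels = levels
        self.level = 0                  # level 0 shows the whole canvas
        self.cx = canvas_size / 2       # viewport centre
        self.cy = canvas_size / 2

    def view_size(self):
        return self.canvas / (2 ** self.level)

    def zoom(self, delta):
        self.level = max(0, min(self.levels - 1, self.level + delta))

    def pan(self, dx_steps, dy_steps, fraction=0.5):
        """Each step shifts the centre by `fraction` of the current
        view size; the centre is clamped so the view stays on-canvas."""
        step = self.view_size() * fraction
        half = self.view_size() / 2
        self.cx = max(half, min(self.canvas - half, self.cx + dx_steps * step))
        self.cy = max(half, min(self.canvas - half, self.cy + dy_steps * step))

vp = Viewport()
vp.zoom(+2)          # view now covers 1/4 of the canvas width
vp.pan(+1, 0)        # shift half a view to the east
```

Making the pan step a fixed fraction of the view is what keeps discrete panning predictable for a non-visual user: one command always moves the viewport by the same share of what is currently "under the fingers".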

Interactivity Improves Usability of Geographic Maps for Visually Impaired People

outputs. The blind users who tested the system were thus not able to understand which finger caused sound outputs. Similarly, McGookin et al. (2008) observed accidental speech output for single-tap interaction. As we wanted to preserve natural two-hand exploration, we looked for alternative touch inputs that would be unlikely to trigger events by chance. Kane, Wobbrock, and Ladner (2011) identified double taps as gestures that are usable by blind people. Multiple-tap interaction was also used in the Talking TMAP project (Miele et al., 2006) and by Senette et al. (2013). We therefore used a double-tap technique with a 700 ms delay between two taps: the standard speed for mouse double clicks in the Windows operating system, which is 500 ms, proved to be too short. The double tap ended right after the second tap, while the digit was still touching the surface. This allowed the user to keep the tapping finger on the interactive map element that was selected. Pretests showed that this double-tap technique was efficient and more natural for visually impaired users. However, a few unintended double taps still occurred, mainly because of the palms of the hands resting on the map during exploration (as
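The 700 ms double-tap criterion can be sketched as a simple filter over time-stamped tap events. The spatial tolerance used here to reject a second contact far from the first (such as a palm resting elsewhere on the map) is an assumption for illustration, not a value from the study:

```python
import math

DOUBLE_TAP_WINDOW = 0.7   # seconds, as in the study (700 ms)
MAX_TAP_DISTANCE = 20.0   # pixels; illustrative tolerance between taps

def find_double_taps(taps):
    """taps: time-ordered list of (t, x, y) single-tap events.
    Returns indices of taps that complete a double tap: the second tap
    must arrive within DOUBLE_TAP_WINDOW of the first and land close
    to it, filtering out slow repeats and distant accidental contacts."""
    hits = []
    i = 0
    while i + 1 < len(taps):
        t0, x0, y0 = taps[i]
        t1, x1, y1 = taps[i + 1]
        if (t1 - t0 <= DOUBLE_TAP_WINDOW
                and math.hypot(x1 - x0, y1 - y0) <= MAX_TAP_DISTANCE):
            hits.append(i + 1)
            i += 2          # consume both taps of the pair
        else:
            i += 1
    return hits

taps = [(0.00, 50, 50), (0.40, 52, 49),   # double tap (400 ms apart)
        (2.00, 50, 50), (2.90, 50, 50)]   # too slow (900 ms apart)
hits = find_double_taps(taps)
```

Ending the gesture at the second touch-down, as the excerpt describes, is what lets the user keep the finger on the selected map element while the audio description plays.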

BotMap: Non-Visual Panning and Zooming with an Actuated Tabletop Tangible Interface

9 CONCLUSION

We described the design, implementation, and evaluation of an actuated tabletop TUI, named BotMap, that enables VI users to independently explore “pan & zoom” maps. Each landmark is represented by a robot, and whenever the map needs to be refreshed, the robots move to their new positions. To interact with the map, we proposed two interfaces, the Keyboard and the Sliders, as well as a number of voice commands and navigation aids. We conducted three user studies. The first, conducted with blindfolded participants, demonstrated that both interfaces can be used to perform panning and zooming operations of various complexities without vision. The second study, conducted with VI users, demonstrated that users can understand maps whose exploration requires panning and zooming, and that they were able to pan and zoom, even though some felt disorientated on occasion and found the task cognitively demanding. We discussed a number of factors that may explain differences in navigation and comprehension (strategies of memorization, training, use of discrete vs. continuous controls, ability to build map-like mental representations of space). In the final study, participants had to plan a journey through Africa using four navigation aids. This study showed the potential of these aids to facilitate navigation and gave interesting insights into the design of actuated tabletop TUIs for VI users. We concluded by discussing to what extent the prototype could be improved, notably in terms of implementation, and proposed a number of perspectives for further research on non-visual panning and zooming.

We suggest that the information related to the design, development, and evaluation of BotMap, as well as the perspectives that we identified, will facilitate and encourage the design and deployment of actuated tangible “pan & zoom” maps for VI users and, ultimately, empower VI people by giving them the opportunity to independently explore and interact with complex data in the same way that sighted users do.

Making Gestural Interaction Accessible to Visually Impaired People

4 Discussion and Conclusion

In this paper, we presented a state of the art of non-visual gestural interaction. Analysis of the literature shows different approaches to making gestural interaction accessible. It also reveals challenges that need to be addressed in the future, such as unintended touch input. Furthermore, we showed an example of how gestural interaction techniques can be used in interactive maps for visually impaired people. We verified that it was possible for a blind user to access distances and different types of information. Furthermore, our observations suggest that gestural interactions should be picked carefully: gestures that are easy for sighted people may be less obvious for visually impaired people (e.g., the lasso). This is in line with previous studies on accessible gestural interaction [16]. The present work only presented a proof-of-concept; a more advanced design process would be necessary, as well as extended evaluations with several visually impaired users. In the future, it would be interesting to go beyond the basic gestural interaction provided by the API and to design specific gestural interactions. To sum up, we believe that thoroughly designed gestural interaction would open up new interaction possibilities in research and commercial prototypes. However, it remains a challenge for the research domain of multi-touch to find interaction techniques that are usable without sight. We suggest that future studies should address the design of specific gestures for visually impaired people by including them throughout the design process, from the creation of ideas to the evaluation.

Acknowledgments. We thank Alexis Paoleschi, who developed the prototype presented in this paper, and our blind participants.

Quick-Glance and In-Depth exploration of a tabletop map for visually impaired people

Related work

Interactive maps for visually impaired people

In past years, interactive maps dedicated to visually impaired people have been developed based on different interactive technologies such as touch screens, haptic feedback, speech recognition, mouse, keyboard and/or tangible interaction [2]. Many prototypes are based on different types of touch-sensitive devices combined with audio feedback. We focus here on those interactive maps providing touch-based interaction. Touch screens the size of a regular computer screen have been used in several prototypes [3,7,14]. Fewer projects were based on large tabletops [5]. Recently, the use of mobile devices, such as smartphones and tablets, has emerged [9,15]. In some accessible map prototypes, the touch-sensitive surface was combined with raised-line map overlays [7] or with vibro-tactile stimulation [9,15] in order to add tactile information that is not provided by the touch screen per se.

From open geographical data to tangible maps: improving the accessibility of maps for visually impaired people

Several technologies are promising for creating a refreshable display suitable for interactive maps. One of the rare commercially available solutions (HyperBraille) consists of a matrix of 120×60 pins that can be moved up or down with piezoelectric crystals in order to display a tactile image. Zeng et al. (2014) used this device to enable blind users to explore, annotate and even zoom in or out. However, such devices are relatively cumbersome and, above all, very expensive. A range of other technologies, including motor-based (Alexander et al., 2012), pneumatic (Yao et al., 2013) and electromagnetic (Frisken-Gibson et al., 1987; Nakatani et al., 2003) actuators, as well as hydraulic devices (Goulthorpe et al., 2001), are very promising for providing deformable screens. However, they are all at preliminary stages of development and are not available. Currently, raised-line maps augmented with interactive auditory messages appear to be the most usable technology for providing visually impaired users with accessible maps. Indeed, complex spatial graphics can be explored with both hands and all the fingers. In addition, the device is cheap in comparison to refreshable displays. However, in order to enhance usability, it is essential to provide users with tools that facilitate the production of raised-line maps together with the corresponding interactive digital content. We suggest that these tools should rely on the many sources of spatial (including geographic) data that are free to use. OpenStreetMap, for example, is a collaborative project that aims at creating a free and editable map of the world. These maps can be annotated with a variety of tags. Rice et al. (2013), for example, described several techniques to enhance map accessibility by adding crowdsourced information about temporary obstacles.

From open geographical data to tangible maps: improving the accessibility of maps for visually impaired people

4. TOWARDS TANGIBLE MAPS

The main issue with interactive audio-tactile maps is that once the map is printed, its physical form cannot be edited. According to Ishii et al. (1997), tangible user interfaces “augment the real physical world by coupling digital information to everyday physical objects and environments”. In our ongoing work, we suggest that a visually impaired user may add or (re)move tangible objects that represent areas, lines or points within the map. These objects, enhanced with non-visual interaction techniques, may improve access to spatial digital data. Using tangible objects may prove particularly relevant in supporting the (automatic) production of tangible maps. Tangible map prototypes were designed alongside the very first tangible user interfaces. GeoSpace is an interactive map of the MIT Campus designed for sighted users, where physical objects are used to pan or zoom by forcing the digital map to reposition itself (Ishii and Ullmer, 1997). Urp allows sighted urban planners to simulate wind flow and sunlight (Underkoffler and Ishii, 1999); it has been used to observe their consequences on models of buildings placed on the tabletop. With the MouseHaus Table, users can simulate several arrangements of urban elements such as streets and buildings (Huang et al., 2003): paper rectangles placed on the device help visualize the behaviour of pedestrians around the buildings. Similarly, ColorTable is a tool that helps urban planners and stakeholders discuss urban changes (Maquil et al., 2008); in that project, a mixed-reality scene was created from the original map and the tangible objects placed above it. Tangible interfaces designed for visually impaired users are rare; to our knowledge there is no prototype that supports the production and editing of a geographic map. Schneider et al. (2000) developed a prototype in which vocal instructions guide a visually impaired user in placing rectangular magnets on a magnetic board until an entire route is built. This system is adapted to the creation of an itinerary but does not support the construction of complex maps. In addition, it has not been formally evaluated. McGookin et al. (2010) were the first to develop a prototype for the construction and exploration of tangible graphs by visually impaired users. In this prototype, a series of identical objects placed on a tabletop represented one series of data in a line plot. The evaluation showed that the prototype was efficient for the construction and comprehension of mathematical graphs by visually impaired users.

"DIY" Prototyping of Teaching Materials for Visually Impaired Children: Usage and Satisfaction of Professionals

Abstract. Professionals working with visually impaired children (i.e. specialist teachers and educators, Orientation and Mobility trainers, psychologists, etc.) have to create their own teaching materials. Indeed, few adapted materials exist, and those that do exist do not fully meet their needs. Thus, rapid prototyping tools and methods could help them design and make materials adapted to teaching visually impaired students. In this study, we first designed a blog enabling professionals to create their own teaching materials. Then, we set up a challenge with five teams, each including one professional of visual impairment and students in computer science. The aim of each team was to design and make a teaching material fitting the needs of the professional, based on handcrafting, 3D printing tools and cheap micro-controllers. After they had used their materials with visually impaired students, we interviewed the professionals in order to evaluate usage and satisfaction. The professionals reported that the materials were easy to make and valuable for teaching visually impaired students. They also reported that DIY prototyping, based on 3D printing and cheap micro-controllers, enables them to create their own teaching materials and hence accurately meet unanswered needs. Importantly, they would advise their colleagues to use this method and these new tools. However, they consider that they would need assistance to create new materials on their own.

Evaluation of a gaze-controlled vision enhancement system for reading in visually impaired people

In addition, the ideas underlying condition 1 might be implemented in very different kinds of setups. For instance, if patients want to use a CCTV-like system, i.e. at a relatively short viewing distance, then gaze control could be achieved with a remote eyetracker integrated within the CCTV. This kind of setup would remove the constraint of wearing special glasses or helmets that include an eyetracker. Ideally, helpful options already present on CCTVs thanks to text digitalization, such as text reformatting (cf. condition 3), should be combined with our system. Another example concerns the use of new fonts able to improve reading performance [86]. The huge potential of EVES is precisely the ability to additively combine several helpful sources whose benefits are modest when taken individually. Another interesting option at a short viewing distance is that gaze control could be replaced by a haptic interface: patients could then select ROIs by pointing at locations on a touch screen. This could be implemented, for instance, on tablet computers or e-book readers [87]. Apart from the interface difference (gaze vs. touch), all other features of condition 1 would remain the same. It is difficult to predict whether this option would be found comfortable by patients, as it would imply many hand pointing movements, but it would be technically easier to implement.

Conceptualization of a technical solution for web navigation of visually impaired people

development. In addition, evaluation of the technical solution would also allow for assessing the level of acceptance of this solution by blind users. Indeed, even if this solution allows blind people to use the same interfaces as sighted people, the initiatives taken by the technical solution raise the problem of the control left to the user. Bastien and Scapin [18] advised that users should have control over ongoing processing (user control criterion). This solution therefore runs counter to this criterion, bringing up the question of real or perceived reliability: blind people might not trust a technical solution which may filter out some information they wish to hear. Thus, in future work, we could carry out two types of evaluation of this technical solution. First, semi-structured interviews could be conducted in order to collect blind users' perceptions of such a solution (advantages, risks, opportunities) [19]. Second, a user test could be conducted in order to assess the usability of this solution. Thanks to this evaluation, we could test whether the solution would be accepted or rejected according to its ease of use and its relevance, which is a necessary condition for it to be used. Once the evaluation has been conducted, one plug-in per web browser (Internet Explorer, Firefox, Google Chrome, Safari, etc.) can be implemented. In conclusion, the conceptualization of the technical solution resulting from the application of the holistic approach is a first step in the development of a tool that would improve the usability of web interfaces for blind people. This solution seems promising and could be developed for use in the daily life of blind people.

INSPEX: Integrated portable multi-sensor obstacle detection device. Application to navigation for visually impaired people

This work has been partly funded by the European Union's Horizon 2020 Research and Innovation programme under grant agreement no 730953 (INSPEX: Integrated portable multi-sensor obstacle detection device). It was also supported in part by the Swiss State Secretariat for Education, Research and Innovation (SERI) under grant 16.0136.

"DIY" Prototyping of Teaching Materials for Visually Impaired Children: Usage and Satisfaction of Professionals

“demystified ‘old’ stereotypes and opened up a debate about the relationship between wisdom, creativity and technology” [11]. These technologies can also empower children to take greater control of their disabilities [12]. It thus appears that rapid prototyping tools, including 3D printing and cheap micro-controllers, may enable professionals to design and make their own adapted materials. However, professionals may be reluctant to use these technologies because of prejudices, especially about the skills needed to use them. Yet Stangl et al. [13] showed that non-expert designers of 3D-printed adapted objects may benefit from online creativity support tools. For example, the online community “Thingiverse.com” provides many printable models of assistive tools. Buehler et al. [14] highlighted that various models on this platform were created by end-users themselves; interestingly, these designers had no formal training or expertise in the creation of assistive technology. Hurst and colleagues [15, 16] illustrated several examples of materials that can be made by non-engineers, and observed that this approach improves adoption because it gives end-users better control over design and cost. Hence, we hypothesized that empowering non-expert teachers to create, modify, or build their own teaching materials could be effective. Our main objectives were: (1) to assist professionals in creating their own adapted teaching materials with rapid prototyping tools such as 3D printing and low-cost micro-controllers, and (2) to evaluate the usage of these technologies and the level of satisfaction they provide.
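As an illustration of what such rapid-prototyping support might look like, the sketch below generates OpenSCAD source for a 3D-printable braille label, the kind of "DIY" adapted material discussed above. The dot dimensions, the partial letter table, and the function names are our own illustrative assumptions, not tools from the study.

```python
# Hypothetical sketch: emit OpenSCAD code (one sphere per raised dot)
# for a short braille label that a teacher could 3D-print.
# Dot numbering per braille cell: 1-2-3 down the left column,
# 4-5-6 down the right column. Only a few letters are mapped here.
BRAILLE = {"a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5}}

DOT_POS = {1: (0, 0), 2: (0, 2.5), 3: (0, 5.0),
           4: (2.5, 0), 5: (2.5, 2.5), 6: (2.5, 5.0)}  # mm offsets

CELL_PITCH = 6.0   # assumed horizontal distance between cells (mm)
DOT_RADIUS = 0.75  # assumed dot radius (mm)

def braille_label_scad(text: str) -> str:
    """Emit one OpenSCAD sphere per raised dot of `text`."""
    lines = []
    for i, char in enumerate(text.lower()):
        for dot in sorted(BRAILLE.get(char, set())):
            dx, dy = DOT_POS[dot]
            x = i * CELL_PITCH + dx
            lines.append(
                f"translate([{x}, {-dy}, 0]) sphere(r={DOT_RADIUS});")
    return "\n".join(lines)

scad = braille_label_scad("ab")
```

The generated text can be pasted into OpenSCAD, unioned with a base plate, and exported to STL for printing; a fuller tool would of course cover the whole alphabet and contracted braille.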

Perception Assistance for the Visually Impaired Through Smart Objects: Concept, Implementation, and Experiment Scenario

examples of contextual interactions with the environment: navigation functionalities together with the ability to give information about public transportation timetables and to activate sound beacons at intersections. A new field in the domain of context-relevant information is automated scene description, as performed by Microsoft's Seeing AI smartphone application [18]. This application can perform many contextual tasks that require vision, such as analyzing and describing scenes, reading text and identifying people. A third category gathers devices that rely on infrastructures, which can be pre-existing, as is the case with many localization systems relying on WiFi access points [19], or which may be designed and deployed specifically for the assistive system, like the Remote Infrared Signage System (RISS), or Talking Signs® [20], [21]. Developing a specific infrastructure for assistance makes it possible to offer a wide range of functionalities to VIPs, but the deployment and the maintenance of a dedicated infrastructure
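The WiFi-based localization mentioned above is commonly done by fingerprinting: observed access-point signal strengths are matched against a pre-recorded map. A minimal nearest-neighbor sketch follows; the access points, locations, and RSSI values are illustrative assumptions, not data from any cited system.

```python
# Hypothetical sketch of WiFi fingerprint localization:
# match an observed RSSI scan (in dBm) against fingerprints
# recorded offline at known locations, by nearest neighbor
# in signal space.
import math

# Offline phase: RSSI fingerprints recorded at known locations.
FINGERPRINTS = {
    "entrance": {"ap1": -40, "ap2": -70, "ap3": -85},
    "corridor": {"ap1": -60, "ap2": -50, "ap3": -75},
    "platform": {"ap1": -80, "ap2": -65, "ap3": -45},
}

MISSING_RSSI = -100  # assumed floor value for an unseen access point

def distance(observed: dict, fingerprint: dict) -> float:
    """Euclidean distance in signal space over the union of APs."""
    aps = set(observed) | set(fingerprint)
    return math.sqrt(sum(
        (observed.get(ap, MISSING_RSSI) - fingerprint.get(ap, MISSING_RSSI)) ** 2
        for ap in aps))

def localize(observed: dict) -> str:
    """Online phase: return the location with the closest fingerprint."""
    return min(FINGERPRINTS, key=lambda loc: distance(observed, FINGERPRINTS[loc]))

scan = {"ap1": -78, "ap2": -66, "ap3": -48}
```

The appeal for assistive systems is that this reuses pre-existing infrastructure; a deployed system would add filtering over time (e.g. averaging successive scans) to smooth out RSSI noise.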