2 Related Work
2.1 On-Demand Mapping for Visually Impaired People
The usual first step of map design is to understand the needs of the map users. In this specific case, there are two types of users for on-demand tactile maps: the transcribers or mobility instructors who usually create tactile maps (and teach visually impaired people how to use them) and who could benefit from automated processes, and of course the visually impaired people who use the map to navigate in and understand geographic space. Each visually impaired person has specific needs for a tactile map, which underlines the need for on-demand mapping. Some transcribers even split the map into two or three separate map objects to convey all the required information (e.g. roads and POIs in one, relief in another). Therefore, increasing automation and reducing the time spent on map design would be greatly helpful. When transcribers and mobility instructors are interviewed, spacing is clearly ranked as one of the main variables for tactile map design, which also pleads for more automated cartography. With interactive interfaces widening into multi-modal interfaces, automated processing will become mandatory, just as multi-scale web mapping is accelerating the need for automated topographic cartography. Finally, as many printing methods exist, it is necessary to know whether visually impaired people prefer some techniques over others. Past experiments showed that rough paper and microcapsule paper are preferred among classical techniques (the others being braillon, smooth paper, rough plastic, smooth plastic, and aluminium), and also perform better on search tasks. The preference is based on the feeling that it is easier to move one's fingers across the object in relief.
5.3.2 Design of tangible objects: Before developing the device, it is essential to study the design of the physical icons (phicons). McGookin et al. (2010) provide three guidelines for non-visual phicons: 1/ They should be physically stable to prevent them from being knocked over by the user; 2/ They should have an irregular shape “so they can be placed on the table one way”; 3/ The system must provide awareness of phicon status to inform the user when a phicon is not detected. We designed cylindrical objects with tactile cues for orientation. They can be filled with lead, which proved to be an efficient way to quickly modify their weight and make them stable. These objects are tagged, and their position and current state (e.g. whether they are detected) are continuously tracked by the system. In order to construct tangible maps, we also needed lines and areas. We therefore added retractable reels to the objects. The end of each reel is magnetic and can easily be attached to another object. These connectable objects are used to “materialize” points, lines, and areas on the map (see Figure 4). We are currently evaluating the usability of these objects in our different scenarios.
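Guideline 3 above (awareness of phicon status) amounts to comparing successive snapshots of which tagged objects the tracker currently detects. The following sketch illustrates this idea; the function name and message wording are ours, not taken from the original system:

```python
def phicon_status_updates(previous, current):
    """Compare two snapshots of detected phicon ids (sets) and return the
    awareness messages guideline 3 calls for: announce phicons that have
    disappeared from tracking, and phicons that are detected again."""
    lost = previous - current
    found = current - previous
    messages = [f"phicon {p} no longer detected" for p in sorted(lost)]
    messages += [f"phicon {p} detected" for p in sorted(found)]
    return messages
```

In a running system these messages would be sent to the speech output each time the tracker publishes a new detection snapshot.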
4. TOWARDS TANGIBLE MAPS
The main issue with interactive audio-tactile maps is that once the map is printed, its physical form cannot be edited. According to Ishii et al. (1997), tangible user interfaces “augment the real physical world by coupling digital information to everyday physical objects and environments”. In our ongoing work, we suggest that a visually impaired user may add, move, or remove tangible objects that represent areas, lines or points within the map. These objects, enhanced with non-visual interaction techniques, may improve accessibility to spatial digital data. Using tangible objects may prove particularly relevant in supporting the (automatic) production of tangible maps. Tangible map prototypes were designed alongside the very first tangible user interfaces. GeoSpace is an interactive map of the MIT campus designed for sighted users, where physical objects are used to pan or zoom by forcing the digital map to reposition itself (Ishii and Ullmer, 1997). Urp allows sighted urban planners to simulate wind flow and sunlight (Underkoffler and Ishii, 1999), and has been used to observe their consequences on models of buildings placed on the tabletop. With the MouseHaus Table, users can simulate several arrangements of urban elements such as streets and buildings (Huang et al., 2003): paper rectangles placed on the device help to visualize the behaviour of pedestrians around the buildings. Similarly, ColorTable is a tool that helps urban planners and stakeholders discuss urban changes (Maquil et al., 2008); in that project, a mixed-reality scene was created from the original map and the tangible objects placed on it. Tangible interfaces designed for visually impaired users are rare. To our knowledge there is no prototype that supports the production and editing of a geographic map. Schneider et al.
(2000) developed a prototype in which vocal instructions guide a visually impaired user in placing rectangular magnets on a magnetic board until an entire route is built. This system is adapted to the creation of an itinerary but does not support the construction of complex maps. In addition, it has not been formally evaluated. McGookin et al. (2010) were the first to develop a prototype for the construction and exploration of tangible graphs by visually impaired users. In this prototype, a series of identical objects placed on a tabletop represented one series of data in a line plot. The evaluation showed that the prototype was efficient for the construction and comprehension of mathematical graphs by visually impaired users.
Recent studies have demonstrated the benefits of using tangible interfaces in the context of visual impairment. McGookin et al. (2010) designed a TUI that allows visually impaired users to access graphs and charts. They showed that the device improved the accuracy with which participants carried out the tasks, and they provided design recommendations for non-visual tangible interaction, such as choosing object shapes that are distinctive by touch alone. Ducasse et al. (2016) demonstrated the usability of “Tangible Reels” for both the construction and exploration of interactive maps. The results showed that visually impaired users were able to understand and interact with existing drawings, but also to create new ones. Brulé et al. (2016) showed that using tangible objects as part of a multisensory map increased the engagement of visually impaired students during learning activities. Using TUIs, visually impaired users can manipulate digital information via physical objects (“phicons”), which involves the sense of touch and is therefore appropriate in the absence of vision. In addition, tangible interfaces can provide structural (shape and volume) and material (texture) properties that are important during tactile exploration (Klatzky et al., 1985).
Two remarks can be made concerning landmark knowledge and graphical tests. First, these tests are particularly suited to assessing route or survey knowledge; other tests can be used to assess landmark knowledge. For example, in her study of the usability of interactive tactile maps, Brock asked participants to list the names of the streets and points of interest displayed on the map. Secondly, two main methods exist to analyze sketched or reconstructed maps. The first consists of asking external, independent judges to evaluate how similar the sketched or reconstructed maps are to the actual map, usually using a set of pre-determined criteria (see [38,320] for example). The second, more objective, is based on bi-dimensional regression analysis and was originally proposed by Tobler: it quantifies the scale, rotation and translation differences between the sketched/reconstructed map and the actual map. Methods for assessing spatial knowledge present several limitations. The main one is referred to as an issue of “weak methodological convergence”: different results can be obtained for the same participant depending on the test that is used and/or the amount of spatial information provided to the participant in the test itself (e.g. in completion tests). To compensate for these issues, Kitchin and Jacobson suggested that researchers should use multiple tests to assess knowledge, and in particular survey knowledge. Another limitation is that the majority of these tests aim at measuring the accuracy of the users’ mental representations, in terms of distance or direction for example. However, assessing the utility of mental representations (e.g. whether users are able to go from A to B) may be more relevant than assessing their accuracy. Nonetheless, indications of accuracy can be useful to better understand the nature and structure of spatial knowledge.
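To illustrate Tobler's method: in the Euclidean case, bi-dimensional regression has a closed-form solution when 2D points are encoded as complex numbers, since a single complex coefficient captures both scale and rotation. The sketch below (function name and return format are our own) recovers the scale, rotation and translation between a sketched/reconstructed map and the actual map, together with an index of fit:

```python
import numpy as np

def bidimensional_regression(actual, sketched):
    """Euclidean bi-dimensional regression (in the spirit of Tobler).

    Fits sketched ~ scale * rotation(actual) + translation and returns the
    fitted parameters plus r_squared, the share of variance in the sketched
    configuration explained by the transformed actual configuration.
    Both inputs are (n, 2) arrays of matched point coordinates.
    """
    a = np.asarray(actual, dtype=float)
    s = np.asarray(sketched, dtype=float)
    # encode 2D points as complex numbers: x + iy
    ac = a[:, 0] + 1j * a[:, 1]
    sc = s[:, 0] + 1j * s[:, 1]
    am, sm = ac.mean(), sc.mean()
    a0, s0 = ac - am, sc - sm
    # one complex least-squares coefficient encodes both scale and rotation
    beta = np.vdot(a0, s0) / np.vdot(a0, a0)
    residuals = s0 - beta * a0
    ss_err = np.vdot(residuals, residuals).real
    ss_tot = np.vdot(s0, s0).real
    return {
        "scale": abs(beta),
        "rotation": np.angle(beta),         # radians
        "translation": sm - beta * am,      # complex number: x + iy offset
        "r_squared": 1.0 - ss_err / ss_tot,
    }
```

A perfect similarity transform (e.g. a sketch that is merely rotated, scaled and shifted) yields r_squared = 1; distortions in the mental representation lower it.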
Finally, some tests may be less suited to visually impaired users than to sighted users. For example, sketching a map can be challenging for visually impaired users, as they are rarely as accustomed to drawing as sighted users. However, some studies reported successful use of sketching techniques (e.g. ).
As mentioned above, it has previously been shown that SSMs are beneficial for the acquisition of spatial knowledge by visually impaired people (Picard and Pry, 2009). Yet, to our knowledge, no study has evaluated non-visual learning using interactive SSMs. Thus, we aimed to evaluate the usability of an interactive SSM for learning a large and complex geographical and historical display without vision, in comparison to an RLM with a braille legend. The interactive SSM and the RLMs used in the current study represented geographical and historical knowledge of a large fictitious kingdom. The interactive small-scale model included movable 3D pieces (similar to a puzzle), roads carved into a piece of wood, and verbal descriptions triggered during tactile exploration. The RLMs described the same kingdom with embossed drawings and booklets printed in braille. Our hypothesis was that the interactive SSM is more usable than regular RLMs accompanied by braille legends, i.e., that it allows participants to better memorize geographical and historical information and that it is more satisfying to use.
4 Discussion and Conclusion
In this paper, we presented a state of the art of non-visual gestural interaction. Analysis of the literature shows different approaches to making gestural interaction accessible. It also reveals challenges that need to be addressed in the future, such as unintended touch input. Furthermore, we showed an example of how gestural interaction techniques can be used in interactive maps for visually impaired people. We verified that it was possible for a blind user to access distances and different types of information. Our observations also suggest that gestural interactions should be chosen carefully: gestures that are easy for sighted people may be less evident for visually impaired people (e.g., the lasso). This is in line with previous studies on accessible gestural interaction. The present work only presented a proof of concept. A more advanced design process would be necessary, as well as extended evaluations with several visually impaired users. In the future, it would be interesting to go beyond the basic gestural interaction provided by the API and to design specific gestural interactions. To sum up, we believe that thoroughly designed gestural interaction would open up new interaction possibilities in research and commercial prototypes. However, it remains a challenge for the research domain of multi-touch to find interaction techniques that are usable without sight. We suggest that future studies should address the design of specific gestures for visually impaired people by including them throughout the design process, from the creation of ideas to the evaluation.
Acknowledgments. We thank Alexis Paoleschi, who developed the prototype presented in this paper, and our blind participants.
Interactive tactile maps provide visually impaired people with accessible geographic information. However, when these maps are presented on large tabletops, tactile exploration without sight is long and tedious due to the size of the surface. In this paper we present a novel approach to speed up the exploration of tabletop maps in the absence of vision. Our approach mimics the visual processing of a map and consists of two steps. First, the Quick-Glance step allows users to create a global mental representation of the map by using mid-air gestures. Second, the In-Depth step allows users to reach Points of Interest with appropriate hand guidance on the map. In this paper we present the design and development of a prototype combining a smartwatch and a tactile surface for Quick-Glance and In-Depth interactive exploration of a map.
manipulations of space (selecting shortcuts, alternative paths, etc.). We conclude that interactive maps may advantageously replace traditional paper maps for providing visually impaired people with access to spatial and geographic information.
We observed another significant advantage of interactive maps: improved accessibility for people with low braille reading skills. Contrary to common belief, only a small part of the visually impaired population has been trained to read braille. Especially for late-blind people, braille represents a great challenge. Through the use of interactive maps, this part of the population could improve their mobility and orientation skills and thus gain confidence in traveling. Given the current low prices of tablets and touch screens, schools and associations for visually impaired people are beginning to adopt this technology for teaching (mainly for providing access to written information). To our knowledge, this technology has not yet been systematically used for teaching spatial content and improving mobility and orientation skills. It would be beneficial to take advantage of this technology quickly, provided that map contents and accessible interaction techniques are designed. For a visually impaired person who owns swell paper, a printer and a fuser, it would even be possible to create interactive maps at home at a reasonable price. It would just be necessary to provide the community with the digital maps and software.
Figure 1. Photograph of our interactive map prototype
Prototype design was based upon the previous analysis of context and generation of ideas. We developed successive versions of the prototype, taking into consideration users’ needs and recommendations (see Brock, Truillet, et al., 2012). The first prototyping step was a low-fidelity prototype based on the “Wizard of Oz” method. This method usually involves visual representations, but can be adapted to VI people (Brock, Vinot, et al., 2010). Concretely, we adapted it by using raised-line maps and simulated speech output. Based on the pre-tests with the low-fidelity prototype, we confirmed the users’ appreciation of the interactive map concept. The final prototype consisted of a raised-line map placed over a multi-touch screen (see Figure 1). Output interaction was both tactile (the map’s raised design) and auditory (text-to-speech associated with touch events). We implemented a double tap as the input interaction for a first version of the prototype. Details of the implementation and design are described in Brock, Truillet, et al. (2012).
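A double tap of the kind used as input above is typically detected from two successive touch events falling within a time and distance threshold; on detection, the map element under the finger would be passed to the text-to-speech output. The sketch below illustrates the general technique; the class name and threshold values are assumptions for illustration, not values from the actual prototype:

```python
DOUBLE_TAP_DELAY = 0.4    # seconds between taps; assumed threshold
DOUBLE_TAP_RADIUS = 20.0  # pixels between taps; assumed threshold

class DoubleTapDetector:
    """Detect a double tap from a stream of (time, x, y) touch-down events."""

    def __init__(self, delay=DOUBLE_TAP_DELAY, radius=DOUBLE_TAP_RADIUS):
        self.delay, self.radius = delay, radius
        self._last = None  # (t, x, y) of the previous tap, if any

    def on_tap(self, t, x, y):
        """Return True when this tap completes a double tap."""
        last = self._last
        self._last = (t, x, y)
        if last is None:
            return False
        dt = t - last[0]
        dist = ((x - last[1]) ** 2 + (y - last[2]) ** 2) ** 0.5
        if dt <= self.delay and dist <= self.radius:
            self._last = None  # consume the pair so a third tap starts fresh
            return True
        return False
```

Consuming the pair after a successful detection prevents a rapid triple tap from being counted as two overlapping double taps.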
3.2 Influence of the site of data collection
An important control variable to be inspected is the site at which data were collected, particularly since site is often confounded with the type of condition (e.g. dry vs. wet background) or population (sighted vs. VI) studied (see Table 2). It turned out that the site at which data were collected did have a significant influence, as determined by a single-factor, between-subjects analysis of variance (ANOVA), even when restricting the analysis to the wet background noise condition, F(3,105)=13.96; p<0.001 (there was no significant difference between laboratories for the dry background noise conditions). In particular, RTs collected at two of the sites (PSA, Nissan) were some 0.8 to 1 s longer on average. This may be attributed to differences in headphones, sound insulation, and potential calibration errors, and is not entirely accounted for. As a remedy, RTs referenced to the overall mean of a given participant were inspected; these turned out to be quite similar when comparing the different warning sounds. Figure 4 shows such an analysis comparing the results of the four different laboratories as a function of the sound condition studied. It is evident that the RT patterns are remarkably similar despite the different offsets in overall RT. For the remainder of the analysis to be presented, the raw, unreferenced RTs were used.
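The two analysis steps above, a between-subjects one-way ANOVA across sites and referencing each participant's RTs to their own overall mean, can be sketched as follows; the function names and data layout are our own:

```python
import numpy as np

def reference_to_participant_mean(rts_by_participant):
    """Subtract each participant's overall mean RT from their raw RTs.
    This removes constant per-site offsets (headphones, calibration, etc.)
    before warning sounds are compared across laboratories."""
    return {p: np.asarray(r, dtype=float) - np.mean(r)
            for p, r in rts_by_participant.items()}

def one_way_anova_F(groups):
    """F statistic of a single-factor, between-subjects ANOVA,
    e.g. with one group of RTs per collection site."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

The F value is then compared against the F distribution with (df_between, df_within) degrees of freedom, as in the F(3,105) result reported above.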
For construction, a two-step guidance technique (Figure 4.b) allows the user to quickly place a new TReel on the table, or to attach it to an already-placed TReel, thereby creating a physical line between the two objects. This approach lets the system know which objects are connected to one another, as well as the position of the line, so that the line can be made interactive. Moreover, thanks to the use of an infrared frame, the user can click on a TReel or on a line to hear the name of the element it represents. Regarding guidance, when the user is far from the target, the direction and the distance to the target are announced (for example, “up and to the right, 30 cm”). This allows the user to move toward the target very quickly. As soon as the object gets close to the target, only the direction is given (up, down, left or right), at a higher frequency.
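This two-phase guidance policy (direction plus distance when the object is far from its target; direction only, announced more often, when it is close) can be sketched in a few lines. The 10 cm threshold separating the two phases and the exact message wording are assumptions for illustration:

```python
import math

NEAR_THRESHOLD_CM = 10.0  # assumed cut-off between the two guidance phases

def guidance_message(dx_cm, dy_cm, near_threshold=NEAR_THRESHOLD_CM):
    """Build a spoken guidance message from the vector (dx, dy), in cm,
    pointing from the tangible object to its target position.
    Far phase: direction and distance. Near phase: direction only
    (the caller is expected to announce it at a higher frequency)."""
    parts = []
    if dy_cm > 0:
        parts.append("up")
    elif dy_cm < 0:
        parts.append("down")
    if dx_cm > 0:
        parts.append("right")
    elif dx_cm < 0:
        parts.append("left")
    direction = " and ".join(parts) if parts else "on target"
    dist = math.hypot(dx_cm, dy_cm)
    if dist > near_threshold:
        return f"{direction}, {round(dist)} cm"
    return direction
```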
The items or options in a visual menu are often ordered in some logical fashion. Alphabetical, numerical, and chronological ordering are all examples of ordering techniques that can reduce menu selection times if used appropriately [10]. Ordering menus by frequency of use is another technique that has been found to significantly reduce performance times [11], although it results in dynamic menus that also lead to poorer performance due to the lack of consistency [9]. Even though it would be preferable to order elements in a hierarchy to support learning, this approach is impossible for us due to the constantly changing nature of our content: we cannot establish a stable hierarchy that the user could remember or use easily without increasing their cognitive load more than necessary.
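The frequency-of-use ordering mentioned above can be sketched in a few lines, with an alphabetical tie-break so that items with equal counts keep a predictable order. This illustrates the general technique, not code from any of the cited systems:

```python
from collections import Counter

def order_menu(items, usage_counts=None):
    """Order menu items by descending frequency of use, falling back to
    alphabetical order for unused items and for ties (a stable tie-break
    keeps the menu predictable despite the dynamic ordering)."""
    counts = Counter(usage_counts or {})  # missing items count as 0
    return sorted(items, key=lambda item: (-counts[item], item))
```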
3 Waypoint Validation Strategies
Each waypoint in an itinerary is specified with geographical coordinates. The system defines a “capture radius” around each waypoint to validate it as the user passes by (see Fig. 1). This capture radius allows some flexibility: the user does not need to be exactly on the waypoint to validate it. With this strategy, waypoints are also validated despite the inevitable global positioning inaccuracy. The length of the radius is carefully selected so that the user is considered “close enough” to the waypoint for it to be validated. If the capture radius is too small, the user might consistently miss the waypoint. If the radius is too large, the user may consider that he has reached the waypoint too early, which could lead to erroneous guidance and poor direction choices. An optimal capture radius keeps the person close to the intended path while still allowing some flexibility. An experimental study performed in a virtual environment concluded that a capture radius of approximately 1.5 m is optimal. However, in real situations where positioning is rarely more accurate than 5-10 m, a capture radius of 1.5 m is definitely too precise and a larger value is more appropriate.
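The validation rule above reduces to a great-circle distance test against the capture radius. The sketch below uses the standard haversine formula; the 10 m default radius is our assumption, reflecting the 5-10 m positioning accuracy mentioned in the text rather than a value from the system:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def waypoint_reached(user_pos, waypoint, capture_radius_m=10.0):
    """Validate a waypoint once the user enters its capture radius.
    Both positions are (lat, lon) tuples in degrees; the default radius
    is an assumed value sized for typical GPS accuracy, not the 1.5 m
    found optimal in virtual environments."""
    return haversine_m(*user_pos, *waypoint) <= capture_radius_m
```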
Figure 5: Usability assessment of the SENSIVISE application for all participants
According to the study results, perceiving a VE under low-vision conditions is not easy. Although almost all participants answered the questions about visual exploration correctly, times were longer for participants in the low-vision conditions than in the control condition. It seems that in the case of a global view, when vision is blurred or when the peripheral visual field is limited, as in tunnel vision, observation times are longer and participants tend to be closer to the screen, in particular with blurred vision. Moreover, when the task requires vision for detail, times are longer with central scotoma vision compared to the other groups. Studies in this direction have shown that persons with central field loss have reduced acuity and contrast sensitivity, and read quite slowly (Legge et al., 1992).
This paper presents a novel saliency prediction model for children with autism spectrum disorder (ASD). We design a new convolutional neural network and train it with a new ASD dataset. Among the contributions, we can cite the coarse-to-fine architecture as well as the loss function, which embeds a regularization term. We also discuss some data augmentation methods for the ASD dataset. Experimental results show that the proposed model performs better than six models, including one supervised model fine-tuned with the ASD dataset. Contrary to control subjects, our results hint that no center bias applies to the visual attention of autistic children.
Our findings are consistent with the literature: a DIY approach to Assistive Technology is empowering for children and caretakers, technologies perceived as stigmatizing are more often abandoned [42, 3], children may not feel disabled once the material barriers are overcome, and technologies allow them to develop strategies to reduce negative reactions from other people. Children’s use of assistive technology and their educational context influence which activities they can engage in, as well as their experience of disability. This also underlines the need to assist caretakers in the realisation of adaptations, which are used to help children access symbolic representations and develop accurate mental representations, including spatial ones. Nevertheless, it should be noted that these results should be complemented by further investigations into the roles of parents and family relationships in children’s education. As parents are not often present in the Institute, we could only interview one of them. To summarize our design recommendations: designers should propose assistive technologies soliciting several sensory modalities to enable different cognitive approaches. Classroom technologies should enable collaboration between children living with or without various impairments, therefore also accommodating visual learning. They should support ludic pedagogical scenarios, and encourage storytelling and children’s reflectivity on what they are learning, which may be achieved by allowing a high level of customization using Do-It-Yourself techniques, as well as by taking great care of aesthetic properties (textures, colours, etc.).
The interactive objects (Fig. 6) were objects of everyday life (dolls, cutlery, fruits, etc.) connected to a MakeyMakey® board. This project was conducted by an educator without the assistance of any computer science student. It was designed for visually impaired children between five and fifteen years old with severe associated disorders, in order to train them to identify everyday objects. When the conductive objects are touched, they trigger verbal descriptions previously recorded by the teacher; non-conductive objects were covered with aluminum. In addition, the educator designed a game: she named objects, and when touched by the students, the objects themselves provided error or congratulation feedback. The educator was highly satisfied with the device, which was easy to make and highly adaptable. She reported that it was easy to teach new vocabulary, and she noted a great enhancement in the students’ concentration and motivation. However, it appeared that the number of inputs on the micro-controller board was too restrictive.
Keywords: Multitouch, multimodal, haptics, map, blind, visual impairment, accessibility
Navigating in a familiar environment is not always straightforward for a blind person; in an unknown environment, it becomes especially complicated. This issue therefore presents a social challenge as well as an important research area. The major problem is a lack of information concerning the environment, which leads to deficits in orientation and mobility. These problems often mean that visually impaired people travel less, which influences their personal and professional life and can lead to exclusion from society. Many websites offer the possibility of calculating an itinerary for free. Often, this information is presented in the form of a visual map and a corresponding roadmap. The roadmap is accessible using a screen reader (a technical aid that gives blind people access to the screen content), but sequential access to the information in the roadmap is limited to the important steps of an itinerary and does not help to understand the environment, which is necessary for flexible and autonomous navigation (e.g. a change of itinerary in case of roadwork). Visual maps are very useful for spatial knowledge but are inaccessible.