The constant barrage of updates and novel applications to explore creates a ceaseless cycle of new layouts and interaction methods that we must adapt to. One way to address these challenges is through in-context interactive tutorials. Most applications provide onboarding tutorials using visual metaphors to guide the user through the core features available. However, these tutorials are limited in scope and are often inaccessible to blind people. In this paper, we present AidMe, a system-wide tool for authoring and playing through non-visual interactive tutorials. Tutorials are created via user demonstration and narration. In a user study with 11 blind participants using AidMe, we identified issues with instruction delivery and user guidance, providing insights into the development of accessible interactive non-visual tutorials.
Braille input enables fast nonvisual entry speeds on mobile touchscreen devices. Yet, the lack of tactile cues commonly results in typing errors, which are hard to correct. We propose Hybrid-Brailler, an input solution that combines physical and gestural interaction to provide fast and accurate Braille input. We use the back of the device for physical chorded input while freeing the touchscreen for gestural interaction. Gestures are used in editing operations, such as caret movement, text selection, and clipboard control, enhancing the overall text entry experience. We conducted two user studies to assess both input and editing performance. Results show that Hybrid-Brailler supports entry rates as fast as its virtual counterpart, while significantly increasing input accuracy. Regarding editing performance, when compared with the mainstream technique, Hybrid-Brailler shows a 21% speed benefit and increased editing accuracy. We finish with lessons learned for designing future nonvisual input and editing techniques.
Blind people face many barriers using smartphones. Still, previous research has been mostly restricted to non-visual gestural interaction, paying little attention to the deeper daily challenges of blind users. To bridge this gap, we conducted a series of workshops with 42 blind participants, uncovering application challenges across all levels of expertise, most of which could only be surpassed through a support network. We propose Hint Me!, a human-powered service that allows blind users to get in-app assistance by posing questions or browsing previously answered questions on a shared knowledge base. We evaluated the perceived usefulness and acceptance of this approach with six blind people. Participants valued the ability to learn independently and anticipated a range of uses: labeling, layout and feature descriptions, bug workarounds, and learning to accomplish tasks. The choice between creating and browsing questions depends on aspects such as privacy, knowledge of respondents, and response time, revealing the benefits of a hybrid approach.
Over the last decade there have been numerous studies on touchscreen typing by blind people. However, there are no reports about blind users' everyday typing performance and how it relates to laboratory settings. We conducted a longitudinal study with five participants to investigate how blind users truly type on their smartphones. For twelve weeks, we collected field data, coupled with eight weekly laboratory sessions. This paper provides a thorough analysis of everyday typing data and its relationship with controlled laboratory assessments. We improve state-of-the-art techniques to obtain intent from field data, and provide insights on real-world performance. Our findings show that users improve over time, even though at a slow rate. Substitutions are the most common type of error and have a significant impact on entry rates in both field and laboratory settings. Results show that participants are 1.3-2 times faster when typing during everyday tasks; on the other hand, they are less accurate. We finish by deriving implications that should inform the design of future virtual keyboards for non-visual input. Moreover, our findings should be of interest to keyboard designers and researchers looking to conduct field studies to understand everyday input performance.
Deaf and hard of hearing students must constantly switch between several visual sources to gather all necessary information during a classroom lecture (e.g., instructor, slides, sign language interpreter, or captioning). Using smart glasses, this research tested a potential means to reduce the effects of visual field switches, proposing that consolidating sources into a single display may improve lecture comprehension. Results showed no statistically significant comprehension improvements with the glasses, but interviews indicated that participants found it easier to follow the lecture with the glasses and saw potential for them in the classroom. We conclude by highlighting priorities for future smart glasses work and new research directions.
Following multimedia lectures in mainstream classrooms is challenging for deaf and hard-of-hearing (DHH) students, even when provided with accessibility services. Due to multiple visual sources of information (e.g., teacher, slides, interpreter), these students struggle to divide their attention among several simultaneous sources, which may result in missing important parts of the lecture; as a result, access to information is limited in comparison to their hearing peers, with a negative effect on their academic achievement. In this paper we propose a novel approach to improve classroom accessibility, which focuses on improving the delivery of multimedia lectures. We introduce SlidePacer, a tool that promotes coordination between instructors and sign language interpreters, creating a single instructional unit and synchronizing verbal and visual information sources. We conducted a user study with 60 participants on the effects of SlidePacer on learning performance and gaze behaviors. Results show that SlidePacer is effective in providing increased access to multimedia information; however, we did not find significant improvements in learning performance. We finish by discussing our results and the limitations of our user study, and suggest future research avenues that build on these insights.
Touch-enabled devices come in a growing variety of screen sizes; however, there is little knowledge on the effect of key size on non-visual text-entry performance. We conducted a user study with 12 blind participants to investigate how non-visual input performance varies across four QWERTY keyboard sizes (with target sizes ranging from 2.5 mm to 15 mm). This paper presents an analysis of typing performance and touch behaviors and discusses its implications for future research. Our findings show that there is an upper limit to the benefits of larger target sizes, between 10 mm and 15 mm. Input speed decreases from 4.5 to 2.4 words per minute (WPM) for target sizes below 10 mm. The smallest size was deemed unusable by participants, even though performance was on par with previous work.
Word prediction can significantly improve text-entry rates on mobile touchscreen devices. However, these interactions are inherently visual and require users to constantly scan for new word predictions in order to take advantage of the suggestions. In this paper, we discuss the design space for non-visual word prediction interfaces and present Shout-out Suggestions, a novel interface that provides non-visual access to word predictions on existing mobile devices.
Interaction with large touch surfaces is still a relatively nascent domain, particularly when looking at the accessibility solutions offered to blind users. Their smaller mobile counterparts ship with built-in accessibility features, enabling non-visual exploration of linearized screen content. However, it is unknown how well these solutions perform on large interactive surfaces that use more complex spatial content layouts. We report on a user study with 14 blind participants performing common touchscreen interactions using one- and two-handed exploration. We investigate the exploration strategies applied by blind users when interacting with a tabletop. We identified six basic strategies that were commonly adopted and should be considered in future designs. We finish with implications for the design of accessible large touch interfaces.
Non-visual text-entry for people with visual impairments has focused mostly on the comparison of input techniques, reporting on performance measures such as accuracy and speed. While researchers have established that non-visual input is slow and error prone, there is little understanding of how to improve it. To develop a richer characterization of typing performance, we conducted a longitudinal study with five novice blind users. For eight weeks, we collected in-situ usage data and conducted weekly laboratory assessment sessions. This paper presents a thorough analysis of typing performance that goes beyond traditional aggregated measures of text-entry and reports on character-level errors and touch measures. Our findings show that users improve over time, even though at a slow rate (0.3 WPM per week). Substitutions are the most common type of error and have a significant impact on entry rates. In addition to text input data, we analyzed touch behaviors, looking at touch contact points, exploration movements, and lift positions. We provide insights on why and how performance improvements and errors occur. Finally, we derive implications that should inform the design of future virtual keyboards for non-visual input.
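The entry rates and character-level error classes reported above follow standard text-entry metrics. As an illustrative sketch (not the paper's exact analysis pipeline), words per minute and insertion/substitution/omission counts can be derived from an edit-distance alignment between the presented and transcribed strings:

```python
# Illustrative sketch of standard text-entry metrics (not the paper's exact
# pipeline): WPM plus character-level error classes from an edit-distance
# alignment between presented and transcribed text.

def wpm(transcribed: str, seconds: float) -> float:
    """Words per minute, using the convention of 5 characters per word."""
    return (len(transcribed) / 5) / (seconds / 60)

def classify_errors(presented: str, transcribed: str) -> dict:
    """Count insertions, substitutions, and omissions via an edit-distance backtrace."""
    m, n = len(presented), len(transcribed)
    # dp[i][j] = edit distance between presented[:i] and transcribed[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if presented[i - 1] == transcribed[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,       # omission
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)  # match / substitution
    # Backtrace one optimal alignment, classifying each edit operation.
    errors = {"insertions": 0, "substitutions": 0, "omissions": 0}
    i, j = m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] \
                and presented[i - 1] == transcribed[j - 1]:
            i, j = i - 1, j - 1                     # correct character
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            errors["substitutions"] += 1
            i, j = i - 1, j - 1
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            errors["insertions"] += 1
            j -= 1
        else:
            errors["omissions"] += 1
            i -= 1
    return errors
```

For example, transcribing "helo" for the presented string "hello" counts as one omission, while "bat" for "cat" counts as one substitution.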
The advent of system-wide accessibility services on mainstream touch-based smartphones has been a major point of inclusion for blind and visually impaired people. Ever since, researchers have aimed to improve the accessibility of specific tasks, such as text-entry and gestural interaction. However, little work has aimed to understand and improve the overall accessibility of these devices in real-world settings. In this paper, we present an eight-week study with five novice blind participants in which we seek to understand major concerns, expectations, challenges, barriers, and experiences with smartphones. The study included pre-adoption and weekly interviews, weekly controlled task assessments, and in-the-wild system-wide usage. Our results show that mastering these devices is an arduous and long task, confirming the users' initial concerns. We report on accessibility barriers experienced throughout the study, which could not have been encountered in task-based laboratory settings. Finally, we discuss how smartphones are being integrated into everyday activities and highlight the need for better adoption support tools.
Most work investigating mobile HCI is carried out within controlled laboratory settings; these spaces are not representative of the real-world environments in which the technology will predominantly be used. This can produce a skewed or inaccurate understanding of interaction behaviors and users' abilities. While mobile in-the-wild studies provide more realistic representations of technology usage, there are additional challenges to conducting data collection outside of the lab. In this paper we discuss these challenges and present TinyBlackBox, a standalone data collection framework to support mobile in-the-wild studies with today's smartphone and tablet devices.
We propose HoliBraille, a system that enables Braille input and output on current mobile devices. We use vibrotactile motors combined with dampening materials in order to actuate directly on users' fingers. The prototype can be attached to current capacitive touchscreen devices, enabling multipoint and localized feedback. HoliBraille can be leveraged in several applications, including educational tools for learning Braille, as a communication device for deaf-blind people, and as a tactile feedback system for multitouch Braille input. We conducted a user study with 12 blind participants on Braille character discrimination. Results show that HoliBraille is effective in providing localized feedback; however, character discrimination performance is strongly related to the number of simultaneous stimuli. We finish by discussing the obtained results and propose future research avenues to improve multipoint vibrotactile perception.
Tablet devices can display full-size QWERTY keyboards similar to physical ones. Yet, the lack of tactile feedback and the inability to rest the fingers on the home keys result in a highly demanding and slow exploration task for blind users. We present SpatialTouch, an input system that leverages previous experience with physical QWERTY keyboards by supporting two-handed interaction through multitouch exploration and spatial, simultaneous audio feedback. We conducted a user study with 30 novice touchscreen participants entering text under one of two conditions: (1) SpatialTouch or (2) the mainstream accessibility method, Explore by Touch. We show that SpatialTouch enables blind users to leverage previous experience, as they make better use of the home keys and follow more efficient exploration paths. Results suggest that although SpatialTouch did not result in faster input rates overall, it was indeed able to leverage previous QWERTY experience, in contrast to Explore by Touch.
Touchscreens are pervasive in mainstream technologies; they offer novel user interfaces and exciting gestural interactions. However, to interpret and distinguish between the vast range of gestural inputs, the devices require users to consistently perform interactions in line with the predefined location, movement, and timing parameters of the gesture recognizers. For people with variable motor abilities, particularly hand tremors, performing these input gestures can be extremely challenging and can limit the possible interactions the user can make with the device. In this paper, we examine touchscreen performance and interaction behaviors of motor-impaired users on mobile devices. The primary goal of this work is to measure and understand the variance of touchscreen interaction performance by people with motor impairments. We conducted a four-week in-the-wild user study with nine participants using a mobile touchscreen device. A Sudoku stimulus application measured their interaction performance abilities during this time. Our results show that not only does interaction performance vary significantly between users, but also that an individual's interaction abilities differ significantly between device sessions. Finally, we propose and evaluate the effect of novel tap gesture recognizers that accommodate individual variances in touchscreen interactions.
Braille has paved its way into mobile touchscreen devices, providing faster text input for blind people. This advantage comes at the cost of accuracy, as chord typing over a flat surface has proven to be highly error prone. A misplaced finger on the screen translates into a different or unrecognized character. However, the chord itself carries information that can be leveraged to improve input performance. We present B#, a novel correction system for multitouch Braille input that uses chords as the atomic unit of information rather than characters. Experimental results on data collected from 11 blind people revealed that B# is effective in correcting errors at the character level, thus providing opportunities for instant corrections of unrecognized chords; and at the word level, where it outperforms a popular spellchecker by providing correct suggestions for 72% of incorrect words (against 38%). We finish with implications for designing chord-based correction systems and avenues for future work.
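To illustrate the chord-as-atomic-unit idea (a hedged sketch, not the actual B# algorithm), a Braille chord can be encoded as a 6-bit mask over the cell's dots; an unrecognized chord can then be corrected by suggesting the valid chords at minimal Hamming distance, i.e., assuming a finger was added, dropped, or misplaced:

```python
# Illustrative chord-level correction sketch (not the actual B# algorithm).
# A Braille chord is a set of up to six dots, encoded as a 6-bit mask where
# bit i-1 represents dot i. Only a few letters are shown for brevity.
BRAILLE = {
    0b000001: "a",  # dot 1
    0b000011: "b",  # dots 1,2
    0b001001: "c",  # dots 1,4
    0b011001: "d",  # dots 1,4,5
    0b010001: "e",  # dots 1,5
    0b001011: "f",  # dots 1,2,4
}

def correct_chord(chord: int) -> list:
    """Return the letters whose chords are closest to the input in Hamming distance."""
    if chord in BRAILLE:
        return [BRAILLE[chord]]  # already a valid chord
    # Distance = number of fingers that would need to change state.
    best = min(bin(chord ^ valid).count("1") for valid in BRAILLE)
    return [letter for valid, letter in BRAILLE.items()
            if bin(chord ^ valid).count("1") == best]
```

For instance, a lone dot 2 is not a valid letter in this table, but it is one finger away from the chord for "b" (dots 1-2), so "b" would be suggested.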
Mobile devices are increasingly used for text-entry in contexts where visual attention is fragmented and graphical information is inadequate, yet current solutions to typing on virtual keyboards make it a visually demanding task. This work looks at assistive technologies and interface attributes as tools to ease the task. Two within-subject experiments were performed with 23 and 17 participants, respectively. The first experiment aimed to understand how walking affected text-entry performance and, additionally, to assess how effective assistive technologies can be in mobile contexts. In the second experiment, adaptive keyboards featuring character prediction and pre-attentive attributes to ease the visual demands of text-entry interfaces were developed and evaluated. It was found that both text-input speed and overall quality are affected in mobile situations. Contrary to expectations, assistive technologies proved ineffective with visual feedback. The second experiment showed that pre-attentive attributes do not affect users' performance in text-entry tasks, even though a 3.3-4.3% decrease in error rates was measured. It was found that users reduce walking speed to compensate for the challenges posed by mobile text-entry. Caution should be exercised when transferring assistive technologies to mobile contexts, since they need adaptations to address mobile users' needs. Also, while pre-attentive attributes seemingly have no effect on experienced QWERTY typists' performance, they showed promise for both novice users and typists in attention-demanding contexts.
Touchscreen mobile devices are highly customizable, allowing designers to create inclusive user interfaces that are accessible to a broader audience. However, the knowledge to provide this new generation of user interfaces is yet to be uncovered. Our goal was to thoroughly study mobile touchscreen interfaces and provide guidelines for informed design. The paper presents an evaluation performed with 15 tetraplegic and 18 able-bodied users that allowed us to identify their main similarities and differences within a set of interaction techniques (Tapping, Crossing, and Directional Gesturing) and parameterizations. Results show that Tapping and Crossing are the most similar and easiest-to-use techniques for both motor-impaired and able-bodied users. Regarding Tapping, error rates start to converge at 12 mm, making it a good compromise for target size. As for Crossing, it offered a similar level of accuracy; however, larger targets (17 mm) are significantly easier to cross for motor-impaired users. Directional Gesturing was the least inclusive technique. Regarding position, edges produced mixed results: they increased Tapping precision for disabled users, while decreasing able-bodied users' accuracy when targets are too small (7 mm). It is argued that despite the expected error rate disparity, there are clear resemblances between user groups, thus enabling the development of inclusive touch interfaces. Tapping, a traditional interaction technique, was among the most effective for both target populations, along with Crossing. The main difference concerns Directional Gesturing, which, in spite of its unconstrained nature, proved inaccurate for motor-impaired users.
In recent years there has been a surge in the development of non-visual interaction techniques targeting two application areas: making content accessible to visually impaired people, and supporting minimal attention user interfaces for situationally impaired users. This SIG aims to bring together the community of researchers working around non-visual interaction techniques for people of all abilities. It will unite members of this burgeoning community in a lively discussion and brainstorming session. Attendees will work to identify and report current and future research challenges as well as new research avenues.
Current touch interfaces lack the rich tactile feedback that allows blind users to detect and correct errors. This is especially relevant for multitouch interactions, such as Braille input. We propose HoliBraille, a system that combines touch input and multi-point vibrotactile output on mobile devices. We believe this technology can benefit blind users in several ways: conveying feedback for complex multitouch gestures, improving input performance, and supporting inconspicuous interactions. In this paper, we present the design of our unique prototype, which allows users to receive multitouch localized vibrotactile feedback. Preliminary results on perceptual discrimination show an average of 100% and 82% accuracy for single-point and chord discrimination, respectively. Finally, we discuss a text-entry application with rich tactile feedback.
Despite the overwhelming emergence of accessible digital technologies, Braille still plays a role in providing blind people with access to content. Nevertheless, many fail to see the benefits of nurturing Braille, particularly given the time and effort required to achieve proficiency. Our research focuses on maximizing access and motivation to learn and use Braille. We present initial insights from 5 interviews with blind people, comprising Braille instructors and students, in which we characterize the learning process and usage of Braille. Based on our findings, we identify a set of opportunities around Braille education. Moreover, we devised scenarios and built hardware and software solutions to motivate discovery and retention of Braille literacy.
Blind people typically resort to audio feedback to access information on electronic devices. However, this modality is not always an appropriate form of output. Novel approaches that allow for private and inconspicuous interaction are paramount. In this paper, we present a vibrotactile reading device that leverages users' Braille knowledge to read textual information. UbiBraille consists of six vibrotactile actuators that are used to code a Braille cell and communicate single characters. The device is able to simultaneously actuate the index, middle, and ring fingers of both of the user's hands, providing fast and mnemonic output. We conducted two user studies on UbiBraille to assess both character and word reading performance. Character recognition rates ranged from 54% to 100% and were highly character- and user-dependent. Indeed, participants with greater expertise in Braille reading/writing were able to take advantage of this knowledge and achieve higher accuracy rates. Regarding word reading performance, we investigated four different vibrotactile timing conditions. Participants were able to read entire words and obtained recognition rates up to 93%, with the most proficient ones able to achieve a rate of 1 character per second.
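Since each of the six actuators stands in for one dot of the Braille cell, the character-to-actuator mapping can be sketched as follows. This is a hypothetical illustration: the finger labels and the subset of characters shown are assumptions, following the common Braille-typing convention of dots 1-3 on the left hand and dots 4-6 on the right:

```python
# Hypothetical sketch of a UbiBraille-style character-to-finger mapping.
# Finger labels and character subset are illustrative assumptions.
FINGERS = ["L-index", "L-middle", "L-ring", "R-index", "R-middle", "R-ring"]

# Each character is the set of raised dots (1-6) in its Braille cell.
DOTS = {"a": {1}, "b": {1, 2}, "l": {1, 2, 3}, "p": {1, 2, 3, 4}}

def actuators_for(char: str) -> list:
    """Fingers to vibrate simultaneously to convey one character."""
    return [FINGERS[d - 1] for d in sorted(DOTS[char])]
```

Reading "l" (dots 1-3), for example, would vibrate the three left-hand fingers at once, which is what makes the output mnemonic for Braille-proficient readers.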
Mobile devices concentrate communication capabilities like no other gadget. Moreover, they now comprise a wider set of applications while still maintaining reduced size and weight. They have started to include accessibility features that enable the inclusion of disabled people. However, these inclusive efforts still fall short considering the possibilities of such devices, mainly due to the lack of interoperability and extensibility of current mobile operating systems (OS). In this paper, we present a case study of a multi-impaired person for whom access to basic mobile applications was provided on an application-by-application basis. We outline the main flaws in current mobile OS and suggest how these could further empower developers to provide accessibility components. These components could then be combined to provide system-wide inclusion for a wider range of (multi-)impairments.
Recent decades have brought technological advances able to improve the quality of life of people with disabilities. However, benefits in the rehabilitation of motor-disabled people are still scarce. Therapeutic processes are lengthy and demanding for both therapists and patients. Our goal is to assist therapists in rehabilitation procedures by providing a tool for accurate monitoring and evolution analysis enriched with their own knowledge. We analysed therapy sessions with tetraplegics to better understand the rehabilitation process and highlight the major requirements for a technology-enhanced tool. Results suggest that virtual movement analysis and comparison increase the awareness of a patient's condition and progress during therapy.
Maintaining orientation while traveling in complex or unknown environments is a challenging task for visually impaired (VI) pedestrians. In this paper, we propose a novel approach to assist blind people during navigation between waypoints (walking straight) with tactors on their wrists. Our main goal is to decrease the cognitive load required of blind people to follow instructions in overloaded environments. Two issues are discussed: 1) the number of vibration motors used; and 2) the vibration dimensions issued. Preliminary results from an informal evaluation performed with two blind users showed that vibrations could help users maintain a straight path; however, patterns were sometimes confusing. This reinforced that walking an unknown path is a demanding and stressful task, and that cognitive load should be reduced to a minimum.
Touchscreen devices have become increasingly popular, yet they lack tactile feedback and motor stability, making it difficult to type effectively on virtual keyboards. This is even worse for elderly users, given their declining motor abilities, particularly hand tremor. In this paper we examine the text-entry performance and typing patterns of elderly users on touch-based devices. Moreover, we analyze users' hand tremor profiles and their relationship to typing behavior. Our main goal is to inform future designs of touchscreen keyboards for elderly people. To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. Additionally, we thoroughly analyze the different types of errors (insertions, substitutions, and omissions), looking at touch input features and their main causes. Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be addressed to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications for design.
There is no such thing as an ultimate text-entry method. People are diverse, and mobile touch typing takes place in many different places and scenarios. This translates into a wide and dynamic diversity of abilities. Conversely, different methods present different demands and are adequate for different people and situations. In this paper we focus our attention on blind and situationally blind people: how abilities differ between people and situations, and how we can cope with those differences either by varying or adapting methods. Our research goal is to identify the human abilities that influence mobile text-entry and match them with methods (and their underlying demands) in a comprehensive and extensible design space.
Recent advances in mobile technologies are blurring the frontiers between able-bodied and disabled users. Indeed, mobile settings have a negative impact on motor abilities: mobile users' bodies are prone to vibrations, resulting in hand tremors that hinder target selection accuracy. These users seem to share some problems with elderly people, who experience increased physiological tremor. However, this hypothesis had yet to be thoroughly researched. In this work, we propose to bridge the gap between the two domains, allowing designers to build more inclusive and comprehensive solutions using recent touch-based devices. We present two evaluations comparing situationally impaired users to health-impaired users and report on the main differences and similarities we found in text-entry tasks. Our results show that while elderly users are more likely to commit cognitive errors, both user groups experience similar substitution errors. We found that the increased demands of mobility and type of device seemingly induce a “disability continuum”, where situationally impaired and health-impaired users' performance is interleaved.
In this paper we present AppInsight, a visualization tool that enables users to reminisce about their computer usage history and derive meaningful insights about behaviors and trends. Human memory can re-experience episodes from our lives when supplied with suitable contextual cues, such as places, music, and so on. We explore a small set of properties, such as application name, URL, and window title, as contextual cues to characterize users' activity on their personal computers and how it evolves over time. Our user study shows that users enjoyed viewing their computer usage history and were able both to recall past events and to introspect about their lives. Moreover, one of the most surprising outcomes is that they found several different applications for our tool, such as improving usage behaviors, controlling productivity, generating activity reports, and monitoring users in psychological studies. Finally, we discuss some lessons learned from our study and propose future research directions.
Mobile touch devices have become increasingly popular, yet typing on virtual keyboards whilst walking is still an overwhelming task. In this paper we analyze, first, the negative effect of walking on text-input performance, particularly users' main difficulties and error patterns. We focused our research on thumb typing, since this is a commonly used technique to interact with touch interfaces. Second, we analyze how these effects can be compensated for by two-hand interaction and increased target size. We asked 22 participants to input text under three mobility conditions (seated, slow walking, and normal walking) and three hand conditions (one-hand/portrait, two-hand/portrait, and two-hand/landscape). Results show that, independently of hand condition, mobility significantly decreased input quality, leading to specific error patterns. Moreover, it was shown that target size can compensate for the negative effect of walking, while two-hand interaction does not provide additional stability or input accuracy. We finish with implications for future designs.
More and more people interact with mobile devices whilst walking. This interaction paradigm imposes a novel set of challenges and restrictions on mobile users, termed Situationally-Induced Impairments and Disabilities. The tremor arising in such contexts results in inaccurate movements and erroneous actions. These difficulties are particularly visible in recent touch interfaces, which lack the tactile cues and physical stability provided by their keypad-based counterparts. Nevertheless, these difficulties are not new to the accessibility community, particularly to those studying motor-impaired users. In fact, both user populations (situationally and physically impaired) seem to share similar interaction problems. This work aims to thoroughly understand to what extent technology can be transferred between those domains. Unlike the embryonic stage of mobile research, the accessibility community has the accumulated knowledge of more than two decades of research. Building a relationship between these domains will contribute towards a more inclusive and universal design approach, benefiting and bringing closer two distinct research communities.
The emergence of touch-based mobile devices brought fresh and exciting possibilities. These came at the cost of a considerable number of novel challenges, which are particularly apparent for the blind population, as these devices lack tactile cues and are extremely visually demanding. Existing solutions resort to assistive screen-reading software to compensate for the lack of sight, yet not all the information reaches the blind user. Good spatial ability is still required to form a notion of the device and its interface, as is the need to memorize the positions of buttons on screen. These abilities, like many other individual attributes such as age, age of blindness onset, or tactile sensitivity, are often forgotten, as the blind population is presented with the same methods regardless of individual capabilities and needs. Herein, we present a study with 13 blind people consisting of a touchscreen text-entry task with four different methods. Results show that different capability levels have a significant impact on performance and that this impact is related to the different methods' demands. These variances acknowledge the need to account for individual characteristics and to leave room for difference, towards inclusive design.
Touch screen mobile devices are highly flexible and customizable, allowing designers to create inclusive user interfaces that are accessible to a broader user population. However, the knowledge to provide this new generation of user interfaces is yet to be uncovered. Our goal is to thoroughly study mobile touch interfaces, thus providing the tools for informed design. We present an evaluation performed with 15 tetraplegic and 18 able-bodied people that allowed us to identify their main similarities and differences within a set of interaction techniques (Tapping, Crossing, and Directional Gesturing) and parameterizations. Results show that despite the expected disparity in error rates, there are clear resemblances, thus enabling the development of inclusive touch interfaces. Tapping, a traditional interaction technique, was among the most effective for both target populations, along with Crossing. The main difference concerns Directional Gesturing, which, in spite of its unconstrained nature, proved inaccurate for motor-impaired users.
Mobile devices are used in increasingly demanding contexts, which compete for the visual resources required for effective interaction. This is most obvious with current, visually demanding user interfaces. In this work, we propose using solutions initially designed for blind people to ease the visual demand of current mobile interfaces. A comparative user study was conducted with 23 sighted volunteers who performed text-entry tasks with three methods (QWERTY, a VoiceOver-like method, and NavTouch) in three mobility conditions. We first analyzed the effect of walking and visual demand, followed by the effect of using assistive technologies in mobile contexts. Results show that the traditional QWERTY keyboard outperforms alternative text-entry methods designed for the blind, as users prefer visual feedback over its auditory counterpart. Moreover, assistive technologies and their interaction processes proved to be cognitively demanding and therefore inadequate in mobile contexts. These findings suggest that technology transfer should be performed with caution, and adaptations must be made to account for differences in users' capabilities.
The emergence of touch screen devices poses a new set of challenges regarding text entry. These are most obvious when considering blind people, as touch screens lack the tactile feedback they are used to when interacting with devices. The available solutions for non-visual text entry resort to a wide set of targets, complex interaction techniques, or unfamiliar layouts. We propose BrailleType, a text-entry method based on the Braille alphabet. BrailleType avoids multi-touch gestures in favor of simpler single-finger interaction, featuring few and large targets. We performed a user study with fifteen blind subjects to assess this method's performance against Apple's VoiceOver approach. BrailleType, although slower, was significantly easier to use and less error prone. Results suggest that the target users would have a smoother adaptation to BrailleType than to other, more complex methods.
No two persons are alike. We usually ignore this diversity, as we have the capability to adapt and, without noticing, become experts in interfaces that were probably misadjusted to begin with. This adaptation is not always within the user's reach. One neglected group is the blind. Age of blindness onset, age, and cognitive and sensory abilities are some of the characteristics that diverge between users. Regardless, all are presented with the same methods, ignoring their capabilities and needs. Interaction with mobile devices is highly visually demanding, which widens the gap between blind people. Herein, we present studies performed with 13 blind people consisting of key acquisition tasks with 10 mobile devices. Results show that different capability levels have a significant impact on user performance and that this impact is related to the device and its demands. It is paramount to understand mobile interaction demands and relate them to users' capabilities, towards inclusive design.
No two people are alike. We usually ignore this diversity, as we have the capability to adapt and, without noticing, become experts in interfaces that were probably misadjusted to begin with. This adaptation is not always within the user's reach. One neglected group is the blind. Spatial ability, memory, and tactile sensitivity are some of the characteristics that diverge between users. Regardless, all are presented with the same methods, ignoring their capabilities and needs. Interaction with mobile devices is highly visually demanding, which widens the gap between blind people. Our research goal is to identify the individual attributes that influence mobile interaction by blind users, and to match them with mobile interaction modalities in a comprehensive and extensible design space. We aim to provide knowledge for device design, device prescription, and interface adaptation.
The physiotherapy process consists of restoring some quality of life to motor-impaired people through the training of a set of movements. It is up to the physiotherapist to observe, interpret, and assess the current state and evolution of their patients in order to maximize their physical performance. In this article, we present an analysis of the current physiotherapy process at a rehabilitation center for tetraplegics, identifying its main limitations and the opportunities for a technological tool. Following a user-centered design approach, we describe a support platform for physiotherapists whose main goal is to make rehabilitation a more reliable and robust process. Preliminary evaluations with the target population confirm the usefulness of our approach, contributing to more accurate patient monitoring. Finally, we present several interaction scenarios illustrating the full potential of the system.
Mobile touch-screen interfaces and tetraplegic people have a controversial connection. While users with residual capacities in their upper extremities could benefit immensely from a device that does not require strength to operate, the precision needed to effectively select a target bars these people from countless communication, leisure, and productivity opportunities. Insightful projects have attempted to bridge this gap via either special hardware or particular interface tweaks. Still, we need further insight into the challenges, and into the frontiers separating failure from success, for such applications to take hold. This paper discusses an evaluation conducted with 15 tetraplegic people to learn the limits of their performance within a comprehensive set of interaction methods. We then present the results concerning a particular interaction technique: Tapping. Results show that performance varies across different areas of the screen, and that this distribution changes with target size.
Mobile devices are designed mostly to fit users with no particular disability. Tactile affordances are neglected in favor of more attractive, stylish interfaces, and assistive solutions are stereotypical, approaching disabilities from a narrow perspective. A blind user is presented with screen-reading software to overcome the inability to receive feedback from the device. However, these solutions go only half-way. In the absence of sight, other capabilities stand out. Above all, the sense of touch plays an essential role while interacting with physical keypads. To empower these users, a deeper understanding of their capabilities and how they relate to technology is mandatory. We propose a user-product compatibility approach, taking into account that blind users have different tactile attributes. We expect to correlate users' tactile sensitivity with keypad demands, enabling informed keypad design and selection.
We are moving towards a future where people will be surrounded by technology and multiple appliances, allowing the creation of a truly intelligent environment. However, this multitude of devices raises several issues for the HCI research area. Our preliminary studies confirmed that most devices are difficult for blind people to use, due to inappropriate interfaces. The approach described in this work tries to deal with this problem by moving the user interface from the appliances to an intermediary device, one that users are familiar with and can fully control. Additionally, we propose an interface generation algorithm that provides consistent user interfaces to all appliances in the environment.
This article presents an evaluation conducted with 15 tetraplegic users with the goal of understanding their capabilities across a set of interaction techniques (Tapping, Crossing, Exiting, and Directional Gestures) and their respective parameterizations (position, size, and direction). The results showed that, for each technique, effectiveness and precision vary according to the different parameterizations. In general, Tapping (the traditional method) was the preferred interaction technique and among the most effective. This shows that it is possible to create unified interfaces accessible to users with and without disabilities, provided that appropriate parameterization or adaptation methods exist.
Although devices such as mobile phones play an increasingly important role in the daily lives of many people, they still present difficulties and restrictions to populations with special needs. Blind and visually impaired people in particular, deprived of the visual information on which most devices rely, require additional cognitive effort when interacting with mobile phones. Despite the interest in understanding the importance of human characteristics in interaction with technology, there is a large gap regarding studies that relate cognitive abilities to the use of mobile devices by visually impaired people. Given the higher cognitive effort required in the absence of sight, we intend to characterize the different types of users according to their cognitive abilities, in order to explore different interaction methods and thus create solutions suited to each user's profile.
The current rehabilitation process is characterized by its long duration and demotivating nature. However, it is an indispensable activity for the recovery of tetraplegic patients. The goal of this work is to make physiotherapy a more fun and engaging process for its users. The first contribution of this article is a detailed description of the traditional physiotherapy process, namely the characterization and understanding of the most relevant exercises. Second, taking into account the users' needs, we derive some implications for the design of technological platforms. We then present our approach, which combines real-world elements and processes with virtual elements, thereby offering users a richer and more engaging experience, and we propose a set of technological solutions that can make physiotherapy a more enjoyable activity.
Mobile devices are usually designed for users without any kind of disability. Consequently, tactile feedback is often neglected in favor of aesthetically attractive devices. Moreover, accessibility solutions are usually stereotypical, approaching disabilities from a limited perspective. In particular, screen readers are used by blind users as a way to overcome the inability to receive feedback from the device. However, these solutions solve only some of the existing problems. In blindness, other capabilities gain greater relevance. Above all, touch plays an essential role when interacting with physical keypads. To maximize the performance of these users, a deeper understanding of their capabilities is required. In this work we propose a user-product compatibility approach, seeking to correlate users' tactile sensitivity with keypad demands, enabling the creation of interfaces through informed design.
The growing miniaturization of mobile devices and their visually demanding interfaces impose several challenges on the blind population. In particular, traditional text-entry methods are inadequate for the needs of these users. This article describes a new approach to data entry on mobile devices based on a gestural interface. NavTilt is a simple and natural interaction method, requiring only one hand, that can be used without visual feedback.
Touch screen mobile devices bear the promise of endless leisure, communication, and productivity opportunities for motor-impaired people. Indeed, users with residual capacities in their upper extremities could benefit immensely from a device with no demands regarding strength. However, the precision required to effectively select a target without physical cues creates problems for people with limited motor abilities. Our goal is to thoroughly study mobile touch screen interfaces, their characteristics and parameterizations, thus providing the tools for informed interface design for motor-impaired users. We present an evaluation performed with 15 tetraplegic people that allowed us to understand the factors limiting user performance within a comprehensive set of interaction techniques (Tapping, Crossing, Exiting, and Directional Gesturing) and parameterizations (Position, Size, and Direction). Our results show that for each technique, accuracy and precision vary across different areas of the screen and directions, in a way that is directly dependent on target size. Overall, Tapping was both the preferred technique and among the most effective. This shows that it is possible to design inclusive, unified interfaces for motor-impaired and able-bodied users once the correct parameterization or adaptability is assured.
NavTap is a navigational method that enables blind users to input text on a mobile device by reducing the associated cognitive load. In this paper, we present studies that go beyond a laboratory setting, exploring the method's effectiveness and learnability as well as its influence on users' daily lives. Eight blind users participated in designing the prototype (3 weeks) while five took part in the studies over 16 more weeks. Results gathered in controlled weekly sessions and real-life usage logs enabled us to better understand NavTap's advantages and limitations. The method revealed itself to be easy both to learn and to improve with. Indeed, from day one in real-life settings, users were able to better control their mobile devices to send SMS and to perform other tasks that require text input, such as managing a phonebook. While individual user profiles play an important role in determining their evolution, even less capable users (with age-induced impairments or cognitive difficulties) were able to perform the assigned tasks (SMS, phonebook) both in the laboratory and in everyday use, showing continuous improvement in their skills. According to interviews, none were able to input text before. NavTap dramatically changed their relation with mobile devices and noticeably improved their social interaction capabilities.
Most blind users frequently need help when visiting unknown places. While the white cane or guide dog can aid users in their mobility, the major difficulties arise in orientation. The lack of both reference points and visual cues is the main cause. Despite extensive research on orientation interfaces for the blind, their guiding instructions are not aligned with users' needs and language, resulting in solutions that provide inadequate feedback. We aim to overcome this issue, allowing users to walk through unknown places by receiving familiar and natural feedback. Our contributions are in understanding, through user studies, how blind users explore an unknown place, and their difficulties, capabilities, needs, and behaviors. We also analyzed how these users create their own mental maps, verbalize a route, and communicate with each other. By structuring and generalizing this information, we were able to create a prototype that generates familiar instructions, behaving like a blind companion, one with similar capabilities that understands their "friend" and speaks the same language. Finally, we evaluated the system with the target population, validating our approach and guidelines. Results show a high degree of overall user satisfaction and provide encouraging cues to further the present line of work.
For the majority of blind people, walking in unknown places without help is a very difficult, or even impossible, task. The white cane is the main aid to a blind user's mobility. However, the major difficulties arise in the orientation task, caused by the lack of reference points and the inability to access visual cues. We aim to overcome this issue, allowing users to walk through unknown places by receiving familiar and easily understandable feedback. Our preliminary contributions are in understanding, through user studies, how blind users explore an unknown place, and their difficulties, capabilities, and needs. We also analyzed how these users create their own mental maps, verbalize a route, and communicate with each other. Structuring and generalizing this information, we were able to create a prototype that generates familiar and adequate instructions, behaving like a blind companion, one with similar capabilities that understands their "friend" and speaks the same language. We evaluated the system with the target population, validating our approach and orientation guidelines while gathering overall user satisfaction.
NavTap is a navigational method that enables blind users to input text on a mobile device by reducing the associated cognitive load. We present studies that go beyond a laboratory setting, exploring the method's effectiveness and learnability as well as its influence on users' daily lives. Eight blind users participated in the prototype's design (3 weeks) while five took part in the studies over 16 more weeks. All were unable to input text before. Results gathered in controlled weekly sessions and real-life interaction logs revealed the method to be easy to learn and to improve with, as users were able to fully control mobile devices from first contact in real-life scenarios. Individual profiles play an important role in determining evolution, and even less capable users (with age-induced impairments or cognitive difficulties) were able to perform the required tasks, in and out of the laboratory, with continuous improvement. NavTap dramatically changed the users' relation with their devices and improved their social interaction capabilities.
Mobile phones play an important role in modern society. Their applications extend beyond basic communications, ranging from productivity to leisure. However, most tasks beyond making a call require significant visual skills. While screen-reading applications make text more accessible, most interaction, such as menu navigation and especially text entry, requires hand–eye coordination, making it difficult for blind users to interact with mobile devices and execute tasks. Although solutions exist for people with special needs, these are expensive and cumbersome, and software approaches require adaptations that remain ineffective, difficult to learn, and error prone. Recently, touch-screen equipped mobile phones, such as the iPhone, have become popular. The ability to directly touch and manipulate data on the screen without using any intermediary devices has a strong appeal, but the possibilities for blind users are at best limited. In this article, we describe NavTouch, a new, gesture-based, text-entry method developed to aid vision-impaired users with mobile devices that have touch screens. User evaluations show it is both easy to learn and more effective than previous approaches.
Mobile devices play an important role in modern society. Their functionalities go beyond simple communication, now encompassing a wide range of features, whether for leisure or professional purposes. Interaction with these devices is visually demanding, making it difficult or impossible for blind users to control their device. In particular, text entry, a task common to many applications, is hard to perform, as it depends on visual feedback from both the keyboard and the screen. Through new text-entry systems that explore the capabilities of blind users, the system presented in this article offers them the possibility of operating different types of devices. In addition to common mobile phones, we also present an interaction method for devices with touch screens. Studies with blind users validated the proposed approaches across the various devices, which surpass traditional methods in terms of performance, learnability, and satisfaction of the target users.