The DCitizens SIG aims to navigate ethical dimensions in forthcoming Digital Civics projects, ensuring enduring benefits and community resilience. Additionally, it seeks to shape the future landscape of digital civics for ethical and sustainable interventions. As we dive into these interactive processes, a challenge arises in discerning authentic intentions and validating perspectives. This exploration extends to evaluating the sustainability of future interactions and scrutinising biases impacting engaged communities. The commitment is to ensure future outcomes align with genuine community needs and address the ethical imperative of a considerate departure strategy. This dialogue encourages future researchers and practitioners to integrate ethical considerations and community-centric principles, fostering a more sustainable and responsible approach to technology-driven interventions in future urban regeneration and beyond.
Accessibility research has gained traction, yet ethical gaps persist in the inclusion of individuals with disabilities, especially children. Inclusive research practices are essential to ensure that research and design solutions cater to the needs of all individuals, regardless of their abilities. Working with children with disabilities in Human-Computer Interaction and Human-Robot Interaction presents a unique set of ethical dilemmas. These young participants often require additional care, support, and accommodations, which can exceed researchers’ resources or expertise. The lack of clear guidance on navigating these challenges further aggravates the problem. To provide a basis on which to address this issue, we adopt a critical reflective approach, evaluating our impact by analyzing two case studies involving children with disabilities in HCI/HRI research. Flowing from these, we call for a shift in our approach to ethics in participatory research contexts to one that is processual, situational, and community-led.
The surge of Multidisciplinary Teams (MDTs) has transformed healthcare, moving from siloed medical teams to collaborative units comprising professionals from diverse medical specialties. Despite their global adoption and recognized benefits, there is a research gap regarding the current context and dynamics of MDT Meetings (MDTMs), hindering the design of systems tailored to this context. This study delves into cancer MDTMs, highlighting emerging practices and challenges. We conducted an observational study across three hospitals, uncovering the intricate interplay of organizational, technological, and interpersonal factors. Our insights emphasize the complexities of MDTMs, including physical infrastructure, the structure of MDTM discussions, and adaptability, revealing challenges in information management and turn-taking strategies. By addressing these dimensions, our aim is to inform the development of more efficient and effective MDTMs in healthcare.
This work-in-progress presents ALMA, an innovative prototype for storytelling with a smart soft toy inspired by Snoezelen principles. Its objective is to improve children’s emotion regulation while facilitating children’s exploration of sensory perceptions, emotion labeling, and self-reflection. While current methods in Child-Computer Interaction (CCI) frequently emphasize individual aspects like storytelling or multisensory experiences, there is a gap in interactive storytelling incorporating soft toys that integrate multisensory and Snoezelen principles, despite the well-documented advantages of such integration. By leveraging the synergies between multisensory experiences and storytelling, ALMA seeks to foster children’s emotion regulation and, therefore, holistic development.
Shape-changing skin is an exciting modality due to its accessible and engaging nature. Its softness and flexibility make it adaptable to different interactive devices that children with and without visual impairments can share. Although its potential as an emotionally expressive medium has been shown for sighted adults, its potential as an inclusive modality remains unexplored. This work explores shape-emotional mappings in children with and without visual impairment. We conducted a user study with 50 children (26 with visual impairment) to investigate their emotional associations with five skin shapes and two movement conditions. Results show that shape-emotional mappings are dependent on visual abilities. Our study raises awareness of the influence of visual experiences on tactile vocabulary and emotional mapping among sighted, low-vision, and blind children. We finish by discussing the causal associations between tactile stimuli and emotions and suggest inclusive design recommendations for shape-changing devices.
Research on robotic ostracism is still scarce and has only explored its effects on adult populations. Although the results revealed important carryover effects of robotic exclusion, there is no evidence yet that those results occur in child-robot interactions. This paper provides the first exploration of robotic ostracism with children. We conducted a study using the Robotic Cyberball Paradigm in a third-person perspective with a sample of 52 children aged five to ten years old. The experimental study had two conditions: Exclusion and Inclusion. In the Exclusion condition, children observed a peer being excluded by two robots, while in the Inclusion condition, the observed peer interacted equally with the robots. Notably, even 5-year-old children could discern when robots excluded another child. Children who observed exclusion reported lower levels of belonging and control, and exhibited higher prosocial behaviour than those witnessing inclusion. However, no differences were found in children’s meaningful existence, self-esteem, and physical proximity across conditions. Our user study provides important methodological considerations for applying the Robotic Cyberball Paradigm with children. The results extend previous literature on both robotic ostracism with adults and interpersonal ostracism with children. We finish by discussing the broader implications of children observing ostracism in human-robot interactions.
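For readers unfamiliar with the Cyberball paradigm, the two conditions differ only in how ball tosses are scheduled among the agents. The Python sketch below illustrates one plausible schedule; the warm-up length, toss count, and agent names are illustrative assumptions rather than the parameters used in this study.

```python
import random

def next_recipient(condition: str, toss_index: int,
                   agents=("robot_a", "robot_b", "peer")):
    """Pick who receives the next toss in a Cyberball-style game.

    Hypothetical schedule: in 'inclusion' the observed peer keeps a fair
    (about 1/3) share of tosses; in 'exclusion' the peer only receives the
    ball during a short warm-up phase and is then ignored by the robots.
    (Simplification: we ignore who currently holds the ball.)
    """
    if condition == "inclusion" or toss_index < 3:
        return random.choice(agents)
    return random.choice(["robot_a", "robot_b"])  # exclusion after warm-up

# Example: generate a 30-toss schedule for each condition
for condition in ("inclusion", "exclusion"):
    schedule = [next_recipient(condition, i) for i in range(30)]
    print(condition, "->", schedule.count("peer"), "tosses to the observed peer")
```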
Children with visual impairments often struggle to fully participate in group activities due to limited access to visual cues. They have difficulty perceiving what is happening, when, and how to act, leading to children with and without visual impairments becoming frustrated with the group activity and reducing mutual interactions. To address this, we created Touchibo, a tactile storyteller robot acting in a multisensory setting, encouraging touch-based interactions. Touchibo provides an inclusive space for group interaction, as touch is a highly accessible modality in a mixed-visual ability context. In a study involving 107 children (37 with visual impairments), we compared Touchibo to an audio-only storyteller. Results indicate that Touchibo significantly improved children’s perception of individual and group participation, sparked touch-based interactions, and was perceived as more likable and helpful. Our study highlights touch-based robots’ potential to enrich children’s social interactions by prompting interpersonal touch, particularly in mixed-visual ability settings.
This workshop proposal advocates for a dynamic, community-led approach to ethics in Human-Computer Interaction (HCI) by integrating principles from feminist HCI and digital civics. Traditional ethics in HCI often overlook interpersonal considerations, resulting in static frameworks ill-equipped to address dynamic social contexts and power dynamics. Drawing from feminist perspectives, the workshop aims to lay the groundwork for developing a meta-toolkit for community-led feminist ethics, fostering collaborative research practices grounded in feminist ethical principles. Through pre-workshop activities, interactive sessions, and post-workshop discussions, participants will engage in dialogue to advance community-led ethical research practices. Additionally, the workshop seeks to strengthen the interdisciplinary community of researchers and practitioners interested in ethics, digital civics, and feminist HCI. By fostering a reflexive approach to ethics, the workshop contributes to the discourse on design’s role in shaping future interactions between individuals, communities, and technology.
Participatory design initiatives, especially within the realm of digital civics, are often integrated and co-developed with the very citizens and communities they intend to assist. Digital civics research aims to create positive social change using a variety of digital technologies. These research projects commonly adopt various embedded processes, such as commissioning models [5]. Despite the adoption of this process within a range of domains, there is currently no framework of best practices and accountability procedures to ensure that we engage with citizens ethically and that our projects are sustainable. This workshop aims to provide a space to start collaboratively constructing a dynamic framework of best practices, laying the groundwork for the future of sustainable embedded research processes. The overarching goal is to foster discussions and share insights that contribute to developing effective practices, ensuring the longevity and impact of participatory digital civics projects.
This paper introduces a hands-on workshop centered on participatory design (PD) approaches tailored for engaging young children, with a special focus on failures, challenges, and successes in prior experiences within the child-computer interaction (CCI) domain. Although previous efforts have highlighted the advantages of engaging young children in PD, research has overlooked their involvement as co-designers, leading to a lack of exploration and understanding of their unique perspectives and challenges in the design process. Through an interactive session and collaborative activities, this workshop will facilitate discussions surrounding challenges, successes, and lessons learned through PD with young children. By evaluating and exchanging experiences, we aim to enhance our understanding of PD and refine its methodologies for this particular population. By synthesizing the shortcomings, difficulties, and successes of past experiences, the workshop will bring together researchers and practitioners to initiate efforts toward closing this research gap. Together we will establish the groundwork for enhanced approaches and a deeper understanding of how to involve young children in PD, which will enhance future efforts in the field of CCI.
Neurodivergent children spend most of their time in neurodiverse schools alongside their neurotypical peers and often face social exclusion. Inclusive play activities are a strong vehicle of inclusion. Unfortunately, games designed for the specific needs of neurodiverse groups are scarce. Given the potential of robots to support play, we led a co-design process to build an inclusive robotic game for neurodiverse classrooms. We conducted five co-design workshops, engaging 80 children from neurodiverse classrooms in designing an inclusive game. Employing the resulting design insights, we iteratively prototyped and playtested a tabletop robotic game leveraging off-the-shelf robots. Reflecting upon our findings, we discuss how the longitudinal co-design process (rather than the resulting game) was key in allowing children the space to learn how to accommodate accessibility needs and create inclusive play experiences. We posit the use of co-design to enhance children’s interpersonal relationships, foster feelings of ownership, and encourage appropriation practices as a strategy to sustain inclusive experiences that extend beyond project timelines or artefact designs.
We present LocomotiVR, a Virtual Reality tool designed with physiotherapists to improve gait rehabilitation in clinical practice. The tool features two interfaces: a VR environment to immerse the patient in the therapy activity, and a desktop tool operated by a physiotherapist to customize exercises and follow the patient’s performance. Results revealed that LocomotiVR presented promising acceptability, usage, and engagement scores. These results were supported by qualitative data collected from participating experts, which indicated high levels of satisfaction, motivation, and acceptance to incorporate LocomotiVR into daily therapy practice. Concerns were related to patient safety and the lack of legal regulation.
Accessibility research has gained traction, yet ethical gaps persist in the inclusion of individuals with disabilities, especially children. Inclusive research practices are essential to ensure research and design solutions cater to the needs of all individuals, regardless of their abilities. Working with children with disabilities in Human-Computer Interaction and Human-Robot Interaction presents a unique set of ethical dilemmas. These young participants often require additional care, support, and accommodations, which can exceed researchers’ resources or expertise. The lack of clear guidance on navigating these challenges further aggravates the problem. To provide a basis from which to address this issue, we adopt a critical reflective approach, evaluating our impact by analyzing two case studies involving children with disabilities in HCI/HRI research.
This paper introduces a relational perspective on ethics within the context of Feminist Digital Civics and community-led design. Ethics work in HCI has primarily focused on prescriptive machine ethics and bioethics principles rather than people. In response, we advocate for a community-led, processual approach to ethics, acknowledging power dynamics and local contexts. We thus propose a multidimensional adaptive model for ethics in HCI design, integrating an intersectional feminist ethical lens. This framework embraces feminist epistemologies, methods, and methodologies, fostering a reflexive practice. By weaving together situated knowledges, standpoint theory, intersectionality, participatory methods, and care ethics, our approach offers a holistic foundation for ethics in HCI, aiming to advance community-led practices and enrich the discourse surrounding ethics within this field.
We report on the design and execution of a probe as an anonymous self-reporting tool to investigate the perception of mental wellbeing and support services for university students. The pictorial describes a six-day probe study with students, focusing on students’ perceptions, struggles, and coping strategies to maintain their mental wellbeing. Our contribution is twofold: we detail the design and deployment of the probe for HCI practitioners and designers to adapt and adopt, and we reflect on the data, deriving sensitizing concepts and personas to support the design practice for students’ mental wellbeing.
Collaborative coding environments foster learning, social skills, computational thinking training, and supportive relationships. In the context of inclusive education, these environments have the potential to promote inclusive learning activities for children with mixed-visual abilities. However, there is limited research focusing on remote collaborative environments, despite the opportunity to design new modes of access and control of content to promote more equitable learning experiences. We investigated the tradeoffs between remote and co-located collaboration through a tangible coding kit. We asked ten pairs of mixed-visual ability children to collaborate in an interdependent and asymmetric coding game. We contribute insights on six dimensions - effectiveness, computational thinking, accessibility, communication, cooperation, and engagement - and reflect on differences, challenges, and advantages between collaborative settings related to communication, workspace awareness, and computational thinking training. Lastly, we discuss design opportunities of tangibles, audio, roles, and tasks to create inclusive learning activities in remote and co-located settings.
Inclusion is key in group work and collaborative learning. We developed a mediator robot to support and promote inclusion in group conversations, particularly in groups composed of children with and without visual impairment. We investigate the effect of two mediation strategies on group dynamics, inclusion, and perception of the robot. We conducted a within-subjects study with 78 children, 26 of whom experienced visual impairments, in a decision-making activity. Results indicate that the robot can foster inclusion in mixed-visual ability group conversations. The robot succeeds in balancing participation, particularly when using a highly intervening mediating strategy (directive strategy). However, children feel more heard by their peers when the robot is less intervening (organic strategy). We extend prior work on social robots to assist group work and contribute a mediator robot that enables children with visual impairments to engage equally in group conversations. We finish by discussing design implications for inclusive social robots.
Playful robotics engages children in learning through play while simultaneously developing critical thinking and social, cognitive, and motor skills. Such playful experiences are particularly valuable in inclusive education to promote social and inclusive behaviours. We present TACTOPI, an inclusive and playful multisensory environment that leverages tangible interaction and a robot as the main character. We investigate how TACTOPI supports play in 10 dyads of children with mixed visual abilities. Results show that multisensory elements supported children to experience activities as joyful. Storytelling and guided play added a layer of meaningfulness to the activities, and the robot engaged children in minds-on thinking. TACTOPI enabled children to engage in collaborative social play and facilitated supportive and inclusive behaviours. We contribute a playful multisensory environment, an analysis of the effect of its components on social, cognitive, and inclusive play, and design considerations for inclusive multisensory environments that prioritize play.
Transitioning to and through University is a delicate period for students’ well-being. Moreover, the recent COVID-19 pandemic added a further toll through the various challenges related to studying, socializing, community-building, and safety. These challenges inspired the design of a mobile application, called Tecnico GO!, to support university students’ well-being and academic performance. This paper presents the design rationale and evaluation of the app conducted during the academic year 2021-2022. Findings cluster around three themes: students’ study needs; building a sense of community; and gamification strategies. The discussion elaborates on students’ perceptions of well-being during the pandemic. Students’ perceptions of the app are positive and appreciative of the crowdsensing features, which support learning goals, community building, and safety. On the other hand, the gamification features, as currently deployed, do not achieve the expected goals.
Many neurodivergent children are integrated into mainstream schools alongside their neurotypical peers. However, they often face social exclusion, which may have lifelong effects. Inclusive play activities can be a strong driver of inclusion. Unfortunately, games designed for the specific needs of neurodiverse groups, those that include neurodivergent and neurotypical individuals, are scarce. Given the potential of robots as engaging devices, we led a 6-month co-design process to build an inclusive and entertaining robotic game for neurodiverse classrooms. We first interviewed neurodivergent adults and educators to identify the barriers and facilitators for including neurodivergent children in mainstream classrooms. Then, we conducted five co-design sessions, engaging four neurodiverse classrooms with 81 children (19 neurodivergent). We present a reflection on our co-design process and the resulting robotic game through the lens of Self-Determination Theory, discussing how our methodology supported the intrinsic motivations of neurodivergent children.
Current signing avatars are often described as unnatural, as they cannot accurately reproduce all the subtleties of the synchronized body behaviors of a human signer. In this paper, we investigate a new dynamic approach for transitions between signs and the effect of mouthing behaviors. Although native signers preferred animations with dynamic transitions, we did not find significant differences in comprehension and perceived naturalness scores. On the other hand, we show that including mouthing behaviors improved comprehension and perceived naturalness for novice Portuguese Sign Language learners.
Current signing avatars are often described as unnatural, as they cannot accurately reproduce all the subtleties of the synchronized body behaviors of a human signer. In this paper, we propose a new dynamic approach for transitions between signs, focusing on mouthing animations for Portuguese Sign Language. Although native signers preferred animations with dynamic transitions, we did not find significant differences in comprehension and perceived naturalness scores. On the other hand, we show that including mouthing behaviors improved comprehension and perceived naturalness for novice sign language learners. Results have implications in computational linguistics, human-computer interaction, and synthetic animation of signing avatars.
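To make the notion of dynamic transitions concrete, the sketch below interpolates joint angles between the final pose of one sign and the initial pose of the next, deriving the transition length from how far the joints must travel. This is a minimal illustration under our own assumptions (linear joint-angle blending at a fixed angular speed with ease-in/ease-out timing), not the exact animation method proposed in the paper.

```python
import numpy as np

def ease(t: float) -> float:
    """Smoothstep ease-in/ease-out timing curve."""
    return t * t * (3 - 2 * t)

def transition(pose_a, pose_b, speed_deg_per_s=180.0, fps=30):
    """Blend joint angles (degrees) from pose_a to pose_b.

    'Dynamic' here means the number of in-between frames is scaled by the
    largest joint excursion, so bigger pose changes take longer.
    """
    pose_a, pose_b = np.asarray(pose_a, float), np.asarray(pose_b, float)
    travel = np.abs(pose_b - pose_a).max()
    frames = max(2, int(travel / speed_deg_per_s * fps))
    return [pose_a + ease(i / (frames - 1)) * (pose_b - pose_a)
            for i in range(frames)]

# Two toy poses, each with three joint angles in degrees
for frame in transition([0, 45, 90], [30, 0, 120]):
    print(np.round(frame, 1))
```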
Play is a central aspect of childhood development, with games as a vital tool to promote it. However, neurodivergent children, especially those in neurodiverse environments, are underserved by HCI games research. Most existing work takes a top-down approach, disregarding neurodivergent interests throughout most of the design process. Co-design is often proposed as a tool to create truly accessible and inclusive gaming experiences. Nevertheless, co-designing with neurodivergent children within neurodiverse groups brings about unique challenges, such as different communication styles, sensory needs, and preferences. Building upon recommendations from prior work in neurodivergent, mixed-ability, and child-led co-design, we propose a concrete participatory game design kit for neurodiverse classrooms: PartiPlay. Moreover, we present preliminary findings from an in-the-wild experiment with the kit, showcasing its ability to create an inclusive co-design process for neurodiverse groups of children. We aim to provide actionable steps for future participatory design research with neurodiverse children.
Touchibo is a modular robotic platform for enriching interpersonal communication in human-robot group activities, suitable for children with mixed visual abilities. Touchibo incorporates several modalities, including dynamic textures, scent, audio, and light. Two prototypes are demonstrated for supporting storytelling activities and mediating group conversations between children with and without visual impairment. Our goal is to provide an inclusive platform for children to interact with each other, perceive their emotions, and become more aware of how they impact others.
Recognition of computational thinking as a relevant skill set has increased its prevalence in school curricula and the number of coding platforms and kits available. The inaccessibility of the latter has been a focus of recent attention, resulting in the emergence of accessible approaches. Conversely, there has been limited attention to activities, how training platforms are being used in curricular practice, and how they are being adapted for children with disabilities. We present findings from a qualitative interview study with six IT instructors depicting their practices, experiences, and views towards an inclusive future classroom.
COVID-19 gave rise to discussions around designing for life during the pandemic, in particular related to health, leisure, and education. In 2020, an online survey aimed at university students (N=225) pointed the authors to various challenges related to well-being in terms of studying, socializing, community, and safety during the COVID-19 pandemic. These results shaped the crowdsensing-enabled service design of a mobile application, Tecnico GO!, aimed at supporting students’ well-being. Considering the constantly changing context caused by the pandemic, we present a study conducted during the academic year 2021-2022, examining if and how the App’s features continue to respond to students’ needs. The evaluation of the App was based on 12 semi-structured interviews and think-aloud protocols. Findings cluster around three themes: a) Supporting the study experience; b) Building a sense of community; c) Improving gamification for better participation. The discussion elaborates on students’ perceptions of well-being during the pandemic. Students’ views of the App are overall positive and highlight that crowdsensing-enabled design does contribute to learning, community, and safety, but that the gamification, as currently deployed, does not.
Dissociative Identity Disorder (DID) is characterized by the presence of at least two distinct identities in the same individual. This paper describes a co-design process with a person living with DID. We first aimed to uncover the main challenges experienced by the co-designer as well as design opportunities for novel technologies. We then engaged in a prototyping stage to design a wearable display (WhoDID) to facilitate in-person social interactions. The prototype is meant to be worn as a necklace and enables the user to make their fronting personality visible to others, thus facilitating social encounters and easing sudden changes of identity. We reflect on the design features of WhoDID in the broader context of supporting people with DID. Moreover, we provide insights on co-designing with someone with multiple (sometimes conflicting) personalities regarding requirement elicitation, decision-making, prototyping, and ethics. To our knowledge, we report the first design process with a DID user within the ASSETS and CHI communities. We aim to encourage other assistive technology researchers to design with DID users.
Storytelling has the potential to be an inclusive and collaborative activity. However, it is unclear how interactive storytelling systems can support such activities, particularly when considering mixed-visual ability children. In this paper, we present an interactive multisensory storytelling system and explore the extent to which an emotional robot can be used to support inclusive experiences. We investigate the effect of the robot’s emotional behavior on the joint storytelling process, resulting narratives, and collaboration dynamics. Results show that when children co-create stories with a robot that exhibits emotional behaviors, they include more emotive elements in their stories and explicitly accept more ideas from their peers. We contribute a multisensory environment that enables children with visual impairments to engage in joint storytelling activities with their peers and analyze the effect of a robot’s emotional behaviors on an inclusive storytelling experience.
Typing on mobile devices is a common and complex task. The act of typing itself thereby encodes rich information, such as the typing method, the context it is performed in, and individual traits of the person typing. Researchers are increasingly using a selection or combination of experience sampling and passive sensing methods in real-world settings to examine typing behaviours. However, there is limited understanding of the effects these methods have on measures of input speed, typing behaviours, compliance, perceived trust, and privacy. In this paper, we investigate the tradeoffs of everyday data collection methods. We contribute empirical results from a four-week field study (N=26), in which participants contributed by transcribing, composing, passively having sentences analyzed, and reflecting on their contributions. We present a tradeoff analysis of these data collection methods, discuss their impact on text-entry applications, and contribute a flexible research platform for in-the-wild text-entry studies.
Visually impaired children are increasingly educated in mainstream schools following an inclusive educational approach. However, even though visually impaired (VI) and sighted peers are side by side in the classroom, previous research showed a lack of participation of VI children in classroom dynamics and group activities. This leads to reduced engagement between VI children and their sighted peers and a missed opportunity to value and explore class members’ differences. Robots, due to their physicality and their ability to perceive the world, behave socially, and act through a wide range of interactive modalities, can improve mixed-visual ability children’s access to group activities while fostering their mutual understanding and social engagement. With this work, we aim to use social robots as facilitators to boost inclusive activities in mixed-visual ability classrooms.
Physical rehabilitation plays an essential role in recovering from a stroke, but it can become repetitive and boring. We present an innovative sound-based prototype for real-time sonification of an upper limb exercise. The prototype is designed to engage stroke survivors in upper body exercises and influence their body perceptions. We ran a preliminary study with ten healthy participants to validate our sonification approach. Findings suggested that movement sonification has the potential for patient engagement and positively influences perceived body weight and capability. Moreover, the proposed approach holds promising results for future research with stroke survivors.
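As an illustration of the kind of mapping such a prototype performs, the sketch below converts an elevation angle into a pitch through a linear angle-to-MIDI mapping. The ranges, and the choice of pitch as the sonified parameter, are assumptions for illustration; the prototype's actual mapping is not detailed here.

```python
def sonify_elevation(angle_deg: float,
                     angle_range=(0.0, 180.0),
                     midi_range=(48, 84)) -> float:
    """Map an upper-limb elevation angle to a tone frequency in Hz."""
    lo, hi = angle_range
    t = min(max((angle_deg - lo) / (hi - lo), 0.0), 1.0)  # normalize to 0..1
    midi = midi_range[0] + t * (midi_range[1] - midi_range[0])
    return 440.0 * 2 ** ((midi - 69) / 12)  # MIDI note number -> Hz

for angle in (0, 45, 90, 135, 180):
    print(f"{angle:3d} deg -> {sonify_elevation(angle):7.1f} Hz")
```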
A rising trend in Personal Informatics has leveraged mobile applications to help users track their wellbeing; however, these digital solutions focus on quantitative data, lacking the insights provided by qualitative data in paper notebooks. We propose to digitally augment a paper diary to allow both analogue and digital data, bridging the gap between qualitative and quantitative data tracking practices to support better awareness and reflection on health data. As a first case study, we designed a self-tracking tool to help college students manage their wellbeing by increasing self-awareness and easing help-seeking behaviours. Next, we conducted a longitudinal study to validate the tool’s effectiveness and analyse its acceptability. Results show that our approach helped students by allowing moments of self-reflection and self-awareness. Additionally, our findings suggest that qualitative data is most useful when important events and abrupt changes to wellbeing occur. Preference for paper or digital diaries is highly user-dependent; however, most participants favoured a digital-only tool with notetaking capabilities.
The recent rise of remote approaches to group physical activity has shown that these strategies lack social engagement. Following a user-centred design process grounded on the Playful Experience (PLEX) Framework’s dimensions, we developed an augmentation of video conference-based group exercise to enhance the social dynamics of high-intensity interval training. We conducted a user study (N = 12) to analyse the effect of our approach on the perceived playfulness of the experience, enjoyment, and effort of participants. Results show an increase in the PLEX Framework dimensions of Competition and Sensation. Additionally, our findings suggest positive trends in the participants’ enjoyment and effort, thus raising new design implications related to the design space of videoconference group exercise interfaces.
Touch data, and in particular text-entry data, has mostly been collected in the laboratory, under controlled conditions. While touch and text-entry data has consistently shown its potential for monitoring and detecting a variety of conditions and impairments, its deployment in-the-wild remains a challenge. In this paper, we present WildKey, an Android keyboard toolkit that enables the usable deployment of in-the-wild user studies. WildKey is able to analyse text-entry behaviours through implicit and explicit text-entry data collection while ensuring user privacy. We detail each of WildKey’s components and features, all of the metrics collected, and discuss the steps taken to ensure user privacy and promote compliance.
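While the abstract does not enumerate WildKey's metrics, text-entry toolkits conventionally report input speed and error rates computed from the presented and transcribed strings. A minimal sketch of the standard formulas follows; the function names are ours, not WildKey's API.

```python
def msd(a: str, b: str) -> int:
    """Minimum string distance (Levenshtein) between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def wpm(transcribed: str, seconds: float) -> float:
    """Standard words-per-minute: (|T| - 1) / time_in_seconds * 60 / 5."""
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

def uncorrected_error_rate(presented: str, transcribed: str) -> float:
    """Share of characters left erroneous in the final transcription."""
    return msd(presented, transcribed) / max(len(presented), len(transcribed))

print(wpm("the quick brown fox", 12.0))                    # 18.0 WPM
print(uncorrected_error_rate("hello world", "helo wrld"))  # ~0.18
```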
Portuguese Sign Language, like spoken Portuguese, evolved naturally, acquiring grammatical characteristics distinct from those of Portuguese. Developing a translator between the two therefore does not consist merely of mapping each word to a sign (signed Portuguese), but of guaranteeing that the resulting signs satisfy the grammar of Portuguese Sign Language and that the translations are semantically correct. Previous work relies exclusively on manual translation rules and is very limited in the range of grammatical phenomena covered, producing little more than signed Portuguese. In this paper, we present the first translation system from Portuguese to Portuguese Sign Language, PE2LGP, which, in addition to manual rules, relies on translation rules built automatically from a reference corpus. Given a sentence in Portuguese, the system returns a sequence of glosses with markers identifying facial expressions, fingerspelled words, and more. An automatic evaluation and a manual evaluation are presented, and the results indicate improvements in the translation quality of short, simple sentences compared to the baseline system (signed Portuguese). This is also the first work that deals with the grammatical facial expressions that mark interrogative and negative sentences.
Accessible introductory programming environments are scarce, and their study within ecological settings (e.g., at home) is almost non-existent. We present ACCembly, an accessible block-based environment that enables children with visual impairments to perform spatial programming activities. ACCembly allows children to assemble tangible blocks to program a multimodal robot. We evaluated this approach with seven families that used the system autonomously at home. Results showed that both the children and family members learned from what was an inclusive and engaging experience. Children leveraged fundamental computational thinking concepts to solve spatial programming challenges; parents took different roles as mediators, some actively teaching and scaffolding, others learning together with their child. We contribute an environment that enables children with visual impairments to engage in spatial programming activities, an analysis of parent-child interactions, and reflections on inclusive programming environments within a shared family experience.
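To give a flavour of what assembling tangible blocks into a robot program entails, here is a hypothetical interpreter that walks a block sequence, including a repeat construct of the kind used in spatial challenges, and drives a stand-in robot. The block vocabulary and robot API are invented for illustration and are not ACCembly's actual interface.

```python
def run_program(blocks, robot):
    """Execute a sequence of (command, *args) blocks on a robot object."""
    for block in blocks:
        if block[0] == "repeat":           # ("repeat", times, body_blocks)
            _, times, body = block
            for _ in range(times):
                run_program(body, robot)
        else:                              # dispatch the command to the robot
            getattr(robot, block[0])(*block[1:])

class PrintRobot:
    """Stand-in robot that just logs its actions."""
    def forward(self, steps): print(f"forward {steps}")
    def turn(self, degrees): print(f"turn {degrees}")
    def play_sound(self, name): print(f"play {name}")

# "Drive" a square while beeping at each corner
program = [("repeat", 4, [("forward", 2), ("turn", 90), ("play_sound", "beep")])]
run_program(program, PrintRobot())
```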
Visually impaired (VI) children face challenges in collaborative learning in classrooms. Robots have the potential to support inclusive classroom experiences by leveraging their physicality, bespoke social behaviors, sensors, and multimodal feedback. However, the design of social robots for mixed-visual ability classrooms remains mostly unexplored. This paper presents a four-month-long community-based design process where we engaged with a school community. We provide insights into the barriers experienced by children and how social robots can address them. We also report on a participatory design activity with mixed-visual ability children, highlighting the expected roles, attitudes, and physical characteristics of robots. Findings contextualize social robots within inclusive classroom settings as a holistic solution that can interact anywhere when needed, and suggest a broader view of inclusion beyond disability that includes children’s personality traits, technology access, and mastery of school subjects. We finish by providing reflections on the community-based design process.
It is estimated that 55% to 75% of individuals who experience a stroke have persistent impairment of the affected upper limb (UL). There is a need to identify training strategies, allied with interactive systems, for retraining motor function of the UL. Virtual reality (VR), using either immersive or nonimmersive technology, seems to be one such promising strategy. Virtual reality allows patients to have close-to-reality experiences, providing them with varied, engaging, and realistic experiences. For the physiotherapist, the use of interactive technologies is a challenge that can improve treatment adherence, allow new environments adapted to patient needs, abilities, and goals, as well as different task options. The objective of this analysis was to systematically review the benefits and limitations of VR for motor recovery of the upper limb in the post-stroke population. Randomised controlled trials were searched in the Pubmed and PEDro databases, between January 2009 and January 2019, using the following keywords: "Virtual reality", "video games", "upper limb" and "stroke". We included articles that used immersive and nonimmersive technology in upper limb recovery after stroke and that compared VR with other modalities. We excluded all articles in which the patient received home-based intervention or community rehabilitation programs. All included clinical trials had a PEDro score of 6 or higher. Fifteen studies met the inclusion criteria. Only three studies considered immersive VR. The training of functional tasks appears to provide the greatest benefits in upper extremity function, with improvements in joint range of motion, hand motor function, grip strength, and dexterity. Two studies indicated that long-term improvements persist at follow-up. None of the studies reported any significant adverse effects. There is moderate to high evidence that supports the beneficial effects of VR on upper limb motor recovery in stroke patients. However, more studies are needed to determine which kinds of VR systems are the most appropriate, particularly which ones may contribute to or affect cortical reorganisation. It is also necessary to identify the most adequate frequency, duration, and intensity for the sessions.
Health conditions, both chronic and acute, are often accompanied by disability-like impairments that might affect mobility, cognition, or perception. These impairments are often pernicious because they are difficult to isolate, vary in intensity and extent over time, and are under-investigated. Here, we make the case that these impairments are often impervious to traditional accessibility solutions and thinking, and that new solutions are needed. We present argumentation and case studies, which build the case for a different category of impairments called 'Health-Induced Impairments and Disabilities' (HIID). The distinction between traditionally defined disabilities and HIIDs is essential because an understanding that this category of impairments is fundamentally different both in cause and nature affects the effectiveness of the accessibility solutions we provide. Here, we outline the 'problem' space and elaborate on the four main characteristics of HIIDs (as we see them) to provide delineation and clarity, as only then can we enact robust solutions within this problem space: (1) combinatorial impairments; (2) dynamic impairments varying in magnitude and extent; (3) impairments as a comorbidity; and (4) socio-technical impairments. We illustrate these characteristics with third-party cases that serve as exemplars of the problems faced. We do not provide research solutions, or indeed any novel empirical evidence. Instead, we define a place for discussions to begin. Therefore, this work is better understood as a position paper or a call-to-action. We make the case that addressing the disability (caused by the underlying illness) is often ineffective; what we need to do is address the illness directly, which will in turn address the disability through their transitory relationship.
Previous attempts to make block-based programming accessible to visually impaired children have mostly focused on audio-based challenges, leaving aside spatial constructs, commonly used in learning settings. We sought to understand the qualities and flaws of current programming environments in terms of accessibility in educational settings. We report on a focus group with IT and special needs educators, where they discussed a variety of programming environments for children, identifying their merits, barriers, and opportunities. We then conducted a workshop with 7 visually impaired children where they experimented with a bespoke tangible robot-programming environment. Video recordings of this activity were analyzed with educators to discuss children’s experiences and emergent behaviours. We contribute a set of qualities that programming environments should have to be inclusive to children with different visual abilities, insights for the design of situated classroom activities, and evidence that inclusive tangible robot-based programming is worth pursuing.
Geometry and handwriting rely heavily on the visual representation of basic shapes. It can become challenging for students with visual impairments to perceive these shapes and understand complex spatial constructs. For instance, knowing how to draw is highly dependent on spatial and temporal components, which are often inaccessible to children with visual impairments. Hand-held robots, such as the Cellulo robots, open unique opportunities to teach drawing and writing through haptic feedback. In this paper, we investigate how these tangible robots could support inclusive, collaborative learning activities, particularly for children with visual impairments. We conducted a user study with 20 pupils with and without visual impairments, where they engaged in multiple drawing activities with tangible robots. We contribute novel insights on the design of child-robot interaction, the learning of shapes and letters, children’s engagement, and responses in a collaborative scenario that addresses the challenges of inclusive learning.
Inclusion of vulnerable people in society is essential to grant human rights and equal opportunities for all. Our research goal is to mitigate disparities in education, ensure access for all children, including pupils with special educational needs and disabilities (SEND), and promote inclusion among students using social robots. Inclusion in schools has different dimensions to be considered, namely: identification of exclusion reasons and behaviours, accessibility of school activities, and promotion of a diverse and inclusive culture among children. Our approach to this challenge was a 6-month-long community engagement effort with a local school community to gain insights from different stakeholders: children with and without disabilities (visual impairment and autism), parents, teachers, and several specialists, including braille, speech, and occupational therapists, psychologists, and orientation and mobility instructors. We then conducted participatory design sessions during lectures, building robots with 50 children with mixed abilities. We contribute novel insights on the design of robots for mixed-ability groups of children in remote and co-located settings, and on the challenges and opportunities for an inclusive school raised by the school community.
Sign languages are visual languages and the main means of communication used by Deaf people. However, the majority of the information available online is presented in written form and is therefore not easily accessible to the Deaf community. Avatars that can animate sign languages have gained increasing interest in this area due to their flexibility in the generation and editing process. Synthetic animation of conversational agents can be achieved through the use of notation systems. HamNoSys is one such system, describing movements of the body through symbols. Its XML-compliant form, SiGML, is a machine-readable encoding of HamNoSys capable of animating avatars. Nevertheless, there are no freely available open-source libraries that allow the conversion from HamNoSys to SiGML. Our goal is to develop an open-access tool that can perform this conversion independently of other platforms. This system represents a crucial intermediate step in the bigger pipeline of animating signing avatars. Two case studies are described to illustrate different applications of our tool.
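Conceptually, the conversion maps each HamNoSys symbol to a corresponding SiGML tag and wraps the result in a sign element. The sketch below shows that shape with a tiny, partly invented symbol table; a real converter covers the full HamNoSys alphabet and handles nonmanual features as well.

```python
import xml.etree.ElementTree as ET

# Illustrative subset of the symbol-to-tag mapping; the code points shown
# are placeholders, as actual values depend on the HamNoSys font/encoding.
HAMNOSYS_TO_SIGML = {
    "\ue000": "hamfist",
    "\ue00c": "hamextfingeru",
    "\ue027": "hampalmu",
}

def hamnosys_to_sigml(gloss: str, symbols: str) -> str:
    """Wrap a HamNoSys symbol string into a minimal SiGML document."""
    sigml = ET.Element("sigml")
    sign = ET.SubElement(sigml, "hns_sign", gloss=gloss)
    manual = ET.SubElement(sign, "hamnosys_manual")
    for ch in symbols:
        ET.SubElement(manual, HAMNOSYS_TO_SIGML[ch])  # one empty tag per symbol
    return ET.tostring(sigml, encoding="unicode")

print(hamnosys_to_sigml("HELLO", "\ue000\ue00c\ue027"))
```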
Software for the production of sign languages is much less common than for spoken languages. Such software usually relies on 3D humanoid avatars to produce signs, which, inevitably, necessitates the use of animation. One barrier to the use of popular animation tools is their complexity and steep learning curve, which can be hard to master for inexperienced users. Here, we present PE2LGP, an authoring system that features a 3D avatar that signs Portuguese Sign Language. Our Animator is designed specifically to craft sign language animations using a keyframe method, and is meant to be easy to use and learn for users without animation skills. We conducted a preliminary evaluation of the Animator, where we animated seven Portuguese Sign Language sentences and asked four sign language users to evaluate their quality. This evaluation revealed that the system, in spite of its simplicity, is indeed capable of producing comprehensible messages.
The design of graphical user interfaces has been evolving from skeuomorph interfaces – which use elements that mimic the aesthetics and functionality of their real-world counterparts – to minimalist and flat designs. Despite the growing popularity of these new design approaches, they can be challenging for older adults who experience a decline in visual and cognitive abilities. Still, little is known about user performance, aesthetic perception, and preference of older adults, particularly in comparison to younger users and traditional skeuomorph interfaces. In this paper, we examine the performance and aesthetic perception of older (65-77 years old) and younger (20-40) adults with three design approaches: skeuomorph, skeuominimalist, and flat design. Results show flat design is either slower or less accurate than traditional skeuomorph interfaces for older adults across three tasks: visual search, identifying clickable objects, and multiple page navigation. Younger adults were less susceptible to performance differences between design approaches, but still subject to “click uncertainty” with flat interfaces. Skeuominimalism did not show clear performance benefits over flat design or skeuomorphism, while the latter reduced the performance gap between age groups. Finally, younger adults preferred the simplicity of skeuominimalism, while older adults preferred skeuomorph interfaces because of the perceived usability, beauty, and trustworthiness.
Blind people face significant challenges when using smartphones. The focus on improving non-visual mobile accessibility has been at the level of touchscreen access. Our research investigates the challenges faced by blind people in their everyday usage of mobile phones. In this paper, we present a set of studies performed with the target population, novices and experts, using a variety of methods targeted at identifying and verifying challenges and coping mechanisms. Through a multiple-methods approach, we identify and validate challenges locally with a diverse set of user expertise and devices, and at scale through the analyses of the largest Android and iOS dedicated forums for blind people. We contribute a comprehensive corpus of smartphone challenges for blind people, an assessment of their perceived relevance for users with different expertise levels, and a discussion of a set of directions for future research that tackle the open and often overlooked challenges.
Interactive tabletops offer unique collaborative features, particularly their size, geometry, orientation and, more importantly, the ability to support multi-user interaction. Although previous efforts were made to make interactive tabletops accessible to blind people, the potential to use them in collaborative activities remains unexplored. In this paper, we present the design and implementation of a multi-user auditory display for interactive tabletops, supporting three feedback modes that vary on how much information about the partners’ actions is conveyed. We conducted a user study with ten blind people to assess the effect of feedback modes on workspace awareness and task performance. Furthermore, we analyze the type of awareness information exchanged and the emergent collaboration strategies. Finally, we provide implications for the design of future tabletop collaborative tools for blind users.
There are over 80 million stroke survivors globally, making it the main cause of long-term disability worldwide. Not only do the challenges associated with stroke affect the quality of life (QoL) of survivors, but also that of their families. To explore these challenges and define design opportunities for technologies to improve the QoL of both stakeholders, we conducted semi-structured interviews with 10 survivors and one of their family members. We uncovered three major interlinked themes: strategies to cope with technological barriers, the (in)adequacy of assistive technologies, and limitations of the rehabilitation process. Findings highlight multiple design opportunities, including the need for meaningful patient-centered tools and methods to improve rehabilitation effectiveness, emotion-aware computing for family emotional support, and re-thinking the nature of assistive technologies to consider the perception of transitory stroke-related disabilities. We thus argue for a new class of dual-purpose technologies that fit survivors’ abilities while promoting the regain of function.
Feet input can support mid-air hand gestures for touchless medical image manipulation to prevent unintended activations, especially in sterile contexts. However, foot interaction has yet to be investigated in dental settings. In this paper, we report a mixed-methods study with medical dentistry professionals. To this end, we developed a touchless medical image system that can be used in either sitting or standing configurations. Clinicians could use both hands as 3D cursors and a minimalist single-foot gesture vocabulary to activate manipulations. First, we performed a qualitative evaluation with 18 medical dentists to assess the utility and usability of our system. Second, we used quantitative methods to compare pedal foot-supported hand interaction and hands-only conditions with 22 medical dentists. We expand on previous work by characterizing a range of potential limitations of foot-supported touchless 3D interaction in the dental domain. Our findings suggest that clinicians are open to using their feet for simple, fast, and easy access to image data during surgical procedures, such as dental implant placement. Furthermore, 3D hand cursors, supported by foot gestures for activation events, were considered useful and easy to employ for medical image manipulation. Even though most clinicians preferred hands-only manipulation for pragmatic purposes, foot-supported interaction was found to provide more precise control and, most importantly, to decrease the number of unintended activations during manipulation. Finally, we provide design considerations for future work exploring foot-supported touchless interfaces for sterile settings in Dental Medicine, regarding interaction design, foot input devices, the learning process, and camera occlusions.
Word completion interfaces are ubiquitously available in mobile virtual keyboards; however, there is no prior research on how to design these interfaces for screen reader users. To address this, we propose a design space for the nonvisual representation of word completions. The design space covers seven categories, aiming to identify challenges and opportunities for interaction design in an unexplored research topic. It is intended to guide the design of novel interaction techniques, serving as a framework for researchers and practitioners working on nonvisual word completion. To demonstrate its potential, we engaged blind users in an exploration of the design space to create their own bespoke word completion solutions. Through this study we found that users create alternative interfaces that extend current screen readers’ capabilities. The resulting interfaces are less conservative than mainstream solutions regarding notification frequency and cardinality. Customization decisions were based on perceived benefits/costs and varied depending on multiple factors, such as users’ perceived prediction accuracy, potential keystroke gains, and situational restrictions.
Mobile device users are required to constantly learn to use new apps and features, and to adapt to updates. For blind people, adapting to a new interface requires additional time and effort. In the extreme, and often in practice, devices and applications may become unusable without support from someone else. Using tutorials is a common approach to foster independent learning of new concepts and workflows. However, most tutorials available online are limited in scope or detail, or quickly become outdated. They also presume a degree of tech savviness that is not within the reach of the common mobile device user. Our research explores the democratization of assistance by enabling non-technical people to create tutorials on their mobile phones for others. We report on the interaction and information needs of blind people when following ‘amateur’ tutorials, providing insights into how to widen and improve the authoring and playthrough of these learning artifacts. We conducted a study where 12 blind users followed tutorials previously created by blind or sighted people. Our findings suggest that instructions authored by sighted and blind people are limited in different aspects, and that those limitations prevent effective learning of the task at hand. We identified the types of content produced by authors and the information required by followers during playthrough, which often do not align. We provide insights on how to support both the authoring and playthrough of nonvisual smartphone tutorials. There is an opportunity to design solutions that mediate authoring, combine contributions, adapt to user profiles, react to context, and are living artifacts capable of perpetual improvement.
In this preliminary study, we propose visual biofeedback techniques for representing compensatory movements that are commonly found in upper limb rehabilitation exercises. Here, visual biofeedback is represented by stick figures adorned with different graphical elements to highlight abnormal motor patterns. We explore four visual biofeedback techniques for analysing movements designed for neuromotor rehabilitation of the upper limb. Co-design sessions were conducted with five rehabilitation professionals. The resulting visual designs were then evaluated by three other physiotherapists, each of whom evaluated the visual biofeedback of two types of compensatory movements: arm elevation-flexion and cephalic tilt. Results indicate that although there is a preferred technique, participants suggested designing a novel representation incorporating features from different sources, i.e., a hybrid visual biofeedback technique.
Over the last three decades, the Web has become an increasingly important platform that affects every part of our lives: from requesting simple navigation instructions to actively participating in political activities; from playing video games to remotely coordinating teams of professionals; from paying monthly bills to engaging in micro-funding activities. Missing out on these opportunities is a strong vehicle of informational, economic, and social exclusion. For people with disabilities, accessing the Web is sometimes a challenging task. Assistive technologies are used to lower barriers and enable people to fully leverage all the opportunities available in (and through) the Web. This chapter introduces a brief overview of how both assistive technologies and the Web evolved over the years. It also considers some of the most commonly used assistive technologies as well as recent research efforts in the field of accessible computing. Finally, it provides a discussion of future directions for an inclusive Web.
The constant barrage of updates and novel applications to explore creates a ceaseless cycle of new layouts and interaction methods that we must adapt to. One way to address these challenges is through in-context interactive tutorials. Most applications provide onboarding tutorials using visual metaphors to guide the user through the core features available. However, these tutorials are limited in their scope and are often inaccessible to blind people. In this paper, we present AidMe, a system-wide tool for the authoring and playthrough of non-visual interactive tutorials. Tutorials are created via user demonstration and narration. In a user study with 11 blind participants using AidMe, we identified issues with instruction delivery and user guidance, providing insights into the development of accessible interactive non-visual tutorials.
Braille input enables fast nonvisual entry speeds on mobile touchscreen devices. Yet, the lack of tactile cues commonly results in typing errors, which are hard to correct. We propose Hybrid-Brailler, an input solution that combines physical and gestural interaction to provide fast and accurate Braille input. We use the back of the device for physical chorded input while freeing the touchscreen for gestural interaction. Gestures are used in editing operations, such as caret movement, text selection, and clipboard control, enhancing the overall text entry experience. We conducted two user studies to assess both input and editing performance. Results show that Hybrid-Brailler supports entry rates as fast as its virtual counterpart, while significantly increasing input accuracy. Regarding editing performance, when compared with the mainstream technique, Hybrid-Brailler shows performance benefits of 21% in speed and increased editing accuracy. We finish with lessons learned for designing future nonvisual input and editing techniques.
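For readers unfamiliar with chorded Braille entry, the sketch below decodes a chord (the set of simultaneously pressed six-dot positions) into a character using the standard Braille letter assignments; the actual Hybrid-Brailler hardware pipeline is not detailed in the abstract.

```python
# Minimal sketch of chorded Braille decoding: a chord is the set of
# six-dot Braille positions pressed simultaneously (dots 1-3 in the
# left column, dots 4-6 in the right column). The letter table covers
# a-j here for brevity; the full alphabet follows the same pattern.
BRAILLE_LETTERS = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",
    frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",
    frozenset({2, 4, 5}): "j",
}

def decode_chord(pressed_dots):
    """Return the character for a chord, or None if unrecognized."""
    return BRAILLE_LETTERS.get(frozenset(pressed_dots))

assert decode_chord({1, 2}) == "b"
assert decode_chord({3, 6}) is None  # unrecognized chord
```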
Blind people face many barriers using smartphones. Still, previous research has been mostly restricted to non-visual gestural interaction, paying little attention to the deeper daily challenges of blind users. To bridge this gap, we conducted a series of workshops with 42 blind participants, uncovering application challenges across all levels of expertise, most of which could only be overcome through a support network. We propose Hint Me!, a human-powered service that allows blind users to get in-app assistance by posing questions or browsing previously answered questions on a shared knowledge base. We evaluated the perceived usefulness and acceptance of this approach with six blind people. Participants valued the ability to learn independently and anticipated a series of usages: labeling, layout and feature descriptions, bug workarounds, and learning to accomplish tasks. Creating or browsing questions depends on aspects like privacy, knowledge of respondents, and response time, revealing the benefits of a hybrid approach.
Over the last decade there have been numerous studies on touchscreen typing by blind people. However, there are no reports about blind users' everyday typing performance and how it relates to laboratory settings. We conducted a longitudinal study involving five participants to investigate how blind users truly type on their smartphones. For twelve weeks, we collected field data, coupled with eight weekly laboratory sessions. This paper provides a thorough analysis of everyday typing data and its relationship with controlled laboratory assessments. We improve state-of-the-art techniques to obtain intent from field data, and provide insights on real-world performance. Our findings show that users improve over time, albeit at a slow rate. Substitutions are the most common type of error and have a significant impact on entry rates in both field and laboratory settings. Results show that participants are 1.3-2 times faster when typing during everyday tasks, but less accurate. We finish by deriving implications that should inform the design of future virtual keyboards for non-visual input. Moreover, our findings should be of interest to keyboard designers and researchers looking to conduct field studies to understand everyday input performance.
Deaf and hard of hearing students must constantly switch between several visual sources to gather all necessary information during a classroom lecture (e.g., instructor, slides, sign language interpreter or captioning). Using smart glasses, this research tested a potential means to reduce the effects of visual field switches, proposing that consolidating sources into a single display may improve lecture comprehension. Results showed no statistically significant comprehension improvements with the glasses, but interviews indicated that participants found it easier to follow the lecture with the glasses and saw their potential for the classroom. We conclude by highlighting priorities for future work on smart glasses and new research directions.
Following multimedia lectures in mainstream classrooms is challenging for deaf and hard-of-hearing (DHH) students, even when provided with accessibility services. Due to multiple visual sources of information (e.g., teacher, slides, interpreter), these students struggle to divide their attention among several simultaneous sources, which may result in missing important parts of the lecture; as a result, access to information is limited in comparison to their hearing peers, negatively affecting their academic achievement. In this paper we propose a novel approach to improve classroom accessibility, which focuses on improving the delivery of multimedia lectures. We introduce SlidePacer, a tool that promotes coordination between instructors and sign language interpreters, creating a single instructional unit and synchronizing verbal and visual information sources. We conducted a user study with 60 participants on the effects of SlidePacer in terms of learning performance and gaze behaviors. Results show that SlidePacer is effective in providing increased access to multimedia information; however, we did not find significant improvements in learning performance. We finish by discussing our results and the limitations of our user study, and suggest future research avenues that build on these insights.
Touch-enabled devices have a growing variety of screen sizes; however, there is little knowledge on the effect of key size on non-visual text-entry performance. We conducted a user study with 12 blind participants to investigate how non-visual input performance varies with four QWERTY keyboard sizes (ranging from 15mm down to 2.5mm). This paper presents an analysis of typing performance and touch behaviors, discussing its implications for future research. Our findings show that the benefits of larger target sizes reach an upper limit between 10mm and 15mm. Input speed decreases from 4.5 to 2.4 words per minute (WPM) for target sizes below 10mm. The smallest size was deemed unusable by participants even though performance was on par with previous work.
Word prediction can significantly improve text-entry rates on mobile touchscreen devices. However, these interactions are inherently visual and require constant scanning for new word predictions in order to take advantage of the suggestions. In this paper, we discuss the design space for non-visual word prediction interfaces and present Shout-out Suggestions, a novel interface providing non-visual access to word predictions on existing mobile devices.
Interaction with large touch surfaces is still a relatively young domain, particularly regarding the accessibility solutions offered to blind users. Their smaller mobile counterparts are shipped with built-in accessibility features, enabling non-visual exploration of linearized screen content. However, it is unknown how well these solutions perform on large interactive surfaces that use more complex spatial content layouts. We report on a user study with 14 blind participants performing common touchscreen interactions using one- and two-handed exploration. We investigate the exploration strategies applied by blind users when interacting with a tabletop. We identified six basic strategies that were commonly adopted and should be considered in future designs. We finish with implications for the design of accessible large touch interfaces.
Non-visual text-entry for people with visual impairments has focused mostly on the comparison of input techniques reporting on performance measures, such as accuracy and speed. While researchers have been able to establish that non-visual input is slow and error prone, there is little understanding of how to improve it. To develop a richer characterization of typing performance, we conducted a longitudinal study with five novice blind users. For eight weeks, we collected in-situ usage data and conducted weekly laboratory assessment sessions. This paper presents a thorough analysis of typing performance that goes beyond traditional aggregated measures of text-entry and reports on character-level errors and touch measures. Our findings show that users improve over time, albeit at a slow rate (0.3 WPM per week). Substitutions are the most common type of error and have a significant impact on entry rates. In addition to text input data, we analyzed touch behaviors, looking at touch contact points, exploration movements, and lift positions. We provide insights on why and how performance improvements and errors occur. Finally, we derive implications that should inform the design of future virtual keyboards for non-visual input.
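The character-level error categories reported here (substitutions, insertions, omissions) are conventionally derived from a minimum-edit-distance alignment of presented and transcribed text; the sketch below illustrates that standard analysis, though the paper's exact pipeline may differ.

```python
# Sketch of character-level error classification via edit-distance
# alignment of presented vs. transcribed text, the standard basis for
# counting insertions, substitutions, and omissions in text-entry
# studies (the paper's exact pipeline may differ).
def classify_errors(presented, transcribed):
    m, n = len(presented), len(transcribed)
    # dp[i][j] = minimum edit distance between the two prefixes
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = presented[i - 1] != transcribed[j - 1]
            dp[i][j] = min(dp[i - 1][j] + 1,         # omission
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # match/substitution
    errors = {"substitutions": 0, "insertions": 0, "omissions": 0}
    i, j = m, n
    while i > 0 or j > 0:  # backtrace one optimal alignment
        sub_cost = (presented[i - 1] != transcribed[j - 1]) if i and j else 0
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + sub_cost:
            if sub_cost:
                errors["substitutions"] += 1
            i, j = i - 1, j - 1
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            errors["insertions"] += 1
            j -= 1
        else:
            errors["omissions"] += 1
            i -= 1
    return errors

print(classify_errors("hello", "hxllo"))  # 1 substitution
print(classify_errors("hello", "helo"))   # 1 omission
```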
The advent of system-wide accessibility services on mainstream touch-based smartphones has been a major point of inclusion for blind and visually impaired people. Ever since, researchers have aimed to improve the accessibility of specific tasks, such as text-entry and gestural interaction. However, little work has aimed to understand and improve the overall accessibility of these devices in real-world settings. In this paper, we present an eight-week-long study with five novice blind participants in which we seek to understand major concerns, expectations, challenges, barriers, and experiences with smartphones. The study included pre-adoption and weekly interviews, weekly controlled task assessments, and in-the-wild system-wide usage. Our results show that mastering these devices is an arduous and long task, confirming the users' initial concerns. We report on accessibility barriers experienced throughout the study that would not surface in task-based laboratory settings. Finally, we discuss how smartphones are being integrated into everyday activities and highlight the need for better adoption support tools.
Most work investigating mobile HCI is carried out within controlled laboratory settings; these spaces are not representative of the real-world environments in which the technology will predominantly be used, which can produce a skewed or inaccurate understanding of interaction behaviors and users' abilities. While mobile in-the-wild studies provide more realistic representations of technology usage, there are additional challenges to conducting data collection outside the lab. In this paper we discuss these challenges and present TinyBlackBox, a standalone data collection framework to support mobile in-the-wild studies with today's smartphone and tablet devices.
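As a rough illustration of what a standalone in-the-wild collection framework entails, the sketch below buffers timestamped touch events and flushes them to local storage in batches to keep overhead low; the class and format are hypothetical, not TinyBlackBox's API.

```python
# Hypothetical sketch of an in-the-wild touch logger in the spirit of
# TinyBlackBox: timestamped events are buffered in memory and flushed
# to local storage in batches to keep runtime overhead low.
import json
import time

class TouchLogger:
    def __init__(self, path, flush_every=100):
        self.path = path
        self.flush_every = flush_every
        self.buffer = []

    def log(self, event_type, x, y):
        self.buffer.append({"t": time.time(), "type": event_type,
                            "x": x, "y": y})
        if len(self.buffer) >= self.flush_every:
            self.flush()

    def flush(self):
        # Append one JSON object per line; easy to parse incrementally.
        with open(self.path, "a") as f:
            for event in self.buffer:
                f.write(json.dumps(event) + "\n")
        self.buffer.clear()

logger = TouchLogger("touch_log.jsonl")
logger.log("down", 120, 640)
logger.log("up", 121, 642)
logger.flush()
```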
We propose HoliBraille, a system that enables Braille input and output on current mobile devices. We use vibrotactile motors combined with dampening materials in order to actuate directly on the users' fingers. The prototype can be attached to current capacitive touchscreen devices, enabling multipoint and localized feedback. HoliBraille can be leveraged in several applications, including educational tools for learning Braille, as a communication device for deaf-blind people, and as a tactile feedback system for multitouch Braille input. We conducted a user study with 12 blind participants on Braille character discrimination. Results show that HoliBraille is effective in providing localized feedback; however, character discrimination performance is strongly related to the number of simultaneous stimuli. We finish by discussing the obtained results and proposing future research avenues to improve multipoint vibrotactile perception.
Tablet devices can display full-size QWERTY keyboards similar to physical ones. Yet, the lack of tactile feedback and the inability to rest the fingers on the home keys result in a highly demanding and slow exploration task for blind users. We present SpatialTouch, an input system that leverages previous experience with physical QWERTY keyboards by supporting two-handed interaction through multitouch exploration and spatial, simultaneous audio feedback. We conducted a user study with 30 novice touchscreen participants entering text under one of two conditions: (1) SpatialTouch or (2) the mainstream accessibility method, Explore by Touch. We show that SpatialTouch enables blind users to leverage previous experience, as they make better use of home keys and perform more efficient exploration paths. Results suggest that although SpatialTouch did not result in faster input rates overall, it was indeed able to leverage previous QWERTY experience, in contrast to Explore by Touch.
Touchscreens are pervasive in mainstream technologies; they offer novel user interfaces and exciting gestural interactions. However, to interpret and distinguish between the vast range of gestural inputs, the devices require users to consistently perform interactions in line with the predefined location, movement, and timing parameters of the gesture recognizers. For people with variable motor abilities, particularly hand tremors, performing these input gestures can be extremely challenging and imposes limitations on the possible interactions the user can make with the device. In this paper, we examine the touchscreen performance and interaction behaviors of motor-impaired users on mobile devices. The primary goal of this work is to measure and understand the variance of touchscreen interaction performance by people with motor impairments. We conducted a four-week in-the-wild user study with nine participants using a mobile touchscreen device. A Sudoku stimulus application measured their interaction performance abilities during this time. Our results show not only that interaction performance varies significantly between users, but also that an individual's interaction abilities differ significantly between device sessions. Finally, we propose and evaluate the effect of novel tap gesture recognizers that accommodate individual variances in touchscreen interactions.
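One plausible reading of a tap recognizer that accommodates individual variance is sketched below: movement and duration cutoffs adapt to the spread of the user's own recent taps rather than being fixed constants. This is an assumption for illustration, not the paper's recognizer.

```python
# Sketch of a per-user tap recognizer: instead of fixed movement and
# duration cutoffs, thresholds adapt to the spread of the user's own
# recent taps (one plausible reading of "accommodating individual
# variance"; the published recognizers may differ).
import statistics

class AdaptiveTapRecognizer:
    def __init__(self, history=50):
        self.movements = [5.0]   # px; seeded with a lenient default
        self.durations = [0.15]  # seconds
        self.history = history

    def threshold(self, samples):
        # mean + 2 standard deviations covers most of the user's taps
        spread = statistics.pstdev(samples) if len(samples) > 1 else 0.0
        return statistics.mean(samples) + 2 * spread

    def is_tap(self, movement_px, duration_s):
        accepted = (movement_px <= self.threshold(self.movements) and
                    duration_s <= self.threshold(self.durations))
        if accepted:  # adapt only on accepted taps
            self.movements = (self.movements + [movement_px])[-self.history:]
            self.durations = (self.durations + [duration_s])[-self.history:]
        return accepted

rec = AdaptiveTapRecognizer()
print(rec.is_tap(3.0, 0.12))   # True: within the adapted envelope
print(rec.is_tap(40.0, 0.90))  # False: likely a drag, not a tap
```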
Braille has paved its way into mobile touchscreen devices, providing faster text input for blind people. This advantage comes at the cost of accuracy, as chord typing over a flat surface has proven to be highly error prone. A misplaced finger on the screen translates into a different or unrecognized character. However, the chord itself carries information that can be leveraged to improve input performance. We present B#, a novel correction system for multitouch Braille input that uses chords as the atomic unit of information rather than characters. Experimental results on data collected from 11 blind people revealed that B# is effective in correcting errors at the character level, providing opportunities for instant correction of unrecognized chords, and at the word level, where it outperforms a popular spellchecker by providing correct suggestions for 72% of incorrect words (against 38%). We finish with implications for designing chord-based correction systems and avenues for future work.
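To make the chord-as-atomic-unit idea concrete, the sketch below ranks valid Braille chords by how few dot positions separate them from an unrecognized input chord, reflecting that a misplaced finger typically adds, drops, or shifts a single dot. The table and distance heuristic are illustrative assumptions, not the published B# algorithm.

```python
# Chord-level correction sketch in the spirit of B#: an unrecognized
# chord is mapped to the valid chords that differ in the fewest dots.
# Table (letters a-h) and the distance heuristic are illustrative.
DOTS = {"a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
        "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}}

def chord_distance(chord_a, chord_b):
    """Number of dot positions in which two chords differ."""
    return len(chord_a ^ chord_b)  # symmetric difference

def correction_candidates(chord, max_distance=1):
    """Valid letters whose chords are within max_distance of the input."""
    ranked = sorted(DOTS.items(), key=lambda kv: chord_distance(chord, kv[1]))
    return [letter for letter, dots in ranked
            if chord_distance(chord, dots) <= max_distance]

# The chord {1, 2, 4, 5, 6} maps to no letter, but is one dot from 'g'.
print(correction_candidates({1, 2, 4, 5, 6}))  # ['g']
```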
Mobile devices are increasingly used for text-entry in contexts where visual attention is fragmented and graphical information is inadequate, yet current solutions to typing on virtual keyboards make it a visually demanding task. This work looks at assistive technologies and interface attributes as tools to ease the task. Two within-subject experiments were performed with 23 and 17 participants, respectively. The first experiment aimed to understand how walking affected text-entry performance and, additionally, to assess how effective assistive technologies can be in mobile contexts. In the second experiment, adaptive keyboards featuring character prediction and pre-attentive attributes to ease the visual demands of text-entry interfaces were developed and evaluated. We found that both text-input speed and overall quality are affected in mobile situations. Contrary to expectations, assistive technologies proved ineffective with visual feedback. The second experiment showed that pre-attentive attributes do not affect users' performance in text-entry tasks, even though a 3.3-4.3% decrease in error rates was measured. We found that users reduce walking speed to compensate for the challenges posed by mobile text-entry. Caution should be exercised when transferring assistive technologies to mobile contexts, since they need adaptations to address mobile users' needs. Also, while pre-attentive attributes seemingly have no effect on experienced QWERTY typists' performance, they showed promise for both novice users and typists in attention-demanding contexts.
Touchscreen mobile devices are highly customizable, allowing designers to create inclusive user interfaces that are accessible to a broader audience. However, the knowledge to provide this new generation of user interfaces is yet to be uncovered. Our goal was to thoroughly study mobile touchscreen interfaces and provide guidelines for informed design. This paper presents an evaluation performed with 15 tetraplegic and 18 able-bodied users that allowed us to identify their main similarities and differences within a set of interaction techniques (Tapping, Crossing, and Directional Gesturing) and parameterizations. Results show that Tapping and Crossing are the most similar and easiest techniques to use for both motor-impaired and able-bodied users. Regarding Tapping, error rates start to converge at 12 mm, which proved a good compromise for target size. Crossing offered a similar level of accuracy; however, larger targets (17 mm) are significantly easier to cross for motor-impaired users. Directional Gesturing was the least inclusive technique. Regarding position, edges proved troublesome: they increased Tapping precision for disabled users, while decreasing able-bodied users' accuracy when targets are too small (7 mm). It is argued that despite the expected error rate disparity, there are clear resemblances between user groups, thus enabling the development of inclusive touch interfaces. Tapping, a traditional interaction technique, was among the most effective for both target populations, along with Crossing. The main difference concerns Directional Gesturing, which, in spite of its unconstrained nature, proved inaccurate for motor-impaired users.
Blind people typically resort to audio feedback to access information on electronic devices. However, this modality is not always an appropriate form of output. Novel approaches that allow for private and inconspicuous interaction are paramount. In this paper, we present a vibrotactile reading device that leverages the user's Braille knowledge to read textual information. UbiBraille consists of six vibrotactile actuators that are used to code a Braille cell and communicate single characters. The device is able to simultaneously actuate the user's index, middle, and ring fingers of both hands, providing fast and mnemonic output. We conducted two user studies on UbiBraille to assess both character and word reading performance. Character recognition rates ranged from 54% to 100% and were highly character- and user-dependent. Indeed, participants with greater expertise in Braille reading/writing were able to take advantage of this knowledge and achieve higher accuracy rates. Regarding word reading performance, we investigated four different vibrotactile timing conditions. Participants were able to read entire words and obtained recognition rates up to 93%, with the most proficient being able to achieve a rate of 1 character per second.
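The sketch below illustrates the character-to-actuator mapping such a device implies, assuming the left hand's index, middle, and ring fingers carry dots 1-3 and the right hand's carry dots 4-6, mirroring Perkins-style chording; the exact assignment is our assumption for illustration.

```python
# Sketch of UbiBraille-style output: each Braille dot is assigned to
# one finger (assumed here: left index/middle/ring = dots 1-3, right
# index/middle/ring = dots 4-6, mirroring Perkins-style chording).
DOT_TO_FINGER = {1: "left index", 2: "left middle", 3: "left ring",
                 4: "right index", 5: "right middle", 6: "right ring"}
DOTS = {"a": {1}, "b": {1, 2}, "l": {1, 2, 3}, "p": {1, 2, 3, 4}}

def actuators_for(char):
    """Fingers that should vibrate simultaneously to convey `char`."""
    return sorted(DOT_TO_FINGER[d] for d in DOTS[char])

print(actuators_for("l"))  # ['left index', 'left middle', 'left ring']
```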
Current touch interfaces lack the rich tactile feedback that allows blind users to detect and correct errors. This is especially relevant for multitouch interactions, such as Braille input. We propose HoliBraille, a system that combines touch input and multi-point vibrotactile output on mobile devices. We believe this technology can offer several benefits to blind users; namely, conveying feedback for complex multitouch gestures, improving input performance, and supporting inconspicuous interactions. In this paper, we present the design of our unique prototype, which allows users to receive multitouch localized vibrotactile feedback. Preliminary results on perceptual discrimination show an average of 100% and 82% accuracy for single-point and chord discrimination, respectively. Finally, we discuss a text-entry application with rich tactile feedback.
Despite the overwhelming emergence of accessible digital technologies, Braille still plays a role in providing blind people with access to content. Nevertheless, many fail to see the benefits of nurturing Braille, particularly given the time and effort required to achieve proficiency. Our research focuses on maximizing access and motivation to learn and use Braille. We present initial insights from 5 interviews with blind people, comprising Braille instructors and students, in which we characterize the learning process and usage of Braille. Based on our findings, we identify a set of opportunities around Braille education. Moreover, we devised scenarios and built hardware and software solutions to motivate discovery and retention of Braille literacy.
In recent years there has been a surge in the development of non-visual interaction techniques targeting two application areas: making content accessible to visually impaired people, and supporting minimal attention user interfaces for situationally impaired users. This SIG aims to bring together the community of researchers working around non-visual interaction techniques for people of all abilities. It will unite members of this burgeoning community in a lively discussion and brainstorming session. Attendees will work to identify and report current and future research challenges as well as new research avenues.
Mobile devices concentrate communication capabilities like no other gadget. Moreover, they now comprise a wide set of applications while still maintaining reduced size and weight. They have started to include accessibility features that enable the inclusion of disabled people. However, these inclusive efforts still fall short considering the possibilities of such devices, mainly due to the lack of interoperability and extensibility of current mobile operating systems (OS). In this paper, we present a case study of a multi-impaired person for whom access to basic mobile applications was provided on a per-application basis. We outline the main flaws in current mobile OS and suggest how these could further empower developers to provide accessibility components. These could then be combined to provide system-wide inclusion for a wider range of (multi-)impairments.
Recent decades have brought technological advances able to improve the quality of life of people with disabilities. However, benefits in the rehabilitation of motor-disabled people are still scarce. Therapeutic processes are lengthy and demanding for therapists and patients. Our goal is to assist therapists in rehabilitation procedures by providing a tool for accurate monitoring and evolution analysis enriched with their own knowledge. We analysed therapy sessions with tetraplegics to better understand the rehabilitation process and highlight the major requirements for a technology-enhanced tool. Results suggest that virtual movement analysis and comparison increase awareness of a patient's condition and progress during therapy.
Recent advances in mobile technologies are blurring the frontiers between able-bodied and disabled users. Indeed, mobile settings have a negative impact on motor abilities: mobile users' bodies are prone to vibrations, resulting in hand tremors that hinder target selection accuracy. These users seem to share some problems with elderly people, who experience increased physiological tremor. However, this hypothesis had yet to be thoroughly researched. In this work, we propose to bridge the gap between these domains, allowing designers to build more inclusive and comprehensive solutions using recent touch-based devices. We present two evaluations comparing situationally impaired with health-impaired users and report on the main differences and similarities we found in text-entry tasks. Our results show that while elderly users are more likely to commit cognitive errors, both user groups experience similar substitution errors. We found that the increased demands of mobility and type of device seemingly induce a "disability continuum", where situationally impaired and health-impaired users' performance is interleaved.
Touchscreen devices have become increasingly popular. Yet they lack tactile feedback and motor stability, making it difficult to type effectively on virtual keyboards. This is even worse for elderly users, given their declining motor abilities, particularly hand tremor. In this paper we examine the text-entry performance and typing patterns of elderly users on touch-based devices. Moreover, we analyze users' hand tremor profiles and their relationship to typing behavior. Our main goal is to inform future designs of touchscreen keyboards for elderly people. To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. Additionally, we thoroughly analyze the different types of errors (insertions, substitutions, and omissions), looking at touch input features and their main causes. Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be accounted for to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications for design.
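The abstract does not detail its "simple touch models"; one common example, sketched below under that assumption, estimates a user's systematic touch offset from calibration pairs and subtracts it before resolving which key was hit.

```python
# One "simple touch model", sketched as an assumption: estimate each
# user's systematic touch offset from calibration pairs (touch point,
# intended key centre) and subtract it before resolving the key.
def fit_offset(calibration_pairs):
    """Mean (dx, dy) from intended key centres to actual touches."""
    dxs = [tx - kx for (tx, ty), (kx, ky) in calibration_pairs]
    dys = [ty - ky for (tx, ty), (kx, ky) in calibration_pairs]
    return sum(dxs) / len(dxs), sum(dys) / len(dys)

def corrected_touch(touch, offset):
    (tx, ty), (dx, dy) = touch, offset
    return tx - dx, ty - dy

# Calibration: this user tends to land ~4 px right and ~6 px below keys.
pairs = [((104, 206), (100, 200)), ((54, 107), (50, 100))]
offset = fit_offset(pairs)
print(corrected_touch((204, 306), offset))  # ~(200.0, 299.5)
```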
There is no such thing as an ultimate text-entry method. People are diverse, and mobile touch typing takes place in many different places and scenarios. This translates into a wide and dynamic diversity of abilities. Conversely, different methods present different demands and are adequate for different people and situations. In this paper we focus on blind and situationally blind people: how abilities differ between people and situations, and how we can cope with those differences either by varying or adapting methods. Our research goal is to identify the human abilities that influence mobile text-entry and match them with methods (and their underlying demands) in a comprehensive and extensible design space.
Maintaining orientation while traveling in complex or unknown environments is a challenging task for visually impaired (VI) pedestrians. In this paper, we propose a novel approach to assist blind people during navigation between waypoints (walking straight) with tactors on their wrists. Our main goal is to decrease the cognitive load required for blind people to follow instructions in overloaded environments. Two issues are discussed: 1) the number of vibration motors used; and 2) the vibration dimensions employed. Preliminary results from an informal evaluation with two blind users showed that vibrations could help users maintain a straight path, although patterns were sometimes confusing. This reinforced that walking an unknown path is a demanding and stressful task and that the cognitive load should be reduced to a minimum.
In this paper we present AppInsight, a visualization tool that enables users to reminisce about their computer usage history and derive meaningful insights about behaviors and trends. Human memory has the ability to re-experience episodes from our lives when supplied with suitable contextual cues, such as places, music, and so on. We explore a small set of properties, such as the application's name, URL, and window title, as contextual cues, in order to characterize users' activity on their personal computers and how it evolves over time. Our user study shows that users enjoyed viewing their computer usage history and were able to both recall past events and introspect about their lives. Moreover, one of the most surprising outcomes is that they found several different applications for our tool, such as improving usage behaviors, controlling productivity, generating activity reports, and monitoring users in psychological studies. Finally, we discuss some lessons learned from our study and propose future research directions.
Mobile touch devices have become increasingly popular, yet typing on virtual keyboards whilst walking is still an overwhelming task. In this paper we analyze, firstly, the negative effect of walking on text-input performance, particularly users' main difficulties and error patterns; we focused our research on thumb typing, since this is a commonly used technique to interact with touch interfaces. Secondly, we analyze how these effects can be compensated for by two-hand interaction and increased target size. We asked 22 participants to input text under three mobility conditions (seated, slow walking, and normal walking) and three hand conditions (one-hand/portrait, two-hand/portrait, and two-hand/landscape). Results show that, independently of hand condition, mobility significantly decreased input quality, leading to specific error patterns. Moreover, target size was shown to compensate for the negative effect of walking, while two-hand interaction did not provide additional stability or input accuracy. We finish with implications for future designs.
More and more people interact with mobile devices whilst walking. This new interaction paradigm imposes a novel set of challenges and restrictions on mobile users, termed Situationally-Induced Impairments and Disabilities. The tremor originating from such contexts results in inaccurate movements and erroneous actions. These difficulties are particularly visible in recent touch interfaces, which lack the tactile cues and physical stability provided by their keypad-based counterparts. Nevertheless, these difficulties are not new to the accessibility community, particularly to those studying motor-impaired users. In fact, both user populations (situationally and physically impaired) seem to share similar interaction problems. This work aims to thoroughly understand to what extent technology can be transferred between these domains. Unlike the embryonic stage of mobile research, the accessibility community has the accumulated knowledge of more than two decades of research. Building a relationship between these domains will contribute towards a more inclusive and universal design approach, which will benefit and bring closer two distinct research communities.
The emergence of touch-based mobile devices brought fresh and exciting possibilities. These came at the cost of a considerable number of novel challenges, which are particularly apparent for the blind population, as these devices lack tactile cues and are extremely visually demanding. Existing solutions resort to assistive screen-reading software to compensate for the lack of sight, yet not all information reaches the blind user. Good spatial ability is still required to have a notion of the device and its interface, as well as to memorize the position of buttons on screen. These abilities, like many other individual attributes such as age, age of blindness onset, or tactile sensitivity, are often forgotten, as the blind population is presented with the same methods regardless of capabilities and needs. Herein, we present a study with 13 blind people consisting of a touch screen text-entry task with four different methods. Results show that different capability levels have a significant impact on performance and that this impact is related to the different methods' demands. These variances acknowledge the need to account for individual characteristics and give space for difference, towards inclusive design.
Touch screen mobile devices are highly flexible and customizable, allowing designers to create inclusive user interfaces that are accessible to a broader user population. However, the knowledge to provide this new generation of user interfaces is yet to be uncovered. Our goal is to thoroughly study mobile touch interfaces, thus providing the tools for informed design. We present an evaluation performed with 15 tetraplegic and 18 able-bodied people that allowed us to identify their main similarities and differences within a set of interaction techniques (Tapping, Crossing, and Directional Gesturing) and parameterizations. Results show that despite the expected error rate disparity, there are clear resemblances, thus enabling the development of inclusive touch interfaces. Tapping, a traditional interaction technique, was among the most effective for both target populations, along with Crossing. The main difference concerns Directional Gesturing, which, in spite of its unconstrained nature, proved inaccurate for motor-impaired users.
Mobile devices are used in increasingly demanding contexts, which compete for the visual resources required for effective interaction. This is more obvious when considering current visually demanding user interfaces. In this work, we propose using solutions initially designed for blind people in order to ease the visual demand of current mobile interfaces. A comparative user study was conducted with 23 sighted volunteers who performed text-entry tasks with three methods (QWERTY, a VoiceOver-like method, and NavTouch) in three mobility conditions. We first analyzed the effect of walking and visual demand, followed by the effect of using assistive technologies in mobile contexts. Results show that the traditional QWERTY keyboard outperforms alternative text-entry methods for the blind, as users prefer visual feedback over its auditory counterpart. Moreover, assistive technologies and their interaction processes proved cognitively demanding and therefore inadequate in mobile contexts. These findings suggest that technology transfer should be performed with caution, and that adaptations must be made to account for differences in users' capabilities.
The emergence of touch screen devices poses a new set of text-entry challenges. These are more obvious when considering blind people, as touch screens lack the tactile feedback they are used to when interacting with devices. The available solutions for non-visual text-entry resort to a wide set of targets, complex interaction techniques, or unfamiliar layouts. We propose BrailleType, a text-entry method based on the Braille alphabet. BrailleType avoids multi-touch gestures in favor of simpler single-finger interaction, featuring few and large targets. We performed a user study with fifteen blind subjects to assess this method's performance against Apple's VoiceOver approach. BrailleType, although slower, was significantly easier to use and less error prone. Results suggest that the target users would have a smoother adaptation to BrailleType than to other, more complex methods.
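A minimal sketch of BrailleType-style entry follows, under the assumption that the screen is divided into six large dot targets that a single finger toggles one at a time before confirming the character:

```python
# Sketch of BrailleType-style entry under stated assumptions: the
# screen is split into six large dot targets (the Braille cell); the
# user toggles dots one at a time with a single finger and confirms
# to emit the letter. Letter table truncated for brevity.
DOTS_TO_CHAR = {frozenset({1}): "a", frozenset({1, 2}): "b",
                frozenset({1, 4}): "c", frozenset({1, 4, 5}): "d"}

class BrailleTypeSketch:
    def __init__(self):
        self.selected = set()

    def toggle_dot(self, dot):   # single-finger tap on a dot target
        self.selected.symmetric_difference_update({dot})

    def confirm(self):           # e.g. a distinct gesture to accept
        char = DOTS_TO_CHAR.get(frozenset(self.selected))
        self.selected.clear()
        return char

entry = BrailleTypeSketch()
entry.toggle_dot(1)
entry.toggle_dot(4)
print(entry.confirm())  # 'c'
```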
No two persons are alike. We usually ignore this diversity, as we have the capability to adapt and, without noticing, become experts in interfaces that were probably misadjusted to begin with. This adaptation is not always within the user's reach. One neglected group is the blind. Age of blindness onset, age, and cognitive and sensory abilities are some characteristics that diverge between users. Regardless, all are presented with the same methods, ignoring their capabilities and needs. Interaction with mobile devices is highly visually demanding, which widens the gap between blind people. Herein, we present studies performed with 13 blind people consisting of key acquisition tasks with 10 mobile devices. Results show that different capability levels have a significant impact on user performance and that this impact is related to the device and its demands. It is paramount to understand mobile interaction demands and relate them to users' capabilities, towards inclusive design.
No two people are alike. We usually ignore this diversity, as we have the capability to adapt and, without noticing, become experts in interfaces that were probably misadjusted to begin with. This adaptation is not always within the user's reach. One neglected group is the blind. Spatial ability, memory, and tactile sensitivity are some characteristics that diverge between users. Regardless, all are presented with the same methods, ignoring their capabilities and needs. Interaction with mobile devices is highly visually demanding, which widens the gap between blind people. Our research goal is to identify the individual attributes that influence mobile interaction for blind people and match them with mobile interaction modalities in a comprehensive and extensible design space. We aim to provide knowledge for device design, device prescription, and interface adaptation.
Physiotherapy consists of restoring some quality of life to people with motor disabilities through the training of a set of movements. It is up to the physiotherapist to observe, interpret, and assess the current condition and evolution of their patients in order to maximize their physical performance. In this article, we present an analysis of the current physiotherapy process at a rehabilitation center for tetraplegic patients, identifying its main limitations and opportunities for a technological tool. Following a user-centered design approach, we describe a platform to support physiotherapists, whose main goal is to make rehabilitation a more reliable and robust process. Preliminary evaluations with the target population confirm the usefulness of our approach, contributing to more accurate monitoring. Finally, we present some interaction scenarios illustrating the full potential of the system.
Mobile touch-screen interfaces and tetraplegic people have a controversial connection. While users with residual capacities in their upper extremities could benefit immensely from a device that does not require strength to operate, the precision needed to effectively select a target bars these people from countless communication, leisure, and productivity opportunities. Insightful projects have attempted to bridge this gap via either special hardware or particular interface tweaks. Still, we need further insight into the challenges and the frontiers separating failure from success for such applications to take hold. This paper discusses an evaluation conducted with 15 tetraplegic people to learn the limits of their performance within a comprehensive set of interaction methods. We then present the results concerning a particular interaction technique, Tapping. Results show that performance varies across different areas of the screen, and that this distribution changes with target size.
Mobile devices are designed mostly to fit users with no particular disability. Tactile affordances are neglected in favor of more attractive, stylish interfaces, and assistive solutions are stereotypical, approaching disabilities from a narrow perspective. A blind user is presented with screen-reading software to overcome the inability to receive feedback from the device. However, these solutions go only half-way. In the absence of sight, other capabilities stand out. Above all, the sense of touch plays an essential role when interacting with physical keypads. To empower these users, a deeper understanding of their capabilities and how they relate to technology is mandatory. We propose a user-product compatibility approach, taking into account that blind users have different tactile attributes. We expect to correlate users' tactile sensitivity with keypad demands, enabling informed keypad design and selection.
We are moving towards a future where people will be surrounded by technology and multiple appliances, allowing the creation of truly intelligent environments. However, this multitude of devices raises several issues for HCI research. Our preliminary studies confirmed that most devices are difficult for blind people to use, due to inappropriate interfaces. The approach described in this work deals with this problem by moving the user interface from the appliances to an intermediary device, which users are familiar with and can fully control. Additionally, we propose an interface generation algorithm, which provides consistent user interfaces to all appliances in the environment.
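As a loose illustration of consistent interface generation (the abstract does not describe the algorithm), the sketch below renders declarative appliance descriptions into a uniform numbered menu on the intermediary device; all names are hypothetical.

```python
# Hypothetical sketch of consistent interface generation: appliances
# expose a declarative description of their functions, and the user's
# own device renders every appliance with the same menu conventions
# (the paper's actual algorithm is not detailed in this abstract).
APPLIANCES = {
    "microwave": {"functions": ["start", "stop", "set power", "set timer"]},
    "thermostat": {"functions": ["raise temperature", "lower temperature"]},
}

def generate_menu(name, spec):
    """Render any appliance description as a uniform, numbered menu."""
    lines = [f"{name.title()} - {len(spec['functions'])} functions:"]
    lines += [f"  {i}. {fn}" for i, fn in enumerate(spec["functions"], 1)]
    return "\n".join(lines)

for name, spec in APPLIANCES.items():
    print(generate_menu(name, spec))
```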
This article presents an evaluation conducted with 15 tetraplegic users with the goal of understanding their capabilities across a set of interaction techniques (Tapping, Crossing, Exiting, and Directional Gesturing) and respective parameterizations (position, size, and direction). Results showed that, for each technique, effectiveness and precision vary according to the different parameterizations. In general, Tapping (the traditional method) was the preferred interaction technique and among the most effective. This shows that it is possible to create unified interfaces accessible to users with and without disabilities, provided appropriate parameterization or adaptation methods exist.
Although devices such as mobile phones play an increasingly important role in the daily lives of many people, they still present difficulties and restrictions for populations with special needs. Blind and visually impaired people in particular, deprived of the visual information on which most devices rely, require additional cognitive effort when interacting with mobile phones. Although there is interest in understanding the importance of human characteristics in interaction with technology, there is a large gap in studies relating cognitive abilities to the use of mobile devices by visually impaired people. Given the higher cognitive effort required in the absence of sight, we aim to characterize different types of users according to their cognitive abilities, so as to explore different interaction methods and thus create solutions suited to each user's profile.
The current rehabilitation process is characterized by its long duration and demotivating nature. It is, however, an indispensable activity for the recovery of tetraplegic patients. The goal of this work is to make physiotherapy a more fun and engaging process for users. The first contribution of this article is a detailed description of the traditional physiotherapy process, namely the characterization and understanding of the most relevant exercises. Second, taking users' needs into account, we derive some implications for the design of technological platforms. We then present our approach, which combines real-world elements and processes with virtual elements, thereby offering users a richer and more engaging experience, and we propose a set of technological solutions that may turn physiotherapy into a more enjoyable activity.
Mobile devices are usually designed for users without any kind of disability. Consequently, tactile feedback is often neglected in favor of aesthetically attractive devices. Moreover, accessibility solutions are usually stereotypical, approaching disabilities from a limited perspective. In particular, screen readers are used by blind users as a way to overcome the inability to receive feedback from the device. However, these solutions only solve some of the existing problems. In blindness, other capabilities gain greater relevance. Above all, touch plays an essential role when interacting with physical keypads. To maximize these users' performance, deeper knowledge of their capabilities is necessary. In this work we propose a user-product compatibility approach, attempting to correlate users' tactile sensitivity with keypad demands, enabling the creation of interfaces through informed design.
The increasing miniaturization of mobile devices and their visually demanding interfaces pose several challenges to the blind population. In particular, traditional text-entry methods are ill-suited to these users' needs. This article describes a new approach to data entry on mobile devices based on a gestural interface. NavTilt is a simple and natural interaction method that requires only one hand and can be used without visual feedback.
Touch screen mobile devices bear the promise of endless leisure, communication, and productivity opportunities for motor-impaired people. Indeed, users with residual capacities in their upper extremities could benefit immensely from a device with no demands regarding strength. However, the precision required to effectively select a target without physical cues creates problems for people with limited motor abilities. Our goal is to thoroughly study mobile touch screen interfaces, their characteristics, and parameterizations, thus providing the tools for informed interface design for motor-impaired users. We present an evaluation performed with 15 tetraplegic people that allowed us to understand the factors limiting user performance within a comprehensive set of interaction techniques (Tapping, Crossing, Exiting, and Directional Gesturing) and parameterizations (Position, Size, and Direction). Our results show that for each technique, accuracy and precision vary across different areas of the screen and directions, in a way that is directly dependent on target size. Overall, Tapping was both the preferred technique and among the most effective. This shows that it is possible to design inclusive unified interfaces for motor-impaired and able-bodied users once the correct parameterization or adaptability is assured.
NavTap is a navigational method that enables blind users to input text on a mobile device by reducing the associated cognitive load. In this paper, we present studies that go beyond a laboratory setting, exploring the method's effectiveness and learnability as well as its influence on users' daily lives. Eight blind users participated in designing the prototype (3 weeks), while five took part in the studies over 16 more weeks. Results gathered in controlled weekly sessions and real-life usage logs enabled us to better understand NavTap's advantages and limitations. The method proved easy both to learn and to improve with. Indeed, from day one, users were able to control their mobile devices to send SMS and perform other tasks that require text input, such as managing a phonebook, in real-life settings. While individual user profiles play an important role in determining their evolution, even less capable users (with age-induced impairments or cognitive difficulties) were able to perform the assigned tasks (SMS, directory) both in the laboratory and in everyday use, showing continuous improvement in their skills. According to interviews, none were able to input text before. NavTap dramatically changed their relationship with mobile devices and noticeably improved their social interaction capabilities.
Most blind users frequently need help when visiting unknown places. While the white cane or guide dog can aid users in their mobility, the major difficulties arise in orientation; the lack of both reference points and visual cues is the main cause. Despite extensive research on orientation interfaces for the blind, their guiding instructions are not aligned with users' needs and language, resulting in solutions that provide inadequate feedback. We aim to overcome this issue, allowing users to walk through unknown places while receiving familiar and natural feedback. Our contributions are in understanding, through user studies, how blind users explore an unknown place, and their difficulties, capabilities, needs, and behaviors. We also analyzed how these users create their own mental maps, verbalize a route, and communicate with each other. By structuring and generalizing this information, we were able to create a prototype that generates familiar instructions, behaving like a blind companion: one with similar capabilities who understands their "friend" and speaks the same language. Finally, we evaluated the system with the target population, validating our approach and guidelines. Results show a high degree of overall user satisfaction and provide encouraging cues to further the present line of work.
For the majority of blind people, walking in unknown places without help is a very difficult, or even impossible, task. The white cane is the main aid to a blind user's mobility. However, the major difficulties arise in the orientation task; the lack of reference points and the inability to access visual cues are its main causes. We aim to overcome this issue, allowing users to walk through unknown places while receiving familiar and easily understandable feedback. Our preliminary contributions are in understanding, through user studies, how blind users explore an unknown place, and their difficulties, capabilities, and needs. We also analyzed how these users create their own mental maps, verbalize a route, and communicate with each other. By structuring and generalizing this information, we were able to create a prototype that generates familiar and adequate instructions, behaving like a blind companion: one with similar capabilities who understands their "friend" and speaks the same language. We evaluated the system with the target population, validating our approach and orientation guidelines, while gathering overall user satisfaction.
NavTap is a navigational method that enables blind users to input text on a mobile device by reducing the associated cognitive load. We present studies that go beyond a laboratory setting, exploring the method's effectiveness and learnability as well as its influence on users' daily lives. Eight blind users participated in the prototype's design (3 weeks), while five took part in the studies over 16 more weeks. All were unable to input text before. Results gathered in controlled weekly sessions and real-life interaction logs revealed the method to be easy to learn and to improve with, as users were able to fully control mobile devices from first contact in real-life scenarios. Individual profiles play an important role in determining evolution, and even less capable users (with age-induced impairments or cognitive difficulties) were able to perform the required tasks, in and out of the laboratory, with continuous improvement. NavTap dramatically changed users' relationship with their devices and improved their social interaction capabilities.
Mobile phones play an important role in modern society. Their applications extend beyond basic communication, ranging from productivity to leisure. However, most tasks beyond making a call require significant visual skills. While screen-reading applications make text more accessible, most interaction, such as menu navigation and especially text entry, requires hand-eye coordination, making it difficult for blind users to interact with mobile devices and execute tasks. Although solutions exist for people with special needs, these are expensive and cumbersome, and software approaches require adaptations that remain ineffective, difficult to learn, and error prone. Recently, touch-screen-equipped mobile phones, such as the iPhone, have become popular. The ability to directly touch and manipulate data on the screen without any intermediary device has strong appeal, but the possibilities for blind users are at best limited. In this article, we describe NavTouch, a new gesture-based text-entry method developed to aid vision-impaired users with mobile devices that have touch screens. User evaluations show it is both easy to learn and more effective than previous approaches.
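A minimal sketch of vowel-anchored alphabet navigation in the spirit of the NavTap/NavTouch family follows, assuming horizontal flicks step one letter and vertical flicks jump between vowel anchors (the published design may differ in its details):

```python
# Sketch of vowel-anchored alphabet navigation (assumed scheme:
# left/right flicks step one letter, up/down flicks jump between
# vowels; details may differ from the published NavTouch design).
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
VOWELS = [ALPHABET.index(v) for v in "aeiou"]

def navigate(position, gesture):
    if gesture == "right":
        return min(position + 1, len(ALPHABET) - 1)
    if gesture == "left":
        return max(position - 1, 0)
    if gesture == "down":   # jump to the next vowel anchor
        return next((v for v in VOWELS if v > position), VOWELS[-1])
    if gesture == "up":     # jump to the previous vowel anchor
        return next((v for v in reversed(VOWELS) if v < position), VOWELS[0])
    return position

# Reaching 'g' from 'a': one down flick jumps a->e, then two rights.
pos = 0
for gesture in ["down", "right", "right"]:
    pos = navigate(pos, gesture)
print(ALPHABET[pos])  # 'g'
```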
Mobile devices play an important role in modern society. Their functionality goes beyond simple communication, now encompassing a wide range of features, whether for leisure or professional purposes. Interaction with these devices is visually demanding, making it hard or even impossible for blind users to control their device. In particular, text entry, a task transversal to many applications, is difficult to perform, since it depends on visual feedback from both the keyboard and the screen. By employing new text-entry systems that exploit the capabilities of blind users, the system presented in this article offers them the possibility of operating different types of devices. Besides common mobile phones, we also present an interaction method for devices with touch screens. Studies with blind users validated the approaches proposed for the various devices, which surpass traditional methods in performance, learnability, and target-user satisfaction.