Browsing by Author "Ren, Xiangshi"
Now showing 1 - 8 of 8
Item: Ability-based optimization of touchscreen interactions (2018-01-01)
Authors: Sarcar, Sayan; Jokinen, Jussi P.P.; Oulasvirta, Antti; Wang, Zhenxin; Silpasuwanchai, Chaklam; Ren, Xiangshi
Affiliations: Kochi University of Technology; Department of Communications and Networking; Department of Computer Science
Abstract: Ability-based optimization is a computational approach for improving interface designs for users with sensorimotor and cognitive impairments. Designs are created by an optimizer, evaluated against task-specific cognitive models, and adapted to individual abilities. The approach does not require extensive data collection and can be applied either automatically or manually by users, designers, or caretakers. As a first step, the authors present optimized touchscreen layouts for users with tremor and dyslexia that potentially improve text-entry speed and reduce errors.

Item: Adaptive feature guidance: Modelling visual search with graphical layouts (Academic Press Inc., 2020-04-01)
Authors: Jokinen, Jussi P.P.; Wang, Zhenxin; Sarcar, Sayan; Oulasvirta, Antti; Ren, Xiangshi
Affiliations: Department of Communications and Networking; User Interfaces; Finnish Center for Artificial Intelligence, FCAI; Kochi University of Technology
Abstract: We present a computational model of visual search on graphical layouts. It assumes that the visual system maximises expected utility when choosing where to fixate next. Three utility estimates are available for each visual search target: one by unguided perception only, and two where perception is guided by long-term memory (location or visual feature). The system is adaptive, relying more on long-term memory as its estimates improve with experience; however, it must fall back on perception-guided search if the layout changes. The model provides a tool for practitioners to evaluate how easy it is for a novice or an expert to find an item, and what happens if a layout is changed. The model suggests, for example, that (1) visually homogeneous layouts are harder to learn and more vulnerable to changes, (2) visually salient elements are easier to find and more robust to changes, and (3) moving a non-salient element far away from its original location is particularly damaging. The model provided a good match with human data in a study with realistic graphical layouts.

Item: Approaching Aesthetics on User Interface and Interaction Design (2018-11-19)
Authors: Wang, Chen; Sarcar, Sayan; Kurosu, Masaaki; Bardzell, Jeffrey; Oulasvirta, Antti; Miniukovich, Aliaksei; Ren, Xiangshi
Affiliations: Department of Communications and Networking; Helsinki Institute for Information Technology (HIIT); User Interfaces; Kochi University of Technology; Open University of Japan; Indiana University; BEC-INFM; University of Tsukuba
Abstract: Although the HCI community contributes to engagement through beauty, attending to known and yet-to-be-discovered principles of aesthetics for digital interface design, it lacks an epistemological corpus covering the notion, the human factors, and the quantification of aesthetic aspects. The aim of the proposed workshop is to discuss these issues in order to strengthen aesthetic studies specifically for HCI and related fields. We want to create a forum for discussing, drafting, and promoting the foundations of disciplined aesthetic design within the HCI community. We thus welcome contributions such as theories, methodologies, evaluation methods, and potential applications regarding effective aesthetics for HCI and related fields. Concretely, we aim to (i) map the present state of the art of aesthetics research in HCI, (ii) build a multidisciplinary community of experts, and (iii) raise the profile of this aesthetics research area within the HCI community.

Item: Approaching Engagement towards Human-Engaged Computing (2018-04)
Authors: Salehzadeh Niksirat, Kavous; Sarcar, Sayan; Sun, Huatong; Law, Effie LC; Clemmensen, Torkil; Bardzell, Jeffrey; Oulasvirta, Antti; Silpasuwanchai, Chaklam; Light, Ann; Ren, Xiangshi
Affiliations: Department of Communications and Networking; Helsinki Institute for Information Technology (HIIT); User Interfaces; Kochi University of Technology; University of Washington; University of Leicester; Copenhagen Business School; Indiana University of Pennsylvania; Stamford International University; University of Sussex
Abstract: Debates regarding the nature and role of HCI research and practice have intensified in recent years, given the increasingly intertwined relations between humans and technologies. The framework of Human-Engaged Computing (HEC) was proposed and developed over a series of scholarly workshops to complement mainstream HCI models by leveraging synergy between humans and computers, with "engagement" as its key notion. Previous workshop meetings found engagement to be a constructive and extensible notion through which to investigate synergized human-computer relationships, but many aspects of the core concept remain underexplored. This SIG aims to examine the notion of engagement through discussions of four thematic threads. It will bring together HCI practitioners and researchers from disciplines including the Humanities, Design, Positive Psychology, Communication and Media Studies, Neuroscience, Philosophy, and Eastern Studies to share and discuss relevant knowledge and insights and to identify new research opportunities and future directions.

Item: How We Type: Eye and Finger Movement Strategies in Mobile Typing (2020-04-21)
Authors: Jiang, Xinhui; Li, Yang; Jokinen, Jussi P.P.; Hirvola, Viet Ba; Oulasvirta, Antti; Ren, Xiangshi
Affiliations: Department of Communications and Networking; User Interfaces; Helsinki Institute for Information Technology (HIIT); Aalto University; Kochi University of Technology
Abstract: Relatively little is known about eye and finger movement in typing with mobile devices. Most prior studies of mobile typing rely on log data, while data on finger and eye movements in typing come from studies with physical keyboards. This paper presents new findings from a transcription task with mobile touchscreen devices. Movement strategies were found to emerge in response to the sharing of visual attention: attention is needed both for guiding finger movements and for detecting typing errors. In contrast to typing on physical keyboards, visual attention is kept mostly on the virtual keyboard, and glances at the text display are associated with performance. When typing with two fingers, users make more errors but detect and correct them more quickly. This explains part of the known superiority of two-thumb typing over one-finger typing. We release an extensive dataset on everyday typing on smartphones.

Item: Learning to type with mobile keyboards: Findings with a randomized keyboard (Pergamon-Elsevier Science Ltd, 2022-01)
Authors: Jiang, Xinhui; Jokinen, Jussi P.P.; Oulasvirta, Antti; Ren, Xiangshi
Affiliations: Department of Communications and Networking; Finnish Center for Artificial Intelligence, FCAI; Helsinki Institute for Information Technology (HIIT); User Interfaces; Kochi University of Technology
Abstract: This paper demonstrates the learning process of typing by tracing the development of eye and finger movement strategies over time. We conducted a controlled experiment in which users typed with Qwerty and randomized keyboards on a smartphone, allowing us to induce and analyze users' behavioral strategies under different amounts of accumulated typing experience. We demonstrate how strategies, such as speed-accuracy trade-offs and gaze deployment between different regions of the typing interface, depend on the amount of experience. The results suggest that, in addition to motor learning, the development of performance in mobile typing is attributable to the adaptation of visual attention and eye-hand coordination; in particular, the development of better location memory for the keyboard layout shapes the strategies. The findings shed light on how visuomotor control strategies develop while learning to type.

Item: Modelling Learning of New Keyboard Layouts (2017)
Authors: Jokinen, Jussi P.P.; Sarcar, Sayan; Oulasvirta, Antti; Silpasuwanchai, Chaklam; Wang, Zhenxin; Ren, Xiangshi
Affiliations: Department of Communications and Networking; Helsinki Institute for Information Technology (HIIT); User Interfaces; Kochi University of Technology
Abstract: Predicting how users learn new or changed interfaces is a long-standing objective in HCI research. This paper contributes to the understanding of visual search and learning in text entry. With the goal of explaining the variance in novices' typing performance that is attributable to visual search, a model was designed to predict how users learn to locate keys on a keyboard: initially relying on visual short-term memory, then transitioning to recall-based search. This allows predicting search times and visual search patterns for completely and partially new layouts. The model complements models of motor performance and learning in text entry by predicting how visual search patterns change over time. Practitioners can use it to estimate how long it takes to reach a desired level of performance with a given layout.

Item: Swap: A Replacement-based Text Revision Technique for Mobile Devices (2020-04-21)
Authors: Li, Yang; Sarcar, Sayan; Kim, Sunjun; Ren, Xiangshi
Affiliations: Department of Communications and Networking; User Interfaces; Kochi University of Technology; University of Tsukuba
Abstract: Text revision is an important task for ensuring the accuracy of text content. Revising text on mobile devices is cumbersome and time-consuming due to imprecise caret control and repetitive use of the backspace key. We present Swap, a novel replacement-based technique that facilitates text revision on mobile devices. We conducted two user studies to validate the feasibility and effectiveness of Swap compared with traditional text revision techniques. Results showed that Swap reduced the effort spent on caret control and repetitive backspace pressing during text revision. Most participants preferred the replacement-based technique over backspace and caret, and commented that the new technique is easy to learn and makes text revision rapid and intuitive.