Work Experience

  • 2021 - Present

    Team Leader

    Interaction Design and Technologies, Fraunhofer IAO, Germany

  • 2020 - 2021

    Senior Researcher

    Analytic Computing, University of Stuttgart, Germany

  • 2015 - 2020

    Senior Researcher

    Institute for Web Science and Technologies, University of Koblenz, Germany

  • 2012 - 2015

    Doctoral Researcher

    Media Informatics and Multimedia Systems Group, University of Oldenburg, Germany

  • 2010 - 2012

    Research Associate

    Interactive Systems Group, OFFIS - Institute for Information Technology, Germany

  • 2009 - 2010

    Research Intern - Engineer

    Search Sciences Group, Yahoo! Labs Bangalore, India

  • 2006 - 2009

    Research Assistant

    Language Technologies Research Centre, IIIT Hyderabad, India

Education

  • Ph.D. 2016

    Ph.D. in Computer Science

    University of Oldenburg, Germany

  • M.S. 2009

    Master of Science by Research (Informatics)

    IIIT Hyderabad, India

  • B.E. 2006

    Bachelor of Engineering (Computer Science)

    Bhilai Institute of Technology, Durg, India

Awards and Fellowships

  • 2018
    Best Video Award at ACM ETRA 2018, Warsaw, Poland
    Best Video Award at the 2018 ACM Symposium on Eye Tracking Research and Applications (ETRA) in Warsaw, June 2018. The video was submitted as an addition to the paper "Enhanced Representation of Web Pages for Usability Analysis with Eye Tracking". We proposed a method that identifies fixed elements on Web pages and combines user viewport screenshots in relation to those fixed elements for an enhanced representation of the page. This representation can be overlaid with gaze and interaction data to allow a more efficient usability analysis. The award underlines the usefulness of and need for the research proposed in the GazeMining project.
  • 2018
    Finalist at Unitymedia Digital Imagination Challenge, Berlin
    As the GazeTheWeb team, we pitched our gaze-controlled Web browser at the Unitymedia Digital Imagination Challenge on 15 February in Berlin and took third place in the final round! The challenge is about the inclusion of people with disabilities in the digital environment, and GazeTheWeb was especially recognized for its advanced technical implementation, which takes the accessibility of the Web to the next level.
  • 2017
    Winners of Web Accessibility Challenge at Web For All 2017, Australia
    Our system "GazeTheWeb: A Gaze Controlled Web Browser" received the Web For All 2017 (w4a2017) “Web Accessibility Challenge” award in Perth, Australia. The 11th TPG Web Accessibility Challenge is a worldwide competition to showcase advanced Web and Mobile technologies to technical leaders from academia and industry. The goal of the Challenge is to acknowledge the development of innovative and usable technologies to make Web accessible to all people. The Challenge featured several experimental systems and technologies that were compared and evaluated by a panel of accessibility experts and delegates to identify the most significant advances in accessibility research in the year 2017. GazeTheWeb received a highly positive response for its usability and impact in the field of Web accessibility, and was adjudged as the winner of the 11th TPG Web Accessibility Challenge 2017.
  • 2017
    Honorable Mention at WWW 2017, Australia
    At the 26th International World Wide Web Conference, we presented our methodology of unsupervised Web extraction and an extendable framework to include explicit interaction events in Web pages. We demonstrated the Chromium-based inclusive framework to adapt eye gaze events in Web interfaces, which includes the Web extraction methodology to identify input and selectable objects, and a gaze-enhanced design to interact with these objects. The novel idea was much appreciated by the World Wide Web community, and our paper "Chromium based Framework to Include Gaze Interaction in Web Browser" received the Honorable Mention award, competing with several other state-of-the-art contributions from the international Web community.
  • 2017
    Best Paper at CBMS 2017, Greece
    Our paper on "Analyzing the Impact of Cognitive Load in Evaluating Gaze-based Typing" by Korok Sengupta, Jun Sun, Raphael Menges, Chandan Kumar and Steffen Staab has received best STUDENT paper award at Computer Based Medical systems conference 2017.
  • 2016
    Doktor der Naturwissenschaften, Oldenburg, Germany
    I successfully defended my dissertation on "Regional Search and Visualization Methodologies for Multi-Criteria Geographic Retrieval" and was awarded the Dr. rer. nat. degree with the distinction "magna cum laude" (very good).
  • 2013
    Best Paper Honorable Mention at i-KNOW 2013, Austria
    I received the award at the 13th ACM International Conference on Knowledge Management and Knowledge Technologies (i-KNOW 2013), Graz, Austria. Our paper "A Visual Interactive System for Spatial Querying and Ranking of Geographic Regions" demonstrated a prototype that helps users search for and explore relevant geographic regions. Different regions on surface maps can be specified, analyzed, and compared to assist end users in relocation or touristic scenarios. The application was highly appreciated by the conference attendees for its real-world relevance and was judged by three experienced business angels as the entry most likely to be turned into a business case.
  • 2009
    Indian Young Researcher Award - Italian Fellowship
    I was selected for the Indian Young Researcher fellowship, a collaboration between the Government of Italy and the Birla Science Centre, Hyderabad.
  • 2007
    Kalpana Chawla Fellowship for Higher Education, India
    I was awarded the Kalpana Chawla Scholarship for higher education by the Government of Chhattisgarh, India.
  • 2006
    IIIT Hyderabad Full Fellowship, India
    I was selected for the full research fellowship program for the M.S. specialization at IIIT Hyderabad; the tuition fee and all study expenses were covered by the institute.
  • 2002
    State Meritorious Student Award, Chhattisgarh, India
    I secured the 5th rank among the top students of Chhattisgarh state in the higher secondary school examination and was later honored by the Chief Minister.

Research Projects

  • GazeTheWeb: Eye-Controlled Web Browser

    The product won the 11th Web Accessibility Challenge at W4A 2017 (The Future of Accessible Work)

    GazeTheWeb integrates the visual appearance and control functionality of webpages in an eye tracking environment. It combines webpage element extraction and emulation of traditional input devices within the browser to provide smooth and reliable Web access. GazeTheWeb not only supports efficient interaction with the webpage (scrolling, link navigation, text input), it also supports all essential browsing menu operations such as bookmarks, history, and tab management.

    A detailed description of this project can be found on the official project page or in my recent publications.

  • GazeMining: Analysis of Eye Tracking and Interaction Data on Dynamic Web Content

    The first prototype demo won the Best Video Award at ETRA 2018 in Warsaw.

    The aim of the research project GazeMining is to capture Web sessions semantically and thus obtain a complete picture of visual content, perception, and interaction. The log streams of usability tests are evaluated using data mining. The collected data are made analyzable and interpretable through a user-friendly presentation as well as semi-automatic and automatic analysis procedures.

    A detailed description of this project can be found on the official project page or in my recent publications.

  • CUTLER: Coastal Urban developmenT through the LEnses of Resiliency

    Project funded by the European Union's Horizon 2020 programme

    Coastal urban development incorporates a wide range of development activities that take place as a result of the water element existing in the fabric of the city. We aim to shift the existing paradigm of policy making in coastal areas, which is largely based on intuition, towards an evidence-driven approach enabled by big data. Our basis is the sensing infrastructure installed in the cities, offering demographic data, statistical information, sensor readings, and user-contributed content, which together form the big data layer. Methods for big data analytics and visualization are used to measure economic activity, assess the environmental impact, and evaluate the social consequences.

    A detailed description of this project can be found on the official project page.

  • MAMEM: Multimedia Authoring and Management using your Eyes and Mind

    Project funded by the European Union's Horizon 2020 programme

    MAMEM is a platform designed to aid people with physical disabilities in using digital devices, in order to create optimal conditions for digitally and socially inclusive activities that promote their quality of life. Thus, MAMEM seeks to radically reshape human-computer interaction, with the purpose of offering a technology that enables individuals with disabilities to fully use software applications and perform multimedia-related tasks with their eyes and mind.

    A detailed description of this project can be found on the official project page or in my recent publications.

  • UrbanExplorer - Interactive Exploration of Geo-Located Infrastructure and Facilities

    Project funded by the DFG (German Research Foundation)

    In this project I investigated methods for retrieving geospatially related information of various types and from different sources, and for integrating it into novel visualizations that are easy to interpret. We developed interactive interfaces that go beyond map-based points and provide intuitive visualizations of relevant aspects of the geospatial availability of services and infrastructure, such as spread and distribution, sparseness and density, and reachability and connectivity.

    A detailed description of this project can be found on the university's project page.

  • GazeTheKey: Interactive Keys to Enhance Gaze-based Typing Experience

    The keyboard was demonstrated at the Intelligent User Interfaces (IUI) conference in Cyprus, 2017

    We developed the "GazeTheKey" interface for eye typing, where keys not only signify the input letter but also predict relevant words that can be selected by the user's gaze using a two-step dwell time. The proposed design brings relevant suggestions into the visual attention of users, minimizing the additional cost of scanning an external word-suggestion list. Furthermore, it offers the possibility to include many more suggestions than conventional interfaces, which show only a few suggestions at the top of the keyboard layout.

  • C3World - Connected Cars in a Connected World

    Project work in collaboration with Volkswagen Research, Germany

    The project goal was to present useful spatial information from the Web to car passengers with respect to the context and driving situation. To realize such an information system, I contributed to the development of a specialized geographic search engine, enhancing retrieval methods for geospatial data collection through focused Web crawling, address extraction, and indexing.

    A detailed description of this project can be found on the OFFIS project page.

  • High Precision Attribute Extraction from Web Pages

    Project work at Yahoo! Labs Bangalore.

    I worked on the problem of attribute extraction from template-generated web pages. I contributed to building a high-performance extraction system that uses machine learning techniques to perform the extraction on real-life web pages, specifically in the product and business domains. Another focus was the problem of robust wrapper generation, to accommodate seasonal changes in web page structure and produce precise extractions over time.

  • Personalized Information Access for Mobile Devices

    Research work in collaboration with Nokia Research Centre, Helsinki.

    In this work my aim was to provide "more information with less overload", giving end users a faster and easier way to access precise information. The project especially targeted users of small devices, for whom accessing information through documents is difficult due to limitations such as a small display, a tiny keypad, and low bandwidth. I used techniques such as clustering, summarization, and personalization to produce the final result as a summary text in response to a user query.

PhD Thesis

Summary

The objective of my PhD research was to investigate and develop sophisticated visual interfaces and ranking methods that enable end users to discover knowledge hidden in multi-dimensional geospatial databases. The proposed interaction procedure goes beyond conventional list-based local search interfaces and provides access to geospatial data sources with a regional overview. Users can compare the characterization of urban areas with respect to multiple spatial dimensions of interest and search for the most suitable region. The search experience is further enhanced via efficient regional ranking algorithms and optimization methods that accomplish this complex search task in a computationally efficient manner.

Advisor: Prof. Susanne Boll

Master Thesis

Summary

The thesis proposes a theoretical framework for document summarization: I formalize document summarization as a decision-making problem and derive a general extraction mechanism that picks sentences based on the expected risk of information loss. Through this formulation I arrive at a lightweight function that generates more informative summaries than earlier approaches, which rely on complex algorithms for summary generation.

Advisor: Prof. Vasudeva Varma
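
To make the risk-based extraction concrete, below is a minimal sketch, assuming a unigram language model and using the KL divergence between the document and the summary as a stand-in for the expected information loss; the greedy loop and function names are illustrative, not the thesis's exact formulation.

```python
import math
from collections import Counter

def unigram_dist(words):
    """Unigram probability distribution over a list of words."""
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def information_loss(doc_dist, summary_words, eps=1e-9):
    """KL divergence D(doc || summary): expected information lost when the
    summary's word distribution replaces the document's."""
    summary_dist = unigram_dist(summary_words) if summary_words else {}
    return sum(p * math.log(p / summary_dist.get(w, eps))
               for w, p in doc_dist.items())

def summarize(sentences, max_sentences=3):
    """Greedily pick the sentence that most reduces the information loss."""
    doc_words = [w for s in sentences for w in s.lower().split()]
    doc_dist = unigram_dist(doc_words)
    chosen, summary_words = [], []
    while len(chosen) < min(max_sentences, len(sentences)):
        best = min((s for s in sentences if s not in chosen),
                   key=lambda s: information_loss(
                       doc_dist, summary_words + s.lower().split()))
        chosen.append(best)
        summary_words += best.lower().split()
    return chosen
```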

Publications

Hummer: Text Entry by Gaze and Hum

Ramin Hedeshy, Chandan Kumar, Raphael Menges, Steffen Staab
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, (CHI 21), Yokohama, Japan

Abstract

Text entry by gaze is a useful means of hands-free interaction that is applicable in settings where dictation suffers from poor voice recognition or where spoken words and sentences jeopardize privacy or confidentiality. However, text entry by gaze still shows inferior performance and it quickly exhausts its users. We introduce text entry by gaze and hum as a novel hands-free text entry method. We review related literature to converge to word-level text entry by analysis of gaze paths that are temporally constrained by humming. We develop and evaluate two design choices: "HumHum" and "Hummer". The first method requires short hums to indicate the start and end of a word. The second method interprets one continuous humming as an indication of the start and end of a word. In an experiment with 12 participants, Hummer achieved a commendable text entry rate of 20.45 words per minute, and outperformed HumHum and the gaze-only method EyeSwipe in both quantitative and qualitative measures.
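
The entry rates reported here follow the standard text-entry convention of one word = five characters; a minimal sketch of the computation (the phrase and timing below are made-up numbers for illustration):

```python
def words_per_minute(transcribed: str, seconds: float) -> float:
    """Standard text-entry rate: WPM = ((|T| - 1) / seconds) * 60 / 5,
    where |T| is the length of the transcribed text and one word is
    defined as five characters."""
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

# e.g. a 58-character phrase entered in 33.4 seconds yields roughly 20.5 WPM
print(words_per_minute("the quick brown fox jumps over the lazy dog two times over", 33.4))
```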

TAGSwipe: Touch Assisted Gaze Swipe for Text Entry

Chandan Kumar, Ramin Hedeshy, Scott MacKenzie, Steffen Staab
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, (CHI 20), Honolulu, Hawai'i

Abstract

The conventional dwell-based methods for text entry by gaze are typically slow and uncomfortable. A swipe-based method that maps gaze path into words offers an alternative. However, it requires the user to explicitly indicate the beginning and ending of a word, which is typically achieved by tedious gaze-only selection. This paper introduces TAGSwipe, a bi-modal method that combines the simplicity of touch with the speed of gaze for swiping through a word. The result is an efficient and comfortable dwell-free text entry method. In the lab study TAGSwipe achieved an average text entry rate of 15.46 wpm and significantly outperformed conventional swipe-based and dwell-based methods in efficacy and user satisfaction.

Signal processing to drive human-computer interaction (Book)

Spiros Nikolopoulos, Chandan Kumar, Yiannis Kompatsiaris (Editors)
Control, Robotics, and Sensors. Institution of Engineering and Technology 2020

Abstract

The evolution of eye tracking and brain-computer interfaces has given a new perspective on the control channels that can be used for interacting with computer applications. In this book leading researchers show how these technologies can be used as control channels with signal processing algorithms and interface adaptations to drive a human-computer interface. Topics covered in the book include a comprehensive overview of eye-mind interaction incorporating algorithm and interface developments; modeling the (dis)abilities of people with motor impairment and their computer use requirements and expectations from assistive interfaces; and signal processing aspects including acquisition, preprocessing, enhancement, feature extraction, and classification of eye gaze, EEG (steady-state visual evoked potentials, motor imagery and error-related potentials) and near-infrared spectroscopy (NIRS) signals. Finally, the book presents a comprehensive set of guidelines, with examples, for conducting evaluations to assess usability, performance, and feasibility of multi-modal interfaces combining eye gaze and EEG based interaction algorithms. The contributors to this book are researchers, engineers, clinical experts, and industry practitioners who have collaborated on these topics, providing an interdisciplinary perspective on the underlying challenges of eye and mind interaction and outlining future directions in the field.

Eye tracking for interaction: evaluation methods

Chandan Kumar, Raphael Menges, Korok Sengupta, Steffen Staab
In: Signal Processing to Drive Human-Computer Interaction: Chap.6. Control, Robotics, and Sensors. Institution of Engineering and Technology, 2020, pp. 117–144

Abstract

The motivation to investigate eye tracking as a hands-free input method for interaction is pertinent, because eye control can be a significant addition to the lives of people with a motor disability, which hinders their use of mouse and keyboard. With this motivation in mind, so far research in eye-controlled interaction has focused on several aspects of interpreting eye tracking as input for pointing, typing, and interaction methods with interfaces. In this regard, the major question is about how well does the eye-controlled interaction work for the proposed methods? How efficiently can pointing and selection be performed? Whether common tasks can be performed quickly and accurately with the novel interface? How different gaze interaction methods can be compared? What is the user experience while using eye-controlled interfaces? These are the sorts of questions that can be answered with an appropriate evaluation methodology. Therefore, in this chapter, we review and elaborate different evaluation methods used in gaze interaction research, so the readers can inform themselves of the procedure and metrics to assess their novel gaze interaction method or interface.

GIUPlayer: A Gaze Immersive YouTube Player Enabling Eye Control and Attention Analysis

Ramin Hedeshy, Chandan Kumar, Raphael Menges, Steffen Staab
ACM Symposium on Eye Tracking Research and Applications. 2020, (ETRA'20)

Abstract

We developed a gaze immersive YouTube player, called GIUPlayer, with two objectives: First to enable eye-controlled interaction with video content, to support people with motor disabilities. Second to enable the prospect of quantifying attention when users view video content, which can be used to estimate natural viewing behaviour. In this paper, we illustrate the functionality and design of GIUPlayer, and the visualization of video viewing patterns. The long-term perspective of this work could lead to the realization of eye control and attention based recommendations in online video platforms and smart TV applications that record eye tracking data.

A Visualization Tool for Eye Tracking Data Analysis in the Web

Raphael Menges, Sophia Kramer, Stefan Hill, Marius Nisslmueller, Chandan Kumar, Steffen Staab
ACM Symposium on Eye Tracking Research and Applications. 2020, (ETRA'20)

Abstract

Usability analysis plays a significant role in optimizing Web interaction by understanding the behavior of end users. To support such analysis, we present a tool to visualize gaze and mouse data of Web site interactions. The proposed tool provides not only the traditional visualizations with fixations, scanpath, and heatmap, but allows for more detailed analysis with data clustering, demographic correlation, and advanced visualization like attention flow and 3D-scanpath. To demonstrate the usefulness of the proposed tool, we conducted a remote qualitative study with six analysts, using a dataset of 20 users browsing eleven real-world Web sites.

Eye tracking for interaction: adapting multimedia interfaces

Raphael Menges, Chandan Kumar, Steffen Staab
In: Signal Processing to Drive Human-Computer Interaction: Chap.5. Control, Robotics, and Sensors. Institution of Engineering and Technology, 2020, pp. 83–116

Abstract

This chapter describes how eye tracking can be used for interaction. The term eye tracking refers to the process of tracking the movement of eyes in relation to the head, to estimate the direction of eye gaze. The eye gaze direction can be related to the absolute head position and the geometry of the scene, such that a point-of-regard (POR) may be estimated. We call the sequential estimations of the POR gaze signals in the following, and a single estimation gaze sample. In Section 5.1, we provide a basic description of the eye anatomy, which is required to understand the technologies behind eye tracking and the limitations of the same. Moreover, we discuss popular technologies to perform eye tracking and explain how to process the gaze signals for real-time interaction. In Section 5.2, we describe the unique challenges of eye tracking for interaction, as we use the eyes primarily for perception and potentially overload them with interaction. In Section 5.3, we survey graphical interfaces for multimedia access that have been adapted to work effectively with eye-controlled interaction. After discussing the state-of-the-art in eye-controlled multimedia interfaces, we outline in Section 5.4 how the contextualized integration of gaze signals might proceed in order to provide richer interaction with eye tracking.

TouchGazePath: Multimodal Interaction with Touch and Gaze Path for Secure Yet Efficient PIN Entry

Chandan Kumar, Daniyal Akbari, Raphael Menges, Scott MacKenzie, Steffen Staab
ACM International Conference on Multimodal Interaction (ICMI'19), Suzhou, Jiangsu

Abstract

We present TouchGazePath, a multimodal method for entering personal identification numbers (PINs). Using a touch-sensitive display showing a virtual keypad, the user initiates input with a touch at any location, glances with their eye gaze on the keys bearing the PIN numbers, then terminates input by lifting their finger. TouchGazePath is not susceptible to security attacks, such as shoulder surfing, thermal attacks, or smudge attacks. In a user study with 18 participants, TouchGazePath was compared with the traditional Touch-Only method and the multimodal Touch+Gaze method, the latter using eye gaze for targeting and touch for selection. The average time to enter a PIN with TouchGazePath was 3.3 s. This was not as fast as Touch-Only (as expected), but was about twice as fast as Touch+Gaze. TouchGazePath was also more accurate than Touch+Gaze. TouchGazePath had high user ratings as a secure PIN input method and was the preferred PIN input method for 11 of 18 participants.
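
As a rough illustration of the input flow described above (touch down opens PIN entry, gaze glances select digits, lifting the finger commits), here is a hypothetical sketch; the event format, dwell criterion, and helper function are my own simplifications, not the study software.

```python
def touch_gaze_path(events, keypad, dwell_samples=3):
    """events: iterable of (kind, payload), kind in {"touch_down", "gaze",
    "touch_up"}; a gaze payload is an (x, y) sample. keypad maps each key
    label to its (x, y) center. Returns the PIN entered between touch down
    and finger lift."""
    pin, active, streak, last_key = [], False, 0, None
    for kind, payload in events:
        if kind == "touch_down":               # finger down: start PIN entry
            active, streak, last_key = True, 0, None
        elif kind == "gaze" and active:
            key = nearest_key(payload, keypad)
            streak = streak + 1 if key == last_key else 1
            last_key = key
            if streak == dwell_samples:        # brief glance on a key selects it
                pin.append(key)
        elif kind == "touch_up":               # finger lift: commit the PIN
            break
    return "".join(pin)

def nearest_key(gaze, keypad):
    """Key whose center is closest to the gaze sample."""
    gx, gy = gaze
    return min(keypad, key=lambda k: (keypad[k][0] - gx) ** 2 + (keypad[k][1] - gy) ** 2)
```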

Improving User Experience of Eye Tracking-based Interaction: Introspecting and Adapting Interfaces

Raphael Menges, Chandan Kumar, Steffen Staab
ACM Trans. Comput.-Hum. Interact. (ACM TOCHI'19)

Abstract

Eye tracking systems have greatly improved in recent years, being a viable and affordable option as digital communication channel, especially for people lacking fine motor skills. Using eye tracking as an input method is challenging due to accuracy and ambiguity issues, and therefore research in eye gaze interaction is mainly focused on better pointing and typing methods. However, these methods eventually need to be assimilated to enable users to control application interfaces. A common approach to employ eye tracking for controlling application interfaces is to emulate mouse and keyboard functionality. We argue that the emulation approach incurs unnecessary interaction and visual overhead for users, aggravating the entire experience of gaze-based computer access. We discuss how the knowledge about the interface semantics can help reducing the interaction and visual overhead to improve the user experience. Thus, we propose the efficient introspection of interfaces to retrieve the interface semantics and adapt the interaction with eye gaze. We have developed a Web browser, GazeTheWeb, that introspects Web page interfaces and adapts both the browser interface and the interaction elements on Web pages for gaze input. In a summative lab study with 20 participants, GazeTheWeb allowed the participants to accomplish information search and browsing tasks significantly faster than an emulation approach. Additional feasibility tests of GazeTheWeb in lab and home environment showcase its effectiveness in accomplishing daily Web browsing activities and adapting a large variety of modern Web pages to suffice the interaction for people with motor impairment.

Impact of Variable Positioning of Text Prediction in Gaze-based Text Entry

Korok Sengupta, Raphael Menges, Chandan Kumar, Steffen Staab
In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research and Applications (ETRA ’19), Denver

Abstract

Text predictions play an important role in improving the performance of gaze-based text entry systems. However, visual search, scanning, and selection of text predictions require a shift in the user's attention from the keyboard layout. Hence the spatial positioning of predictions becomes an imperative aspect of the end-user experience. In this work, we investigate the role of spatial positioning by comparing the performance of three different keyboards entailing variable positions for text predictions. The experiment result shows no significant differences in the text entry performance, i.e., displaying suggestions closer to the visual fovea did not enhance the text entry rate of participants; however, they used more keystrokes and backspace. This implies inessential usage of suggestions when they are in the constant visual attention of users, resulting in an increased cost of correction. Furthermore, we argue that fast saccadic eye movements undermine the spatial distance optimization in prediction positioning.

Enhanced Representation of Web Pages for Gaze-based Attention Analysis

Raphael Menges, Hanadi Tamimi, Chandan Kumar, Tina Walber, Christoph Schaefer, Steffen Staab
In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research and Applications (ETRA ’18), Warsaw

Abstract

Eye tracking as a tool to quantify user attention plays a major role in research and application design. For Web usability, it has become a prominent measure to assess which sections of a Web page are read, glanced or skipped. Such assessments primarily depend on the mapping of gaze data to a page representation. However, current representation methods, a virtual screenshot of the Web page or a video recording of the complete interaction session, suffer either from accuracy or scalability issues. We present a method that identifies fixed elements on Web pages and combines user viewport screenshots in relation to fixed elements for an enhanced representation of the page, in alignment with the user experience. We conducted an experiment with 10 participants and the results signify that analysis with our method is more efficient than a video recording, which is an essential criterion for large scale Web studies.
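
The core coordinate mapping behind this representation can be sketched as follows; this is a simplification assuming axis-aligned fixed regions given in viewport coordinates (the paper's method identifies fixed elements automatically and stitches viewport screenshots accordingly).

```python
def gaze_to_page(gaze_x, gaze_y, scroll_x, scroll_y, fixed_regions):
    """Map a gaze sample from viewport coordinates to page coordinates.
    Samples falling on fixed elements (e.g. sticky headers) keep their
    viewport position; all other samples are offset by the scroll position."""
    for left, top, right, bottom in fixed_regions:   # viewport coordinates
        if left <= gaze_x <= right and top <= gaze_y <= bottom:
            return gaze_x, gaze_y                    # fixed element, no offset
    return gaze_x + scroll_x, gaze_y + scroll_y
```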

Hands-Free Web Browsing: Enriching the User Experience with Gaze and Voice Modality

Korok Sengupta, Min Ke, Raphael Menges, Chandan Kumar, Steffen Staab
In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research and Applications (ETRA ’18), Warsaw

Abstract

Hands-free browsers provide an effective tool for Web interaction and accessibility, overcoming the need for conventional input sources. Current approaches to hands-free interaction are primarily categorized in either voice or gaze-based modality. In this work, we investigate how these two modalities could be integrated to provide a better hands-free experience for end-users. We demonstrate a multimodal browsing approach combining eye gaze and voice inputs for optimized interaction, and to suffice user preferences with unimodal benefits. The initial assessment with five participants indicates improved performance for the multimodal prototype in comparison to single modalities for hands-free Web browsing.

Chromium based Framework to Include Gaze Interaction in Web Browser

Chandan Kumar, Raphael Menges, Daniel Mueller, Steffen Staab
International World Wide Web Conference (WWW 17), Perth, Australia, Honorable Mention

Abstract

Enabling Web interaction by non-conventional input sources like eyes has great potential to enhance Web accessibility. In this paper, we present a Chromium based inclusive framework to adapt eye gaze events in Web interfaces. The framework provides more utility and control to develop a full-featured interactive browser, compared to the related approaches of gaze-based mouse and keyboard emulation or browser extensions. We demonstrate the framework through a sophisticated gaze driven Web browser, which effectively supports all browsing operations like search, navigation, bookmarks, and tab management.

GazeTheWeb: A Gaze-Controlled Web Browser

Raphael Menges, Chandan Kumar, Daniel Mueller, Korok Sengupta
Web For All, The Future of Accessible Work (W4A 17), Perth, Australia, TPG Challenge Winner

Abstract

Web is essential for most people, and its accessibility should not be limited to conventional input sources like mouse and keyboard. In recent years, eye tracking systems have greatly improved, beginning to play an important role as input medium. In this work, we present GazeTheWeb, a Web browser accessible solely by eye gaze input. It effectively supports all browsing operations like search, navigation and bookmarks. GazeTheWeb is based on a Chromium powered framework, comprising Web extraction to classify interactive elements, and application of gaze interaction paradigms to represent these elements.

Analyzing the Impact of Cognitive Load in Evaluating Gaze-based Typing

Korok Sengupta, Jun Sun, Raphael Menges, Chandan Kumar, Steffen Staab
30th IEEE International Symposium on Computer-Based Medical Systems, Greece, Best Student Paper

Abstract

Gaze-based virtual keyboards provide an effective interface for text entry by eye movements. The efficiency and usability of these keyboards have traditionally been evaluated with conventional text entry performance measures such as words per minute, keystrokes per character, backspace usage, etc. However, in comparison to the traditional text entry approaches, gaze-based typing involves natural eye movements that are highly correlated with human brain cognition. Employing eye gaze as an input could lead to excessive mental demand, and in this work we argue the need to include cognitive load as an eye typing evaluation measure. We evaluate three variations of gaze-based virtual keyboards, which implement variable designs in terms of word suggestion positioning. The conventional text entry metrics indicate no significant difference in the performance of the different keyboard designs. However, STFT (Short-time Fourier Transform) based analysis of EEG signals indicate variances in the mental workload of participants while interacting with these designs. Moreover, the EEG analysis provides insights into the user’s cognition variation for different typing phases and intervals, which should be considered in order to improve eye typing usability.
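
As an illustration of such an STFT-based analysis, the sketch below computes the time course of theta-band power, a common workload proxy, for one EEG channel; the sampling rate, band limits, and windowing are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from scipy.signal import stft

def band_power_over_time(eeg, fs=256, band=(4.0, 8.0), nperseg=512):
    """Short-time Fourier transform of a single EEG channel, returning
    window times and the mean power within the given frequency band
    (4-8 Hz theta here, often taken as a mental-workload indicator)."""
    f, t, Zxx = stft(eeg, fs=fs, nperseg=nperseg)
    mask = (f >= band[0]) & (f <= band[1])
    power = (np.abs(Zxx[mask]) ** 2).mean(axis=0)  # mean band power per window
    return t, power

# usage: one minute of (here random) single-channel data at 256 Hz
t, p = band_power_over_time(np.random.randn(256 * 60))
```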

GazeTheKey: Interactive Keys to Enhance Gaze-based Typing Experience

Korok Sengupta, Raphael Menges, Chandan Kumar, Steffen Staab
Intelligent User Interfaces (IUI 17), Limassol, Cyprus

Abstract

In the conventional keyboard interfaces for eye typing, the functionalities of the virtual keys are static, i.e., the user’s gaze at a particular key simply translates the associated letter as the user’s input. In this work we argue that keys should be more dynamic and embed intelligent predictions to support gaze-based text entry. In this regard, we demonstrate a novel "GazeTheKey" interface where a key not only signifies the input character, but also predicts the relevant words that can be selected by the user's gaze utilizing a two-step dwell time.
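
To picture the two-step dwell, here is a hypothetical sketch of the selection logic; the targets, timings, and data structures are illustrative and not the eyeGUI implementation.

```python
def two_step_dwell(gaze_samples, key_letter, suggestions, dwell=0.5):
    """gaze_samples: iterable of (timestamp, target), where target is either
    "letter" (the key face) or an index into this key's suggestion list.
    The first dwell on the key reveals word suggestions on the key itself;
    a second dwell selects the letter or one of the suggested words."""
    start, current, revealed = None, None, False
    for ts, target in gaze_samples:
        if target != current:                 # gaze moved: restart the dwell timer
            current, start = target, ts
            continue
        if ts - start < dwell:
            continue
        if not revealed and target == "letter":
            revealed = True                   # step one: suggestions appear
            start = ts
        elif revealed:                        # step two: commit a selection
            return key_letter if target == "letter" else suggestions[target]
    return None
```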

Assessing the Usability of a Gaze-Adapted Interface with Conventional Eye-based Emulation

Chandan Kumar, Raphael Menges, Steffen Staab
MultiModal Interfaces for Natural Human Computer Interaction - IEEE CBMS 2017.

Abstract

In recent years, eye tracking systems have greatly improved, beginning to play a promising role as an input medium. Eye trackers can be used for application control either by simply emulating the mouse and keyboard devices in the traditional graphical user interface, or by customized interfaces for eye gaze events. In this work, we evaluate these two approaches to assess their impact in usability. We present a gaze-adapted Twitter application interface with direct interaction of eye gaze input, and compare it to Twitter in a conventional browser interface with gaze-based mouse and keyboard emulation. We conducted an experimental study, which indicates a significantly better subjective user experience for the gaze-adapted approach. Based on the results, we argue the need of user interfaces interacting directly to eye gaze input to provide an improved user experience, more specifically in the field of accessibility.

Schau genau! A Gaze-Controlled 3D Game for Entertainment and Education

Raphael Menges, Chandan Kumar, Ulrich Wechselberger, Christoph Schaefer, Tina Walber, Steffen Staab
The European Conference on Eye Movements (ECEM 2017)

Abstract

Eye tracking devices have become affordable. However, they are still not very much present in everyday lives. To explore the feasibility of modern low-cost hardware in terms of reliability and usability for broad user groups, we present a gaze-controlled game in a standalone arcade box with a single physical buzzer for activation. The player controls an avatar in appearance of a butterfly, which flies over a meadow towards the horizon. Goal of the game is to collect spawning flowers by hitting them with the avatar, which increases the score. Three mappings of gaze on screen to world position of the avatar, featuring different levels of intelligence, have been defined and were randomly assigned to players. Both a survey after a session and the high score distribution are considered for evaluation of these control styles. An additional serious part of the game educates the players in flower species, who are rewarded with a point-multiplier for prior knowledge. During this part, gaze data on images is collected, which can be used for saliency calculations. Nearly 3000 completed game sessions were recorded at a state horticulture show in Germany, which demonstrates the impact and acceptability of this novel input technique among lay users.

Usability Heuristics for Eye-controlled User Interfaces

Korok Sengupta, Chandan Kumar, Steffen Staab
The 2017 COGAIN Symposium: Communication by Gaze Interaction, Wuppertal, Germany.

Abstract

Evolution of affordable assistive technologies like eye tracking helps people with motor disabilities to communicate with computers by eye-based interaction. Eye-controlled interface environments need to be specially built for better usability and accessibility of the content, and should not rely on interface layouts designed for conventional mouse- or touch-based interfaces. In this work we argue the need for a domain-specific heuristic checklist for eye-controlled interfaces, one that conforms to usability and design principles and is less demanding from a cognitive-load perspective. It focuses on the need to understand the product in use inside the gaze-based environment and apply the heuristic guidelines for design and evaluation. We revisit Nielsen’s heuristic guidelines to acclimatize them to the eye-tracking environment, and infer a questionnaire for the subjective assessment of eye-controlled user interfaces.

Eye-Controlled Interfaces for Multimedia Interaction

Chandan Kumar, Raphael Menges, Steffen Staab
IEEE Multimedia (Volume: 23, Issue: 4, Oct.-Dec. 2016)

Abstract

The EU-funded MAMEM project (Multimedia Authoring and Management using your Eyes and Mind) aims to propose a framework for natural interaction with multimedia information for users who lack fine motor skills. As part of this project, the authors have developed a gaze-based control paradigm. Here, they outline the challenges of eye-controlled interaction with multimedia information and present initial project results. Their objective is to investigate how eye-based interaction techniques can be made precise and fast enough to let disabled people easily interact with multimedia information.

eyeGUI: A Novel Framework for Eye-Controlled User Interfaces

Raphael Menges, Chandan Kumar, Korok Sengupta, Steffen Staab
9th Nordic Conference on Human-Computer Interaction (NordiCHI 2016), Gothenburg, Sweden

Abstract

The user interfaces and input events are typically composed of mouse and keyboard interactions in generic applications. Eye-controlled applications need to revise these interactions to eye gestures, and hence design and optimization of interface elements becomes a substantial feature. In this work, we propose a novel eyeGUI framework, to support the development of such interactive eye-controlled applications with many significant aspects, like rendering, layout, dynamic modification of content, support of graphics and animation.

Visual Overlay on OpenStreetMap Data to Support Spatial Exploration of Urban Environments

Chandan Kumar, Wilko Heuten, Susanne Boll
International Journal of Geo-Information (IJGI-15)

Abstract

Increasing volumes of spatial data about urban areas are captured and made available via volunteered geographic information (VGI) sources, such as OpenStreetMap (OSM). Hence, new opportunities arise for regional exploration that can lead to improvements in the lives of citizens through spatial decision support. We believe that the VGI data of the urban environment could be used to present a constructive overview of the regional infrastructure with the advent of web technologies. Current location-based services provide general map-based information for the end users with conventional local search functionality, and hence, the presentation of the rich urban information is limited. In this work, we analyze the OSM data to classify the geo entities into consequential categories with facilities, landscape and land use distribution. We employ a visual overlay of heat map and interactive visualizations to present the regional characterization on OSM data classification. In the proposed interface, users are allowed to express a variety of spatial queries to exemplify their geographic interests. They can compare the characterization of urban areas with respect to multiple spatial dimensions of interest and can search for the most suitable region. The search experience is further enhanced via efficient optimization and interaction methods to support the decision making of end users. We report the end user acceptability and efficiency of the proposed system via usability studies and performance analysis comparison.

Characterizing the Swarm Movement on Map for Spatial Visualization

Chandan Kumar, Uwe Gruenefeld, Wilko Heuten, Susanne Boll
IEEE Transactions on Visualization and Computer Graphics (IEEE VIS-14), Paris, France

Abstract

Visualization of maps to explore relevant geographic areas is one of the common practices in spatial decision scenarios. However, visualizing geographic distributions with multidimensional criteria becomes nontrivial in the conventional point-based map space. In this work we apply swarm intelligence: we exploit the particle swarm optimization (PSO) framework, where particles represent geographic regions that move in the map space to find better positions with respect to the user's criteria. We track the swarm movement on the map surface to generate a relevance heatmap, which can effectively support the spatial analysis tasks of end users.
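
A minimal sketch of the idea, assuming a scalar relevance function over map coordinates; the update rule below is textbook PSO with illustrative parameters, accumulating particle visits into a heatmap grid rather than reproducing the paper's exact parametrization.

```python
import numpy as np

def pso_heatmap(score, bounds, n_particles=50, n_iters=100, grid=100,
                w=0.7, c1=1.5, c2=1.5, seed=0):
    """Run PSO over a 2D map area and accumulate visited positions into a
    heatmap. score(x, y) is the multi-criteria relevance of a position;
    bounds = (xmin, xmax, ymin, ymax)."""
    rng = np.random.default_rng(seed)
    xmin, xmax, ymin, ymax = bounds
    lo, hi = np.array([xmin, ymin]), np.array([xmax, ymax])
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([score(x, y) for x, y in pos])
    gbest = pbest[pbest_val.argmax()]
    heat = np.zeros((grid, grid))
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([score(x, y) for x, y in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()]
        # regions the swarm keeps visiting heat up in the relevance map
        ix = ((pos[:, 0] - xmin) / (xmax - xmin) * (grid - 1)).astype(int)
        iy = ((pos[:, 1] - ymin) / (ymax - ymin) * (grid - 1)).astype(int)
        np.add.at(heat, (iy, ix), 1)
    return heat
```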

A Regional Exploration and Recommendation System based on Georeferenced Images

Chandan Kumar, Sebastian Barton, Wilko Heuten, Susanne Boll
Proc. of the 11th International Conference on Mobile Web Information Systems (MobiWIS 2014), Barcelona, Spain

Swarming in the Urban Web Space to Discover the Optimal Region

Chandan Kumar, Uwe Gruenefeld, Wilko Heuten, Susanne Boll
Proc. of IEEE/WIC/ACM International Conference on Web Intelligence (WI-14), Warsaw, Poland

Event Based Characterization and Comparison of Geosocial Environment

Chandan Kumar, Wilko Heuten, Susanne Boll
Temporal, Social and Spatially-aware Information Access (@SIGIR-14), Gold Coast, Australia

A Visual Interactive System for Spatial Querying and Ranking of Geographic Regions

Chandan Kumar, Wilko Heuten, Susanne Boll
Proc. of the 13th ACM International Conference on Knowledge Management and Knowledge Technologies (iKNOW-13), Graz, Austria, Honorable Mention

Criteria of Query-Independent Page Significance in Geospatial Web Search

Chandan Kumar, Susanne Boll
Proc. of ACM Geographic Information Retrieval (GIR-13), Orlando, USA

Abstract

The ranking problem in geospatial Web search has traditionally focused on query-dependent relevance, i.e., the combination of textual and geographic similarity of pages with respect to the query text and footprint. Query-independent relevance, also known as the popularity or significance of a page, is a valuable aspect of generic Web search ranking. However, the formalization of query-independent significance for geospatial Web search has been limited to basic adaptations of the general popularity of pages. In this paper, we discuss how several location-sensitive properties can alter the significance of geospatial Web pages. We particularly argue the significance of pages with respect to categorical, regional, and granular criteria. We analyze these criteria over a huge geospatial Web graph of different German cities, and perform small-scale evaluations of our approach. We derive valuable heuristics on the link structure of the geospatial Web that can be used in ranking formulations, or to cater to certain contextual information needs of end users of a geospatial Web search system.

Geographical Queries Beyond Conventional Boundaries: Regional Search and Exploration

Chandan Kumar, Wilko Heuten, Susanne Boll
Proc. of ACM Geographic Information Retrieval (GIR-13), Orlando, USA

Assessing End-User Interaction for Multi-Criteria Local Search with Heatmap and Icon based Visualizations

Chandan Kumar, Benjamin Poppinga, Daniel Hauser, Wilko Heuten, Susanne Boll
Proc. of the 1st ACM SIGSPATIAL International Workshop on MapInteraction (MapInteract-13), Orlando, USA

Geovisual interfaces to find suitable urban regions for citizens: A user-centered requirement study

Chandan Kumar, Benjamin Poppinga, Daniel Hauser, Wilko Heuten, Susanne Boll
Proc. of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication (UbiComp-13), Zurich, Switzerland

Interactive Exploration of Geographic Regions with Web-based Keyword Distributions

Chandan Kumar, Dirk Ahlers, Wilko Heuten, Susanne Boll
Proc. of the 3rd European Workshop on Human-Computer Interaction and Information Retrieval (@SIGIR2013), Dublin, Ireland

Geovisualization for End User Decision Support: Easy and Effective Exploration of Urban Areas

Chandan Kumar, Wilko Heuten, Susanne Boll
Interactive Maps That Help People Think (GeoViz-13), Hamburg, Germany

Visualization Support for Multi-criteria Decision Making in Geographic Information Retrieval

Chandan Kumar, Wilko Heuten, Susanne Boll
Proc. of Availability, Reliability, and Security in Information Systems and HCI (HCI-KDD 2013), Regensburg, Germany

Visual Analysis of Geo-Social Web for Spatial Decision Support

Chandan Kumar, Wilko Heuten, Susanne Boll
Proc. of Workshop on Interactive Visual Text Analytics (@VisWeek-12), Seattle, USA

Mapping the Web resources of a developing country

Dirk Ahlers, Jose Matute, Isaac Martinez, Chandan Kumar
GI Zeitgeist Young Researchers Forum on Geographic Information Science 2012 (GI Zeitgeist-12), Münster, Germany

LocateThisPage: Drive-by Location-Aware Browsing

Chandan Kumar, Dirk Ahlers, Susanne Boll
GI Zeitgeist Young Researchers Forum on Geographic Information Science 2012 (GI Zeitgeist-12), Münster, Germany

Personalized Relevance in Mobile Geographic Information Retrieval

Chandan Kumar, Susanne Boll
Proc. of 8th International Symposium on Location-Based Services 2011 (LBS-11), Vienna, Austria

Relevance and Ranking in Geographic Information Retrieval

Chandan Kumar
Proc. of 4th Symposium on Future Directions in Information Access 2011 (FDIA-11), Koblenz, Germany

An Information Loss based Framework for Document Summarization

Chandan Kumar
International Institute of Information Technology, Hyderabad, India, 2009

Estimating Risk of Picking a Sentence for Document Summarization

Chandan Kumar, Prasad Pingali, Vasudeva Varma
Proc. of 10th International Conference on Intelligent Text Processing and Computational Linguistics 2009 (CICLing-09), Mexico City, Mexico

A Light-Weight Summarizer based on Language Model with Relative Entropy

Chandan Kumar, Prasad Pingali, Vasudeva Varma
Proc. of 24th ACM SIGAPP Symposium on Applied Computing 2009 (SAC-09), Hawaii, USA

Generating Personalized Summaries Using Publicly Available Web Documents

Chandan Kumar, Prasad Pingali, Vasudeva Varma
Proc. of IEEE/WIC/ACM International Conference on Web Intelligence 2008 (WI-08), Sydney, Australia

Academic Activities

  • Grants

    Our proposal on "Coastal Urban developmenT through the LEnses of Resiliency - CUTLER" was accepted by the Horizon 2020 - Research and Innovation Framework Programme, involving 15 partners across Europe.

  • Review

    I like to read unpublished work and provide feedback for improvement. I have served as a reviewer for various journals and conferences, including CHI, ETRA, UIST, WWW, WebSci, LocWeb, GIS, NordiCHI, MapInteract, TJSS, IJITDM, and MTAP.

  • Teaching

    I have given lectures on Web information retrieval and run seminars and projects on experimental research and multimodal interfaces at the WeST institute, Koblenz (2015-2019). Earlier, I offered Multimedia Retrieval courses and seminars at the University of Oldenburg (2013-2014).

  • Training

    I participated in the Eye Tracking Winter School 2016, Monte Verità, Switzerland; a SICSA-supported project at the Big Data Information Visualisation Summer School, St Andrews, Scotland, 2013; and earlier the European Summer School in Information Retrieval, Koblenz, Germany, 2011.

Student Collaborations

  • Current

    PhD - Ramin Hedeshy

    Multimodal Interaction with Gaze and Hum

  • Current

    PhD - Korok Sengupta

    Multimodal Text Entry

  • Current

    Masters - Alona Liuzniak

    User comment analysis

  • Current

    Bachelors - Mike Laeur

    Hands-free gaming

  • 2021

    PhD - Raphael Menges

    Web Interaction and Analysis by Gaze

  • 2019

    Masters - Ramin Hedeshy

    Eye Typing Keyboards

  • 2019

    Bachelors - Nico Daheim

    Opinion Clustering and Summarization

  • 2018

    Masters - Min Ke

    Hands-free Web Browsers

  • 2018

    Masters - Daniyal Akbari

    Optimizing Gaze-Touch Interaction

  • 2017

    Masters - Hanadi Tamimi

    Web usability analysis

  • 2014

    Master Thesis - Uwe Gruenefeld

    A Computational Intelligence Framework for Multidimensional Regional Search

  • 2014

    Bachelor Thesis - Sebastian Barton

    Visual Representation of Similar Touristic Regions Using Georeferenced Images

  • 2013

    Master Thesis - Daniel Hauser

    Geovisual analytics for lay users: A user-centred approach

At My Office

You can find me at my office at Universitätsstraße 32, 70569 Stuttgart, Germany