About

Welcome to glase.jp.

GLASE is a project I started during my PhD studies. During my first year of graduate school (2011), I had an opportunity to use eye trackers. An eye tracker is a device that captures a user’s eye movements and (if calibrated properly) maps them to coordinates on a screen.

Because my studies were about IR (information retrieval), I built a system that uses eye tracking to facilitate the search process.

Search itself is a very interesting research subject; many IT companies, big and small alike, are researching how to improve the search experience.

Computers can provide us with useful results if only we can express our needs clearly. However, that is not always a straightforward task.

We spend our lives constantly searching for something. Oftentimes, we may not even know what we are searching for until we find it.

GLASE (Gaze Learning Access and Search Engine) is an attempt to use gaze data to improve the user’s search experience. Eye movements convey information that the user does not explicitly provide to the system through traditional input devices such as the mouse or keyboard.

Demo

Below you can watch recordings of search sessions, along with explanations of what happens in each.

Demo 1: “Animals”

A searcher enters an ambiguous query (“nature”) into the query box. Because the system does not know what exactly the user meant by “nature”, it returns a variety of results: images of flowers, birds, oceans, forests, etc.

The user skims through the results and holds her gaze longer on several images. When a set dwell-time threshold is exceeded, a popup with an enlarged version of the image is shown. In the first example, it is an image of a dog. When the user moves her gaze away from the popup area, the popup closes. After that, the user holds her gaze on an image of a bird, then on an image of a dog again.
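The demo does not show the implementation, but the dwell-triggered popup can be illustrated with a short sketch. The threshold value, class name, and gaze-sample format below are assumptions for illustration, not GLASE’s actual code:

```python
# Minimal sketch of dwell-time detection (illustrative, not GLASE's code).
# Assumes gaze samples arrive as (timestamp, x, y) and that each result
# image occupies a known rectangle on the SERP. 0.4 s is a made-up value.

DWELL_THRESHOLD = 0.4  # seconds; hypothetical

class DwellDetector:
    def __init__(self, image_rects):
        # image_rects: {image_id: (left, top, right, bottom)} in screen px
        self.image_rects = image_rects
        self.current_image = None
        self.dwell_start = None

    def hit_test(self, x, y):
        for image_id, (l, t, r, b) in self.image_rects.items():
            if l <= x <= r and t <= y <= b:
                return image_id
        return None

    def on_gaze_sample(self, timestamp, x, y):
        """Returns an image_id once the gaze has dwelt on it long enough."""
        image_id = self.hit_test(x, y)
        if image_id != self.current_image:
            # Gaze moved to a different image (or off all images): restart.
            self.current_image = image_id
            self.dwell_start = timestamp
            return None
        if image_id is not None and timestamp - self.dwell_start >= DWELL_THRESHOLD:
            self.dwell_start = timestamp  # reset so the popup fires once per dwell
            return image_id  # caller shows the enlarged-image popup
        return None
```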

The SERP is updated: pictures more relevant to what the user showed interest in are displayed. The system has learned that something like an “animal” appears to be a closer match to the user’s interests. We can observe that there are some “noise” results on the SERP as well. The user keeps looking at the SERP, and this time she rests her gaze longer on an image of a squirrel.

(0:12) The SERP is updated again. More images of squirrels are displayed (although some images of birds and dogs remain). The user keeps looking at the images of squirrels until she suddenly shifts towards the images of birds. Because the user keeps looking at the bird images, the system “unlearns” the other “animals” and shows more and more bird images, adapting to the user’s current interest throughout the whole search session.
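One simple way to get this learn/unlearn behavior is to keep a profile of interest weights that decays over time, so concepts the user stops looking at fade from the ranking. The sketch below is a hypothetical illustration of that idea; the decay factor, function names, and scoring are assumptions, not the published GLASE algorithm:

```python
# Illustrative interest-decay model (not GLASE's actual algorithm).
# Interest weights over concepts (e.g. "dog", "bird", "squirrel") decay
# on every SERP update, so neglected concepts lose influence.

DECAY = 0.7  # hypothetical per-update decay factor

def update_interests(interests, dwelled_concepts):
    """Decay old weights, then reinforce the concepts the user
    dwelled on since the last SERP update."""
    for concept in interests:
        interests[concept] *= DECAY
    for concept in dwelled_concepts:
        interests[concept] = interests.get(concept, 0.0) + 1.0
    return interests

def rank_results(candidates, interests):
    """candidates: [(image_id, {concept: relevance})].
    Returns image ids sorted by match with the current interest profile."""
    def score(item):
        _, concepts = item
        return sum(interests.get(c, 0.0) * w for c, w in concepts.items())
    return [image_id for image_id, _ in
            sorted(candidates, key=score, reverse=True)]
```

With a decay factor below 1, a concept that stops receiving dwells loses weight geometrically, which produces the gradual “unlearning” visible in the demo.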

Demo 2: “Flowers”

Another user starts with the same SERP, but instead of looking at the images of animals, she mostly looks at the images of flowers.

The system adapts the SERPs to show more flowers. The user is not just looking at flowers in general; she is looking at images of daisies, so the system adapts and shows her more and more daisies. At some point (0:18), the user shifts her attention to roses, and the system adapts again, showing more pictures of roses.

Saving images

The user can track which images she has looked at, and “save” images from the “history” bar to the “favorites” bar using a drag-and-drop mouse gesture. Hovering over an image in the history bar shows a popup with an enlarged version of the image.
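As a rough illustration, the history and favorites bars can be modeled as two ordered lists. Everything below, names included, is a hypothetical sketch rather than GLASE’s actual API:

```python
# Hypothetical model of the history/favorites bars (names are assumptions).
# Each image that triggered a dwell popup is appended to the history;
# completing the drag-and-drop gesture copies it into favorites.

class ViewedImages:
    def __init__(self):
        self.history = []    # images the user has dwelled on, in order
        self.favorites = []  # images the user explicitly saved

    def record_view(self, image_id):
        self.history.append(image_id)

    def save_to_favorites(self, image_id):
        # Called when the drag-and-drop from history to favorites completes.
        if image_id in self.history and image_id not in self.favorites:
            self.favorites.append(image_id)
```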

Publications

The research is still ongoing; however, a few publications are already available. You can click on a paper image to go to the publication’s page.

  • Viktors Garkavijs, Rika Okamoto, Tetsuo Ishikawa, Mayumi Toshima, Noriko Kando: GLASE-IRUKA: gaze feedback improves satisfaction in exploratory image search. WWW (Companion Volume) 2014: 273-274
  • Viktors Garkavijs: Learning user’s intent using user tags: intelligent interactive image search system. ESAIR 2013: 29-32
  • Viktors Garkavijs, Noriko Kando: Nonverbal query driven interactive search systems: a study on language agnostic information access technologies. IIiX 2012: 324
  • Viktors Garkavijs, Mayumi Toshima, Noriko Kando: GLASE 0.1: eyes tell more than mice. SIGIR 2012: 1085-1086

There is also a patent granted for the system: 視線インタフェースを用いた対話的情報探索装置 (“Interactive information retrieval device using a gaze interface”). The full text is available in Japanese.