Mr Howell Istance

Job: Principal Lecturer

Faculty: Technology

School/department: School of Computer Science and Informatics

Research group(s): Centre for Computational Intelligence

Address: De Montfort University, The Gateway, Leicester, LE1 9BH

T: +44 (0)116 207 5460

E: hoi@dmu.ac.uk

W: www.dmu.ac.uk

 

Personal profile

My research interests lie primarily in the human factors of interaction with computer interfaces using eye gaze. The work is targeted at the needs of disabled users. I have worked for a number of years on optimizing interaction techniques for use with 2D desktop applications. More recently, my work has moved towards 3D gaze-based interaction techniques for use with large-scale virtual environments as well as internet-based virtual communities, such as Second Life. I currently have a project on this funded by DMU's Institute of Creative Technologies (IOCT). This is a 3-year project with the University of Tampere (UTA), the IT University of Copenhagen and SpecialEffect (Oxford, UK) as partners. My PhD student, Steve Vickers, is funded full-time by this project.

I am also a member of COGAIN (COmmunication by GAze Interaction), the EU Framework 6 Research Network of Excellence.

Research group affiliations

Centre for Computational Intelligence.

Publications and outputs 

  • Real-Time 3D Head Pose Tracking Through 2.5D Constrained Local Models with Local Neural Fields
    Ackland, Stephen; Chiclana, Francisco; Istance, Howell; Coupland, Simon. Tracking the head in a video stream is a common thread within the computer vision literature, supplying the research community with a large number of challenging and interesting problems. Head pose estimation from monocular cameras is often treated as an extension of the face tracking task: the resultant 2D data is passed through a simpler algorithm that fits it to a static 3D model to determine the 3D pose estimate. This work describes the 2.5D Constrained Local Model, which combines a deformable 3D shape point model with 2D texture information to estimate the pose parameters directly, avoiding the need for additional optimization strategies. It achieves this through an analytical derivation of a Jacobian matrix describing how changes in the model parameters change the shape within the image under a full-perspective camera model. The model also has very low computational complexity and can run in real time on modern mobile devices such as tablets and laptops. The Point Distribution Model of the face is built so as to minimize the effect of changes in facial expression on the estimated head pose, making the solution more robust. Finally, the texture information is trained via Local Neural Fields (LNFs), a deep learning approach that uses small discriminative patches to exploit spatial relationships between pixels and provide strong peaks at the optimal locations.
  • Irregularity-based image regions saliency identification and evaluation
    Al-Azawi, M.; Yang, Yingjie; Istance, Howell. Extracting salient regions from images remains a challenging problem, since it requires some understanding of the image and its nature. A technique suitable for one application is not necessarily useful in another, so saliency enhancement is application-oriented. This paper proposes a new technique for extracting salient regions from an image that utilizes the local features of the region surrounding each pixel. The level of saliency is then decided by global comparison of the saliency-enhanced image. To make the process fully automatic, a new fuzzy-based thresholding technique is also proposed. The paper includes a survey of state-of-the-art saliency evaluation methods and proposes a new saliency evaluation technique.
  • Designing a Gamified System to Promote Health
    Kucharczyk, E.; Scase, M. O.; Istance, Howell. Although gamified health interventions have the potential to enhance the quality of life of older users, there are significant design issues to consider when designing games and gamified systems for an older target market. DOREMI consortium.
  • What were we all looking at? Identifying objects of collective visual attention
    Ma, Zhong; Vickers, Stephen; Istance, Howell; Ackland, Stephen; Zhao, Xinbo; Wang, Wenhu. We aim to identify the salient objects in an image by applying a model of visual attention. We automate the process by predicting those objects in an image that are most likely to be the focus of someone's visual attention. Concretely, we first generate fixation maps from eye-tracking data, which express the ground truth of people's visual attention for each training image. We then extract high-level features based on the bag-of-visual-words image representation as input attributes and, along with the fixation maps, train a support vector regression model. With this model we can predict a new query image's saliency. Our experiments show that the model provides a good estimate of human visual attention in test image sets with one or multiple salient objects. In this way we seek to reduce redundant information within the scene, and thus provide a more accurate depiction of it.
  • Human attention-based regions of interest extraction using computational intelligence
    Al-Azawi, M.; Yang, Yingjie; Istance, Howell. Machine vision remains a challenging topic that continues to attract research. Efforts have been made to design machine vision systems (MVS) inspired by the human vision system (HVS). Attention is one of the important properties of the HVS: it allows humans to focus on only part of a scene at a time, with regions containing more abrupt features attracting more attention than others. This property improves the speed with which the HVS recognizes and identifies the contents of a scene. In this paper we discuss human attention and its application in MVS, and present a new method for extracting regions of interest, and hence interesting objects, from images. The method utilizes neural networks as classifiers to distinguish important from unimportant regions.
  • An investigation into determining head pose for gaze estimation on unmodified mobile devices
    Ackland, Stephen; Istance, Howell; Coupland, Simon; Vickers, Stephen. Traditionally, devices that can determine a user's gaze are large, expensive, and often restrictive. We investigate the prospect of using common webcams and unmodified mobile devices such as laptops, tablets, and phones as an alternative means of obtaining a user's gaze. A person's gaze is fundamentally determined by the pose of the head together with the orientation of the eyes. This initial work investigates the first of these factors: an estimate of the 3D head pose (and subsequently the positions of the eye centres) relative to a camera. Specifically, we seek a low-cost algorithm that requires only a one-time calibration per user and can run in real time on the aforementioned mobile devices with noisy camera data. We use our head tracker to estimate the four eye corners of a user over a 10-second video, and present results at several different frame rates to analyse the impact of lower-quality cameras on the tracker. We show that our algorithm is efficient enough to run at 75 fps on a common laptop, but struggles with tracking loss below 10 fps.
  • A new gaze points agglomerative clustering algorithm and its application in regions of interest extraction
    Al-Azawi, M.; Yang, Yingjie; Istance, Howell. In computer vision applications it is necessary to extract regions of interest in order to reduce the search space and improve the identification of image contents. Human-oriented regions of interest can be extracted by collecting feedback from the user, usually in the form of ranks assigned to the identified regions in the image; these ranks are then used to adapt the identification process. Eye-tracking technology is now widely used, and one suggested application is to use the gaze points collected from an eye-tracking device to extract the regions of interest. In this paper we introduce a new agglomerative clustering algorithm that uses a blob-extraction technique and statistical measures to cluster the gaze points obtained from the eye tracker. The algorithm is fully automatic: it needs no human intervention to specify the stopping criterion. In the algorithm, points are replaced with small regions (blobs), which are then grouped together to form a cloud from which the interesting regions are constructed.
  • Performing Locomotion Tasks in Immersive Computer Games with an Adapted Eye-Tracking Interface
    Vickers, Stephen; Istance, Howell; Hyrskykari, Aulikki. Young people with severe physical disabilities may benefit greatly from participating in immersive computer games. In-game tasks can be fun, engaging, educational, and socially interactive. But for those who are unable to use traditional methods of computer input such as a mouse and keyboard, there is a barrier to interaction that must first be overcome. Eye-gaze interaction is one input method that can potentially achieve the levels of interaction required for these games. How we use eye gaze, or which gaze interaction technique we use, depends upon the task being performed, the individual performing it, and the equipment available. To fully realize the impact of participation in these environments, techniques need to be adapted to the person's abilities. We describe an approach to designing and adapting a gaze interaction technique to support locomotion, a task central to immersive game playing. It is evaluated by a group of young people with cerebral palsy and muscular dystrophy. The results show that by adapting the interaction technique, participants are able to significantly improve their in-game character control.
  • Irregularity-based saliency identification and evaluation
    Al-Azawi, M.; Yang, Yingjie; Istance, Howell
  • Towards dynamic accessibility through soft gaze gesture recognition
    Shell, Jethro; Vickers, Stephen; Istance, Howell; Coupland, Simon. Some users with physical disabilities find it difficult to operate standard input devices such as a keyboard and mouse. Eye-gaze technologies, and more specifically gaze gestures, are emerging to assist such users. There is a high level of inter- and intra-user variation in the ability to perform gaze gestures, owing to the high levels of noise in gaze patterns. In this paper we use a novel fuzzy transfer learning approach to construct a fuzzy system for gaze gesture recognition that can be automatically adapted for different users and/or user groups. We show that the fuzzy system is able to recognise gestures across groups of both able-bodied (AB) and disabled users from a base of AB data, surpassing an expert-constructed classifier.
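The gaze-point clustering work above can be illustrated with a minimal sketch. This is not the published algorithm (which replaces points with blobs and uses statistical measures for its stopping criterion); it is a simple distance-threshold agglomerative grouping, with the function names and the 50-pixel radius chosen purely for illustration:

```python
from math import hypot

def cluster_gaze_points(points, radius=50.0):
    """Greedy single-linkage clustering of (x, y) gaze samples.

    A point closer than `radius` pixels to any member of an existing
    cluster joins that cluster; otherwise it starts a new one.
    (Illustrative only -- the radius acts as a crude stopping criterion,
    unlike the automatic one in the published algorithm.)
    """
    clusters = []
    for p in points:
        for c in clusters:
            if any(hypot(p[0] - q[0], p[1] - q[1]) <= radius for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def cluster_centroids(clusters):
    """Centroid of each cluster, a rough proxy for a region of interest."""
    return [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters]
```

For example, five gaze samples in two spatial groups yield two clusters, whose centroids approximate the two fixated regions.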

Click here to view a full listing of publications and outputs for Howell Istance.

Key research outputs

H. Istance, A. Hyrskykari, Gaze-Aware Systems and Attentive Applications, (chapter in) Gaze Interaction and Applications of Eye Tracking: Advances in Assistive Technologies, IGI Global, New York, August 2011

H. Istance, A. Hyrskykari, L. Immonen, S. Mansikkamaa, S. Vickers, Designing Gaze Gestures for Gaming: an Investigation of Performance. Proceedings of the ACM Symposium on Eye Tracking Research & Applications ETRA '10, ACM Press, New York, NY, March 2010

H. Istance, A. Hyrskykari, S. Vickers, T. Chaves (2009) For Your Eyes Only: Controlling 3D Online Games by Eye Gaze, Proceedings of the 12th IFIP Conference on Human-Computer Interaction: INTERACT 2009, Uppsala, Sweden, 24-28 August 2009.

R. Bates, S. Vickers and H. O. Istance, Gaze interaction with virtual on-line communities: levelling the playing field for disabled users. Universal Access in the Information Society, vol. 9, no. 3, 261-272

H. Istance, R. Bates, A. Hyrskykari, S. Vickers, Snap Clutch, a moded approach to solving the Midas touch problem. Proceedings of the ACM Symposium on Eye Tracking Research & Applications ETRA '08, ACM Press, New York, NY, 221-228, 2008.

Research interests/expertise

  • Human-computer interaction
  • Eye gaze communication
  • Accessible gaming
  • Gaze interaction techniques
  • Games and special needs education
  • Gaze gestures
  • Human performance modelling.

Areas of teaching

Undergraduate

  • User Centred Web Interface Development
  • Multimedia and Internet Technology
  • Graphics and Interactive Modelling

Postgraduate

  • Graphical Data - Interfaces, Visualisation and Representation
  • Research Methods

Qualifications

MSc Information Technology (CNAA), 1986; BSc Ergonomics (Loughborough), 1974

Courses taught

  • Creative Client Computing (1st Year Undergraduate, DMU)
  • C++ for Games (2nd Year Undergraduate, DMU)
  • Introduction to Computer Graphics and Interactive Modelling (2nd Year Undergraduate, DMU)
  • Research Methods and Statistics (Postgraduate, University of Tampere, Finland).

Honours and awards

Global Research Award, Royal Academy of Engineering, 2008.

Membership of external committees

  • Conference Co-Chair and Treasurer, ACM Eye Tracking Research & Applications (ETRA), 2010, 2012
  • Steering Board member: COGAIN European Union Network of Excellence (2004-2009).

Membership of professional associations and societies

Association for Computing Machinery (ACM), member.

Consultancy work

KTP Project No. 008844, DMU and Park Air Systems, Academic Advisor, 2013
Innovation Fellowship, UK Regional Development Fund, 2010, £16,000.

Current research students

Stephen Ackland, (PhD, 1st Supervisor)

Kirsten Wahlstrom (PhD, 2nd Supervisor)

Mohammed Alazawi (PhD, 2nd Supervisor)

Externally funded research grants information

EU Framework 6, Network of Excellence 2004-2009, £180,000 (DMU element)
Royal Academy of Engineering Global Research Award 2008-2009 £40,000.

Internally funded research project information

RIF project: Eye-gaze interaction in support of education of children with physical disabilities, 2010, £10,000.

Professional esteem indicators

  • Visiting Professor at TAUCHI, Department of Computer Sciences, University of Tampere (2008-2009)
  • Conference Co-Chair, ACM Eye Tracking Research and Applications Conference (ETRA), 2010, 2012
  • Conference Chair, COGAIN Conference on Communication by Gaze Interaction, 2005, 2006, 2007, 2008
  • PhD External Examiner, Loughborough University, 2011; University of Tampere, 2006
  • Book Reviewer, MIT Press, 2012
  • Paper Reviewer, Journal of Eye Movement Research, 2012
  • Paper Reviewer, ACM ASSETS Conference, 2006, 2008, 2009, 2010, 2011, 2012
  • Paper Reviewer, ACM Transactions on Interactive Intelligent Systems, 2011
  • Paper Reviewer, ACM Transactions on Computer-Human Interaction, 2011
  • Paper Reviewer, ACM Eye Tracking Research and Applications Conference (ETRA), 2008, 2010, 2012
  • Paper Reviewer, ACM Computer-Human Interaction Conference (CHI), 2008, 2009, 2010, 2011, 2012
  • Paper Reviewer, Presence: Teleoperators & Virtual Environments (June 2008)
  • Paper Reviewer, IEEE Transactions on Neural Systems & Rehabilitation Engineering (June 2007)
  • Keynote Speaker, ACM Eye Tracking Research and Applications Conference, ETRA 2006, San Diego.