Professor David Elizondo

Job: Professor in Intelligent Transport

Faculty: Computing, Engineering and Media

School/department: School of Computer Science and Informatics

Research group(s): The De Montfort University Interdisciplinary Group in Intelligent Transport Systems (DIGITS)

Address: De Montfort University, The Gateway, Leicester, LE1 9BH, United Kingdom

T: +44 (0)116 207 8471

E: Elizondo@dmu.ac.uk

W: https://www.dmu.ac.uk/digits

 

Personal profile

Dr. David Elizondo is a Principal Lecturer in the Department of Computer Technology at De Montfort University. After completing his BA in Computer Science at Knox College, Galesburg, Illinois, USA, he worked as a software engineer and laboratory manager for CATIE, a Latin American agricultural research and teaching institute based in Costa Rica. Through a Swiss project, this institute sponsored his MS in Artificial Intelligence at the Department of Artificial Intelligence and Cognitive Computing of the University of Georgia, Athens, Georgia, USA. He then obtained a PhD in Computer Science from the University of Strasbourg, France, in cooperation with the Swiss Dalle Molle Institute for Perceptual Artificial Intelligence (IDIAP). He subsequently worked for Neuvoice (formerly Neural Systems), a spin-off company of the University of Plymouth, UK, where, as a senior researcher, he developed an intelligent monitoring system for the petroleum industry based on neural network techniques. Later, he worked as a software architect for ACTERNA, an international company supplying software and hardware solutions to telecom companies, as part of the team developing QMS, a quality-of-service management system for leased lines. In parallel with this work, he was a part-time lecturer at the University of Plymouth, where he taught databases, and data structures and algorithms.

Research group affiliations

The De Montfort University Interdisciplinary Group in Intelligent Transport Systems (DIGITS)

I am also an active member of the following research groups:
(1) The Cyber Security Centre
(2) The Centre for Computational Intelligence (CCI).

I am the research leader of the CCI Neural Network subgroup, which is particularly well known internationally for its research in the areas of constructive neural networks and linear separability, as evidenced by my ongoing list of high-quality publications in these two fields.

Publications and outputs

  • Selecting Non-Line of Sight Critical Scenarios for Connected Autonomous Vehicle Testing
    Authors: Allidina, Tanvir; Deka, Lipika; Paluszczyszyn, Daniel; Elizondo, David
    Abstract: The on-board sensors of connected autonomous vehicles (CAVs) are limited by their range and inability to see around corners or blind spots, otherwise known as non-line of sight scenarios (NLOS). These scenarios have the potential to be fatal (critical scenarios) as the sensors may detect an obstacle much later than the amount of time needed for the car to react. In such cases, mechanisms such as vehicular communication are required to extend the visibility range of the CAV. Despite there being a substantial body of work on the development of navigational and communication algorithms for such scenarios, there is no standard method for generating and selecting critical NLOS scenarios for testing these algorithms in a scenario-based simulation environment. This paper puts forward a novel method utilising a genetic algorithm for the selection of critical NLOS scenarios from the set of all possible NLOS scenarios in a particular road environment. The need to select critical scenarios is pertinent as the number of all possible driving scenarios generated is large, and testing them against each other is time-consuming, unnecessary and expensive. The selected critical scenarios are then validated for criticality using a series of MATLAB-based simulations. (Open access article.)
  • Semi-supervised deep learning for image classification with distribution mismatch: A survey
    Authors: Calderon-Ramirez, Saul; Yang, Shengxiang; Elizondo, David
    Abstract: Deep learning methodologies have been employed in several different fields, with outstanding success in image recognition applications such as material quality control, medical imaging, autonomous driving, etc. Deep learning models rely on an abundance of labelled observations to train a prospective model. These models are composed of millions of parameters to estimate, increasing the need for training observations. Frequently it is expensive to gather labelled observations of data, making the usage of deep learning models not ideal, as the model might over-fit the data. In a semi-supervised setting, unlabelled data is used to improve the levels of accuracy and generalization of a model with small labelled datasets. Nevertheless, in many situations different unlabelled data sources might be available. This raises the risk of a significant distribution mismatch between the labelled and unlabelled datasets. Such a phenomenon can cause a considerable performance hit to typical semi-supervised deep learning frameworks, which often assume that both labelled and unlabelled datasets are drawn from similar distributions. Therefore, in this paper we study the latest approaches to semi-supervised deep learning for image recognition. Emphasis is placed on semi-supervised deep learning models designed to deal with a distribution mismatch between the labelled and unlabelled datasets. We address open challenges with the aim of encouraging the community to tackle them, and to overcome the high data demand of traditional deep learning pipelines under real-world usage settings.
    Impact statement: This paper is a deep review of state-of-the-art semi-supervised deep learning methods, focusing on methods dealing with the distribution mismatch setting. Under real-world usage scenarios, a distribution mismatch might occur between the labelled and unlabelled datasets. Recent research has found an important performance degradation of state-of-the-art semi-supervised deep learning (SSDL) methods in this setting. Therefore, state-of-the-art methodologies aim to increase the robustness of semi-supervised deep learning frameworks to this phenomenon. In this work, we are the first, to our knowledge, to systematize and study recent approaches to robust SSDL under distribution mismatch scenarios. We think this work can add value to the literature around this subject, as it identifies the main tendencies surrounding it. We also consider that our work encourages the community to draw attention to this emerging subject, which we think is an important challenge to address in order to decrease the lab-to-real-world gap of deep learning methodologies.
  • Dealing with distribution mismatch in semi-supervised deep learning for Covid-19 detection using chest X-ray images
    Authors: Calderon-Ramirez, Saul; Yang, Shengxiang; Moemeni, Armaghan; Elizondo, David
    Abstract: In the context of the global coronavirus pandemic, different deep learning solutions for infected subject detection using chest X-ray images have been proposed. However, deep learning models usually need large labelled datasets to be effective. Semi-supervised deep learning is an attractive alternative, where unlabelled data is leveraged to improve the overall model’s accuracy. However, in real-world usage settings, an unlabelled dataset might present a different distribution than the labelled dataset (i.e. the labelled dataset was sampled from a target clinic and the unlabelled dataset from a source clinic). This results in a distribution mismatch between the unlabelled and labelled datasets. In this work, we assess the impact of the distribution mismatch between the labelled and the unlabelled datasets, for a semi-supervised model trained with chest X-ray images, for COVID-19 detection. Under strong distribution mismatch conditions, we found an accuracy hit of almost 30%, suggesting that the unlabelled dataset distribution has a strong influence on the behaviour of the model. Therefore, we propose a straightforward approach to diminish the impact of such a distribution mismatch. Our proposed method uses a density approximation of the feature space. It is built upon the target dataset to filter out the observations in the source unlabelled dataset that might harm the accuracy of the semi-supervised model. It assumes that a small labelled source dataset is available together with a larger source unlabelled dataset. Our proposed method does not require any model training; it is simple and computationally cheap. We compare our proposed method against two popular state-of-the-art out-of-distribution data detectors, which are also cheap and simple to implement. In our tests, our method yielded accuracy gains of up to 32% when compared to the previous state-of-the-art methods. The good results yielded by our method lead us to argue in favour of a more data-centric approach to improving a model’s accuracy. Furthermore, the developed method can be used to measure data effectiveness for semi-supervised deep learning model training.
  • Dataset similarity to assess semi-supervised learning under distribution mismatch between the labelled and unlabelled datasets
    Authors: Calderon-Ramirez, Saul; Oala, Luis; Torrents-Barrena, Jordina; Yang, Shengxiang; Elizondo, David; Moemeni, Armaghan; Colreavy-Donnelly, Simon; Samek, Wojciech; Molina-Cabello, Miguel; Lopez-Rubio, Ezequiel
    Abstract: Semi-supervised deep learning (SSDL) is a popular strategy to leverage unlabelled data for machine learning when labelled data is not readily available. In real-world scenarios, different unlabelled data sources are usually available, with varying degrees of distribution mismatch regarding the labelled datasets. This begs the question of which unlabelled dataset to choose for good SSDL outcomes. Oftentimes, semantic heuristics are used to match unlabelled data with labelled data. However, a quantitative and systematic approach to this selection problem would be preferable. In this work, we first test the SSDL MixMatch algorithm under various distribution mismatch configurations to study the impact on SSDL accuracy. Then, we propose a quantitative unlabelled dataset selection heuristic based on dataset dissimilarity measures. These are designed to systematically assess how distribution mismatch between the labelled and unlabelled datasets affects MixMatch performance. We refer to our proposed method as deep dataset dissimilarity measures (DeDiMs), designed to compare labelled and unlabelled datasets. They use the feature space of a generic Wide-ResNet, can be applied prior to learning, are quick to evaluate and are model agnostic. The strong correlation in our tests between MixMatch accuracy and the proposed DeDiMs suggests that this approach can be a good fit for quantitatively ranking different unlabelled datasets prior to SSDL training.
  • A real use case of semi-supervised learning for mammogram classification in a local clinic of Costa Rica
    Authors: Calderon-Ramirez, Saul; Murillo-Hernandez, Diego; Rojas-Salazar, Kevin; Elizondo, David; Yang, Shengxiang; Moemeni, Armaghan; Molina-Cabello, Miguel
    Abstract: The implementation of deep learning-based computer-aided diagnosis systems for the classification of mammogram images can help in improving the accuracy, reliability, and cost of diagnosing patients. However, training a deep learning model requires a considerable amount of labelled images, which can be expensive to obtain as time and effort from clinical practitioners are required. To address this, a number of publicly available datasets have been built with data from different hospitals and clinics, which can be used to pre-train the model. However, using models trained on these datasets for later transfer learning and model fine-tuning with images sampled from a different hospital or clinic might result in lower performance. This is due to the distribution mismatch of the datasets, which include different patient populations and image acquisition protocols. In this work, a real-world scenario is evaluated where a novel target dataset sampled from a private Costa Rican clinic is used, with few labels and heavily imbalanced data. The use of two popular and publicly available datasets (INbreast and CBIS-DDSM) as source data, to train and test the models on the novel target dataset, is evaluated. A common approach to further improve the model’s performance under such a small labelled target dataset setting is data augmentation. However, often cheaper unlabelled data is available from the target clinic. Therefore, semi-supervised deep learning, which leverages both labelled and unlabelled data, can be used in such conditions. In this work, we evaluate the semi-supervised deep learning approach known as MixMatch, to take advantage of unlabelled data from the target dataset, for whole mammogram image classification. We compare the usage of semi-supervised learning on its own, and combined with transfer learning (from a source mammogram dataset) with data augmentation, as well as against regular supervised learning with transfer learning and data augmentation from source datasets. It is shown that the use of semi-supervised deep learning combined with transfer learning and data augmentation can provide a meaningful advantage when using scarce labelled observations. Also, we found a strong influence of the source dataset, which suggests that a more data-centric approach is needed to tackle the challenge of scarcely labelled data. We used several different metrics to assess the performance gain of using semi-supervised learning when dealing with very imbalanced test datasets (such as the G-mean and the F2-score), as mammogram datasets are often very imbalanced.
  • Correcting data imbalance for semi-supervised Covid-19 detection using X-ray chest images
    Authors: Calderon-Ramirez, Saul; Yang, Shengxiang; Moemeni, Armaghan; Elizondo, David; Colreavy-Donnelly, Simon; Chavarria-Estrada, Luis Fernando; Molina-Cabello, Miguel A.
    Abstract: A key factor in the fight against viral diseases such as the coronavirus (COVID-19) is the identification of virus carriers as early and quickly as possible, in a cheap and efficient manner. The application of deep learning for image classification of chest X-ray images of COVID-19 patients could become a useful pre-diagnostic detection methodology. However, deep learning architectures require large labelled datasets. This is often a limitation when the subject of research is relatively new, as in the case of the virus outbreak, where dealing with small labelled datasets is a challenge. Moreover, in such a context, the datasets are also highly imbalanced, with few observations from positive cases of the new disease. In this work we evaluate the performance of the semi-supervised deep learning architecture known as MixMatch with a very limited number of labelled observations and highly imbalanced labelled datasets. We demonstrate the critical impact of data imbalance on the model’s accuracy. Therefore, we propose a simple approach for correcting data imbalance, by re-weighting each observation in the loss function, giving a higher weight to the observations corresponding to the under-represented class. For unlabelled observations, we use the pseudo and augmented labels calculated by MixMatch to choose the appropriate weight. The proposed method improved classification accuracy by up to 18% with respect to the non-balanced MixMatch algorithm. We tested our proposed approach with several available datasets using 10, 15 and 20 labelled observations, for binary classification (COVID-19 positive and normal cases). For multi-class classification (COVID-19 positive, pneumonia and normal cases), we tested 30, 50, 70 and 90 labelled observations. Additionally, a new dataset is included among the tested datasets, composed of chest X-ray images of Costa Rican adult patients.
  • Fuzzy Logic Applied to System Monitors
    Authors: Khan, Noel; Elizondo, David; Deka, Lipika; Molina-Cabello, M. A.
    Abstract: System monitors are applications used to monitor other systems (often mission critical) and take corrective actions upon a system failure. Rather than reactively take action after a failure, the potential of fuzzy logic to anticipate and proactively take corrective actions is explored here. Failures adversely affect a system’s non-functional qualities (e.g., availability, reliability, and usability) and may result in a variety of losses such as data, productivity, or safety losses. The detection and prevention of failures necessarily improves a critical system’s non-functional qualities and avoids losses. The paper is self-contained: it reviews set and logic theory, reviews fuzzy inference systems (FIS), explores parameterization, and tests the neighborhood of rule thresholds to evaluate the potential for anticipating failures. Results demonstrate detectable gradients in FIS state spaces, which means fuzzy-logic-based system monitors can anticipate rule violations or system failures. (Open access article.)
  • Improving uncertainty estimations for mammogram classification using semi-supervised learning
    Authors: Calderon-Ramirez, Saul; Murillo-Hernandez, Diego; Rojas-Salazar, Kevin; Calvo-Valverde, Luis-Alexander; Yang, Shengxiang; Moemeni, Armaghan; Elizondo, David; Lopez-Rubio, Ezequiel; Molina-Cabello, Miguel A.
    Abstract: Computer-aided diagnosis for mammogram images has seen positive results through the usage of deep learning architectures. However, limited sample sizes for the target datasets might prevent the usage of a deep learning model under real-world scenarios. The usage of unlabeled data to improve the accuracy of the model can be an approach to tackle the lack of target data. Moreover, important model attributes for the medical domain, such as model uncertainty, might be improved through the usage of unlabeled data. Therefore, in this work we explore the impact of using unlabeled data through the implementation of a recent approach known as MixMatch, for mammogram images. We evaluate the improvement in accuracy and uncertainty of the model using popular and simple approaches to estimate uncertainty. For this aim, we propose the usage of the uncertainty balanced accuracy metric.
  • Are Public Intrusion Datasets Fit for Purpose: Characterising the State of the Art in Intrusion Event Datasets
    Authors: Kenyon, Anthony; Deka, Lipika; Elizondo, David
    Abstract: In recent years cybersecurity attacks have caused major disruption and information loss for online organisations, with high profile incidents in the news. One of the key challenges in advancing the state of the art in intrusion detection is the lack of representative datasets. These datasets typically contain millions of time-ordered events (e.g. network packet traces, flow summaries, log entries), subsequently analysed to identify abnormal behavior and specific attacks [1]. Generating realistic datasets has historically required expensive networked assets, specialised traffic generators, and considerable design preparation. Even with advances in virtualisation it remains challenging to create and maintain a representative environment. Major improvements are needed in the design, quality and availability of datasets, to assist researchers in developing advanced detection techniques. With the emergence of new technology paradigms, such as intelligent transport and autonomous vehicles, it is also likely that new classes of threat will emerge [2]. Given the rate of change in threat behavior [3], datasets become quickly obsolete, and some of the most widely cited datasets date back over two decades. Older datasets have limited value: often heavily filtered and anonymised, with unrealistic event distributions, and opaque design methodology. The relative scarcity of intrusion detection system (IDS) datasets is compounded by the lack of a central registry, and inconsistent information on provenance. Researchers may also find it hard to locate datasets or understand their relative merits. In addition, many datasets rely on simulation, originating from academic or government institutions. The publication process itself often creates conflicts, with the need to de-identify sensitive information in order to meet regulations such as the General Data Protection Regulation (GDPR) [4]. A final issue for researchers is the lack of standardised metrics with which to compare dataset quality. In this paper we attempt to classify the most widely used public intrusion datasets, providing references to archives and associated literature. We illustrate their relative utility and scope, highlighting the threat composition, formats, special features, and associated limitations. We identify best practice in dataset design, and describe potential pitfalls of designing anomaly detection techniques based on data that may be either inappropriate, or compromised due to unrealistic threat coverage. The contributions made in this paper are expected to facilitate continuous research and development for effectively combating the constantly evolving cyber threat landscape.
  • Dealing with scarce labelled data: Semi-supervised deep learning with MixMatch for Covid-19 detection using chest X-ray images
    Authors: Calderon-Ramirez, Saul; Giri, Raghvendra; Yang, Shengxiang; Moemeni, Armaghan; Umana, Mario; Elizondo, David; Torrents-Barrena, Jordina; Molina-Cabello, Miguel A.
    Abstract: Coronavirus (Covid-19) is spreading fast, infecting people through contact in various forms including droplets from sneezing and coughing. Therefore, the detection of infected subjects in an early, quick and cheap manner is urgent. Currently available tests are scarce and limited to people in danger of serious illness. The application of deep learning to chest X-ray images for Covid-19 detection is an attractive approach. However, this technology usually relies on the availability of large labelled datasets, a requirement hard to meet in the context of a virus outbreak. To overcome this challenge, a semi-supervised deep learning model using both labelled and unlabelled data is proposed. We develop and test a semi-supervised deep learning framework based on the MixMatch architecture to classify chest X-rays into Covid-19, pneumonia and healthy cases. The presented approach was calibrated using two publicly available datasets. The results show an accuracy increase of around 15% under a low labelled/unlabelled data ratio. This indicates that our semi-supervised framework can help improve performance levels towards Covid-19 detection when the amount of high-quality labelled data is scarce. Also, we introduce a semi-supervised deep learning boost coefficient which is meant to ease the scalability of our approach and performance comparison.
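Several of the papers above correct class imbalance by re-weighting each observation in the loss function, giving a higher weight to observations from the under-represented class (e.g. positive COVID-19 cases). A minimal sketch of this idea applied to a plain cross-entropy loss — the inverse-frequency weighting scheme, function names and toy numbers are illustrative assumptions, not the papers' implementation:

```python
import math
from collections import Counter

def class_weights(labels, num_classes):
    """Inverse-frequency weights: under-represented classes get larger weights.

    weight_c = N / (K * n_c), where N is the dataset size, K the number of
    classes and n_c the count of class c (absent classes default to 1).
    """
    counts = Counter(labels)
    return [len(labels) / (num_classes * counts.get(c, 1)) for c in range(num_classes)]

def weighted_cross_entropy(probs, labels, weights):
    """Mean cross-entropy with each observation scaled by its class weight."""
    total = sum(weights[y] * -math.log(p[y] + 1e-12) for p, y in zip(probs, labels))
    return total / len(labels)

# Toy example: class 1 is under-represented, so its observations weigh more.
labels = [0, 0, 0, 1]
w = class_weights(labels, num_classes=2)  # class 0 -> 2/3, class 1 -> 2.0
```

In the MixMatch setting described above, the weight for an unlabelled observation would be chosen from its pseudo-label rather than a true label.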
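The fuzzy-logic system monitor paper above reports detectable gradients in FIS state spaces, which is what lets a monitor anticipate failures rather than only react to them. A toy two-rule Mamdani-style inference sketch — the input variables, membership ranges and rule outputs are invented for illustration and do not come from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def failure_risk(cpu_load, error_rate):
    """Tiny fuzzy inference: two rules combined by weighted-average defuzzification."""
    cpu_high = tri(cpu_load, 0.5, 1.0, 1.5)      # degree that CPU load is "high"
    err_high = tri(error_rate, 0.05, 0.2, 0.35)  # degree that error rate is "high"
    # Rule 1: IF cpu is high AND errors are high THEN risk is high (0.9)
    r1 = min(cpu_high, err_high)
    # Rule 2: IF cpu is not high AND errors are not high THEN risk is low (0.1)
    r2 = min(1 - cpu_high, 1 - err_high)
    if r1 + r2 == 0:
        return 0.5  # no rule fires: neutral risk
    return (r1 * 0.9 + r2 * 0.1) / (r1 + r2)
```

Because the output varies smoothly with the inputs, a monitor can watch the risk score trend upwards and act before a hard threshold is ever crossed.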


Research interests/expertise

My research interests cover both the theory and the application of neural networks. Application areas include transport-related problems, which led to the creation of DIGITS (iTRAQ project).

Areas of teaching

Artificial Neural Networks and Prolog programming.

Qualifications

  • French Qualification: University Full Professor Qualification by the Conseil National des Universités (CNU). Artificial Neural Networks, Theory and Applications - 2008.
  • French Qualification: Senior Lecturer/Principal Lecturer (Maître de Conférences) Qualification by the Conseil National des Universités (CNU) - 2003.
  • PhD in Computer Science from the University Louis Pasteur, Strasbourg, France and IDIAP, Martigny, Switzerland. The Recursive Deterministic Perceptron and some Strategies for Topology Reduction on Neural Networks - 1998.
  • DEA in Computer Science from the University of Montpellier, Montpellier, France, Application of Neural Networks to a control process in a dynamic environment - 1993. 
  • Master of Science in Artificial Intelligence from the University of Georgia, Athens, Georgia, USA, Neural Network Models to Predict Solar Radiation and Plant Phenology - 1992.
  • Bachelor of Science in Computer Science from Knox College, Galesburg, Illinois, USA - 1986.

Courses taught

Artificial Neural Networks and Prolog programming.

Membership of external committees

  • Workshop Organizer for The British Computer Society Specialist Group on Artificial Intelligence (SGAI) International Conference in Artificial Intelligence for 2010.
  • UK Computational Intelligence workshop (UKCI).
  • IEEE International Conference in Artificial Neural Networks (2004, 2005, 2006, 2007, 2008, 2009).

Membership of professional associations and societies

IEEE Senior Member.

Conference attendance

Organiser and chairman of the following special conference sessions:

  • IEEE-WCCI-2012, Brisbane, Australia. Special session on Computational Intelligence for Privacy. (http://www.ieee-wcci2012.org/)
  • IEEE-WCCI-2010, Barcelona, Spain. Special session on Computational Intelligence for Privacy, Security, Forensics. (http://www.wcci2010.org/)
  • IEEE-ICANN-2008, Prague, Czech Republic. Special session on Constructive Neural Network Algorithms (http://www.icann2008.org/ssession.php). Contacted by Springer to produce a book of extended versions of these papers, to be published by January 2009.
  • IEEE-ICANN-2005 Warsaw, Poland. Special session on Knowledge Extraction (http://www.ibspan.waw.pl/ICANN-2005/SpecialSession9.pdf)

National Conference Chairman

Consultancy work

Large international banana producer: banana hand-cut optimization using artificial intelligence techniques.

Current research students

2010-2013 John North. Associating Cause and Effect: Applying Computational Intelligence to Post-Incident Security Data. De Montfort University, Symantec.

2010-2014 Harold Kimball. Adaptive Security for Mobile Devices.

2013-2016 Simon Witheridge. Integrated Traffic Management and Air Quality Control.

Externally funded research grants information

  • "Banana Hand Cut Optimization using Computational Intelligence Techniques", Chiquita Brands International Inc., USA. Role: PI. Amount: £12,000. Period: June 2010.
  • "Travel Grant, WCCI-2010, Barcelona, Spain", Royal Academy of Engineering. Role: PI. Amount: £600. Period: 2010.
  • "Dynamic Traffic Management and Passenger Guidance to Meet the Carbon Challenge", Transport iNet HECF. Role: PI. Amount: £45K. Period: 2009-2010.
  • "Travel Grant, IJCNN-2009, Atlanta, Georgia", Royal Academy of Engineering. Role: PI. Amount: £800. Period: 2009.
  • "Travel Grant, ICANN-2008, Prague, Czech Republic", Royal Academy of Engineering. Role: PI. Amount: £800. Period: 2008.
  • "Travel Grant, ICANN-2007, Porto, Portugal", Royal Academy of Engineering. Role: PI. Amount: £800. Period: 2007.
  • "Design of constructive methods on neural computing systems and its application to data mining in oncology", Spanish Research Council. Role: CI. Amount: £225K. Period: 2008-2012.
  • "New strategies in the design of neurocomputing systems. Application to the process of oncology data", Spanish Research Council. Role: CI. Amount: £90K. Period: 2008-2010.
  • "Integrated Traffic Management and Air Quality Control Using Downstream Space Services", European Space Agency. Role: PI. Amount: €500K (£160K for DMU). Period: 2011.
  • "Innovation Fellowship with the School of Pharmacy", EMDA, UK. Role: PI. Amount: £15K. Period: 2011.

Internally funded research project information

  • "Associating Cause and Effect: Applying Computational Intelligence to Post-Incident Security Data", DMU Research Scholarship, DMU, UK, and Symantec, UK. Role: PI. Amount: £50K. Period: 2011-2014.
  • "Intelligent Transport Systems: Integrated Traffic Management Control", DMU Research Scholarship, DMU, UK. Role: CI. Amount: £50K. Period: 2012-2015.
  • "De Montfort Interest Group in Transport Systems (DIGITS)", DMU RIF. Role: CI. Amount: £10K. Period: Jan-Apr 2012.

Professional esteem indicators

  • Associate Editor for the IEEE Transactions on Neural Networks and Learning Systems journal (2.95 impact factor; ranked 12 out of 111 by impact factor in the area of Artificial Intelligence)
  • Reviewer of European FP7 research projects (2009)
  • Referee for the Swiss National Science Foundation (2010)
  • Industrial Liaison for the IEEE Computational Intelligence Society (CIS), UKRI Chapter
  • Workshop Organizer for The British Computer Society Specialist Group on Artificial Intelligence (SGAI) International Conference in Artificial Intelligence for 2010
  • Senior Member of the IEEE