
Dr Mario Gongora

Job: Associate Professor - Faculty Enterprise Lead

Faculty: Computing, Engineering and Media

School/department: School of Computer Science and Informatics

Research group(s): Research in Societal Enhancement (RiSE), Institute of Artificial Intelligence (IAI)

Address: De Montfort University, The Gateway, Leicester, LE1 9BH

T: +44 (0)116 207 8226

E: mgongora@dmu.ac.uk

W: www.tech.dmu.ac.uk/~mgongora/


Personal profile

Dr. Mario Gongora received his MSc and PhD from the University of Warwick (UK). He is now Associate Professor in the School of Computer Science and Informatics at De Montfort University (DMU), where he is the Faculty Enterprise Lead and a senior member of the Institute of Artificial Intelligence in the Faculty of Computing, Engineering and Media.


He currently focuses on the application of Artificial Intelligence techniques to the analysis of large and complex datasets and to the modelling of natural phenomena, including identification, simulation and optimisation.

Dr. Gongora has extensive experience in the analysis and modelling of naturally inspired systems and behaviours using computational tools. His work involves the use of evolutionary computing and machine learning techniques to identify systems from large or incomplete datasets, to model the emergence of complex behaviour in artificial systems and environments, and to use the resulting models to simulate, predict or optimise the performance of systems in an ongoing automated learning cycle.

He runs a spinout company that commercialises applications of his research in robust behaviour simulation and optimisation systems for customers operating large venues and complex processes. This enterprise has additionally benefited from Dr. Gongora’s close contacts with industrial partners.

Dr. Gongora works in close contact with external partners, taking expertise from the University to industry and society.

Research group affiliations

Dr Gongora leads the Research in Societal Enhancement (RiSE) team.

He is affiliated with the Institute of Artificial Intelligence (IAI).

Publications and outputs

  • Applications of Computational Intelligence-based Systems for Societal Enhancement
    Caraffini, Fabio; Chiclana, Francisco; Moodley, Raymond; Gongora, Mario Augusto. Editorial for the special issue on "Applications of Computational Intelligence based Systems for Societal Enhancement". The file attached to this record is the author's final peer reviewed version; the Publisher's final version can be found by following the DOI link.
  • Classification in Dynamic Data Streams with a Scarcity of Labels
    Fahy, Conor; Yang, Shengxiang; Gongora, Mario Augusto. Ensemble techniques are a powerful method for recognising and reacting to changes in non-stationary data. However, most research on dynamic classification with ensembles assumes that the true class label of each incoming point is available or easily obtained. This is unrealistic in most practical applications, especially in high-velocity streams where manually labeling each point is prohibitively expensive. To address this challenge, this paper proposes an algorithm, named Clustering and One-Class Classification Ensemble Learning (COCEL), which incorporates a stream clustering algorithm and an ensemble of one-class classifiers with active learning, for classification in dynamic data streams. The method exploits the intuitive relationship between clusters and one-class classifiers to cope with a small training set (or no training set) and improve with experience, self-modifying its internal state to cope with changes in the data stream. The proposed method is evaluated on synthetic data streams exhibiting concept evolution and concept drift, and on a collection of high-velocity real data streams where manually labeling each incoming point is infeasible, or expensive and labor-intensive. Finally, a comparative evaluation with peer stream classification ensembles shows that COCEL can achieve superior or comparable accuracy while typically requiring less than 0.01% of the stream labels.
  • Applying fuzzy scenarios for the measurement of operational risk
    Bonet, Isis; Pena, Alejandro; Lochmuller, Christian; Patiño, Hector Alejandro; Chiclana, Francisco; Gongora, Mario Augusto. Operational risk measurement assesses the probability of suffering financial losses in an organisation. The assessment of this risk is based primarily on the organisation’s internal data. However, other factors, such as external data and scenarios, are also key elements in the assessment process. Scenarios enrich the data on operational risk events by simulating situations that have not yet occurred, and therefore are not part of the internal databases of an organisation, but which might occur in the future or have already happened to other companies. Internal data scenarios often represent extreme risk events that increase the operational Value at Risk (OpVaR) and also the average loss. In general, OpVaR and the loss distribution are an important part of risk measurement and management. In this paper, a fuzzy method is proposed to add risk scenarios as a valuable data source for operational risk measurement. We compare adding fuzzy scenarios with the possibility of adding non-fuzzy, or crisp, scenarios. The results show that by adding fuzzy scenarios the tail of the aggregated loss distribution increases, while the effect on the expected average loss and on the OpVaR is smaller.
  • Using Self Organising Maps to Predict and Contain Natural Disasters and Pandemics
    Moodley, Raymond; Chiclana, Francisco; Caraffini, Fabio; Gongora, Mario Augusto. The unfolding COVID-19 pandemic has highlighted the global need for robust predictive and containment tools and strategies. COVID-19 continues to cause widespread economic and social turmoil, and whilst the current focus is on both minimising the spread of the disease and deploying a range of vaccines to save lives, attention will soon turn to future-proofing. In line with this, this paper proposes a prediction and containment model that could be used for pandemics and natural disasters. It combines selective lockdowns and protective cordons to rapidly contain the hazard whilst allowing minimally impacted local communities to conduct "business as usual" and/or offer support to highly impacted areas. A flexible, easy-to-use data analytics model, based on Self Organising Maps, is developed to facilitate easy decision making by governments and organisations. Comparative tests using publicly available data for Great Britain (GB) show that, through the use of the proposed prediction and containment strategy, it is possible to reduce the peak infection rate whilst keeping several regions (up to 25% of GB parliamentary constituencies) economically active within protective cordons.
  • Fuzzy convolutional deep-learning model to estimate the operational risk capital using multi-source risk events
    Peña, Alejandro; Patiño, Alejandro; Chiclana, Francisco; Caraffini, Fabio; Gongora, Mario Augusto; Gonzalez-Ruiz, Juan David; Duque-Grisales, Eduardo. Operational Risk (OR) is usually caused by losses due to human errors, inadequate or defective internal processes, system failures or external events that affect an organization. According to the Basel II agreement, OR is defined by seven risk events: internal fraud, external fraud, labour relations, clients, damage to fixed assets, technological failures and failures in the execution & administration of processes. However, due to the large amount of qualitative information, the uncertainty and the low frequency at which these risk events are generated in an organization, their modeling is still a technological challenge. This paper takes up this challenge and presents a fuzzy convolutional deep-learning model to estimate, based on the Basel III recommendations, the OR Loss Component (OR-LC) in an organization. The proposed model integrates qualitative information as linguistic random variables, as well as risk event data from different sources, using multi-dimensional fuzzy credibility concepts. The results show the stability of the proposed model with respect to the OR-LC estimation from both structural and dimensional points of view, making it an ideal tool for modeling OR from the perspective of: (a) regulators (Basel Committee on Banking Supervision), by allowing the integration of experts’ criteria into the OR-LC; (b) insurers, by allowing the integration of risk events from different sources; and (c) organizations and financial entities, by allowing the a priori evaluation of the OR-LC of new financial products based on technological platforms and electronic channels.
  • Scarcity of labels in non-stationary data streams: A survey
    Fahy, Conor; Yang, Shengxiang; Gongora, Mario Augusto. In a dynamic stream there is an assumption that the underlying process generating the stream is non-stationary and that concepts within the stream will drift and change as the stream progresses. Concepts learned by a classification model are prone to change, and non-adaptive models are likely to deteriorate and become ineffective over time. The challenge of recognising and reacting to change in a stream is compounded by the scarcity-of-labels problem. This refers to the very realistic situation in which the true class label of an incoming point is not immediately available (or might never be available), or in which manually annotating data points is prohibitively expensive. In a high-velocity stream it is perhaps impossible to manually label every incoming point and pursue a fully supervised approach. In this article we formally describe the types of change which can occur in a data stream and then catalogue the methods for dealing with change when there is limited access to labels. We present an overview of the most influential ideas in the field along with recent advancements, and we highlight trends, research gaps, and future research directions.
  • Deep Clustering for Metagenomics
    Gongora, Mario Augusto. Metagenomics is an area, supported by modern next-generation sequencing technology, which investigates microorganisms obtained directly from environmental samples, without the need to isolate them. This type of sequencing results in a large number of DNA fragments from different organisms, so the challenge consists in identifying groups of DNA sequences that belong to the same organism. Although large databases of species sequences are available, the use of supervised methods for solving this problem is limited by the small number of species that are known, and by the computational processing time required to analyse segments against species sequences. To overcome these problems, a binning process can be used for the reconstruction and identification of a set of metagenomic fragments. The binning process serves as a pre-processing step to join fragments into groups of the same taxonomic levels. In this work, we propose the application of a clustering model with a feature extraction process that uses an autoencoder neural network. For the clustering, k-means is used, beginning with a value of k large enough to obtain very pure clusters; these are then reduced through a process of combining various distance functions. The results show that the proposed method outperforms k-means and other classical methods of feature extraction such as PCA, achieving 90% purity.
  • A Robust Decision-Making Framework Based on Collaborative Agents
    Florez-Lozano, Johana; Caraffini, Fabio; Parra, Carlos; Gongora, Mario Augusto. Making decisions under uncertainty is very challenging but necessary, as most real-world scenarios are plagued by disturbances that can be generated internally, by the hardware itself, or externally, by the environment. Hence, we propose a general decision-making framework which can be adapted to optimally address the most heterogeneous real-world domains without being significantly affected by undesired disturbances. Our paper presents a multi-agent-based structure in which agents are capable of individual decision-making but also interact to perform subsequent, and more robust, collaborative decision-making processes. The complexity of each software agent can be kept quite low without deterioration of the performance, since an intelligent and robust-to-uncertainty decision-making behaviour arises when their locally produced measures of support are shared and exploited collaboratively. We show that by equipping agents with classic computational intelligence techniques, to extract features and generate measures of support, complex hybrid multi-agent software structures capable of handling uncertainty can be easily designed. The resulting multi-agent systems generated with this approach are based on a two-phase decision-making methodology which first runs parallel local decision-making processes and then aggregates the corresponding outputs to improve upon the accuracy of the system. To highlight the potential of this approach, we provide multiple implementations of the general framework and compare them over four different application scenarios. Results are promising and show that having a second, collaborative decision-making process is always beneficial. Open access article.
This research received financial support from the internally funded DMU GCRF2020 project "Collaborative methodology for enhancing sustainability in rural communities and the use of land". Project webpages: https://dmu.figshare.com/account/home#/projects/64511 https://sites.google.com/site/facaraff/research/gcrf18.
  • Training Data Set Assessment for Decision-Making in a Multiagent Landmine Detection Platform
    Florez-Lozano, Johana; Caraffini, Fabio; Parra, Carlos; Gongora, Mario Augusto. Real-world problems such as landmine detection require multiple sources of information to reduce the uncertainty of decision-making. A novel approach to solving these problems involves distributed systems, as presented in this work based on hardware and software multi-agent systems. To achieve a high rate of landmine detection, we evaluate the performance of a trained system over the distribution of samples between training and validation sets. Additionally, a general explanation of the data set is provided, presenting the samples gathered by a cooperative multi-agent system developed for detecting improvised explosive devices. The results show that input samples affect the performance of the output decisions, and that a decision-making system can be made less sensitive to sensor noise with intelligent systems obtained from a diverse and suitably organised training set.
  • A Multi-Agent System for Modelling the Spread of Lethal Wilt in Oil-Palm Plantations
    Fahy, Conor; Caraffini, Fabio; Gongora, Mario Augusto. Lethal Wilt (Marchitez Letal) is a disease which affects Elaeis guineensis, a plant used in the production of palm oil. The disease is increasingly common, but the spatial dynamics of the infection's spread remain poorly understood. It is particularly dangerous due to the speed at which it spreads and the speed at which infected plants show symptoms and die. Early identification, or better still, accurate prediction of areas at high risk of infection can slow the spread of the disease and limit crop waste. This study is based on data collected over a five-year period from an affected plantation in Colombia. The aim of the study is to analyse the collected data to better understand how the disease spreads, and then to model its behaviour. Based on insights from the initial analysis, a multi-agent system is proposed to model the pattern of infection. The model comprises two steps: first, Kernel Density Estimation is used to estimate the distribution from which newly infected plants are drawn, and this density estimate is then used to direct agents on a biased walk of the surrounding areas. Results show that the model can approximate the behaviour of the disease and can predict areas at high risk of future infection.
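The density-estimation-plus-biased-walk idea in the last abstract can be sketched in a few lines. This is a minimal illustration only, not the authors' implementation: the infected-plant coordinates are made up, and the hand-rolled Gaussian kernel with a fixed bandwidth stands in for a properly tuned Kernel Density Estimator.

```python
import numpy as np

def gaussian_kde(points, grid, bandwidth=1.0):
    # Unnormalised Gaussian kernel density at each grid cell,
    # summed over the observed infection sites.
    diffs = grid[:, None, :] - points[None, :, :]   # (cells, obs, 2)
    d2 = (diffs ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)

# Hypothetical infected-plant coordinates on a 10x10 plantation grid.
infected = np.array([[2.0, 3.0], [2.5, 3.5], [7.0, 8.0]])
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)   # 100 candidate cells

density = gaussian_kde(infected, grid)
risk = density / density.sum()   # normalised infection risk per cell

# Agents take a biased step: the next cell is sampled with probability
# proportional to the estimated density (a stand-in for the biased walk).
rng = np.random.default_rng(0)
next_cells = rng.choice(len(grid), size=5, p=risk)
```

Cells near existing infections receive high density and are visited more often by the agents, which is the mechanism the paper uses to flag areas at high risk of future infection.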

Click here to view a full listing of Mario Gongora's publications and outputs.

Research interests/expertise

  • Computational Intelligence: Hybrid Optimisation Systems, Evolutionary Computing.
  • Intelligent Data Analytics for large, complex, disparate and incomplete data sets.
  • Applications of Computational Intelligence and Edge systems to the analysis of Complex or unstructured Processes and Natural systems.

Areas of teaching

Artificial Intelligence

Robotics

Embedded programming

Qualifications

BSc, MSc, PhD 

Honours and awards

Award conferred: "International Leaders Making an Impact in Security", in the "International Leaders in Security" category, at the 5th edition of the COLADCA International Awards ceremony, 11 Dec 2021.

Award conferred: "International Project that Leaves a Footprint in Security", as Project Lead of "Artificial Intelligence for analysing Stop and Search and other police activities", in the "Protection of Human Rights and Individual Freedoms" category, at the 5th edition of the COLADCA International Awards ceremony, 11 Dec 2021.

Nominated as finalist for the ATC Global Excellence Awards ‘Industry Partnership of the Year’ category (Northrop Grumman Airport Systems, VenueSim and East Midlands Airport), March 2013. Finalists list:
•    US Airways, ACSS, FAA and EUROCONTROL partnership for NextGen/ADS-B avionics
•    AMP Corporation, DCA Low Cost Aircraft terminal (LCAT) KLIA Package A (Systems)
•    Guntermann & Drunck GmbH & EUROCONTROL - Mission-critical applications in Air Traffic Control
•    Honeywell - ITP
•    Middle East Airlines and Air Arabia - ATS and Technical Affairs
•    Northrop Grumman - Northrop Grumman Airport Systems, VenueSim and East Midlands Airport

Winner of the 8th British Computer Society prize for progress toward machine intelligence. SGAI 2009, Peterhouse College, Cambridge, UK. Project: Novel use of sound to guide an autonomous helicopter.

Membership of external committees

Invited as a permanent observer and contributor to the ACI World Smart Sec Management Group since June 2021.

Advisor for Artificial Intelligence to COLADCA, the international community of risk management and security industries and experts, with over 15 member countries and over 3,000 individual members and international companies. Appointed 1 Feb 2019; ongoing.

Member, by invitation, of the Expert Group supporting the Smart Security Programme of IATA/ACI (International Air Transport Association and Airports Council International) from 2012 to date, with continuous contributions to the blueprints and guidelines distributed to airlines and airports worldwide.

Projects

  • Artificial Intelligence for analysing Stop and Search and other police activities

  • Artificial intelligence to support the improvement of pupil attendance and engagement at schools

  • Improving the sustainability of oil palm crops through ML and computer vision for classification of fruit in terms of quality and ripeness
  • Sensor fusion using intelligent agents to enhance the effectiveness of artisan land mine detection.

Consultancy work

Current consultancy: applying Artificial Intelligence to support the East Midlands Chamber in their Local Skills Improvement Plan (LSIP) project.

Consultancy fields: Intelligent Data Mining, Automation and Robotics (including telemetry and instrumentation), Computational Intelligence applications (e.g. optimisation, system identification, modelling and simulation)

Past consultancy/commercial projects include GSH (telemetry and automation for intelligent buildings), Rolls Royce (automation and instrumentation), Venuesim (intelligent data mining, modelling and simulation), among others.

Current research students

Currently supervising 3 research students. 

Externally funded research grants information

Active projects:

KTP (joint inter-faculty) between the East Midlands Chamber of Commerce and De Montfort University, to create a new Business Research & Intelligence Unit for the East Midlands region. Granted Oct 2020; associate recruited to start April 2021 for 24 months.

Some past projects:

Royal Academy of Engineering – Newton Fund, International collaboration grant (IAPP) for an “Intelligent system to improve the sustainability of oil palm crops through the construction of forecasting maps integrating adaptive vegetation indices from multispectral aerial views”, Mar 2018 – Mar 2020.

Venuesim (spinout company created with seed/investment funding from Lachesis), commercialising research outcomes of intelligent data mining, modelling and simulation. Jan 2008 – current.

Intelligent GUI systems, KTP (TSB) funding to develop highly effective and intelligent GUI frameworks to present information from complex systems, in collaboration with Northrop Grumman. June 2012 – May 2014.

Internally funded research project information

He has received funding from various sources (PhD scholarships from EPSRC DTA, RIF and HEIF).

Published patents

US Patent number 6,339,720 “Early warning apparatus for acute Myocardial Infarction in the first six hours of pain”, US government.

US Patent number 5,545,971 “AC voltage regulator”, US Government.

Case studies

Spinout company Venuesim, resulting from Dr. Gongora’s research in intelligent data mining.

Article in Airport-technology.com, a site dedicated to news about airport projects, trends, products and services for the global airport industry: http://www.airport-technology.com/features/featureartificial-intelligence-predictive-modelling-airport/

Invited by IATA (International Air Transport Association) to be a member of their expert group for the Checkpoint of the Future, an international initiative to drive forward and contribute to aviation security science by bringing together governments, industry and academic experts from across the world.


Impact (REF) Case Studies

REF 2020 UoA 11: Support for Operations and Security for the Global Air Transport Industry (Modelling, Forecasting and Optimisation)

This impact case study presents how the work enhanced the security screening process of millions of passengers travelling daily through nearly 1,200 international airports. This was achieved through Dr Mario Gongora's contributions as an international adviser, disseminating the relevant research outcomes supporting security. This led to an active contribution to the development of the Smart Security programme guidelines disseminated to airports around the world by the International Air Transport Association (IATA) and Airports Council International (ACI), steering the industry to develop suitable solutions.
