News: I am the general chair for the 10th IEEE/ACM International Conference on Utility and Cloud Computing (UCC 2017) being held in Austin, Texas, USA from December 5-8, 2017.
I am the general chair for the IEEE International Conference on Fog and Edge Computing (ICFEC 2017) being held in Madrid, Spain from May 14-17, 2017. This is co-located with the 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing.
I was the Programme Chair for the 3rd International Symposium on Big Data Science, Engineering and Applications (BDSEA 2016), held in Shanghai, China from December 6-9, 2016.
I am a Professor of Distributed Systems in the College of Engineering and Technology at the University of Derby, UK. My research interests include autonomous and data-intensive distributed systems and high-performance analytics platforms for the continuous processing of streaming data.
I have been part of EC-funded projects in distributed systems and large-scale analytics, including Health-e-Child (IP, FP6), neuGrid (STREP, FP7) and TRANSFORM (IP, FP7), in which I investigated resource management and optimization issues in large-scale distributed systems and provided platforms for high-performance data analytics.
For the last fifteen years I have been investigating large-scale distributed systems and analytics platforms for LHC data in collaboration with CERN, Geneva, Switzerland. Before starting my academic career, I worked for various multinational software companies for around ten years.
I have secured grants from industrial partners, Innovate UK, RCUK and other funding agencies to investigate high-performance video analytics systems that produce intelligence and evidence for medical, security, object-tracking and forensic science applications.
I work closely with healthcare providers, hospitals and pharmaceutical companies on high-performance analytics systems for distributed clinical intelligence and integration, iterative genome analytics and precision medicine.
I also collaborate with rail companies to investigate how rail infrastructures and services can benefit from the Internet of Things (IoT) and real-time analytics, intelligently analyzing streams of data arriving from rail networks to increase the accuracy, reliability and capacity of rail infrastructures and services. In addition, I am investigating ways to model rail networks as a distributed graph system and to provide adaptive scheduling and resource management on top of it.
Thanks to a grant from Innovate UK, I am working with a leading VR provider to enable real-time visualization of 3D engineering models and distributed algorithms in a Virtual Reality environment. This work allows the distributed parties involved in large-scale collaborative engineering projects to identify potential conflicts or required changes at the design stage, rather than during manufacturing, when they are extremely costly to put right.
Thanks to another grant from Innovate UK, I work closely with logistics companies to investigate smart logistics models that use innovative IoT technologies and machine learning approaches for intelligent stock tracking, warehousing and distributed supply chain optimization.
Please get in touch if you are interested in a research and development project in distributed systems, high-performance data analytics or any other area of my research interests. Broadly speaking, my research and development interests include the following areas:
- Autonomous Distributed Systems
- High-Performance Analytics
- Graph-based Distributed Analytics Systems
- Intelligent Cyber-Physical Systems and the Internet of Things
- Edge- and Fog-Enhanced Systems
- Blockchain-enabled Distributed Systems
VR-enabled Adaptive Visualizations for Engineering Systems
This challenging research project (2018-2020, Innovate UK) aims to enable real-time big data visualization in a VR environment.
Smart Logistics & Intelligence Management System (SLIMS)
This project (2018-2020, Innovate UK) aims to investigate innovative IoT (Internet of Things) technologies combined with machine learning approaches for intelligent logistics and supply chain management systems.
Train Network Graph Modelling for Scheduling and Management (2018, Resonate Rail)
This project aims to model the UK rail network as a graph system and to provide adaptive scheduling and resource management on top of it.
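As an illustration of the graph-modelling idea, the sketch below represents stations as nodes and track segments as weighted edges, and uses Dijkstra's algorithm as a simple routing primitive. The station names, weights and cost semantics are hypothetical assumptions for illustration, not the project's actual model.

```python
import heapq

# Hypothetical toy network: stations as nodes, track segments as weighted
# edges (weights could encode travel time or current congestion).
rail_graph = {
    "Derby":      {"Nottingham": 15, "Birmingham": 40},
    "Nottingham": {"Derby": 15, "Sheffield": 35},
    "Birmingham": {"Derby": 40, "Sheffield": 70},
    "Sheffield":  {"Nottingham": 35, "Birmingham": 70},
}

def fastest_route(graph, origin, dest):
    """Dijkstra's algorithm: returns (total_minutes, station_path)."""
    queue = [(0, origin, [origin])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dest:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []

print(fastest_route(rail_graph, "Derby", "Sheffield"))
# → (50, ['Derby', 'Nottingham', 'Sheffield'])
```

An adaptive scheduler could re-run such a query as edge weights change with live congestion data, which is one way the "adaptive" aspect could build on the graph model.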
Blockchain System for IoT Data Integration and Analytics (2017-2019)
This project (funded by Roche Molecular, USA) aims to investigate a blockchain-based distributed ledger infrastructure for the trusted management of data coming from IoT devices in healthcare. The project will develop a Hyperledger-based infrastructure providing an immutable and verifiable record of transactions between patients, investigators and IoT devices.
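To illustrate the core property such a ledger provides, here is a minimal hash-chained record sketch in plain Python. This is purely illustrative of immutability via hash linking; it does not use the Hyperledger APIs, and the device names and payload fields are invented for the example.

```python
import hashlib
import json

def make_block(prev_hash, payload):
    """Create a record linked to its predecessor by a SHA-256 hash."""
    body = {"prev": prev_hash, "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain):
    """A ledger is valid only if every block's hash and link check out."""
    for i, block in enumerate(chain):
        body = {"prev": block["prev"], "payload": block["payload"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False                      # payload was tampered with
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                      # link to predecessor is broken
    return True

# Hypothetical IoT readings recorded as chained blocks.
genesis = make_block("0" * 64, {"device": "glucose-monitor-01", "reading": 5.4})
second = make_block(genesis["hash"], {"device": "glucose-monitor-01", "reading": 6.1})
ledger = [genesis, second]
print(verify_chain(ledger))            # → True
ledger[0]["payload"]["reading"] = 9.9  # tampering with history...
print(verify_chain(ledger))            # → False: verification now fails
```

A permissioned ledger such as Hyperledger adds consensus, identity and access control on top of this basic tamper-evidence property.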
Clinical and Genomics Data Analytics for Personalized HealthCare (2015-2019)
This project (funded by Hoffmann-La Roche, Switzerland) aims to integrate clinical and genomics data coming from clinical trials, real-world data and sequencing machines to provide healthcare analytics for personalized treatments. An in-memory cloud computing platform will analyze the integrated data for healthcare analytics, cross-study analytics and the formulation of statistical evidence.
Video Stream Analytics System for Real-time Object Classification (2012-2017)
The Stream Cloud project (funded by the Technology Strategy Board) aims to develop an end-to-end solution for batch and real-time analysis of video streams using cloud computing. A software library will be developed to exploit cloud computing for extracting and deriving important features from recorded video streams.
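As a toy illustration of the kind of low-level feature such a library might extract per frame, the sketch below computes a simple motion score by pixel differencing between consecutive greyscale frames. The function name, threshold and synthetic frames are assumptions for illustration, not part of the project's actual library.

```python
def motion_score(prev_frame, frame, threshold=10):
    """Fraction of pixels whose intensity changed by more than `threshold`.

    Frames are flat lists of greyscale intensities (0-255); a real pipeline
    would operate on decoded video frames instead.
    """
    changed = sum(1 for p, q in zip(prev_frame, frame) if abs(p - q) > threshold)
    return changed / len(frame)

# Two synthetic 4-pixel "frames": one pixel changes significantly.
frame_a = [0, 0, 0, 0]
frame_b = [0, 0, 0, 200]
print(motion_score(frame_a, frame_b))  # → 0.25
```

Per-frame features like this one can be computed in parallel across cloud workers, which is what makes batch analysis of recorded streams a good fit for cloud computing.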
Transform Project (2010-2015)
The underlying concept of Transform (funded under the European Commission Framework 7 Programme) is to develop a rapid-learning healthcare system, driven by an advanced distributed computational infrastructure, that can improve both patient safety and the conduct and volume of clinical research in Europe. Its main focus is providing interoperability between different clinical systems across national boundaries and integrating distributed clinical and research systems.
NeuGrid Project (2008-2011)
NeuGrid is an EC-funded (under the European Commission Framework 7 Programme) Grid-enabled data mining and knowledge discovery project enabling European neuroscientists to analyse Alzheimer's disease. NeuGrid aims to provide a new, user-friendly Grid-based research e-Infrastructure enabling the European neuroscience community to carry out the research required for the pressing study of degenerative brain diseases.
Health-e-Child Grid Project (2006-2010)
Health-e-Child is an EC-funded (under the European Commission Framework 6 Programme) project whose aim is to develop and deploy a Grid-enabled decision support system for European paediatricians. The project aims to build a Grid-enabled European network of leading clinical centres that will share and annotate biomedical data, validate systems clinically, and diffuse clinical excellence across Europe by setting up new technologies, clinical workflows and standards.
DIANA Grid Scheduler (2004-2007)
The Data Intensive and Network Aware (DIANA) Grid Scheduler project investigated data-driven and network-aware meta-scheduling approaches. The scheduler uses a cost-based mechanism to map jobs to resources when making scheduling decisions across multiple sites; DIANA is thus a performance-aware, economy-guided meta-scheduler. The DIANA meta-schedulers create a peer-to-peer hierarchy of schedulers to accomplish resource management, since existing scheduling hierarchies are insufficient for Grid systems: they cannot adapt to evolving loads or to the dynamic and volatile nature of the resources.
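The cost-based mapping idea can be sketched as follows: for each candidate site, combine a compute-cost proxy with the network cost of staging the job's input data, then pick the cheapest site. The site names, fields and cost formula below are illustrative assumptions in the spirit of a data- and network-aware scheduler, not the actual DIANA cost model.

```python
def site_cost(site, job):
    """Combined cost proxy: data-staging cost plus a compute/queueing proxy.

    Illustrative model only -- the real DIANA cost model is more elaborate.
    """
    data_at_site = site["data_gb"].get(job["dataset"], 0.0)
    transfer_gb = max(job["input_gb"] - data_at_site, 0.0)
    network_cost = transfer_gb / site["bandwidth_gbps"]  # staging-time proxy
    compute_cost = job["cpu_hours"] / site["free_cpus"]  # queueing proxy
    return network_cost + compute_cost

def schedule(job, sites):
    """Map the job to the site with the lowest combined cost."""
    return min(sites, key=lambda s: site_cost(s, job))["name"]

# Hypothetical sites: one holds the job's dataset, the other has more CPUs.
sites = [
    {"name": "CERN", "free_cpus": 50, "bandwidth_gbps": 1.0,
     "data_gb": {"cms-run7": 120.0}},
    {"name": "FNAL", "free_cpus": 200, "bandwidth_gbps": 0.5,
     "data_gb": {}},
]
job = {"dataset": "cms-run7", "input_gb": 120.0, "cpu_hours": 10.0}
print(schedule(job, sites))  # → "CERN": data locality outweighs spare CPUs
```

The example shows why a data-intensive scheduler must weigh data location against raw compute capacity: moving 120 GB over a slow link dwarfs any queueing advantage at the larger site.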
PhantomOS: A Grid Operating System
This project attempts to build a Grid Operating System. PhantomOS aims to develop a pervasive, general-purpose Grid computing platform for both everyday users and existing Grid users by converging user-centric computing with eScience-centric Grid computing, via a Grid Operating System built on a virtualized infrastructure.
Clarens and JClarens Grid-Enabled Web Services Frameworks
The Clarens Grid-Enabled Web Services Framework is an open-source, secure, high-performance "portal" for ubiquitous access to the data and computational resources provided by computing Grids. Clarens was developed as part of a wide-area-network Grid-enabled Analysis Environment for collaborative analysis of data generated by the Compact Muon Solenoid (CMS) detector at the European Organization for Nuclear Research (CERN). JClarens was developed as a Java-based supplement to the Python-based Clarens Web Services Framework created at the California Institute of Technology (Caltech).
MAGGIE (Measurement and Analysis of the Global Grid and Internet End-to-End Performance)
MAGGIE is a collaborative research project between the Stanford Linear Accelerator Center (SLAC), USA, and the NUST Institute of Information Technology (NIIT). Originally launched in 1995 for the High Energy Physics community, it has since focused on measuring the Digital Divide from an Internet performance viewpoint. The project now involves measurements to over 600 sites in over 125 countries and is still actively developed.
Grid Enabled Analysis Environment (GAE)
The Grid Enabled Analysis Environment (GAE) aims to provide an integrated, service-oriented environment for physicists to support distributed analysis of physics data from the CMS experiment at the LHC. A Grid analysis environment combines software architecture, network and storage infrastructure, and collaboration and analysis tools to enable many people to make effective use of data and computational Grids.
Mobile Computing for e-Science: (2002-2005)
This research project aimed to make the power of Grid computing available to resource-limited devices such as Pocket PC and Palm handhelds. The project implemented a set of physics analysis applications (ROOT and JAS) for handhelds and optimized them for maximum performance on these devices, for example by tuning the JVM to fit their resource constraints.
Access Card System for APS (American Payment System) (2002-2003)
An Access Card System was implemented to provide a flexible processing and card management platform, combining agility with sound technical design to support the changing demands of the stored-value industry. The system was implemented using a Service Oriented Architecture (SOA) and is fully distributed in functionality.
Real Time Exposure and Loss Monitoring System (1999-2001)
The system was implemented to calculate, control, communicate and monitor the LSE exposure and loss mechanism, and to implement administrative and financial policies in real time. Using TIBCO Rendezvous and C++ libraries, a truly distributed, low-overhead, low-traffic, near-real-time event-driven messaging system was implemented to inform decision makers about exposure, loss and other indicators during trading hours.