An Intelligent IoT Sensing System for Rail Vehicle Running States Based on TinyML

Real-time identification of the running state is a key technology for smart rail vehicles. However, accurately sensing the complex running states of a rail vehicle in real time on an Internet-of-Things (IoT) edge device is challenging. Traditional systems usually upload large amounts of real-time data from the vehicle to the cloud for identification, which is laborious and inefficient. In this paper, an intelligent identification method for rail vehicle running states is proposed based on Tiny Machine Learning (TinyML) technology, and a small, low-energy IoT system is developed. The system uses a Micro-Electro-Mechanical System (MEMS) sensor to collect acceleration data for machine learning training. A neural network model for recognizing the running state of rail vehicles is built and trained by defining a machine learning running-state classification model. The trained recognition model is deployed to the IoT edge device on the vehicle side, and an offset time window method is used for real-time state sensing. In addition, the sensing results are uploaded to the IoT server for visualization. Experiments on a subway vehicle showed that the system could identify six complex running states in real time with over 99% accuracy using only one IoT microcontroller. The model trained on three acceleration axes converges faster than the model trained on one. Recognition accuracy remained above 98% across different installation positions on the rail vehicle, and above 95% under the zero-drift phenomenon of the MEMS acceleration sensor. The presented method and system can also be extended to edge-aware applications for equipment such as automobiles and ships.
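The offset time window idea can be illustrated with a short sketch: the classifier is re-run every OFFSET samples over the latest WINDOW samples, so consecutive windows overlap. This is a minimal illustration only; the window length, hop size, and the threshold rule standing in for the trained neural network are all assumptions, not values from the paper.

```python
from collections import deque

WINDOW = 128   # samples per inference window (illustrative)
OFFSET = 32    # hop between windows; WINDOW - OFFSET samples are reused

def classify(window):
    # Stand-in for the trained neural network: threshold on mean |az|.
    mean_az = sum(abs(az) for _ax, _ay, az in window) / len(window)
    return "moving" if mean_az > 0.5 else "idle"

def stream_states(samples):
    """Slide an offset time window over the (ax, ay, az) acceleration
    stream and yield one running-state label per window."""
    buf = deque(maxlen=WINDOW)
    new = 0
    for sample in samples:
        buf.append(sample)
        new += 1
        if len(buf) == WINDOW and new >= OFFSET:
            yield classify(list(buf))
            new = 0
```

Because each inference reuses most of the previous window, the device reacts every OFFSET samples instead of waiting for a full non-overlapping window, which is what makes the method suitable for real-time sensing on a microcontroller.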

View this article on IEEE Xplore

 

Code Generation Using Machine Learning: A Systematic Review

Recently, machine learning (ML) methods have been used to create powerful language models for a broad range of natural language processing tasks. An important subset of this field is generating programming-language code for automatic software development. This review provides a broad and detailed overview of studies on code generation using ML. We selected 37 publications indexed in the arXiv and IEEE Xplore databases that train ML models on programming language data to generate code. The three paradigms of code generation we identified in these studies are description-to-code, code-to-description, and code-to-code. The most popular applications in these paradigms were found to be code generation from natural language descriptions, documentation generation, and automatic program repair, respectively. The most frequently used ML models in these studies include recurrent neural networks, transformers, and convolutional neural networks. Other neural network architectures, as well as non-neural techniques, were also observed. In this review, we summarize the applications, models, datasets, results, limitations, and future work of these 37 publications. Additionally, we discuss topics general to the reviewed literature, including comparisons of model types and tokenizers, the volume and quality of the data used, and methods for evaluating synthesized code. Furthermore, we provide three suggestions for future work in code generation using ML.

View this article on IEEE Xplore

 

Combining Citation Network Information and Text Similarity for Research Article Recommender Systems

Researchers often need to gather a comprehensive set of papers relevant to a focused topic, but this is often difficult and time-consuming using existing search methods. For example, keyword searching suffers from difficulties with synonyms and multiple meanings. While some automated research-paper recommender systems exist, these typically depend on either a researcher’s entire library or just a single paper, resulting in either a quite broad or a quite narrow search. With these issues in mind, we built a new research-paper recommender system that utilizes both citation information and textual similarity of abstracts to provide a highly focused set of relevant results. The input to this system is a set of one or more related papers, and our system searches for papers that are closely related to the entire set. This framework helps researchers gather a set of papers that are closely related to a particular topic of interest, and allows control over which cross-section of the literature is located. We show the effectiveness of this recommender system by using it to recreate the references of review papers. We also show its utility as a general similarity metric between scientific articles by performing unsupervised clustering on sets of scientific articles. We release an implementation, ExCiteSearch (bitbucket.org/mmmontemore/excitesearch), to allow researchers to apply this framework to locate relevant scientific articles.
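As a rough sketch of the general idea of blending citation information with textual similarity, the toy scorer below combines bibliographic-coupling overlap (Jaccard similarity over reference lists) with a bag-of-words cosine over abstracts. Both similarity choices and the weight `w_cite` are illustrative assumptions, not the paper's actual formulation.

```python
import math
from collections import Counter

def text_cosine(a, b):
    """Cosine similarity between two abstracts via bag-of-words counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def citation_jaccard(refs_a, refs_b):
    """Overlap between two papers' reference lists (bibliographic coupling)."""
    a, b = set(refs_a), set(refs_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def combined_score(paper, candidate, w_cite=0.5):
    """Weighted blend of citation overlap and abstract similarity.
    w_cite is a free parameter chosen here for illustration."""
    return (w_cite * citation_jaccard(paper["refs"], candidate["refs"])
            + (1 - w_cite) * text_cosine(paper["abstract"], candidate["abstract"]))
```

Scoring candidates against an entire input set, as the system described above does, could then be sketched as averaging `combined_score` over all papers in the set.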

View this article on IEEE Xplore

 

Novel Multi-Center and Threshold Ternary Pattern Based Method for Disease Detection Using Voice

Smart health is one of the most popular and important components of smart cities. It is a relatively new context-aware healthcare paradigm influenced by several fields of expertise, such as medical informatics, communications and electronics, bioengineering, and ethics, to name a few. Smart health is used to improve healthcare by providing many services such as patient monitoring and early diagnosis of disease. The artificial neural network (ANN), support vector machine (SVM), and deep learning models, especially the convolutional neural network (CNN), are the most commonly used machine learning approaches, and they have proven to perform well in most cases. Voice disorders are spreading rapidly, yet they are often underestimated, even as medical diagnostic systems develop. Smart health systems can provide easy and fast support for voice pathology detection. An algorithm that discriminates between pathological and healthy voices with high accuracy is needed to obtain a smart and precise mobile health system. The main contribution of this paper is a multiclass pathological voice classification using a novel multileveled textural feature extraction with an iterative feature selector. Our approach is a simple and efficient voice-based algorithm that uses a multi-center and multi-threshold-based ternary pattern (MCMTTP). More compact multileveled features are then obtained by sample-based discretization techniques, and Neighborhood Component Analysis (NCA) is applied to select features iteratively. These features are finally integrated with MCMTTP to achieve accurate voice-based disease detection. Experimental results of six classifiers with three diagnostic diseases (frontal resection, cordectomy, and spastic dysphonia) show that the fused features are well suited to voice-based disease detection.
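To give a feel for ternary-pattern feature extraction in general, the sketch below computes a standard one-dimensional local ternary pattern and splits it into upper/lower binary codes. This is a generic textbook operator with assumed parameters, not the MCMTTP operator itself, which uses multiple centers and thresholds.

```python
def ternary_pattern(signal, center_idx, radius=4, threshold=0.1):
    """Code each neighbor of the center sample as +1/0/-1 depending on
    whether it exceeds, matches, or falls below the center by `threshold`."""
    c = signal[center_idx]
    codes = []
    for off in range(-radius, radius + 1):
        if off == 0:
            continue
        v = signal[center_idx + off]
        codes.append(1 if v > c + threshold else -1 if v < c - threshold else 0)
    return codes

def ternary_features(signal, radius=4, threshold=0.1):
    """Return a sequence of (upper, lower) binary codes, one per position:
    the standard trick of splitting a ternary pattern into two binary maps."""
    feats = []
    for i in range(radius, len(signal) - radius):
        codes = ternary_pattern(signal, i, radius, threshold)
        up = sum(1 << k for k, code in enumerate(codes) if code == 1)
        lo = sum(1 << k for k, code in enumerate(codes) if code == -1)
        feats.append((up, lo))
    return feats
```

In a full pipeline, histograms of such codes at several levels would form the multileveled feature vector that the feature selector then prunes.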

*Published in the IEEE Electronics Packaging Society Section within IEEE Access.

View this article on IEEE Xplore

 

Machine Learning Empowered Spectrum Sharing in Intelligent Unmanned Swarm Communication Systems: Challenges, Requirements and Solutions

The unmanned swarm system (USS) is seen as a promising technology and will play an extremely important role in both military and civilian fields, such as military strikes, disaster relief, and transportation. As the "nerve center" of the USS, the unmanned swarm communication system (USCS) provides the necessary information transmission medium to ensure system stability and mission implementation. However, challenges caused by multiple tasks, distributed collaboration, high dynamics, ultra-dense deployment, and jamming threats make it hard for the USCS to manage limited spectrum resources. To tackle these problems, this paper introduces machine learning (ML)-empowered intelligent spectrum management techniques. First, based on the challenges of spectrum resource management in the USCS, the requirements of spectrum sharing are analyzed from the perspectives of spectrum collaboration and spectrum confrontation. We find that suitable multi-agent collaborative decision making is promising for realizing effective spectrum sharing from both perspectives. Therefore, a multi-agent learning framework is proposed that contains mobile-computing-assisted and distributed structures. Based on the framework, we provide case studies. Finally, future research directions are discussed.
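One common concrete instance of multi-agent decision making for spectrum sharing is independent reinforcement learners that each pick a channel per slot and are rewarded for collision-free use. The sketch below is a toy of that general idea under assumed parameters (Q-learning update, epsilon-greedy exploration, collision-only reward); it is not the framework proposed in the article.

```python
import random

class ChannelAgent:
    """Independent Q-learner choosing one of n_channels each slot."""
    def __init__(self, n_channels, eps=0.1, alpha=0.2):
        self.q = [0.0] * n_channels  # estimated value of each channel
        self.eps, self.alpha = eps, alpha

    def act(self):
        # Epsilon-greedy: explore occasionally, otherwise exploit best channel.
        if random.random() < self.eps:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)

    def learn(self, channel, reward):
        # Stateless Q-learning update toward the observed reward.
        self.q[channel] += self.alpha * (reward - self.q[channel])

def run(n_agents=3, n_channels=3, slots=2000, seed=0):
    """Simulate slots; reward 1 when an agent has a channel to itself."""
    random.seed(seed)
    agents = [ChannelAgent(n_channels) for _ in range(n_agents)]
    for _ in range(slots):
        picks = [a.act() for a in agents]
        for a, ch in zip(agents, picks):
            a.learn(ch, 1.0 if picks.count(ch) == 1 else 0.0)
    return [max(range(n_channels), key=a.q.__getitem__) for a in agents]
```

With enough slots, agents tend to settle on distinct channels, which is the cooperative spectrum-sharing behavior the framework above aims to achieve at scale.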

View this article on IEEE Xplore

Harnessing Artificial Intelligence Capabilities to Improve Cybersecurity

 

Cybersecurity is a fast-evolving discipline that has been constantly in the news over the last decade, as the number of threats rises and cybercriminals endeavor to stay a step ahead of law enforcement. Over the years, although the original motives for carrying out cyberattacks have largely remained unchanged, cybercriminals have become increasingly sophisticated in their techniques. Traditional cybersecurity solutions are becoming inadequate for detecting and mitigating emerging cyberattacks. Advances in cryptographic and Artificial Intelligence (AI) techniques (in particular, machine learning and deep learning) show promise in enabling cybersecurity experts to counter the ever-evolving threat posed by adversaries. Here, we explore AI’s potential in improving cybersecurity solutions by identifying both its strengths and weaknesses. We also discuss future research opportunities associated with the development of AI techniques in the cybersecurity field across a range of application domains.

View this article on IEEE Xplore

A Study on the Elimination of Thermal Reflections

 

Recently, thermal cameras have been used in various surveillance and monitoring systems. In particular, in camera-based surveillance systems, algorithms are being developed for detecting and recognizing objects from images acquired in dark environments. However, it is difficult to detect and recognize an object due to the thermal reflections generated in the image obtained from a thermal camera. For example, thermal reflection often occurs on a structure or the floor near an object, similar to shadows or mirror reflections. In this case, the object and the areas of thermal reflection overlap or are connected to each other and are difficult to separate. Thermal reflections also occur on nearby walls and can be detected as artifacts even when they are not connected to an object. In addition, the size and pixel value of the thermal reflection area vary greatly depending on the material of the area and the environmental temperature. In this case, the patterns and pixel values of the thermal reflection and the object are similar to each other and difficult to differentiate. These problems reduce the accuracy of object detection and recognition methods. Furthermore, no studies have been conducted on the elimination of thermal reflections of objects under different environmental conditions. Therefore, to address these challenges, we propose a method of detecting reflections in thermal images based on deep learning and eliminating them via post-processing. Experiments using a self-collected database (Dongguk thermal image database (DTh-DB), Dongguk items and vehicles database (DI&V-DB)) and an open database showed that the performance of the proposed method is superior to that of other state-of-the-art approaches.

View this article on IEEE Xplore

Machine Learning Designs, Implementations and Techniques

Submission Deadline: 15 February 2020

IEEE Access invites manuscript submissions in the area of Machine Learning Designs, Implementations and Techniques.

Most modern machine learning research is devoted to improving the accuracy of prediction. However, less attention is paid to the deployment of machine and deep learning systems, supervised/unsupervised techniques for mining healthcare data, and time series similarity and irregular temporal data analysis. Most deployments are in the cloud, with abundant and scalable resources and a free choice of computation platform. However, with the advent of intelligent physical devices, such as intelligent robots or self-driving cars, resources are more limited and latency may be strictly bounded.

To address these issues, this Special Section in IEEE Access focuses on machine and deep learning designs, implementations, and techniques, including both system-level topics and other research questions related to the general use and framework of machine learning algorithms.

The topics of interest include, but are not limited to:

  • Real-time implementation of machine and deep learning
  • System-level implementation, considering the full pipeline from raw data to the decision layer
  • Novel and innovative applications with a strong emphasis on design and implementation
  • Novel approaches for temporal/spatial/spatio-temporal association analysis
  • Pattern discovery from time-stamped temporal and interval databases
  • High-performance data mining in the cloud
  • Novel approaches for handling uncertain and imbalanced data
  • Supervised/unsupervised techniques for mining healthcare data
  • Deep learning for translational bioinformatics
  • Periodic/sequential pattern mining
  • Evolutionary algorithms
  • Privacy-preserving data mining
  • Time series similarity and irregular temporal data analysis
  • Mining text, Web, and social network data
  • Imputation techniques for temporal data
  • Causality and event processing
  • Applications of data mining in anomaly and intrusion detection
  • Applications to medical informatics

 

We also highly recommend the submission of multimedia with each article as it significantly increases the visibility, downloads, and citations of articles.

 

Associate Editor:  Shadi A. Aljawarneh, Jordan University of Science and Technology, Jordan

Guest Editors:

    1. Oguz Bayat, Altinbas University, Turkey
    2. Juan A. Lara, Madrid Open University, Udima, Spain
    3. Robert P. Schumaker, University of Texas at Tyler, USA

 

Relevant IEEE Access Special Sections:

  1. Visual Analysis for CPS Data
  2. Emerging Approaches to Cyber Security
  3. Data-Enabled Intelligence for Digital Health


IEEE Access Editor-in-Chief:
  Prof. Derek Abbott, University of Adelaide

Article submission: Contact Associate Editor and submit manuscript to:
http://ieee.atyponrex.com/journal/ieee-access

For inquiries regarding this Special Section, please contact:  saaljawarneh@just.edu.jo, shadi.jawarneh@yahoo.com.

Most Popular Article of 2017: Machine Learning With Big Data: Challenges and Approaches

The Big Data revolution promises to transform how we live, work, and think by enabling process optimization, empowering insight discovery, and improving decision making. The realization of this grand potential relies on the ability to extract value from such massive data through data analytics; machine learning is at its core because of its ability to learn from data and provide data-driven insights, decisions, and predictions. However, traditional machine learning approaches were developed in a different era and are thus based upon multiple assumptions, such as the data set fitting entirely into memory, which unfortunately no longer holds true in this new context. These broken assumptions, together with the Big Data characteristics, create obstacles for traditional techniques. Consequently, this paper compiles, summarizes, and organizes machine learning challenges with Big Data. In contrast to other research that discusses challenges, this work highlights the cause-effect relationship by organizing challenges according to the Big Data Vs, or dimensions, that instigated the issue: volume, velocity, variety, or veracity. Moreover, emerging machine learning approaches and techniques are discussed in terms of how they are capable of handling the various challenges, with the ultimate objective of helping practitioners select appropriate solutions for their use cases. Finally, a matrix relating the challenges and approaches is presented. Through this process, this paper provides a perspective on the domain, identifies research gaps and opportunities, and provides a strong foundation and encouragement for further research in the field of machine learning with Big Data.

View this article on IEEE Xplore

Theory, Algorithms, and Applications of Sparse Recovery

Submission Deadline: 31 December 2018

IEEE Access invites manuscript submissions in the area of Theory, Algorithms, and Applications of Sparse Recovery.

Sparse recovery is a fundamental problem in the fields of compressed sensing, signal de-noising, statistical model selection, and more. The key idea of sparse recovery is that a suitably sparse high-dimensional signal can be inferred from very few linear observations. Recent years have witnessed great development of sparse recovery theory and fruitful applications in the general field of information processing, including communications channel estimation, dictionary learning, data compression, optical imaging, and machine learning. Extensions to the recovery of low-rank matrices and higher-order tensors from incomplete linear information have also been developed, and remarkable results have been achieved.
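As a concrete instance of the problem described above, the sketch below implements Orthogonal Matching Pursuit (OMP), a classic textbook greedy algorithm for recovering a k-sparse signal from linear measurements; it is included purely for illustration and is not tied to any particular submission topic.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x
    satisfying y = A @ x, where A is an m-by-n sensing matrix."""
    m, n = A.shape
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x
```

Each iteration adds the atom that best explains what is left of the measurements, then re-fits all selected atoms jointly, which is exactly the "very few linear observations" principle at work when m is much smaller than n.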

This Special Section is devoted to both current state-of-the-art advances and new theory, algorithms, and applications of sparse recovery, with the goals of highlighting new achievements and developments and featuring outstanding open issues and promising new directions and extensions. Both survey papers and original contributions that enhance the existing body of sparse recovery are highly encouraged. The topics of interest include, but are not limited to:

  • Fundamental limit of sparse recovery algorithms
  • Sparse recovery with phase-less sampling matrices
  • Trade-off between sparse recovery effectiveness and efficiency
  • Greedy methods for phase-less sparse recovery
  • Design and optimization for deterministic sampling matrices
  • Theory/algorithm/applications of sparse signal recovery
  • Theory/algorithm/applications of low-rank matrix recovery
  • Theory/algorithm/applications of tensor recovery
  • Efficient hardware implementation of sparse recovery algorithms
  • Sparse recovery for machine learning problems

We also highly recommend the submission of multimedia with each article as it significantly increases the visibility, downloads, and citations of articles.

Associate Editor: Jinming Wen, University of Toronto, Canada

Guest Editors:

  1. Jian Wang, Fudan University, China
  2. Bo Li, Nuance Communication, Canada
  3. Xin Yuan, Nokia Bell Labs, USA
  4. Kezhi Li, Imperial College London, UK

 

Relevant IEEE Access Special Sections:

  1. Advances in Channel Coding for 5G and Beyond
  2. Trends, Perspectives and Prospects of Machine Learning Applied to Biomedical Systems in Internet of Medical Things
  3. Smart Caching, Communications, Computing and Cybersecurity for Information-Centric

 

IEEE Access Editor-in-Chief: Michael Pecht, Professor and Director, CALCE, University of Maryland

Paper submission: Contact Associate Editor and submit manuscript to:
http://ieee.atyponrex.com/journal/ieee-access

For inquiries regarding this Special Section, please contact: jinming.wen@mail.mcgill.ca