International Journal of Sensors Wireless Communications and Control - Volume 10, Issue 4, 2020
A Collaborative Edge-Cloud Internet of Things Based Framework for Securing the Indian Healthcare System
Authors: Syed R. Zahra and Mohammad Ahsan Chishti
Today, 73 years after independence and twenty years after the turn of the century, “Health for All”, which should have been accomplished by now, remains a far-fetched and elusive dream. Instead, the people of India are bequeathed a triple burden of disease: sustaining the weight of communicable infections, an expanding burden of non-communicable illnesses, and a healthcare system not efficient enough to handle them both. At present, India is home to one-third of the world's poor. After a high population growth rate, unregulated and inefficient healthcare is the major cause of this deprivation and poverty. India's global position on health indicators such as the Infant Mortality Rate (IMR), Crude Birth Rate (CBR), Crude Death Rate (CDR) and life expectancy is shocking, shameful and on a downward trend. The objective of this paper was to identify the major issues in the Indian healthcare system and offer Internet of Things (IoT) based solutions. The underdevelopment of health and health services in India is brought about by the same determinants that cause underdevelopment in the first place. This paper discusses these causes and the major issues responsible for the dilapidated state of Indian healthcare, and offers IoT based solutions for dealing with each of them. Moreover, a collaborative edge/cloud IoT based framework has been proposed for remedying the Indian healthcare system. The presented solutions could be used to make healthcare, and thereby health, a reality for all.
Exploring the Applications of Machine Learning in Healthcare
Authors: Tausifa J. Saleem and Mohammad Ahsan Chishti
The rapid progress in domains like machine learning and big data has created plenty of opportunities in data-driven applications, particularly healthcare. Incorporating machine intelligence in healthcare can result in breakthroughs like precise disease diagnosis, novel methods of treatment, remote healthcare monitoring, drug discovery, and curtailment of healthcare costs. The implementation of machine intelligence algorithms on massive healthcare datasets is computationally expensive. However, consequential progress in computational power during recent years has facilitated the deployment of machine intelligence algorithms in healthcare applications. Motivated to explore these applications, this paper presents a review of research works dedicated to the implementation of machine learning on healthcare datasets. The studies reviewed have been categorized into the following groups: (a) disease diagnosis and detection, (b) disease risk prediction, (c) health monitoring, (d) healthcare-related discoveries, and (e) epidemic outbreak prediction. The objective of the research is to help researchers in this field get a comprehensive overview of machine learning applications in healthcare. Apart from revealing the potential of machine learning in healthcare, this paper will serve as a motivation to foster advanced research in the domain of machine intelligence-driven healthcare.
An Enhanced Multiple Linear Regression Model for Seasonal Rainfall Prediction
Authors: Pundra C. Shaker Reddy and Alladi Sureshbabu
Aims & Background: India is a country with extreme climate circumstances, comprising different seasons and topographical conditions such as high temperatures, cold weather, drought, and heavy seasonal rainfall. These wide variations in climate make exact weather prediction a challenging task. The majority of the country's people depend on agriculture, and farmers require climate information to decide on planting. Weather prediction thus becomes a reference point in the farming sector for deciding the start of the planting season as well as the quality and quantity of the harvest. One of the variables influencing agriculture is rainfall. Objectives & Methods: The main goal of this work is early and accurate rainfall forecasting, which is helpful to people who live in regions prone to natural calamities such as floods and helps agriculturists make crop and water management decisions using big data analytics, yielding higher profit and production for farmers. We propose an advanced automated framework called the Enhanced Multiple Linear Regression Model (EMLRM), built on the MapReduce algorithm and the Hadoop file system. We used climate data from the IMD (Indian Meteorological Department, Hyderabad) for the period 1901 to 2002. Results: Our experimental outcomes demonstrate that the proposed model forecasts rainfall with better accuracy than other existing models. Conclusion: The results of the analysis will help farmers adopt an effective modeling approach by anticipating long-term seasonal rainfall.
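As a rough illustration of the regression core of such a framework, the sketch below fits a multiple linear regression on monthly climate records with scikit-learn. The file name and predictor columns are assumptions, and the Hadoop/MapReduce execution of the paper's EMLRM is not reproduced.

```python
# Minimal single-machine sketch of the multiple-linear-regression core of a
# seasonal rainfall forecaster; the paper's EMLRM runs this idea on Hadoop/
# MapReduce, which is not shown here. File and column names are illustrative.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("imd_monthly_rainfall.csv")              # hypothetical IMD-style records
features = ["year", "month", "temperature", "humidity"]    # assumed predictor columns
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["rainfall_mm"], test_size=0.2, shuffle=False)

model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)
print("MAE (mm):", mean_absolute_error(y_test, pred))
```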
Integrating Employee Value Model with Churn Prediction
Background: In recent years, human resource management has played a crucial role in the operation of every company or organization. Loyal and churning employees both influence the operation of the organization, and the impact of churning employees differs according to their role in the organization. Objective: Thus, we define two Employee Value Models (EVMs) for organizations or companies based on employee features that are common to most companies. Methods: Meanwhile, with the development of artificial intelligence, machine learning makes it possible to build data-based predictive models with high accuracy. Thus, this paper proposes integrating churn prediction, the EVM, and machine learning techniques such as support vector machines, logistic regression, and random forests. The strong points of each model are exploited and the weak points reduced to help companies or organizations avoid losing high-value employees in the future. The process of integrating churn prediction, employee value, and machine learning is described in detail in six steps. The advantage of the integrated model is that it gives the company more useful results than a churn prediction model alone, while its drawbacks are the complexity of the model and algorithms and the computing speed. Results: A case study of an organization with 1470 employee positions is carried out to demonstrate the whole process of integrating churn prediction, the EVM, and machine learning. The accuracy of the integrated model is high, from 82% to 85%. Moreover, some results on churn and employee value are analyzed. Conclusion: This paper proposes upgraded models for predicting which employees may leave an organization, and shows that the integration of the two models, the employee value model and churn prediction, is feasible.
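A minimal sketch of the integration idea follows: a churn classifier's predicted probability is combined with a toy employee-value score to rank "value at risk". The column names mirror the public 1470-row IBM HR attrition dataset and the value formula is an assumption, not the paper's EVM.

```python
# Sketch: churn probability from a classifier combined with an assumed employee
# value score, ranking employees by value at risk. Not the paper's exact EVM.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("hr_data.csv")                      # e.g. the public 1470-row HR dataset
X = pd.get_dummies(df.drop(columns=["Attrition"]))
y = (df["Attrition"] == "Yes").astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
churn_prob = clf.predict_proba(X_te)[:, 1]

# Assumed toy value model: salary weighted by performance rating.
value = df.loc[X_te.index, "MonthlyIncome"] * df.loc[X_te.index, "PerformanceRating"]
at_risk = (value * churn_prob).sort_values(ascending=False)
print(at_risk.head(10))                              # highest-value employees likely to leave
```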
Automated Diagnostic Hybrid Lesion Detection System for Diabetic Retinopathy Abnormalities
Authors: Charu Bhardwaj, Shruti Jain and Meenakshi Sood
Background: Early diagnosis, monitoring of disease progression, and timely treatment of Diabetic Retinopathy (DR) abnormalities can efficiently prevent visual loss. A prediction system for the early intervention and prevention of eye diseases is important. The low contrast of raw fundus images is also a hindrance to effective manual lesion detection. Methods: In this research paper, an automated lesion detection diagnostic scheme has been proposed for early detection of retinal abnormalities in the form of red and yellow pathological lesions. The algorithm of the proposed Hybrid Lesion Detection (HLD) scheme includes retinal image pre-processing, blood vessel extraction, optic disc localization and detection stages for detecting the presence of diabetic retinopathy lesions. Automated diagnostic systems assist ophthalmologists, for whom manual lesion detection techniques are tedious and time-consuming. Detailed statistical analysis is performed on the extracted shape, intensity and GLCM features, and the optimal features are selected to classify DR abnormalities. Exhaustive statistical investigation of the proposed approach using visual and empirical analysis resulted in 31 significant features. Results: The results show that the HLD approach achieved good classification results in terms of three statistical indices: accuracy, 98.9%; sensitivity, 97.8%; and specificity, 100%, with significantly less complexity. Conclusion: The proposed technique with optimal features demonstrates improvement in accuracy compared to state-of-the-art techniques using the same database.
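To make the GLCM feature step concrete, the sketch below extracts a few standard texture and intensity features from a candidate fundus patch with scikit-image. It is only a sketch of that one stage under assumed inputs; the HLD pipeline's pre-processing, vessel extraction, optic-disc localization and classifier are not reproduced.

```python
# Sketch of GLCM texture-feature extraction for a candidate lesion patch using
# scikit-image; the remaining HLD stages are omitted and the file is hypothetical.
import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops

patch = img_as_ubyte(color.rgb2gray(io.imread("fundus_patch.png")))
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
features["mean_intensity"] = patch.mean()             # simple intensity feature
print(features)                                        # candidate inputs to the classifier
```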
Low Complexity Adaptive Nonlinear Models for the Diagnosis of Periodontal Disease
Authors: Anurag Satpathy, Ganapati Panda, Rajasekhar Gogula and Renu Sharma
Background / Objective: The paper addresses the specific clinical problem of diagnosing periodontal disease, with the objective of developing and evaluating the performance of low complexity Adaptive Nonlinear Models (ANMs) using nonlinear expansion schemes, and describes the basic structure and development of ANMs in detail. Methods: Diagnostic data pertaining to periodontal findings of teeth obtained from patients have been used as inputs to train and validate the proposed models. Results: Results obtained from simulation experiments carried out using various nonlinear expansion schemes have been compared in terms of performance measures such as Mean Absolute Percentage Error (MAPE), matching efficiency, sensitivity, specificity, false positive rate, false negative rate and diagnostic accuracy. Conclusion: The ANM with the seven-term trigonometric expansion scheme demonstrates the best performance on all measures, yielding a diagnostic accuracy of 99.11% compared to 94.64% for the adaptive linear model.
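As an illustration of the general ANM idea, the sketch below expands each input with trigonometric basis terms and adapts a linear combiner with LMS. The expansion order, learning rate and training loop are assumptions; the paper's exact expansion schemes and training setup may differ.

```python
# Sketch of an adaptive nonlinear model: trigonometric functional expansion of the
# inputs followed by an LMS-trained linear combiner. Illustrative only.
import numpy as np

def trig_expand(x, order=3):
    """Expand each feature x_i into [x_i, sin(k*pi*x_i), cos(k*pi*x_i)], k = 1..order."""
    terms = [x]
    for k in range(1, order + 1):
        terms += [np.sin(k * np.pi * x), np.cos(k * np.pi * x)]
    return np.concatenate(terms)

def lms_train(X, y, order=3, mu=0.01, epochs=50):
    w = np.zeros(X.shape[1] * (2 * order + 1) + 1)      # +1 for bias term
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            phi = np.append(trig_expand(xi, order), 1.0)
            e = yi - w @ phi                             # LMS error on expanded input
            w += mu * e * phi
    return w
```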
TLBO-FLN: Teaching-Learning Based Optimization of Functional Link Neural Networks for Stock Closing Price Prediction
Authors: Sarat C. Nayak, Subhranginee Das and Mohammad Dilsad Ansari
Background and Objective: Stock closing price prediction is enormously complicated. Artificial Neural Networks (ANN) are excellent approximators applied in this area. Several nature-inspired evolutionary optimization techniques have been proposed and used in the literature to search for the optimal parameters of ANN based forecasting models. However, most of them need fine-tuning of several control parameters as well as algorithm-specific parameters to achieve optimal performance. Improper tuning of such parameters leads either to additional computational cost or to local optima. Methods: Teaching-Learning Based Optimization (TLBO) is a newly proposed algorithm which does not necessitate any algorithm-specific parameters. The intrinsic capability of the Functional Link Artificial Neural Network (FLANN) to recognize the multifaceted nonlinear relationships present in historical stock data has made it popular and given it wide application in stock market prediction. This article presents a hybrid model, termed Teaching-Learning Based Optimization of Functional Link Neural Networks (TLBO-FLN), that combines the advantages of both TLBO and FLANN. Results and Conclusion: The model is evaluated by predicting the short-, medium-, and long-term closing prices of four emerging stock markets. The performance of the TLBO-FLN model is measured through Mean Absolute Percentage Error (MAPE), Average Relative Variance (ARV), and the coefficient of determination (R2), compared with that of a few other similarly trained state-of-the-art models, and found to be superior.
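To show why TLBO needs no algorithm-specific parameters, the sketch below performs one TLBO iteration (teacher and learner phases) over a population of candidate FLANN weight vectors. The `loss` callable stands in for the forecasting error of a FLANN built from those weights, which is assumed rather than implemented here.

```python
# One TLBO iteration over candidate weight vectors; only population size and the
# number of iterations are user-chosen, as in standard TLBO.
import numpy as np

rng = np.random.default_rng(0)

def tlbo_step(pop, loss):
    n, d = pop.shape
    fitness = np.array([loss(w) for w in pop])
    teacher, mean = pop[fitness.argmin()], pop.mean(axis=0)
    # Teacher phase: move learners towards the teacher and away from the class mean.
    TF = rng.integers(1, 3)                                # teaching factor, 1 or 2
    cand = pop + rng.random((n, d)) * (teacher - TF * mean)
    better = np.array([loss(w) for w in cand]) < fitness
    pop[better] = cand[better]
    # Learner phase: each learner interacts with a random peer.
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        direction = pop[i] - pop[j] if loss(pop[i]) < loss(pop[j]) else pop[j] - pop[i]
        cand_i = pop[i] + rng.random(d) * direction
        if loss(cand_i) < loss(pop[i]):
            pop[i] = cand_i
    return pop
```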
Intuitionistic Fuzzy Score Function Based Multi-Criteria Decision Making Method for Selection of Cloud Service Provider
Authors: Sonal Agrawal and Pradeep Tripathi
Aims & Background: Cloud Computing (CC) has received great attention from scholarly researchers and IT companies. CC is a standard that offers services through the Internet. The standard builds upon existing technologies (such as cluster, peer-to-peer and grid computing) and is currently adopted by nearly all major organizations. Organizations such as Microsoft and Facebook have made significant investments in CC and currently offer services with high levels of reliability. The efficient and precise evaluation of a cloud-based communication network is an essential step in assuring both business continuity and continuously available services. Objectives & Methods: To select and rank CC service providers, we introduce an Improved Score Function (ISF) based Multi-Criteria Decision-Making (MCDM) approach. The proposed approach is developed to solve MCDM problems with partly unknown weights. To do this, the criteria preferences are given in terms of Intuitionistic Fuzzy Sets (IFSs). A numerical example is presented to show the effectiveness of the proposed approach over previous ones. Results: A decision-making problem of cloud computing service provider selection has been considered to illustrate the developed technique; its outcomes coincide with those of already developed methods, which confirms the soundness of the developed method. Conclusion: In the future, we plan to apply the proposed technique to various decision-making, clustering and multi-objective problems. We also plan to extend our method to different uncertain environments using other MCDM methods.
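The sketch below only illustrates the general ranking workflow with the classical intuitionistic fuzzy score function s(μ, ν) = μ − ν and fixed criterion weights; the paper's Improved Score Function and its handling of partly unknown weights are not reproduced, and all ratings are invented.

```python
# Score-function-based ranking of providers rated by intuitionistic fuzzy values
# (membership mu, non-membership nu). Classical score s = mu - nu is used here.
providers = {
    "CSP-A": [(0.7, 0.2), (0.6, 0.3), (0.8, 0.1)],   # illustrative IFS ratings per criterion
    "CSP-B": [(0.5, 0.4), (0.8, 0.1), (0.6, 0.2)],
    "CSP-C": [(0.9, 0.1), (0.4, 0.5), (0.7, 0.2)],
}
weights = [0.5, 0.3, 0.2]                             # assumed criterion weights

def weighted_score(ratings):
    return sum(w * (mu - nu) for w, (mu, nu) in zip(weights, ratings))

ranking = sorted(providers, key=lambda p: weighted_score(providers[p]), reverse=True)
print(ranking)                                         # best provider first
```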
Big Data Analysis for Trend Recognition Using Machine Learning Techniques
Background: Machine learning is one of the most popular research areas today. It relates closely to the field of data mining, which extracts information and trends from large datasets. Aims: The objective of this paper is to (a) illustrate big data analytics for the Indian derivative market and (b) identify trends in the data. Methods: Based on input from experts in the equity domain, the data are verified statistically using data mining techniques. Specifically, ten years of daily derivative data are used for training and testing purposes. The methods adopted for this research work include model generation using ARIMA and the Hadoop framework, which comprises map and reduce stages for big data analysis. Results: The results of this work are the observation of a trend that indicates the rise and fall of prices in derivatives, the generation of a time-series similarity graph, and the plotting of the frequency of temporal data. Conclusion: Big data analytics is an underexplored topic in the Indian derivative market, and the results from this paper can be used by investors to earn both short-term and long-term benefits.
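A minimal sketch of the ARIMA model-generation step on a daily price series follows, using statsmodels. The file name, column and (p, d, q) order are assumptions, and the Hadoop map/reduce part of the pipeline is not shown.

```python
# ARIMA model generation and a short-horizon trend forecast on a daily series.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

series = pd.read_csv("derivative_daily.csv",              # hypothetical data file with
                     index_col="date", parse_dates=True)["close"]  # a 'close' column
model = ARIMA(series, order=(2, 1, 2)).fit()               # illustrative order
print(model.summary())
print(model.forecast(steps=5))                              # next five trading days
```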
Comparative Analysis of Load Balancing Algorithms for Cloud Computing in IoT
Authors: Mohammad I. Bala and Mohammad Ahsan Chishti
Background: Cloud computing is a widely adopted computing paradigm, and its importance has increased many-fold in the recent past due to the inception of the Internet of Things (IoT). Objectives: Efficient load balancing techniques are required to optimize the use of cloud resources, although load balancing in the cloud is known to be an NP-hard problem. Methods: This work focuses on multiple load balancing algorithms whose performance has been analysed and compared under varying load conditions. Results: A comparative analysis of five algorithms is given, among which the Max-min algorithm is found to be the best performing, with approximately 28% better job finish time and 23% higher throughput than the worst performing algorithm (FCFS). Conclusion: Simulations have been performed in CloudSim under varying input loads, and the performance has been analysed under multiple scenarios. All the simulations point towards the superiority of the Max-min algorithm over the other algorithms. This work will prompt researchers to investigate load balancing algorithms further so that better results are achieved.
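For reference, the sketch below implements the Max-min heuristic in its usual form: repeatedly pick the unscheduled task whose best (minimum) completion time is largest and bind it to the VM giving that minimum. The execution-time matrix is illustrative, not data from the paper's CloudSim experiments.

```python
# Max-min load balancing over an expected-time-to-compute matrix etc[i][j].
def max_min(etc):
    n_tasks, n_vms = len(etc), len(etc[0])
    ready = [0.0] * n_vms                     # time at which each VM becomes free
    unscheduled, schedule = set(range(n_tasks)), {}
    while unscheduled:
        best = {t: min(range(n_vms), key=lambda v: ready[v] + etc[t][v])
                for t in unscheduled}                                  # best VM per task
        task = max(unscheduled,
                   key=lambda t: ready[best[t]] + etc[t][best[t]])     # max of the minima
        vm = best[task]
        ready[vm] += etc[task][vm]
        schedule[task] = vm
        unscheduled.remove(task)
    return schedule, max(ready)               # assignment and makespan

print(max_min([[14, 16, 9], [13, 19, 18], [11, 13, 19], [7, 8, 17]]))
```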
Malicious apps Identification in Android Devices Using Machine Learning Algorithms
Authors: Ravinder Ahuja, Vineet Maheshwari, Siddhant Manglik, Abiha Kazmi, Rishika Arora and Anuradha Gupta
Background & Objective: In this paper, a malicious app detection system is implemented using machine learning algorithms. For this, 330 permission-based features of 558 Android applications are taken into consideration. Methods: The main aim of this work is to develop a model which can effectively distinguish malicious from benign apps. Six feature selection techniques are used to extract the important features from the 330 permission-based features of the 558 apps, and fourteen classification algorithms are then applied using the Python language. Results: An efficient model for detecting malicious apps has been proposed. Conclusion: The proposed model is able to detect malicious apps approximately 3% better than the existing system.
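The sketch below shows one feature-selection/classifier combination of the kind the paper evaluates: chi-squared selection over binary permission flags followed by a random forest. The file name, column layout and choice of k are assumptions, not the paper's exact configuration.

```python
# One illustrative feature-selection + classification pipeline for permission data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

df = pd.read_csv("android_permissions.csv")   # 558 apps x 330 permission flags + label
X, y = df.drop(columns=["malicious"]), df["malicious"]

pipe = make_pipeline(SelectKBest(chi2, k=50),
                     RandomForestClassifier(n_estimators=100, random_state=0))
print(cross_val_score(pipe, X, y, cv=5).mean())        # detection accuracy estimate
```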
Adaptive Deep Neural Networks for the Internet of Things
Authors: Mohammad K. Pandit, Roohie Naaz Mir and Mohammad Ahsan Chishti
Background: Deep neural networks have become the state-of-the-art technology for real-world classification tasks due to their ability to learn better feature representations at each layer. However, the added accuracy associated with the deeper layers comes at a huge cost in computation, energy and added latency. Objective: Implementing such architectures in resource-constrained IoT devices is computationally prohibitive due to their computational and memory requirements; these factors are particularly severe in the IoT domain. In this paper, we propose the Adaptive Deep Neural Network (ADNN), which is split across the hierarchical compute layers, i.e., edge, fog and cloud, with each split having one or more exit locations. Methods: At every exit location, a data sample adaptively chooses to exit from the network (based on a confidence criterion) or to be fed into deeper layers housed across the different compute layers. The design of ADNN results in fast and energy-efficient decision making (inference), with joint optimization of all the exit points so that the overall loss is minimized. Results: Experiments on the MNIST dataset show that 41.9% of samples exit at the edge location (correctly classified) and 49.7% of samples exit at the fog layer. Similar results are obtained on the Fashion-MNIST dataset, with only 19.4% of the samples requiring the entire neural network. With this architecture, most of the data samples are processed and classified locally while maintaining classification accuracy and keeping in check the communication, energy and latency requirements of time-sensitive IoT applications. Conclusion: We investigated the approach of distributing the layers of a deep neural network across edge, fog and cloud computing devices, wherein data samples adaptively choose their exit points based on a confidence criterion (threshold). The results show that the majority of the data samples are classified within the private network of the user (edge, fog), while only a few samples require the entire set of ADNN layers for classification.
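The confidence-gated exit logic can be sketched as below: a sample is classified at the edge exit and forwarded to the fog and cloud sub-networks only when the softmax confidence falls below a threshold. The three `*_net` callables and the 0.9 threshold are placeholders, not the paper's trained partitions.

```python
# Confidence-based early-exit inference across edge, fog and cloud sub-networks.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def adnn_infer(x, edge_net, fog_net, cloud_net, threshold=0.9):
    for tier, net in (("edge", edge_net), ("fog", fog_net), ("cloud", cloud_net)):
        logits, x = net(x)                  # each tier returns exit logits + features to forward
        probs = softmax(logits)
        if probs.max() >= threshold or tier == "cloud":
            return tier, int(probs.argmax())   # tier where the sample exited, predicted class
```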
An Improved Intelligent Approach to Enhance the Sentiment Classifier for Knowledge Discovery Using Machine Learning
Authors: Midde V. Naik, D. Vasumathi and A.P. Siva Kumar
Aims: The proposed research work is an evolutionarily enhanced method for sentiment or emotion classification of unstructured review text in the big data field. Sentiment analysis plays a vital role for the current generation of people in extracting valid decision points about any aspect, such as movie ratings, educational institute or political ratings, etc. The proposed hybrid approach combines optimal feature selection using Particle Swarm Optimization (PSO) with sentiment classification through a Support Vector Machine (SVM). The performance of the approach is evaluated with statistical measures such as precision, recall, sensitivity and specificity, and compared with existing approaches. Earlier authors have achieved a sentiment classifier accuracy on English text of up to 94%. In the proposed scheme, an average sentiment classification accuracy of 99% on the different datasets is obtained by tuning various parameters of the SVM, such as the constant C and the kernel gamma value, in association with the PSO optimization technique. The proposed method utilizes three publicly available datasets: airline sentiment, weather, and global warming. The experiments are trained and tested using 10-Fold Cross-Validation (FCV) and a confusion matrix for estimating sentiment classifier accuracy. Background: Sentiment Analysis (SA), or opinion mining, has become a fascinating research domain in the present environment. The key area, sentiment classification of semi-structured or unstructured data in different languages, has become a major research aspect. User-Generated Content (UGC) from various sources has increased significantly with the rapid growth of the web environment. The huge volume of user-generated data over social media provides substantial value for discovering hidden knowledge, correlations, patterns, trends, or sentiment about any specific entity. SA is a computational analysis that determines the actual opinion of an entity expressed in text; it is also called the computation of emotional polarity expressed over social media as natural text in miscellaneous languages. Usually, an effective sentiment classifier model depends on feature selection and classification algorithms. Methods: The proposed work uses the support vector machine as the classification technique and the particle swarm optimization technique for feature selection. In this methodology, we tune various permutations and combinations of parameters, with and without a kernel, in order to obtain the desired results for sentiment classification on the three datasets (airline, global warming and weather sentiment) that are freely hosted for research purposes. Results: The proposed method outperformed other machine learning techniques with 99.2% average accuracy in classifying sentiment on the different datasets. The attained high accuracy in classifying sentiment or opinion in review text proves its superior effectiveness over existing sentiment classifiers. Conclusion: The sentiment classifier accuracy has been increased with the help of a kernel-based Support Vector Machine (SVM) through parameter optimization, and the optimal features for classifying sentiment or opinion in review documents have been determined with the help of the particle swarm optimization approach. The proposed method utilized three freely available datasets to simulate the results: airline sentiment data, weather sentiment data, and global warming data.
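A small sketch of the PSO-tuned SVM idea follows: particles search the (C, gamma) space and fitness is cross-validated accuracy. The swarm settings are assumptions, `X` would be a TF-IDF matrix of the review text with labels `y`, and the paper's PSO-based feature selection step is not shown.

```python
# PSO tuning of RBF-SVM hyperparameters (C, gamma) by maximizing CV accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(params, X, y):                        # params = (log10 C, log10 gamma)
    clf = SVC(C=10 ** params[0], gamma=10 ** params[1], kernel="rbf")
    return cross_val_score(clf, X, y, cv=5).mean()

def pso_tune(X, y, n_particles=10, iters=20, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-2, 2, size=(n_particles, 2))          # search in log10 space
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -3, 3)
        fit = np.array([fitness(p, X, y) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()]
    return 10 ** gbest[0], 10 ** gbest[1]                    # best C and gamma
```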
Cluster-based Ensemble Classification Approach for Anomaly Detection in the Internet of Things
Authors: Mostafa Hosseini and Hamidreza S. Brojeni
Background & Objective: The next generation of the Internet, in which physical things or objects interact with each other without human intervention, is called the Internet of Things (IoT). Its presence can improve the quality of human life in different domains and environments such as agriculture, smart homes, intelligent transportation systems, and smart grids. In the lowest layer of the IoT architecture (i.e., the perception layer), there is a variety of sensors responsible for gathering data from their environment to provide services for customers. However, these collected data are not always accurate and may be contaminated with anomalies for reasons such as limited sensor resources and environmental influences. Accordingly, anomaly detection can be used as a preprocessing phase to prevent inappropriate data from being sent for processing. Methods: Since the distributed nature of the IoT and its heterogeneous elements complicate the application of anomaly detection techniques, a cluster-based ensemble classification approach is presented in this paper. Results & Conclusion: While possessing low complexity, the proposed method has high accuracy in detecting anomalies. The method has been tested on data collected from sensors in the Intel Berkeley Research Laboratory, one of the freely available datasets in the IoT domain. The results indicate that the proposed technique achieves an accuracy of 99.9186% and a positive detection rate of 99.7459%, while reducing the false positive rate and misclassification rate to 0.0025% and 0.0813%, respectively.
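A minimal sketch of the general cluster-then-classify idea follows: readings are partitioned with k-means and one classifier is trained per cluster, with new readings routed to their nearest cluster's model. This only illustrates the concept under assumed choices (k-means, decision trees); the paper's exact ensemble construction and voting rule are not reproduced.

```python
# Minimal cluster-based ensemble: k-means partitioning + one classifier per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

class ClusterEnsemble:
    def __init__(self, n_clusters=4):
        self.km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        self.models = {}

    def fit(self, X, y):                          # y: 1 = anomalous reading, 0 = normal
        labels = self.km.fit_predict(X)
        for c in np.unique(labels):
            idx = labels == c
            self.models[c] = DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
        return self

    def predict(self, X):
        clusters = self.km.predict(X)             # route each sample to its cluster's model
        return np.array([self.models[c].predict(x.reshape(1, -1))[0]
                         for c, x in zip(clusters, X)])
```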
Elephant Intrusion Warning System Using IoT and 6LoWPAN
Authors: Prasanna V. Theerthagiri and Menakadevi Thangavelu
Background: This work proposes an automatic IoT based elephant intrusion warning system, specifically designed to detect elephant intrusions into Krishnagiri villages that border wildlife reserves and to alert the threatened communities about the location. Objective: The first objective is to find the spectral energy magnitude of the elephant's vocal communication signal, and the second is to determine the highest pitch frequency produced by elephants. Methods: Sensors are used to identify elephant intrusion through an IoT based system. Results: This work designs an automatic IoT based elephant intrusion warning system maintainable by personnel without technical knowledge. Conclusion: The ultrasonic sensor detects the elephant's movements and reports the distance from the sensor to the elephant as well as the location of the intrusion. The PIR sensor detects the elephant's intrusion at the wildlife fences and informs the forest officials and local people.
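The two acoustic measurements named in the objective can be sketched with a plain FFT, as below: total spectral energy and the dominant (peak) frequency of a recorded frame. The sampling rate and the random stand-in frame are assumptions; the paper's actual acquisition chain and thresholds are not reproduced.

```python
# Spectral energy and peak (dominant) frequency of one audio frame.
import numpy as np

def spectral_features(frame, fs=8000):
    windowed = frame * np.hanning(len(frame))
    power = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return power.sum(), freqs[power.argmax()]           # (energy, peak frequency in Hz)

energy, peak_hz = spectral_features(np.random.randn(4096))   # stand-in for a mic frame
print(energy, peak_hz)
```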
Imposing Packet Relaying for Mobile Adhoc Networks Using Genetic Algorithm
Background: Packet forwarding is an essential network operation in wireless networks for establishing communication among wireless devices. In mobile wireless networks, data transmission occurs in the form of packet relaying. In dynamic network environments, relaying packets is a more complex and essential activity. Objective: It requires the cooperation of intermediate nodes in the network. In Mobile Adhoc Networks (MANETs) in particular, it is a tedious job because of the dynamic topology, limited energy, and other resource constraints. In this paper, a genetic algorithm is adopted for stimulating packet relaying, so as to assist cooperation between the various nodes in the network. Methods: The genetic algorithm is a metaheuristic, evolutionary algorithm that aims to produce high-quality optimized solutions to complex problems. The current research work carried out an extensive investigation and comparison of existing relevant genetic algorithm based schemes. Results: The experimental results are evaluated with respect to the genetic algorithm methodology, the number of nodes, robustness, scalability, packet delivery ratio, average energy consumption, and other parameters.
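As a generic illustration of how a GA can be applied to relay selection, the sketch below evolves bit strings that mark which nodes relay, under an assumed toy fitness trading delivery benefit against energy cost. It is not any specific scheme from the surveyed literature.

```python
# Compact GA over relay-node selections; the fitness function is an assumed toy model.
import numpy as np

rng = np.random.default_rng(1)
N_NODES, POP, GENS = 20, 30, 50
benefit = rng.random(N_NODES)             # assumed per-node contribution to delivery
energy = rng.random(N_NODES)              # assumed per-node relaying energy cost

def fitness(mask):                         # mask[i] = 1 if node i relays packets
    return benefit[mask == 1].sum() - 0.5 * energy[mask == 1].sum()

pop = rng.integers(0, 2, size=(POP, N_NODES))
for _ in range(GENS):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-POP // 2:]]                   # truncation selection
    cut = rng.integers(1, N_NODES, size=POP // 2)
    children = np.array([np.concatenate((parents[i][:c], parents[-i - 1][c:]))
                         for i, c in enumerate(cut)])            # one-point crossover
    mutate = rng.random(children.shape) < 0.02
    children[mutate] ^= 1                                        # bit-flip mutation
    pop = np.vstack((parents, children))

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("relay set:", np.flatnonzero(best))
```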
Outage Analysis in Underlay OFDMA Based Cooperative Cognitive Radio Networks
Authors: Rupali Sawant and Shikha Nema
Background: Efficient resource allocation in a Cooperative Cognitive Radio Network (CCRN) is necessary in order to meet the challenges of future wireless networks. With proper subcarrier allocation, the Quality of Service (QoS) metrics considered in this paper, outage probability and data rate, are evaluated and sufficiently improved. Objectives: Another important parameter is the Signal to Interference Ratio (SIR), which should remain above a threshold, called the minimum protection ratio, to maintain the required QoS. Results: The network considered is an Orthogonal Frequency Division Multiple Access (OFDMA) based Hybrid Cooperative Cognitive Radio Network (HCCRN) in the downlink, in which both licensed and unlicensed resources are used by the cognitive user depending on their availability while keeping the interference constraint within limits. The number of subcarriers required differs for every user depending on its distance from the base station, in order to satisfy the data rate requirement, which depends on the experienced SIR. To avoid outage of users at the boundary of a cell, it is necessary to allocate a larger number of subcarriers. Conclusion: It is observed that, for a given user position and outage probability, higher data rates can be achieved as the number of subcarriers allocated in a subchannel increases. This analysis can be useful for allocating subcarriers to users depending on their position.
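To make the outage criterion concrete, the standard definition used in such analyses is given below in generic notation; a user is in outage whenever its received SIR drops below the minimum protection ratio. This is the textbook form, not the paper's specific derivation.

```latex
% Generic outage criterion relative to the minimum protection ratio \gamma_{th}.
P_{\text{out}} = \Pr\!\left(\mathrm{SIR} < \gamma_{th}\right),
\qquad
\mathrm{SIR} = \frac{P_{\text{signal}}}{\sum_{k} P_{\text{interference},k}}
```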
VLSI Implementation and Software Co-Simulation of Digital Image Watermarking with Increased Security
Authors: Vardhana M. and Anil K. Bhat
Background: Security is one of the fundamental and essential factors that has to be addressed in the field of communication. Communication refers to the exchange of useful information between two or more nodes. Sometimes it is required to exchange confidential information, such as a company's logo, which needs to be hidden from third parties. The data exchanged between these nodes has to be kept confidential and secured from unintended users. The three fundamental components of security are confidentiality, integrity and authentication: the data being exchanged has to be confidential, and only the authorized party should have access to the exchanged information. One of the key methods for securing the data is encryption. Objective: The main objective of this paper is to address the problem of data hiding and security in communication systems. There is a need for hardware resources that provide high-speed data security and protection. Methods: In this paper, we implement image watermarking using the LSB technique to hide a secret image, and employ encryption using the Advanced Encryption Standard (AES) to enhance the security of the image. An image is a two-dimensional signal, with each pixel value representing an intensity level. The secure transmission of an image over a channel is a challenging task because any individual can access it if no security measures are taken. Results: In this paper, the hardware realization of image watermarking/encryption and dewatermarking/decryption is implemented using Very Large Scale Integration (VLSI). The design is verified by means of co-simulation using MATLAB and Xilinx. The paper also presents the performance parameters of the design with respect to speed, area and power. Conclusion: An efficient method of digital watermarking has been implemented with increased security, and its performance parameters are presented.
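A software sketch of the LSB embedding and extraction step follows, operating on a grayscale cover image with NumPy. The cover and watermark arrays are random stand-ins; the AES encryption of the watermark and the paper's VLSI/co-simulation realization are not reproduced.

```python
# LSB watermark embedding and extraction on a grayscale cover image.
import numpy as np

def embed_lsb(cover, watermark_bits):
    """Write a flat array of 0/1 bits into the least significant bits of the cover."""
    flat = cover.flatten()
    flat[:len(watermark_bits)] = (flat[:len(watermark_bits)] & 0xFE) | watermark_bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    return stego.flatten()[:n_bits] & 1

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in cover image
bits = np.random.randint(0, 2, 256, dtype=np.uint8)           # stand-in (encrypted) logo bits
stego = embed_lsb(cover, bits)
assert np.array_equal(extract_lsb(stego, 256), bits)          # watermark recovered intact
```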