Recent Advances in Computer Science and Communications - Volume 14, Issue 1, 2021
Mining of Closed High Utility Itemsets: A Survey
Authors: Kuldeep Singh, Shashank S. Singh, Ashish K. Luhach, Ajay Kumar and Bhaskar Biswas
Finding High Utility Itemsets (HUIs) is one of the major problems in the area of frequent itemset mining. However, HUI mining produces a large number of redundant itemsets, which degrades the performance and usefulness of high utility itemset mining. To overcome this limitation, closed HUI mining has been proposed; it finds a complete and non-redundant set of itemsets. In this paper, we review recent studies on closed high utility itemset mining algorithms. The main goal of this survey is to summarize recent work and identify future research opportunities. We present a taxonomy of closed high utility itemset mining algorithms. This paper provides an outline of the recent work and a general view of the closed high utility itemset mining field.
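For orientation, a closed HUI is a high utility itemset with no proper superset having the same support. A minimal brute-force sketch of that closure test over a toy transaction database (the utilities, threshold and data are illustrative, not from any surveyed algorithm):

```python
from itertools import combinations

# Toy transactions: each maps item -> utility (quantity x unit profit).
transactions = [
    {"a": 5, "b": 2, "c": 1},
    {"a": 4, "c": 3},
    {"b": 6, "c": 2, "d": 8},
]

def support_and_utility(itemset):
    """Count supporting transactions and total utility of `itemset`."""
    sup, util = 0, 0
    for t in transactions:
        if all(i in t for i in itemset):
            sup += 1
            util += sum(t[i] for i in itemset)
    return sup, util

def closed_huis(min_util):
    items = sorted({i for t in transactions for i in t})
    result = []
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            sup, util = support_and_utility(cand)
            if sup == 0 or util < min_util:
                continue
            # Closed: no one-item extension has the same support.
            closed = all(
                support_and_utility(cand + (x,))[0] != sup
                for x in items if x not in cand
            )
            if closed:
                result.append((cand, util))
    return result

print(closed_huis(min_util=8))
```

Real closed HUI miners replace this exhaustive enumeration with pruned pattern-growth searches, but the closure check itself is the same.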
Influence Maximization on Social Networks: A Study
Authors: Shashank S. Singh, Kuldeep Singh, Ajay Kumar and Bhaskar Biswas
Influence Maximization, which selects a set of k users (called the seed set) from a social network to maximize the expected number of influenced users (called the influence spread), is a key algorithmic problem in social influence analysis. In this paper, we review recent studies on influence maximization algorithms. The main goal of this survey is to summarize recent work and identify future research opportunities. We present a taxonomy of influence maximization algorithms with a comparative theoretical analysis. This paper analyzes the influence maximization problem from an algorithm design perspective and also provides a performance analysis of existing algorithms.
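The classical baseline that the surveyed algorithms improve upon is greedy hill climbing: repeatedly add the node with the largest marginal gain in simulated spread. A minimal sketch under the Independent Cascade model (the edge probability, run count and toy graph are illustrative assumptions, not parameters from the paper):

```python
import random
import networkx as nx

def ic_spread(G, seeds, p=0.1, runs=200):
    """Monte Carlo estimate of influence spread under Independent Cascade."""
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in G.successors(u):
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / runs

def greedy_seeds(G, k):
    """Pick k seeds, each maximizing the marginal gain in spread."""
    seeds = set()
    for _ in range(k):
        best = max((n for n in G if n not in seeds),
                   key=lambda n: ic_spread(G, seeds | {n}))
        seeds.add(best)
    return seeds

G = nx.gnp_random_graph(50, 0.08, directed=True)
print(greedy_seeds(G, k=3))
```

The Monte Carlo inner loop is what makes plain greedy expensive, which is why much of the surveyed literature concerns sketch-based or heuristic accelerations.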
A Novel Bat Algorithm as ‘Range Determination’
Authors: Shabnam Sharma, Sahil Verma and Kiran Jyoti
Background: Bat Algorithm is one of the swarm intelligence techniques, inspired by the echolocation of bats. In this work, many variants of Bat Algorithm developed by various researchers are studied. Despite its drawback of getting trapped in local optima, it is preferred over other swarm intelligence techniques. Considering the performance of Bat Algorithm, and to extend the existing work, the biological behavior of bats is explored in this research work.
Objective: One of the characteristics of real bats, i.e. range determination, was adopted to propose a new variant of Bat Algorithm.
Methods: The proposed algorithm computed "distance" using the cross-correlation of the emitted pulse and the received echo.
Results: The performance of the Range Determiner-Bat Algorithm (RD-Bat Algorithm) was compared with the standard Bat Algorithm on the basis of best, median, mean, worst and standard deviation values.
Conclusion: In the experiments, the proposed algorithm outperformed the standard Bat Algorithm.
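Echo-based range determination reduces to finding the lag that maximizes the cross-correlation between the emitted pulse and the received echo, then converting that delay into distance. A minimal numpy sketch of that signal-processing step (sampling rate, pulse shape, noise level and propagation speed are illustrative assumptions, not the paper's settings):

```python
import numpy as np

fs = 100_000          # sampling rate (Hz), assumed
c = 343.0             # speed of sound in air (m/s)

t = np.arange(0, 0.002, 1 / fs)
pulse = np.sin(2 * np.pi * 40_000 * t)      # emitted 40 kHz pulse

# Simulated echo: the pulse delayed by `true_delay` samples plus noise.
true_delay = 150
echo = np.zeros(1000)
echo[true_delay:true_delay + pulse.size] = 0.5 * pulse
echo += 0.01 * np.random.randn(echo.size)

# Cross-correlate and take the lag of the peak.
corr = np.correlate(echo, pulse, mode="valid")
delay_samples = int(np.argmax(corr))
distance = (delay_samples / fs) * c / 2     # round trip, so divide by 2

print(f"estimated delay: {delay_samples} samples, range: {distance:.2f} m")
```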
Source Redundancy Management and Host Intrusion Detection in Wireless Sensor Networks
Authors: Vijander Singh, Gourav Sharma, Ramesh C. Poonia, Narendra K. Trivedi and Linesh Raja
Background: An Intrusion Detection System (IDS) is a software application that monitors network traffic and network events or activities, and detects any malicious operation that is present.
Objective: In this paper, a new protocol was developed that can detect wireless network attacks with reference to the TCP/IP model. In the proposed system, the new feature is integrated into an IDS that is built into the router itself.
Methods: If any intruder tries to connect to the router, the intruder has to authenticate himself/herself. To find the authentication key, the intruder attacks the router, attempting to match the authentication key against the keys he/she holds. The intruder has a file containing multiple different keys and uses it to apply a brute-force attack on the router: the brute-force software tries every key in the file against the router, and when a key matches the authentication key, it informs the intruder of the match. The IDS in the router checks for rapid attempts arriving from the same MAC address; if any MAC address tries false keys many times, the IDS identifies that MAC as an intruder and informs the system administrator of the intrusion by popping up a message on the administrator's system.
Results: Simulation of two different scenarios was done using the Network Simulator (NS-2) and NAM (Network Animator). In scenario 1, node 1 is the intruder and the IDS protocol detects it; the intruder is labeled as 2. In scenario 2, node 1 is a sentinel node and it connects to the router after authentication.
Conclusion: The mechanism can detect a false node in the network, which is a major threat in WSNs. The performance of the IDS protocol was evaluated using the Ad-hoc On-demand Distance Vector (AODV) routing protocol for routing.
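The detection rule described in Methods is essentially a per-MAC failure counter with a threshold. A minimal sketch of that logic (the threshold, window length and alert function are assumptions for illustration, not the paper's parameters):

```python
import time
from collections import defaultdict, deque

FAIL_THRESHOLD = 5        # failed keys allowed per window (assumed)
WINDOW_SECONDS = 10.0     # sliding window length (assumed)

failures = defaultdict(deque)   # MAC -> timestamps of recent failures

def on_auth_attempt(mac: str, key_ok: bool) -> None:
    """Record an authentication attempt and flag brute-force behavior."""
    if key_ok:
        failures.pop(mac, None)     # a successful login clears history
        return
    now = time.monotonic()
    q = failures[mac]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()                 # drop failures outside the window
    if len(q) > FAIL_THRESHOLD:
        alert_admin(mac)

def alert_admin(mac: str) -> None:
    # Stand-in for the pop-up message on the administrator's system.
    print(f"ALERT: possible brute-force attack from MAC {mac}")

for _ in range(7):                  # simulate a brute-force burst
    on_auth_attempt("AA:BB:CC:DD:EE:FF", key_ok=False)
```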
SVM-PCA Based Handwritten Devanagari Digit Character Recognition
Authors: Aditya Khamparia, Sanjay K. Singh and Ashish K. Luhach
Objective: The blended fusion of Support Vector Machine (SVM) and Principal Component Analysis (PCA) has been widely used in recognizing handwritten digit characters of the Devanagari script. The feature information of a character is extracted using its skeleton structure, and PCA optimally reduces the data dimensionality. There is ample literature on handwritten character recognition for Indian and non-Indian scripts, but very few articles have emphasized the recognition of the Devanagari script. Therefore, this paper presents an efficient handwritten Devanagari character recognition system based on block-based feature extraction and a PCA-SVM classifier.
Methods: We collected samples of handwritten Devanagari characters from different handwriting experts for classification.
Results: For the experimental work, a total of 100 images of Devanagari digit characters were used for training and testing. The proposed system achieves maximum recognition accuracies of 96.6% and 96.5% for 5- and 10-fold validations with 70% training and 30% testing data, using block-based features and SVM classifiers with different kernels.
Conclusion: The obtained results achieve maximum accuracy using the SVM classifier for digit character recognition. In the future, deep learning networks will be considered for enhancing accuracy and precision.
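The PCA-SVM combination maps directly onto a standard scikit-learn pipeline: reduce the block features with PCA, then classify with a kernel SVM under k-fold cross-validation. A minimal sketch on the bundled digits dataset (the component count, kernel settings and dataset are stand-ins for the paper's Devanagari data):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)   # stand-in for Devanagari digit images

pipe = make_pipeline(
    StandardScaler(),
    PCA(n_components=30),              # dimensionality reduction (assumed size)
    SVC(kernel="rbf", C=10, gamma="scale"),
)

for k in (5, 10):                      # 5- and 10-fold validation as in the paper
    scores = cross_val_score(pipe, X, y, cv=k)
    print(f"{k}-fold accuracy: {scores.mean():.3f}")
```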
Localization and Tracking of Mobile Jammer Sensor Node Detection in Multi-Hop Wireless Sensor Network
Authors: K.P. Porkodi, I. Karthika and Hemant K. Gianey
Background: A jammer in a wireless sensor network is located and tracked despite the open-access and shared nature of the wireless medium. The existing algorithms mainly track stationary jammers; a mobile jammer often moves from one place to another, making it difficult to track. A mobile jammer location tracker algorithm is proposed to find the location of a mobile jammer in four steps: selection of the initial examining node, determination of supporting nodes, trilateration localization and examining-group handover.
Objective: In this research paper, an algorithm is proposed for finding the location of a mobile jammer in a wireless sensor network. Finding the location is very difficult because of the non-cooperative relationship between the multi-hop wireless sensor network and the jammer. The existing algorithms find the location of stationary jammers only, while mobile jammers frequently change their position; jamming therefore disrupts node-to-node communication throughout the multi-hop wireless sensor network.
Methods: The multi-hop wireless sensor network is deployed with n stationary nodes in a particular area. An omnidirectional antenna is fitted to these stationary nodes, and the transmission powers of the nodes are constant. A direction-finding table is maintained and updated at fixed intervals for each node. The positions of the nodes are tracked with GPS devices or by an existing location-finding algorithm. The nodes are installed uniformly at random in an A*A square area. The transmitting node is the tp node, and the received signal-to-noise ratio threshold is STNR. A neighbor list records the neighbor nodes of the multi-hop wireless sensor network.
Results and Conclusion: The proposed localization and tracking system for a mobile jammer is evaluated efficiently on the multi-hop wireless sensor network with the MATLAB simulator.
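The trilateration step estimates the jammer position from range estimates at three (or more) examining nodes by linearizing the circle equations into a least-squares problem. A minimal numpy sketch of that standard computation (anchor positions and distances are illustrative):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares position fix from anchor positions and ranges.

    Subtracting the first circle equation |x - a_i|^2 = d_i^2 from the
    others yields a linear system A x = b in the unknown position x.
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    x0, d0 = anchors[0], dists[0]
    A = 2 * (anchors[1:] - x0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three examining nodes and their estimated distances to the jammer.
anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
true = np.array([40.0, 60.0])
dists = [np.linalg.norm(true - a) for a in anchors]
print(trilaterate(anchors, dists))    # ~ [40. 60.]
```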
A Stack Autoencoders Based Deep Neural Network Approach for Cervical Cell Classification in Pap-Smear Images
Authors: Sanjay K. Singh and Anjali Goyal
Background: Early detection of cervical cancer may save the lives of women all over the world. The Pap-smear test and the Human papillomavirus test are techniques used for the detection and prevention of cervical cancer.
Objective: In this paper, pap-smear images are analysed and cells are classified using a stacked-autoencoder-based deep neural network. Pap-smear cells are classified into 2 classes and 4 classes. Two-class classification separates normal and abnormal cells, while four-class classification separates normal cells, mild dysplastic cells, moderate dysplastic cells and severe dysplastic cells.
Methods: The features are extracted by deep neural networks based on their architecture. The proposed deep neural network consists of three stacked autoencoders with hidden sizes 512, 256 and 128, respectively. Softmax is used as the output layer for the classification of pap-smear cells.
Results: The average accuracy achieved for 2-class classification between normal and abnormal cells is 98.2%, while for 4-class classification among normal, mild, moderate and severe dysplastic cells it is 93.8%.
Conclusion: The proposed approach avoids the image segmentation and handcrafted feature extraction applied in previous works. This study highlights deep learning as an important tool for cell classification in pap-smear images. The accuracy of the proposed method may vary with different combinations of hidden sizes and numbers of autoencoders.
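The stated architecture (three encoders of sizes 512, 256 and 128 feeding a softmax layer) can be sketched in Keras; in a full stacked-autoencoder pipeline each encoder would first be pre-trained to reconstruct its input before the stack is fine-tuned end to end. The input size and class count below are assumptions:

```python
from tensorflow.keras import layers, models

INPUT_DIM = 64 * 64      # flattened cell image (assumed size)
NUM_CLASSES = 4          # normal / mild / moderate / severe

# Encoder stack with the paper's hidden sizes: 512 -> 256 -> 128.
model = models.Sequential([
    layers.Input(shape=(INPUT_DIM,)),
    layers.Dense(512, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # classification head
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, epochs=50, validation_split=0.1)
```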
IMSM: An Interval Migration Based Approach for Skew Mitigation in MapReduce
Authors: Balraj Singh and Harsh K. Verma
Background: The extreme growth of data necessitates high-performance computing. MapReduce is among the most sought-after platforms for processing large-scale data. Research and analysis of the existing system have revealed its performance bottlenecks and areas of concern: MapReduce suffers from skew on its processing nodes. This paper proposes an algorithm for MapReduce to balance the load and eliminate the skew on Map tasks. It reduces the execution time of a job by lowering the completion time of the slowest task.
Methods: The proposed method performs a one-time settlement of load balancing among the Map tasks by analyzing their expected completion times and redistributing the load. It uses intervals to migrate the overloaded or slow tasks and appends them to the underloaded tasks.
Results: Experiments revealed an improvement of up to 1.3x from the proposed strategy. Comparison of the proposed technique with other relevant strategies exhibits a better distribution of load among Map tasks and a lower level of skew. Evaluation is done using different workloads.
Conclusion: A significant improvement is observed in performance, with reduced job completion time.
Greedy Load Balancing Energy Efficient Routing Scheme for Wireless Sensor Networks
Authors: Priti Maratha and Kapil Gupta
Background: Among the many constraints in Wireless Sensor Networks, the limited battery power of the sensor nodes is the core issue. This raises the question of how to extend the lifetime of the network as long as possible. One way to solve the problem is to balance the relay traffic load.
Objective: In this paper, a load balancing algorithm is suggested that selects the best possible relay node so that uniform consumption of the battery power of the sensor nodes can be ensured.
Methods: After random deployment, sensor nodes collect information about their neighbors and their expected load. The selection of new next hops starts from the maximum hop count. The next hop of nodes having a single parent is set first; the remaining nodes select their next hop in non-increasing order of their load, as sketched below.
Results: Simulation results verify that the packet delivery ratio for the proposed work stays up to 50% until 72% of the total time duration, and no nodes die until 48% of the total time duration, while for the others, nodes start dying at around 36% of the total time duration. It is also proved that the solution obtained by the proposed work is at most 1.5 times as imbalanced as the optimal solution, which implies our solution is quite near to the optimal one.
Conclusion: The load balancing done in our work has shown better results than others in terms of network lifetime and first node death, which is also verified with an F-test with an α value of 0.05.
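One way to read the selection rule in Methods: process nodes from the farthest hop ring inward, heaviest expected load first, and give each node the parent (one hop closer to the sink) with the least accumulated load. A minimal sketch of that greedy rule (the topology and load values are illustrative assumptions, not the paper's model):

```python
# Each node: hop count to the sink and current expected load.
hops = {"s": 0, "a": 1, "b": 1, "c": 2, "d": 2, "e": 2}
load = {"s": 0, "a": 0, "b": 0, "c": 3, "d": 1, "e": 2}
neighbors = {"c": ["a", "b"], "d": ["a", "b"], "e": ["b"],
             "a": ["s"], "b": ["s"]}

next_hop = {}
# Farthest ring first; within a ring, heaviest expected load first.
for node in sorted(neighbors, key=lambda n: (-hops[n], -load[n])):
    parents = [p for p in neighbors[node] if hops[p] == hops[node] - 1]
    best = min(parents, key=lambda p: load[p])   # least-loaded parent
    next_hop[node] = best
    load[best] += load[node] + 1                 # parent inherits the traffic

print(next_hop)   # e's single parent b is fixed; c and d split across a and b
```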
Graph-Based Application Partitioning Approach for Computational Offloading in Mobile Cloud Computing
Authors: Robin P. Mathur and Manmohan Sharma
Background: Computational offloading is emerging as a popular field in Mobile Cloud Computing (MCC). Modern applications are power- and compute-intensive, which leads to energy, storage and processing issues in mobile devices. Using the offloading concept, a mobile device can offload its computation to cloud servers and receive back the results on the device.
Objective: The main objective of the work is to answer an important question that arises in the offloading scenario: which part of the application should be offloaded remotely, and which part should run locally.
Methods: In order to identify remote and local code, the application needs to be partitioned. In this paper, a graph partitioning approach is considered that is based on spectral graph partitioning combined with the Kernighan-Lin algorithm. The application is modeled as a graph, and each node of the graph represents a method.
Results: Experimental results show that the proposed hybrid approach performs optimally in partitioning the application. The results indicate that the combination of the spectral approach with the Kernighan-Lin algorithm outperforms random and multilevel partitioning in a mobile cloud scenario.
Conclusion: The proposed technique gave better results than the existing techniques in terms of edge cut, which is smaller, implying minimum communication cost among components and thus saving energy on the mobile device.
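One way to read the hybrid: use the sign of the Fiedler vector (the eigenvector of the second-smallest Laplacian eigenvalue) for an initial local/remote split, then let Kernighan-Lin swaps refine the cut. A minimal sketch with networkx (the call graph is a stand-in; networkx's kernighan_lin_bisection accepts an initial partition):

```python
import networkx as nx
import numpy as np
from networkx.algorithms.community import kernighan_lin_bisection

# Toy application call graph: nodes are methods, edges are interactions.
G = nx.karate_club_graph()

# Spectral seed: split by the sign of the Fiedler vector.
L = nx.laplacian_matrix(G).toarray().astype(float)
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]
seed = ({n for n in G if fiedler[n] >= 0},
        {n for n in G if fiedler[n] < 0})

# Kernighan-Lin refinement of the spectral split.
local, remote = kernighan_lin_bisection(G, partition=seed)
print("edge cut:", nx.cut_size(G, local, remote))
```

A smaller edge cut directly corresponds to less local/remote communication, which is the energy argument the Conclusion makes.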
Big Data Analysis on Job Trends Using R
Background: Nowadays, the demand for data science-related job positions has seen a huge increase due to the recent data explosion across industries and organizations globally. The necessity to harness the information hidden inside these huge datasets for effective decision-making has become the need of the hour. This is where a data analyst or a data scientist comes into play: domain experts who have the skillset and expertise to extract hidden meaning from data and convert it into useful insights. This work illustrates the use of data mining and advanced data analysis techniques such as data aggregation and summarization, along with data visualization using the R tool, to understand and analyse job trends in the United States of America (USA) and then drill down to job trends for data science-related positions from 2011 to 2016.
Objective: This paper discusses the general job trends in the US and how job seekers are migrating from one place to another on visas for different titles, mainly in business analytics.
Methods: Analytics is done using R programming, applying different functions on various parameters, and inference is drawn from the results.
Results & Conclusion: The aim of this analysis is to predict job trends in line with demand, region, employers and wages in USD.
Role of Self Phase Modulation and Cross Phase Modulation on Quality of Signal in Optical Links
Authors: Karamjit Kaur and Anil Kumar
Background: In WDM networks, there is a crucial need to monitor signal degradation factors in order to maintain the quality of transmission. This is more critical in dynamic optical networks, as non-linear impairments are network-state dependent. Moreover, PLIs are accumulative in nature, so their overall impact increases tremendously as the length of the signal path increases. The interactions between different impairments along the path also influence their overall impact.
Objective: Among the different impairments, the present work focuses on phase modulation owing to the intensity of a signal itself as well as of neighboring signals. It covers the influence of SPM alone and of SPM combined with XPM, of system parameters like signal power and wavelength, and of fiber parameters like the attenuation coefficient and dispersion coefficient, on Q-value and BER.
Methods: The analysis is done with single-channel and two-channel transmitter systems with varied power, wavelengths and system parameters. The corresponding optical spectra are analysed.
Results and Conclusion: It has been found that SPM and XPM broaden the spectrum without any effect on the temporal distribution. The magnitude of the signal power is among the parameters significantly influencing the broadening of the spectrum: the higher the power, the greater the broadening. It has been found that, in order to neglect the impact of input power, its magnitude must be kept below 20 mW. Also, the dispersion and attenuation values need to be chosen carefully, as for certain values they counteract SPM and XPM and hence can be used as compensation measures without any additional cost.
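For reference, the textbook nonlinear phase relations behind these effects (standard fiber-optics formulas, stated as background rather than taken from the paper): a channel of power $P$ accumulates the SPM phase shift below, and a co-propagating channel of power $P_2$ contributes an XPM shift twice as strong,

$$\phi_{\mathrm{SPM}} = \gamma P L_{\mathrm{eff}}, \qquad \phi_{\mathrm{XPM}} = 2\gamma P_2 L_{\mathrm{eff}},$$

$$\gamma = \frac{2\pi n_2}{\lambda A_{\mathrm{eff}}}, \qquad L_{\mathrm{eff}} = \frac{1 - e^{-\alpha L}}{\alpha},$$

where $n_2$ is the nonlinear refractive index, $A_{\mathrm{eff}}$ the effective core area, $\alpha$ the attenuation coefficient and $L$ the link length. The linear dependence on $P$ is why higher launch power directly increases spectral broadening, and the appearance of $\alpha$ in $L_{\mathrm{eff}}$ is why attenuation partly moderates it.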
Impact of System Parameters of Optical Fiber Link on Four Wave Mixing
Authors: Anil Kumar and Karamjit Kaur
Background: The invention of WDM technology in optical communication systems has completely revolutionized the telecom industry through its high data-carrying capacity and transmission efficiency. Advanced optical modulation formats with high spectral efficiency, advanced components like Reconfigurable Optical Add-Drop Multiplexers (ROADMs) and OXCs, and large bandwidth requirements contributed significantly to the emergence of dynamic, flexible, translucent and transparent networks. In these networks, it is common practice to increase the power levels as much as possible to overcome power penalty effects and improve transmission, but this introduces several non-linear impairments in the link and hence degrades the quality of the signal. These impairments arise when several high-strength optical fields of different wavelengths interact with molecular vibrations and acoustic waves. The different non-linear effects include Self Phase Modulation (SPM), Cross Phase Modulation (XPM), Four Wave Mixing (FWM) and scattering effects like Stimulated Raman Scattering (SRS) and Stimulated Brillouin Scattering (SBS). The main cause of these impairments is the variation in the refractive index of the fiber with the intensity of the signal flowing through it (also called the Kerr effect). Due to the degradation these impairments cause, it is crucial to analyze their origin, their influence on system performance and mitigation techniques, so as to improve the overall quality of transmission. Monitoring of impairments is quite a challenging task due to their dependency on time, the present state of the network, and the signals flowing in adjoining channels and fibers.
Objective: The present work aims to identify and describe the role of FWM in optical networks. The mathematical model of FWM is studied to determine the parameters influencing its overall impact on system performance. The power of the optical source, channel spacing, transmission distance and presence of dispersion are considered key factors influencing the FWM power being developed. Their impact on FWM power, and hence on FWM efficiency, is calculated. In addition, the influence of FWM on the quality of transmission is quantified in terms of BER and Q-factor.
Methods: The analysis is done with a two-channel transmitter system with varied power, channel spacing and transmission distance, and the presence of other degradation factors (dispersion) is taken into account. The corresponding optical spectra are analysed.
Result: In this paper, the degradation effect of the non-linear impairment FWM on signal quality has been discussed. The basics involved are presented along with the mathematical model. It has been found that FWM transfers power from the original channels to newly generated waves, which may lead to power depletion and interference. The new waves generated depend on the number of wavelengths travelling in the fiber and on the channel spacing. The influence of FWM on system performance is presented in terms of BER and Q-value.
Conclusion: It has been concluded that increased transmission power and decreased channel spacing are the crucial factors increasing the magnitude of FWM, and they need to be closely monitored. On the other hand, an increased propagation distance and the presence of a certain level of dispersion decrease the FWM power. Therefore, if selected carefully, they may act as a source of FWM mitigation without requiring any external compensating device.
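For reference, the standard FWM model into which these parameters enter (textbook relations, not reproduced from the paper): the power generated at the mixing product of channels $i$, $j$, $k$ and its phase-matching efficiency are

$$P_{ijk} = \left(\frac{d_{ijk}\,\gamma\,L_{\mathrm{eff}}}{3}\right)^{2} P_i P_j P_k\, e^{-\alpha L}\,\eta,$$

$$\eta = \frac{\alpha^{2}}{\alpha^{2} + \Delta\beta^{2}}\left[1 + \frac{4\, e^{-\alpha L}\sin^{2}(\Delta\beta L/2)}{\left(1 - e^{-\alpha L}\right)^{2}}\right],$$

where $d_{ijk}$ is the degeneracy factor, $\gamma$ the nonlinear coefficient and $\Delta\beta$ the phase mismatch, which grows with dispersion and channel spacing. This makes the Conclusion's trends explicit: FWM power rises cubically with channel power, while dispersion and wider spacing suppress it through the efficiency $\eta$.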
TraCard: A Tool for Smart City Resource Management Based on Novel Framework Algorithm
Authors: Gurpreet S. Saini, Sanjay K. Dubey and Sunil K. Bharti
Background: Fuel is the most important source of energy essential for humanity, and it plays a vital role in the economy of a country. Dependency on non-renewable resources like petroleum invites manipulation of the oil market and increases in the price of these resources. Excessive use of non-renewable resources leads to global warming. These resources exist in a finite quantity in nature, which makes renewable resources appealing.
Objective: Petroleum, being a non-renewable energy source, requires monitoring and dynamic pricing based on consumption to reduce excessive usage. This dynamic price control, developed upon the rules of the novel resource-allocation algorithm in the system, will orient end users towards developing alternate resources for power generation in the field of renewable energy, like wind, hydro, solar, etc. This orientation is one of the key steps in developing a smart city with efficient resource allocation as the key factor of development.
Methods: In order to manage resources efficiently, the following processes are incorporated: provide the end user with a unique ID card (named Tra-Card); develop usage data (fuel, health resources and supply chain management) for the initial days; automate the management of resources using the novel framework algorithm (with a fuzzy-based stable marriage algorithm); develop a customer management policy as per the criteria set by governments; and generate data and rules for use at individual points of work.
Results: The process will help the government make efficient policies and produce real-time deliverables for complete control and handling of fuel management, one of the most important needs of human life. It will also allow governments to provide a solution for end-to-end charging by making fuel an item of luxury and taxing the people who use it as a luxury product.
Discussion: Fuel, being a primary requisite for human survival, needs strict monitoring against irregular usage and laundering. As fuel has become an item of luxury for some and an item of survival for others, it must be taxed as per the end user's usage. The data is experimental and developed using a multiplication factor α, which is applied to the base fuel price once the user crosses a certain range of kilometers travelled with his car, as sketched below.
Conclusion: The model proposed in the paper captures the necessity of developing an efficient method that accounts for the finiteness of fossil fuels by monitoring the distribution of fuel and its consumption. The purpose of this project is to save energy, with the aim of AI-engine-managed logistics and the goal of energy-efficient survival of the human species.
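A minimal sketch of the α-based dynamic pricing rule described in the Discussion (the threshold, α value and units are illustrative assumptions, not the paper's experimental settings):

```python
BASE_PRICE = 100.0      # base fuel price per litre (assumed units)
KM_THRESHOLD = 1000.0   # km allowance before the surcharge applies (assumed)
ALPHA = 1.25            # multiplication factor applied past the threshold

def fuel_price(km_travelled: float) -> float:
    """Dynamic per-litre price derived from the Tra-Card usage record."""
    if km_travelled <= KM_THRESHOLD:
        return BASE_PRICE
    return BASE_PRICE * ALPHA   # luxury-usage tier

for km in (500, 1000, 2500):
    print(km, "km ->", fuel_price(km))
```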
Resource Efficient Deployment and Data Aggregation in Pervasive IoT Applications (Smart Agriculture)
Authors: Saniya Zahoor and Roohie Naaz
Aims: The Internet of Things (IoT) is the evolution of the Internet, designed to sense, collect, analyze and distribute data via the IoT devices that form its core component. An important aspect of pervasive IoT applications is their resource-constrained devices, and most real-time Edge-IoT applications generate a huge amount of data, which adds to the resource consumption of these devices.
Objectives: To save resources in such applications, efficient node deployment and data aggregation techniques can be used. This paper presents the design and modeling of node deployment and data aggregation in Edge-IoT applications, along with homogeneous and heterogeneous network scenarios for smart agriculture.
Methods: For heterogeneous scenarios, we propose a clustering approach, Superior Aggregator Resource Efficient Clustering (SAREC), to address the resource constraints in pervasive Edge-IoT applications. The comparison of homogeneous and heterogeneous networks is based on the LEACH and SAREC protocols, respectively.
Results: The results show that SAREC is 25% more efficient in energy utilization and network lifetime than LEACH. The results also show that SAREC is more efficient in terms of storage and processing time as compared to LEACH.
Conclusion: Node deployment is an important aspect in determining the architecture, which plays an important role in resource management in pervasive IoT applications. The IoT nodes are distributed in a selected geographical location, and the topology of the network is pre-decided to form an Edge-IoT network in which the nodes are deployed to sense, aggregate and analyze data. This paper presents a pervasive Edge-IoT network along with mathematical modeling consisting of deployment and aggregation models. An Edge-IoT network for smart agriculture has been deployed, and resource utilization has been analysed in homogeneous and heterogeneous scenarios of the network. The resource limitations in pervasive IoT networks motivated us to develop the SAREC approach for such Edge-IoT applications, which optimizes the use of resources. The proposed SAREC protocol is compared with the LEACH protocol on the basis of energy, network lifetime, number of alive nodes, storage and processing time, and is found to be 25% more efficient in energy utilization and network lifetime, as well as more efficient in storage and processing time.
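LEACH, the homogeneous baseline used here, elects cluster heads probabilistically each round with its well-known threshold rule; nodes that have not served recently become more likely to be chosen. A minimal sketch of that election (the network size and head fraction are illustrative, not the paper's simulation settings):

```python
import random

P = 0.05            # desired fraction of cluster heads per round
N = 100             # number of sensor nodes
last_ch_round = {n: -10**9 for n in range(N)}   # last round node was head

def leach_threshold(node, rnd):
    """LEACH threshold T(n): zero while the node's epoch cooldown lasts."""
    epoch = int(1 / P)
    if rnd - last_ch_round[node] < epoch:
        return 0.0
    return P / (1 - P * (rnd % epoch))

def elect_heads(rnd):
    heads = [n for n in range(N)
             if random.random() < leach_threshold(n, rnd)]
    for h in heads:
        last_ch_round[h] = rnd
    return heads

for r in range(3):
    print(f"round {r}: {len(elect_heads(r))} cluster heads")
```

SAREC's heterogeneous variant, per the abstract, replaces this uniform rotation with superior aggregator nodes, which is where its reported energy savings come from.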
An Efficient Attribute Reduction and Fuzzy Logic Classifier for Heart Disease and Diabetes Prediction
Authors: Thippa R. Gadekallu and Xiao-Z. Gao
Introduction: Heart disease and diabetes prediction have been major research areas over the past decade. For the prediction of heart disease and diabetes, this paper proposes a model using a rough-set-based approach for attribute reduction and a fuzzy logic system for classification.
Methods: The overall prediction process is split into two main steps: 1) feature reduction using rough set theory and hybrid firefly and bat algorithms; 2) classification of the disease datasets by a fuzzy logic system. Attribute reduction is carried out by rough sets and a hybrid bat and firefly optimization algorithm.
Results & Discussion: The classification of the datasets is carried out by the fuzzy system, which is based on membership functions and fuzzy rules. The experimentation is performed on several heart disease datasets available in the UCI Machine Learning repository, namely the Hungarian, Cleveland and Switzerland datasets, and on a diabetes dataset collected from a hospital in India. The experimental results show that the proposed prediction algorithm outperforms existing approaches by achieving better accuracy, specificity and sensitivity.
An Automated Technique for Optic Disc Detection in Retinal Fundus Images Using Elephant Herding Optimization Algorithm
Authors: Jyotika Pruthi, Shaveta Arora and Kavita Khanna
Background: Glaucoma and diabetic retinopathy are known to be the prime causes of irreversible blindness in the world. However, complete vision loss can be averted through regular screening of the eye to detect the disorder at an early stage.
Objective: In this paper, we present a novel nonlinear optimization technique for the automatic detection of the optic disc boundary in retinal fundus images, satisfying the anatomical constraints, using the elephant herding optimization algorithm.
Methods: In our approach, a median filter is used for noise removal in the retinal images. The pre-processed image is passed to the metaheuristic Elephant Herding Optimization (EHO) algorithm.
Results: The proposed technique for optic disc segmentation has been applied and tested on four standard publicly available datasets, namely DRIVE, DIARETDB1, STARE and DRIONS-DB. The ground truth of the optic disc boundary has been collected from two glaucoma specialists, Expert A and Expert B. Quantitative and qualitative analyses have been done to evaluate the performance of the optic disc segmentation techniques.
Conclusion: The proposed technique for optic disc detection helps to obtain smooth boundaries in retinal fundus images. The aim has been successfully achieved by proposing an EHO-based approach to reach an optimized solution. The effectiveness of the approach has been evaluated on four benchmark datasets, and the acquired results show accuracy values of 100% for DRIVE, 100% for DIARETDB1, 99.25% for STARE and 99.99% for DRIONS-DB.
Deep Learning Based Deep Level Tagger for Malayalam
Authors: Ajees A. P. and Sumam M. Idicula
Background: POS tagging is the process of identifying the correct grammatical category of a word based on its meaning and context in a text document. It is one of the preliminary steps in the processing of natural language text. If any error happens in POS tagging, it propagates to all downstream NLP applications; hence it must be handled in a genuine and precise way.
Aim: The purpose of this study is to develop a deep-level tagger for Malayalam which indicates the semantics of nouns and verbs in a text document.
Methods: The proposed model is a two-tier architecture consisting of deep learning as well as rule-based approaches. The first tier consists of a tagging model trained on a tagged corpus of 287,000 words. To improve the depth of tagging, a suffix stripper is also used, which provides morphological features to the shallow machine learning model.
Results: The system is trained on 230,000 words and tested on 57,000 words. The tagging accuracy of the phase-1 architecture is 92.03%, and the accuracy of the phase-2 architecture is 98.11%. The overall tagging accuracy is 91.82%.
Conclusion: The exclusive feature of the proposed tagger is its depth in tagging noun words. This deep-level information can be used in various semantic processing applications on natural language text, like anaphora resolution, text summarization, machine translation, etc.
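The suffix stripper's role is to peel known inflectional suffixes off a word and expose both the stem and the suffix as morphological features for the shallow model. A minimal sketch of that idea (the suffix list is a tiny transliterated stand-in; a real system would use a curated Malayalam suffix inventory in native script):

```python
# Illustrative transliterated suffixes; longest match wins.
SUFFIXES = ["ikal", "kal", "il", "ude", "odu", "um"]

def strip_suffix(word: str):
    """Return (stem, suffix) for the longest matching suffix, if any."""
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) > len(suf) + 1:
            return word[: -len(suf)], suf
    return word, ""

def features(word: str):
    """Morphological features handed to the shallow tagging model."""
    stem, suf = strip_suffix(word)
    return {"stem": stem, "suffix": suf, "has_suffix": bool(suf)}

print(features("maramkal"))   # hypothetical inflected form
```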
On Energy-constrained Quickest Path Problem in Green Communication Using Intuitionistic Trapezoidal Fuzzy Numbers
Authors: Ashutosh Sharma, Rajiv Kumar and Rakesh K. Bajaj
Objective: A new variant of the energy-constrained Quickest Path Problem (QPP) is addressed in an intuitionistic trapezoidal fuzzy environment. Considering energy in the QPP enables the computation of paths for continuity-aware critical applications. This new variant, called the Energy-constrained Intuitionistic Trapezoidal Fuzzy Quickest Path Problem (EITFQPP), has been considered, and the computations for the quickest path have been carried out.
Methods: As one of the important features of the proposed model, the weight parameters associated with a given link, e.g., delay, capacity, energy and data, are completely unknown; therefore, based on real-life situations, they may be modeled as intuitionistic trapezoidal fuzzy numbers. An algorithm to solve the EITFQPP for transmitting the fuzzy data in the network, where the nodes are associated with a sufficient (imprecise) amount of energy for continuous data flow, has been proposed.
Results: In order to illustrate the implementation of the proposed algorithm, a numerical example on a benchmark network has been provided. The proposed algorithm successfully finds a quickest path using shortest-path computations under the energy-constrained approach.
Conclusion: The illustration through a numerical example shows the effectiveness of considering energy in the selection of the set of paths. Finally, some possible directions for future research are also discussed.
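In the crisp setting underlying this fuzzy variant, the transmission time of a path p for σ units of data is T(p) = d(p) + σ/c(p), where d(p) is the total delay and c(p) the bottleneck capacity, and the classical QPP scheme runs one shortest-delay computation per distinct capacity value. A minimal sketch of that reduction (crisp weights stand in for the paper's intuitionistic trapezoidal fuzzy numbers, and the energy constraint is omitted):

```python
import networkx as nx

def quickest_path(G, src, dst, sigma):
    """Classical QPP scheme: for each capacity value, restrict the network
    to links at least that fast, take the min-delay path, and keep the
    best total time delay + sigma/capacity."""
    best_time, best_path = float("inf"), None
    for cap in sorted({d["cap"] for _, _, d in G.edges(data=True)}):
        H = nx.DiGraph()
        H.add_edges_from((u, v, d) for u, v, d in G.edges(data=True)
                         if d["cap"] >= cap)
        if src not in H or dst not in H:
            continue
        try:
            path = nx.shortest_path(H, src, dst, weight="delay")
        except nx.NetworkXNoPath:
            continue
        t = nx.path_weight(H, path, weight="delay") + sigma / cap
        if t < best_time:
            best_time, best_path = t, path
    return best_time, best_path

G = nx.DiGraph()
G.add_edge("s", "a", delay=1, cap=5)
G.add_edge("a", "t", delay=1, cap=5)
G.add_edge("s", "t", delay=10, cap=20)
print(quickest_path(G, "s", "t", sigma=100))   # fat direct link wins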
K-means Clustering-based Radio Neutron Star Pulsar Emission Mechanism
Authors: Shubham Shrimali, Amritanshu Pandey and Chiranji L. Chowdhary
Aim: The aim of this paper is to work on a K-means clustering-based radio neutron star pulsar emission mechanism.
Background: Pulsars are a rare type of neutron star that produce radio emission. Such rays are detectable on earth and have attracted scientists because of their connection with space-time, the interstellar medium, and states of matter. As a pulsar rotates, its emission sweeps across the sky, and when it crosses our line of sight, a pattern of broadband radio emission is detected. As the rotation of the pulsar repeats, the pattern is produced periodically. Every pulsar emits a slightly different pattern, which depends entirely on its rotation. A detected signal is known as a candidate; it is averaged over many rotations of the pulsar, as determined by the length of the observation.
Objective: The main components of this radio neutron star pulsar emission mechanism are: (1) a Decision Tree classifier, (2) K-means clustering, and (3) neural networks.
Methods: The pulsar emission data was split into two sets: training data and testing data. The training data is used to train the decision tree, the K-means clustering algorithm and the neural networks, allowing them to identify which attributes (training labels) are useful for the identification of neutron star pulsar emissions.
Results: The analysis used multiple machine learning algorithms. It was concluded that using neural networks is the best method to detect pulsar emissions from neutron stars; the best result achieved was 98%, using neural networks.
Conclusion: There are many benefits of pulsar rays in different technologies. Pulsar rays can be detected from low Earth orbit; the atmosphere completely absorbs X-rays, which is why such wavelengths can only be observed from places without an atmosphere, like space. According to our results, it can be concluded that the algorithms can be successfully used for detecting pulsar signals.
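A minimal sketch of the three-model comparison on candidate features (synthetic data stands in for a pulsar candidate set such as HTRU2; the models and train/test split mirror the Methods):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for pulsar candidate statistics (8 features).
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("decision tree", DecisionTreeClassifier(random_state=0)),
                    ("neural net", MLPClassifier(max_iter=500, random_state=0))]:
    acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: {acc:.3f}")

# K-means is unsupervised: cluster, then map each cluster to its majority label.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_tr)
mapping = {c: np.bincount(y_tr[km.labels_ == c]).argmax() for c in (0, 1)}
pred = np.array([mapping[c] for c in km.predict(X_te)])
print(f"k-means: {accuracy_score(y_te, pred):.3f}")
```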
Cloud Based Extensible Business Management Suite and Data Predictions
Authors: Rajesh Dukiya and Kumaresan Perumal
Introduction: The present research work was carried out using an enhanced market prediction algorithm in a cloud-based CRM system for business. The prediction approach handles and forecasts the business with a high degree of precision and constancy, and the resulting solution is found to be compatible with any organization.
Methods: The CRM framework was proposed in order to overcome the lack of multi-module integration and functionalities. The developed application is a complete tool for managing a business over the cloud; the business owner can host this application on any PHP- and MySQL-enabled server with a user interface. An optimized regression model was developed to predict the future of the business and the relationship with the customer. For the experimental test, a sample dataset was collected from the UCI machine learning repository.
Results: The proposed prediction algorithm is an extension of the time-series-based forecasting model. The projected system was tested with one year of income and expense cluster data, and the result shows better accuracy and constancy than the traditional opportunity-creation-driven model.
Conclusion: It has been observed that the proposed cloud-based CRM framework supports organizations in achieving business predictions and maintaining healthy relations with their clients, and it has worked best for companies that must maintain customer-end relations. In addition, the customized CRM tool helps business managers and entrepreneurs to define innovative business strategies. The CRM application influences customer satisfaction on a large scale and is valuable for the supply chain system. Overall, the business model indicates that corporate organizations could gain business insights in a cost-efficient way with forecasting of upcoming sales.
Discussion: Customer Relationship Management (CRM) is a technology used by the corporate sector to manage potential customers and their experiences, from the seller-to-consumer view, through purchase and post-purchase. In this era of advanced information technology, the sales departments of multi-product and service-end businesses are struggling hard to attain a single view of the business. The key criterion for business-boosting decisions is a single dashboard interface with a description of the relevant information. The biggest challenge for businesses with umbrella views is to combine data from various places and obtain market forecasts from that description. Deficiencies in data collection and group management cause market loss or a lack of consumer interest in the product.
A Multi-Layer LSTM-Time-Density-Softmax (LDS) Approach for Protein Structure Prediction Using Deep Learning
Authors: Gururaj Tejeshwar and Siddesh G. Mat
Introduction: The primary structure of a protein is a polypeptide chain made up of a sequence of amino acids. Interactions between the atoms of the backbone cause the polypeptide to fold locally, and these folded arrangements constitute the secondary structure. Sequence alignments can be made more accurate by the inclusion of secondary structure information.
Objective: It is difficult to identify the sequence information embedded in the secondary structure of the protein. However, deep learning methods can be used to identify the sequence information in protein structures.
Methods: The scope of the proposed work is to increase the accuracy of identifying the sequence information in the primary and tertiary structures, thereby increasing the accuracy of the predicted Protein Secondary Structure (PSS). In this proposed work, homology is eliminated by a Recurrent Neural Network (RNN) based network that consists of three layers, namely a bi-directional Long Short-Term Memory (LSTM) layer, a time-distributed layer and a Softmax layer.
Results: The proposed LDS model achieves an accuracy of approximately 86% for the prediction of the three-state secondary structure of the protein.
Conclusion: The gap between the number of known protein primary structures and secondary structures is huge and increasing, and machine learning is trying to reduce it. In most previous attempts at predicting the secondary structure of proteins, the data is divided according to the homology of the proteins, which limits the efficiency of the predicting model and the inputs given to such models. Hence, in our model, homology was not considered while collecting the data for training or testing. As a result, our model is not affected by the homology of the protein fed to it, removing that restriction, so any protein can be fed to it.
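The three named layers map directly onto a small Keras model: a bidirectional LSTM over the residue sequence, a time-distributed dense layer, and a per-residue softmax over the three secondary-structure states (helix/strand/coil). The sequence length, feature width and unit counts below are assumptions, not the paper's hyperparameters:

```python
from tensorflow.keras import layers, models

SEQ_LEN = 700        # residues per (padded) protein, assumed
N_FEATURES = 21      # one-hot amino acid encoding, assumed
N_STATES = 3         # helix / strand / coil

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(64, activation="relu")),
    layers.TimeDistributed(layers.Dense(N_STATES, activation="softmax")),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit expects labels shaped (batch, SEQ_LEN, N_STATES): one
# secondary-structure state per residue position.
```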
A Review of Feature Extraction from ECG Signals and Classification/ Detection for Ventricular Arrhythmias
Authors: Rajeshwari M. R and Kavitha K. S
High cholesterol, high blood pressure, diabetes, depression, obesity, smoking, poor diet, alcohol consumption, and lack of exercise are major causes that have taken the lives of many people in the world, and all of these factors contribute to Sudden Cardiac Death (SCD). As per the surveys conducted, 1 in 4 deaths in the U.S. alone is caused by a heart attack. Ventricular Tachycardia (VT) is a deadly arrhythmia which can lead to SCD. Prediction of SCD using ECG signal derivatives is a popular area of research, with many papers published on the topic, and recently developed algorithms help to further this research. In this work, we present an overview of the ECG signal, which is a way of measuring heartbeat rate and other features, of feature extraction from the relevant regions of the ECG, and of classification algorithms for VF. We review techniques and methods based on ECG signal derivatives proposed by researchers in order to detect and predict SCD.
A Sentiment Score and a Rating Based Numeric Analysis Recommendations System: A Review
Authors: Lakshmi Holla and Kavitha K. S.
Online purchases have been increasing significantly in the web world. Some of the big giants who dominate the E-Commerce market worldwide are Amazon, Flipkart, Walmart and many others. Data generation has increased exponentially, and the analysis of this kind of dynamic data poses a major challenge. Further, facilitating consumer satisfaction by recommending the right product is another main challenge. This involves a significant number of factors, such as review ratings, normalization, early ratings, sentiment computation for sentences containing conjunctions, and categorizing the sentiment score of a given product review as positive, negative or neutral. Finally, the product with the highest positive and lowest negative score should be suggested to the end user. In this paper, we discuss the work done on rating-based numerical analysis methods, which consider the transactions done by the end user. In the second part of the paper, we present an overview of sentiment score computation and its significance in improving the efficiency of recommendation systems. The main objective of this review is to understand and analyze the different methods used to improve the efficiency of current recommendation systems, thereby enhancing the credibility of product recommendations.
Predicting Election Results from Twitter Using Machine Learning Algorithms
Authors: Yadala Sucharitha, Yellasiri Vijayalata and Valurouthu K. Prasad
Introduction: In the present scenario, social media networks play a significant role in sharing information between individuals, including information about news and events that are presently occurring worldwide. Anticipating election results is now becoming a fascinating research topic involving social media. In this article, we propose a strategy to anticipate election results by combining sub-event discovery and sentiment analysis in micro-blogs, to analyze as well as visualize the political inclinations revealed by social media users.
Methods: This approach discovers and investigates sentiment data from micro-blogs to anticipate the popularity of contestants. In general, many organizations and media houses conduct pre-poll surveys and obtain experts' perspectives to anticipate the result of an election; our model instead uses Twitter data, gathering information from Twitter and evaluating the sentiment of tweets about the contestants to anticipate the result of the election.
Results: The number of seats won by the first, second and third parties in the AP Assembly Election 2019 was determined using the Positive Sentiment Scores (PSSs) of the parties. The actual results of the election and the values predicted by the proposed model were compared, and the outcomes are very close to the actual results. We utilized machine-learning-based sentiment analysis to discover user emotions in tweets and compute a sentiment score, and then converted this sentiment score into the parties' seat scores. Comprehensive experiments were conducted to check the performance of our model on a Twitter dataset.
Conclusion: Our outcomes state that the proposed model can precisely forecast election results, with an accuracy of 94.2% over the given baselines. The experimental outcomes are very close to the actual election results; comparison with the conventional strategies used by various survey agencies for exit polls, and validation of the results, demonstrated that social media data can make predictions with better exactness.
Discussion: In the future, we would like to expand this work to other areas and nations of the world where Twitter is gaining prevalence as a political campaigning tool, and where politicians and individuals are turning towards micro-blogs for political communication and data. We would likewise extend this research to fields other than general elections, and from politicians to state organizations.
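The core conversion the Results describe, from per-party positive sentiment scores to seat counts, can be read as proportional allocation of the total seats by PSS share. A minimal sketch of that reading (the party names, scores and rounding scheme are illustrative assumptions, not the paper's data or exact method):

```python
TOTAL_SEATS = 175   # assumed assembly size for the example

# Hypothetical positive sentiment scores per party, standing in for
# the PSS values the paper computes from tweets.
pss = {"Party A": 0.52, "Party B": 0.31, "Party C": 0.17}

def seats_from_pss(pss, total_seats):
    """Allocate seats proportionally to each party's PSS share."""
    total = sum(pss.values())
    raw = {p: total_seats * s / total for p, s in pss.items()}
    seats = {p: int(v) for p, v in raw.items()}
    # Hand out leftover seats by largest fractional remainder.
    leftover = total_seats - sum(seats.values())
    for p in sorted(raw, key=lambda p: raw[p] - seats[p],
                    reverse=True)[:leftover]:
        seats[p] += 1
    return seats

print(seats_from_pss(pss, TOTAL_SEATS))
```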
CBIR-CNN: Content-Based Image Retrieval on Celebrity Data Using Deep Convolution Neural Network
Authors: Pushpendra Singh, P.N. Hrisheekesha and Vinai Kumar Singh
Background: Finding a region of interest in an image and content-based image analysis have been challenging tasks for the last two decades. With advancements in image processing and computer vision, and the huge amount of image data being generated, the Content-Based Image Retrieval (CBIR) system has attracted several researchers as a common technique to manage this huge amount of data. It is an approach to searching for user interests based on the visual information present in an image. The requirement of high computation power and huge memory limits the deployment of the CBIR technique in real-time scenarios.
Objective: In this paper, an advanced deep learning model is applied to CBIR on facial image data. We designed a deep convolutional neural network architecture where the activation of a convolution layer is used for feature representation, with max-pooling included as a feature reduction technique. Furthermore, our model uses partial feature mapping as the image descriptor to exploit the property that facial images contain repeated information.
Methods: Existing CBIR approaches primarily consider colour, texture and low-level features for mapping and localizing image segments. While deep learning has shown high performance in numerous fields of research, its application in CBIR is still very limited. The human face contains significant information for content-driven tasks and is applicable to various applications in computer vision and multimedia systems. In this research work, a deep learning-based model for Content-Based Image Retrieval (CBIR) is discussed. In CBIR there are two important tasks: 1) classification and 2) retrieval of images based on similarity. For classification, a four-convolution-layer model is proposed; for the similarity calculation, the Euclidean distance measure between images is used.
Results: The proposed model is completely unsupervised, and it is fast and accurate in comparison to other deep learning models applied to CBIR over the facial dataset. The proposed method provided satisfactory results in the experiments, and it outperforms other CNN-based models such as VGG16, Inception V3, ResNet50 and MobileNet. Moreover, the performance of the proposed model has been compared with pre-trained models in terms of accuracy, storage space and inference time.
Conclusion: The experimental analysis over the dataset has shown promising results, with more than 90% classification accuracy.
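The retrieval half of such a system reduces to: run each image through the convolutional layers, keep an intermediate activation as the descriptor, and rank the gallery by Euclidean distance to the query descriptor. A minimal numpy sketch, where extract_features is a hypothetical stand-in for the paper's four-layer CNN:

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the CNN descriptor: in the real system
    this would be a convolution-layer activation after max-pooling."""
    return image.reshape(-1)[:128].astype(float)

def retrieve(query, gallery, top_k=5):
    """Rank gallery images by Euclidean distance to the query descriptor."""
    q = extract_features(query)
    feats = np.stack([extract_features(img) for img in gallery])
    dists = np.linalg.norm(feats - q, axis=1)
    return np.argsort(dists)[:top_k]        # indices of nearest images

rng = np.random.default_rng(0)
gallery = [rng.random((64, 64)) for _ in range(100)]   # toy face images
print(retrieve(gallery[7], gallery))        # index 7 should rank first
```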
A Novel Approach to Discover Ontology Alignment
Authors: Archana Patel and Sarika Jain
Background: The rise of knowledge-rich applications has made ontologies a common reference point for linking legacy IT systems. The interoperability and integration of two disparate systems in the same domain demand the resolution of the heterogeneity problem. The major source of heterogeneity lies in the classical representation scheme of ontologies.
Objective: Our objective is to present a novel approach to discovering ontology alignments by exploiting a comprehensive knowledge structure, where every entity is represented and stored as a knowledge unit.
Methods: We created the dataset ourselves using the Protégé tool, because no dataset based on the idea of a comprehensive knowledge structure is available.
Results: The proposed approach always detects correct alignments and achieves optimal or near-optimal performance (in terms of precision) in the case of the equivalence relationship.
Conclusion: The aim of this paper is not to build a full-fledged matching/alignment tool, but to emphasize the importance of the distinctive features of an entity when performing entity matching. The matchers are therefore used as black boxes and may be chosen based on the user's preference.
Energy Efficient Clustering and Routing Algorithm for WSN
Authors: Mohit Kumar, Sonu Mittal and Amir K. Akhtar
This paper presents a novel Energy Efficient Clustering and Routing Algorithm (EECRA) for WSNs. It is a clustering-based algorithm that minimizes energy dissipation in wireless sensor networks. The proposed algorithm takes into consideration the energy conservation of the nodes through its inherent architecture and load balancing technique. In the proposed algorithm, inter-cluster transmission is not performed by the gateways; instead, a chosen member node of each cluster is responsible for forwarding data to another cluster or directly to the sink. Our algorithm eases the load on the gateways by distributing the transmission load among chosen sensor nodes, which act as relay nodes for inter-cluster communication in that round. Extensive simulations show that EECRA is better than PBCA and other algorithms in terms of energy consumption per round and network lifetime.
Objective: The novelty of this research lies in its inherent architecture and load balancing technique. The sole purpose of this clustering-based algorithm is to minimize energy dissipation in wireless sensor networks.
Methods: The algorithm is tested with 100 sensor nodes and 10 gateways deployed in a target area of 300 m × 300 m. The round assumed in this simulation is the same as in LEACH. The performance metrics used for comparison are (a) the network lifetime of the gateways and (b) the energy consumption per round by the gateways. Our algorithm gives superior results compared to LBC, EELBCA and PBCA.
Results: The simulation was performed on MATLAB version R2012b. The performance of EECRA is compared with some existing algorithms like PBCA, EELBCA and LBC. The comparative analysis shows that the proposed algorithm outperforms the other existing algorithms in terms of network lifetime and energy consumption.
Conclusion: The novelty of this algorithm lies in the fact that the gateways are not responsible for inter-cluster forwarding; instead, some sensor nodes are chosen in every cluster, based on a cost function, to act as relay nodes for data forwarding. Note that the algorithm does not address the hot-spot problem; our next endeavor will be to design an algorithm that considers it.
A Fast Parallel Classification Model for the Diagnosis of Cancer
Authors: Divya Jain and Vijendra Singh
Background: In this era of voluminous data, there is a need to process data speedily in less time. It is also essential to reduce the dimensionality of data and to apply parallel computation for classification. SVM is a prominent classification tool and is currently one of the most popular state-of-the-art models for solving various classification problems; it can make use of parallel computation to speed up its processing.
Objective: To develop a fast, promising classification system using an optimized SVM classifier with hybridized dimensionality reduction for the diagnosis of cancer.
Methods: The proposed approach comprises two stages: the first stage presents a hybrid approach to reduce the dimensionality of the cancer datasets, and the second stage presents an efficient classification method that optimizes the SVM parameters and improves accuracy. To lessen the execution time, the proposed approach uses GPUs to run different processes concurrently on machine workers.
Results: The proposed method, combining dimensionality reduction and parallel classification using an optimized SVM classifier, is found to give excellent results in terms of classification accuracy, selected features and execution time.
Conclusion: The experimental findings on benchmark datasets indicate that the proposed diagnostic model yields a significant improvement in execution time when compared to the conventional approach. The proposed model can assist doctors and medical professionals in the quick selection of the most significant risk factors for the diagnosis and prognosis of cancer.
Test Case Prioritization Based on Early Fault Detection Technique
Authors: Dharmveer K. Yadav and Sandip Dutta
Background: In regression testing, changes made to an already tested program should not affect other parts of the program; when some part of the code is modified, it is necessary to validate the modified code. In the software development life cycle, regression testing is an important and expensive activity.
Objective: Our objective is to save regression testing time by reducing the cost and time of retesting the modified code as well as the affected components of the program, and to prioritize the test cases for object-oriented programs based on early fault detection.
Methods: A new model using a fuzzy inference system is proposed for regression testing. This paper presents a fuzzy inference system based on inputs such as fault detection rate, execution time and requirement coverage. The proposed system allows the tester to prioritize test cases based on linguistic rules.
Results: In this paper, performance is measured using the Average Percentage of Faults Detected (APFD) metric. We analysed both prioritized and non-prioritized test suites using this metric.
Conclusion: We developed a fuzzy expert model which makes better decisions than other expert systems for regression testing. We conclude that prioritization of test cases will decrease regression testing time.
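The APFD metric used in the Results rewards orderings that expose faults early: for n test cases and m faults, APFD = 1 - (TF1 + ... + TFm)/(n·m) + 1/(2n), where TFi is the position of the first test revealing fault i. A minimal sketch of that standard computation (the fault matrix is illustrative):

```python
def apfd(order, fault_matrix):
    """Average Percentage of Faults Detected for a test ordering.

    order: list of test ids in execution order.
    fault_matrix: dict mapping test id -> set of faults it reveals.
    """
    faults = set().union(*fault_matrix.values())
    n, m = len(order), len(faults)
    first_pos = {}
    for pos, test in enumerate(order, start=1):
        for f in fault_matrix[test]:
            first_pos.setdefault(f, pos)   # first test exposing fault f
    return 1 - sum(first_pos[f] for f in faults) / (n * m) + 1 / (2 * n)

# Toy suite: which faults each test detects.
fm = {"t1": {1}, "t2": {1, 2, 3}, "t3": set(), "t4": {4}}
print(apfd(["t2", "t4", "t1", "t3"], fm))   # prioritized: 0.8125
print(apfd(["t3", "t1", "t2", "t4"], fm))   # unprioritized: 0.375
```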
Automation of Data Flow Class Testing Using Hybrid Evolutionary Algorithms
Authors: Neetu Jain and Rabins Porwal
Background: Software testing is a time-consuming and costly process. Recent advances in the complexity of software have drawn researchers' attention towards the automation of test data generation.
Objective: This paper focuses on the structural testing of software in the object-oriented paradigm and proposes a hybrid approach to automate class testing by applying heuristic algorithms.
Methods: The proposed algorithm performs data flow testing of classes under the all def-uses adequacy criterion by automatically generating test cases. A nested two-step methodology is applied, using a meta-heuristic genetic algorithm and its two variants (GA-variant1 and GA-variant2) to produce optimized method sequences.
Results: An experiment is performed applying the proposed algorithm to six test classes. The results suggest that the proposed approach with GA-variant1 is better than the other techniques in terms of average def-use coverage and average iterations.
A Robust Real Time Object Detection and Recognition Algorithm for Multiple Objects
Authors: Garv Modwel, Anu Mehra, Nitin Rakesh and Krishna K. Mishra
Background: An object detection algorithm scans every frame in a video to detect the objects present, which is time consuming. This becomes undesirable in real-time systems, which need to act within a predefined time constraint. To achieve a quick response, we need reliable detection and recognition of objects.
Methods: To deal with the above problem, a hybrid method is implemented that combines three algorithms to reduce the scanning work per frame. A Recursive Density Estimation (RDE) algorithm decides which frames need to be scanned; the You Only Look Once (YOLO) algorithm performs detection and recognition in the selected frames; and the detected objects are tracked through subsequent frames with the Speeded Up Robust Features (SURF) algorithm.
Results: Through the experimental study, we demonstrate that the hybrid algorithm is more efficient than two comparable algorithms of the same level. The algorithm has high accuracy and low time latency, which is necessary for real-time processing.
Conclusion: The hybrid algorithm detects with a minimum accuracy of 97 percent across all the conducted experiments, and the time lag experienced is negligible, which makes it considerably efficient for real-time applications.
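The control flow of such a hybrid is: run the expensive detector only when the frame-selection test fires, and track the last detections otherwise. A minimal sketch of that skeleton, where frame_changed, detect_yolo and track_features are hypothetical stand-ins for the RDE test, the YOLO detector and the SURF tracker:

```python
import numpy as np

def frame_changed(frame, prev, thresh=10.0):
    """Hypothetical stand-in for the RDE decision: re-detect only when
    the frame differs enough from the last processed frame."""
    return prev is None or np.mean(np.abs(frame - prev)) > thresh

def detect_yolo(frame):
    """Placeholder for a YOLO forward pass returning labeled boxes."""
    return [("object", (10, 10, 50, 50))]

def track_features(frame, detections):
    """Placeholder for SURF-based tracking of previous detections."""
    return detections

def process_stream(frames):
    prev, detections = None, []
    for frame in frames:
        if frame_changed(frame, prev):        # RDE: scan this frame?
            detections = detect_yolo(frame)   # full detection + recognition
            prev = frame
        else:
            detections = track_features(frame, detections)  # cheap tracking
        yield detections

frames = [np.full((64, 64), v, dtype=float) for v in (0, 1, 30, 31)]
for dets in process_stream(frames):
    print(dets)
```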