Recent Advances in Computer Science and Communications - Volume 14, Issue 1, 2021
An Automated Technique for Optic Disc Detection in Retinal Fundus Images Using Elephant Herding Optimization Algorithm
Authors: Jyotika Pruthi, Shaveta Arora and Kavita Khanna
Background: Glaucoma and diabetic retinopathy are known to be the prime causes of irreversible blindness in the world. However, complete vision loss can be prevented through regular screening of the eye to detect the disorder at an early stage.
Objective: In this paper, we present a novel nonlinear optimization technique that automatically detects the optic disc boundary in retinal fundus images while satisfying anatomical constraints, using the Elephant Herding Optimization algorithm.
Methods: In our approach, a median filter is used for noise removal in the retinal images. The pre-processed image is then passed to the metaheuristic Elephant Herding Optimization (EHO) algorithm.
Results: The proposed technique for optic disc segmentation has been applied and tested on four standard publicly available datasets, namely DRIVE, DIARETDB1, STARE and DRIONS-DB. The ground truth of the optic disc boundary was collected from two glaucoma specialists, Expert A and Expert B. Quantitative and qualitative analyses were carried out to evaluate the performance of the optic disc segmentation techniques.
Conclusion: The proposed technique for optic disc detection yields smooth boundaries in retinal fundus images. The aim has been achieved by proposing an EHO-based approach that obtains an optimized solution. The effectiveness of the approach has been evaluated on four benchmark datasets, with accuracy values of 100% for DRIVE, 100% for DIARETDB1, 99.25% for STARE and 99.99% for DRIONS-DB.
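The abstract does not give the exact update rules the authors use, but a minimal sketch of the standard Elephant Herding Optimization operators (clan updating, matriarch update and separation) may help illustrate the kind of metaheuristic search involved. The fitness function, clan sizes, bounds and coefficients below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def eho_minimize(fitness, dim, n_clans=5, clan_size=10, alpha=0.5, beta=0.1,
                 iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal Elephant Herding Optimization sketch (illustrative settings only)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    clans = rng.uniform(lo, hi, size=(n_clans, clan_size, dim))
    for _ in range(iters):
        for c in range(n_clans):
            fit = np.apply_along_axis(fitness, 1, clans[c])
            clans[c] = clans[c][np.argsort(fit)]       # best (matriarch) first
            matriarch = clans[c][0].copy()
            center = clans[c].mean(axis=0)
            # Clan-updating operator: move members towards the matriarch.
            r = rng.random((clan_size, dim))
            clans[c] = clans[c] + alpha * r * (matriarch - clans[c])
            # The matriarch itself moves towards the clan centre.
            clans[c][0] = beta * center
            # Separating operator: replace the worst member with a random elephant.
            clans[c][-1] = rng.uniform(lo, hi, size=dim)
            clans[c] = np.clip(clans[c], lo, hi)
    all_elephants = clans.reshape(-1, dim)
    best = min(all_elephants, key=fitness)
    return best, fitness(best)

# Toy usage: minimise the sphere function in 4 dimensions.
best, val = eho_minimize(lambda x: float(np.sum(x ** 2)), dim=4)
print(best, val)
```

In the paper the fitness would instead score candidate optic disc boundaries against image evidence and anatomical constraints; the toy sphere function above only demonstrates the search loop.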
Deep Learning Based Deep Level Tagger for Malayalam
Authors: Ajees A. P. and Sumam M. Idicula
Background: POS tagging is the process of identifying the correct grammatical category of words based on their meaning and context in a text document. It is one of the preliminary steps in the processing of natural language text. Any error in POS tagging propagates to the NLP applications built on top of it, so it must be handled in a careful and precise way.
Aim: The purpose of this study is to develop a deep-level tagger for Malayalam which indicates the semantics of nouns and verbs in a text document.
Methods: The proposed model is a two-tier architecture consisting of deep learning as well as rule-based approaches. The first tier consists of a tagging model trained on a tagged corpus of 287,000 words. To improve the depth of tagging, a suffix stripper is also used, which provides morphological features to the shallow machine learning model.
Results: The system is trained on 230,000 words and tested on 57,000 words. The tagging accuracy of the phase-1 architecture is 92.03%, and that of the phase-2 architecture is 98.11%. The overall tagging accuracy is 91.82%.
Conclusion: The distinctive feature of the proposed tagger is its depth in tagging noun words. This deep-level information can be used in various semantic processing applications of natural language text, such as anaphora resolution, text summarization and machine translation.
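The abstract describes a two-tier tagger (a neural model plus a rule-based suffix stripper). As a rough sketch of the neural tier only, the following Keras model maps padded word-index sequences to per-token tag probabilities; the vocabulary size, tag-set size, sequence length and layer widths are placeholders, not the authors' configuration, and the suffix-stripping tier is not reproduced.

```python
import tensorflow as tf

VOCAB_SIZE = 20000   # placeholder vocabulary size
N_TAGS = 40          # placeholder size of the deep-level tag set
MAX_LEN = 50         # placeholder padded sentence length

# A shallow sequence tagger: embeddings -> BiLSTM -> per-token softmax.
inputs = tf.keras.Input(shape=(MAX_LEN,))
x = tf.keras.layers.Embedding(VOCAB_SIZE, 128, mask_zero=True)(inputs)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True))(x)
outputs = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(N_TAGS, activation="softmax"))(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training would use padded word-index matrices and integer tag matrices:
# model.fit(X_train, y_train, validation_split=0.1, epochs=...)
```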
On Energy-constrained Quickest Path Problem in Green Communication Using Intuitionistic Trapezoidal Fuzzy Numbers
Authors: Ashutosh Sharma, Rajiv Kumar and Rakesh K. Bajaj
Objective: A new variant of the energy-constrained Quickest Path Problem (QPP) is addressed in an intuitionistic trapezoidal fuzzy environment. Considering energy alongside the QPP makes it possible to compute paths for continuity-aware critical applications. This new variant, called the Energy-constrained Intuitionistic Trapezoidal Fuzzy Quickest Path Problem (EITFQPP), has been formulated and the computations for the quickest path have been carried out.
Methods: An important feature of the proposed model is that the weight parameters associated with a given link, e.g., delay, capacity, energy and data, are not known precisely; based on real-life situations, they may therefore be modelled as intuitionistic trapezoidal fuzzy numbers. An algorithm is proposed to solve the EITFQPP for transmitting fuzzy data in a network whose nodes have a sufficient (imprecise) amount of energy for continuous data flow.
Results: To illustrate the implementation of the proposed algorithm, a numerical example on a benchmark network is provided. The proposed algorithm successfully provides a quickest path using shortest-path computations under the energy-constrained approach.
Conclusion: The numerical illustration shows the effectiveness of considering energy in the selection of the set of paths. Finally, some possible directions for future research are discussed.
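The abstract builds on the classical quickest path problem, where the transmission time of a path for sigma units of data is its end-to-end delay plus sigma divided by the path's bottleneck capacity. A crisp (non-fuzzy) sketch of the standard capacity-scanning approach, which solves one shortest-path problem per capacity level, is shown below; the paper's fuzzy variant would replace the crisp delay, capacity, energy and data values with intuitionistic trapezoidal fuzzy numbers and a ranking function, and add the energy constraint, none of which is reproduced here. The toy network and sigma are assumptions.

```python
import math
import networkx as nx

def quickest_path_time(G, src, dst, sigma):
    """Classical quickest path: for each capacity level, keep only edges with at
    least that capacity, take the minimum-delay path, and evaluate
    delay + sigma / capacity.  Edge attributes: 'delay', 'capacity'."""
    best = math.inf
    capacities = sorted({d["capacity"] for _, _, d in G.edges(data=True)})
    for c in capacities:
        H = nx.Graph()
        H.add_edges_from((u, v, d) for u, v, d in G.edges(data=True)
                         if d["capacity"] >= c)
        if src not in H or dst not in H or not nx.has_path(H, src, dst):
            continue
        delay = nx.dijkstra_path_length(H, src, dst, weight="delay")
        best = min(best, delay + sigma / c)
    return best

# Toy network: quickest transmission time for sigma = 100 data units.
G = nx.Graph()
G.add_edge("s", "a", delay=2, capacity=50)
G.add_edge("a", "t", delay=3, capacity=50)
G.add_edge("s", "t", delay=10, capacity=100)
print(quickest_path_time(G, "s", "t", sigma=100))
```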
K-means Clustering-based Radio Neutron Star Pulsar Emission Mechanism
Authors: Shubham Shrimali, Amritanshu Pandey and Chiranji L. Chowdhary
Aim: The aim of this paper is to study a K-means clustering-based radio neutron star pulsar emission mechanism.
Background: Pulsars are a rare type of neutron star that emit radio waves. These emissions are detectable on Earth and have attracted scientists because of their relevance to space-time, the interstellar medium, and states of matter. As a pulsar rotates, its beam sweeps across the sky, and each time it crosses our line of sight a broadband pattern of radio emission is detected. The pattern repeats periodically with the pulsar's rotation, and every pulsar produces a slightly different pattern that depends on its rotation. A detected signal is known as a candidate; it is averaged over many rotations of the pulsar and is determined by the length of the observation.
Objective: The main techniques employed in this radio neutron star pulsar emission mechanism are: (1) a Decision Tree classifier, (2) K-means clustering and (3) Neural Networks.
Methods: The pulsar emission data were split into two sets: training data and testing data. The training data are used to train the Decision Tree algorithm, K-means clustering and Neural Networks, allowing them to identify which attributes (training labels) are useful for identifying neutron star pulsar emissions.
Results: The analysis used multiple machine learning algorithms. It was concluded that neural networks are the best method for detecting pulsar emissions from neutron stars, with a best result of 98% accuracy.
Conclusion: Pulsar signals have many technological applications, and pulsar emission can be detected from low Earth orbit. X-rays, by contrast, are almost completely absorbed by the Earth's atmosphere, so X-ray observations of pulsars are limited to instruments outside the atmosphere. According to our results, the algorithm can be successfully used for detecting pulsar signals.
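A compact sketch of the comparison the abstract describes, using scikit-learn on a train/test split. The random feature matrix and labels below are stand-ins for real pulsar candidate statistics (e.g., HTRU-style integrated-profile and DM-SNR features), and the hyper-parameters are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# X: candidate feature matrix, y: 1 = pulsar, 0 = non-pulsar (toy random stand-ins).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Supervised baselines: decision tree and a small neural network.
for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("neural network", MLPClassifier(hidden_layer_sizes=(32,),
                                                   max_iter=500, random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))

# K-means is unsupervised: cluster into two groups, then map clusters to labels.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_tr)
cluster_to_label = {c: int(round(y_tr[km.labels_ == c].mean())) for c in (0, 1)}
y_pred = np.array([cluster_to_label[c] for c in km.predict(X_te)])
print("k-means", accuracy_score(y_te, y_pred))
```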
Cloud Based Extensible Business Management Suite and Data Predictions
Authors: Rajesh Dukiya and Kumaresan Perumal
Introduction: The present research work develops an enhanced market prediction algorithm within a cloud-based business CRM system. The proposed approach handles and forecasts the business with a higher degree of precision and consistency, and the resulting solution is compatible with any organization.
Methods: The CRM framework was proposed to overcome the lack of multi-module integration and functionality. The developed application is a complete tool for managing a business over the cloud; the business owner can host it on any PHP- and MySQL-enabled server. An optimized regression model was developed to predict the future of the business and its relationship with the customer. For the experimental test, a sample dataset was collected from the UCI machine learning repository.
Results: The proposed prediction algorithm is an extension of the time series-based forecasting model. The system was tested with one year of income and expense cluster data, and the results show better accuracy and consistency than the traditional opportunity-creation-driven model.
Conclusion: The proposed cloud-based CRM framework supports organizations in producing business predictions and maintaining healthy relations with their clients, and works best for companies that must maintain customer-end relations. In addition, the customized CRM tool helps business managers and entrepreneurs define innovative business strategies. The CRM application influences customer satisfaction on a large scale and is valuable for the supply chain system. Overall, the business model indicates that corporate organizations can obtain business insights in a cost-efficient way through sales forecasting.
Discussion: Customer Relationship Management (CRM) is a technology used by the corporate sector to manage potential customers and their experiences, from the seller's perspective, through purchase and post-purchase. In this era of advanced information technology, the sales departments of multi-product and service-oriented businesses struggle to attain a single view of the business. The key requirement for business-boosting decisions is a single dashboard interface presenting the relevant information. The biggest challenge for businesses with such umbrella views is to combine data from various sources and derive market forecasts from them. Deficiencies in data collection and group management cause market loss or a lack of consumer interest in the product.
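The abstract mentions an optimized regression model extending time-series forecasting for income and expense data. A minimal sketch of that general idea, forecasting the next value from lagged values with ordinary linear regression, is shown below; the lag count and the synthetic monthly series are assumptions and do not reproduce the authors' model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def make_lag_matrix(series, n_lags):
    """Build (X, y) where each row of X holds the previous n_lags values."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])
        y.append(series[t])
    return np.array(X), np.array(y)

# Toy monthly income series with trend and noise (stand-in for CRM cluster data).
rng = np.random.default_rng(1)
months = np.arange(24)
income = 100 + 5 * months + rng.normal(0, 3, size=24)

n_lags = 3
X, y = make_lag_matrix(income, n_lags)
model = LinearRegression().fit(X, y)

# One-step-ahead forecast from the last three observed months.
next_month = model.predict(income[-n_lags:].reshape(1, -1))[0]
print(f"forecast for month {len(income)}: {next_month:.1f}")
```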
A Multi-Layer LSTM-Time-Density-Softmax (LDS) Approach for Protein Structure Prediction Using Deep Learning
Authors: Gururaj Tejeshwar and Siddesh G. Mat
Introduction: The primary structure of a protein is a polypeptide chain made up of a sequence of amino acids. Interactions between the atoms of the backbone cause the polypeptide to fold into local structures, and these folded arrangements constitute the secondary structure. Sequence alignments can be made more accurate by including secondary structure information.
Objective: It is difficult to identify the sequence information embedded in the secondary structure of the protein; however, deep learning methods can be used to identify the sequence information in protein structures.
Methods: The scope of the proposed work is to increase the accuracy of identifying the sequence information in the primary and tertiary structures, thereby increasing the accuracy of the predicted Protein Secondary Structure (PSS). In the proposed work, homology is eliminated by a Recurrent Neural Network (RNN) based network consisting of three layers: a bi-directional Long Short-Term Memory (LSTM) layer, a time-distributed layer and a Softmax layer.
Results: The proposed LDS model achieves an accuracy of approximately 86% for prediction of the three-state secondary structure of the protein.
Conclusion: The gap between the number of known protein primary structures and secondary structures is huge and increasing, and machine learning is trying to reduce this gap. In most previous attempts at predicting the secondary structure of proteins, the data were divided according to the homology of the proteins, which limits the efficiency of the prediction model and the inputs that can be given to it. Hence, in our model, homology was not considered while collecting the data for training or testing. As a result, our model is not affected by the homology of the protein fed to it, removing that restriction so that any protein can be used as input.
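A rough sketch of the LSTM, time-distributed and Softmax stack the abstract names, with one-hot amino-acid input and a three-state (helix/strand/coil) output per residue. The sequence length, layer sizes and number of stacked layers are placeholders rather than the published LDS configuration.

```python
import tensorflow as tf

N_AMINO_ACIDS = 21   # 20 residues plus a padding/unknown symbol (placeholder encoding)
N_STATES = 3         # helix (H), strand (E), coil (C)
MAX_LEN = 700        # placeholder maximum sequence length

inputs = tf.keras.Input(shape=(MAX_LEN, N_AMINO_ACIDS))
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(64, return_sequences=True))(inputs)
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(64, return_sequences=True))(x)
# Per-residue dense layer applied at every time step, then a 3-way softmax.
x = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(32, activation="relu"))(x)
outputs = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(N_STATES, activation="softmax"))(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# Training would use one-hot encoded residue sequences and one-hot Q3 labels:
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=...)
```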
A Review of Feature Extraction from ECG Signals and Classification/ Detection for Ventricular Arrhythmias
Authors: Rajeshwari M. R and Kavitha K. S
High cholesterol, high blood pressure, diabetes, depression, obesity, smoking, poor diet, alcohol consumption and lack of exercise are major causes of death worldwide, and all of these factors contribute to the risk of Sudden Cardiac Death (SCD). According to surveys, roughly 1 in 4 deaths in the U.S. alone is caused by a heart attack. Ventricular Tachycardia (VT) is a deadly arrhythmia that can lead to SCD. Prediction of SCD from ECG signal derivatives is a popular area of research, with many published papers, and the continuing development of new algorithms on this topic drives further research. In this work, we present an overview of the ECG signal, which measures heart rate and other features, of feature extraction from regions of interest in the ECG, and of classification algorithms for ventricular fibrillation (VF). We review techniques and methods based on ECG signal derivatives that have been proposed for detecting and predicting SCD.
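One of the basic feature-extraction steps this kind of review covers is locating R peaks and deriving RR intervals and heart rate from them. The SciPy-based sketch below shows that step on a synthetic trace; the sampling rate, thresholds and the synthetic waveform are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 250  # sampling rate in Hz (assumed)

# Synthetic stand-in for an ECG trace: a noisy baseline with sharp "R peaks".
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
ecg = 0.05 * rng.normal(size=t.size)
ecg[np.arange(0, t.size, int(0.8 * fs))] += 1.0   # one beat every 0.8 s

# R-peak detection: peaks must exceed a height threshold and be >= 0.4 s apart.
peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))

rr_intervals = np.diff(peaks) / fs            # seconds between beats
heart_rate = 60.0 / rr_intervals.mean()       # beats per minute
print(f"detected {peaks.size} beats, mean HR {heart_rate:.1f} bpm")
```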
A Sentiment Score and a Rating Based Numeric Analysis Recommendations System: A Review
Authors: Lakshmi Holla and Kavitha K. S.
Online purchasing has been increasing significantly on the web. Some of the giants that dominate the e-commerce market worldwide are Amazon, FlipKart, Walmart and many others. Data generation has increased exponentially, and analysing this kind of dynamic data poses a major challenge. Further, facilitating consumer satisfaction by recommending the right product is another main challenge. This involves a significant number of factors, such as review ratings, normalization, early ratings, sentiment computation for sentences containing conjunctions, and categorizing the sentiment score of a given product review as positive, negative or neutral. Finally, the product with the highest positive and lowest negative score should be suggested to the end user. In this paper, we discuss work on rating-based numerical analysis methods that consider the transactions made by the end user. In the second part of the paper, we present an overview of the computed sentiment score and its significance in improving the efficiency of recommendation systems. The main objective of this review is to understand and analyse the different methods used to improve the efficiency of current recommendation systems, thereby enhancing the credibility of product recommendations.
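As a small illustration of the rating-plus-sentiment combination the review discusses, the sketch below aggregates star ratings and lexicon-based sentiment scores per product and ranks products by a combined score. The tiny lexicon, the equal weighting, and the sample reviews are assumptions made only for illustration; the surveyed systems use more sophisticated sentiment models.

```python
from collections import defaultdict

# Toy sentiment lexicon (an assumption; real systems use larger lexicons or models).
LEXICON = {"great": 1, "good": 1, "love": 1, "poor": -1, "bad": -1, "broken": -1}

def sentence_score(text):
    """Sum word polarities; > 0 positive, < 0 negative, 0 neutral."""
    return sum(LEXICON.get(w.strip(".,!").lower(), 0) for w in text.split())

reviews = [  # (product, star rating 1-5, review text) -- sample data
    ("P1", 5, "Great phone, love the camera"),
    ("P1", 2, "Battery is bad but screen is good"),
    ("P2", 4, "Good value"),
    ("P2", 1, "Arrived broken, poor packaging"),
]

scores = defaultdict(lambda: {"rating": [], "sentiment": []})
for product, rating, text in reviews:
    scores[product]["rating"].append(rating)
    scores[product]["sentiment"].append(sentence_score(text))

def combined(p):
    # Equal weighting of normalized mean rating and mean sentiment (an assumption).
    r = sum(scores[p]["rating"]) / len(scores[p]["rating"]) / 5.0
    s = sum(scores[p]["sentiment"]) / len(scores[p]["sentiment"])
    return 0.5 * r + 0.5 * s

ranked = sorted(scores, key=combined, reverse=True)
print("recommend:", ranked[0])
```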
Predicting Election Results from Twitter Using Machine Learning Algorithms
Authors: Yadala Sucharitha, Yellasiri Vijayalata and Valurouthu K. Prasad
Introduction: In the present scenario, social media networks play a significant role in sharing information between individuals, including information about news and events occurring worldwide. Anticipating election results from social media is becoming a fascinating research topic. In this article, we propose a strategy to anticipate election results by combining sub-event discovery and sentiment analysis in micro-blogs to analyse and visualize the political inclinations revealed by social media users.
Methods: The approach discovers and analyses sentiment data from micro-blogs to anticipate the popularity of contestants. Many organizations and media houses conduct pre-poll surveys and obtain experts' perspectives to anticipate the result of an election; our model instead uses Twitter data, gathering tweets about the contestants and analysing their sentiment to predict the result.
Results: The number of seats won by the first, second and third parties in the AP Assembly Election 2019 was determined using the parties' Positive Sentiment Scores (PSSs). The actual results of the election were compared with the values predicted by the proposed model, and the outcomes are very close to the actual results. We used machine learning-based sentiment analysis to discover user emotions in tweets, compute a sentiment score, and then convert this sentiment score into each party's seat score. Comprehensive experiments were conducted to evaluate the performance of our model on a Twitter dataset.
Conclusion: Our results show that the proposed model can forecast the election results with an accuracy of 94.2% over the given baselines. The experimental outcomes are very close to the actual election results, and comparison with the conventional exit-poll strategies used by various survey agencies demonstrates that social media data can forecast with better accuracy.
Discussion: In the future, we would like to extend this work to other regions and countries where Twitter is gaining prevalence as a political campaigning tool and where politicians and the public are turning to micro-blogs for political communication and information. We would also like to extend this research to fields other than general elections, and from politicians to state organizations.
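The abstract converts per-party Positive Sentiment Scores (PSSs) into seat counts but does not spell out the conversion. One simple way to do it, allocating seats in proportion to each party's share of positive sentiment with a largest-remainder rule, is sketched below; the scores, the seat total and the proportional rule itself are illustrative assumptions, not the paper's method.

```python
def seats_from_pss(pss, total_seats):
    """Allocate seats proportionally to positive sentiment scores (largest remainder)."""
    total = sum(pss.values())
    quotas = {p: total_seats * s / total for p, s in pss.items()}
    seats = {p: int(q) for p, q in quotas.items()}
    # Hand out remaining seats to the parties with the largest fractional parts.
    leftover = total_seats - sum(seats.values())
    for p in sorted(quotas, key=lambda p: quotas[p] - seats[p], reverse=True)[:leftover]:
        seats[p] += 1
    return seats

# Toy positive sentiment scores for three parties and a 175-seat assembly.
pss = {"Party A": 0.52, "Party B": 0.33, "Party C": 0.15}
print(seats_from_pss(pss, total_seats=175))
```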
CBIR-CNN: Content-Based Image Retrieval on Celebrity Data Using Deep Convolution Neural Network
Authors: Pushpendra Singh, P.N. Hrisheekesha and Vinai Kumar Singh
Background: Finding a region of interest in an image and content-based image analysis have been challenging tasks for the last two decades. With the advancement of image processing and computer vision and the huge amount of image data being generated, the Content-Based Image Retrieval (CBIR) system has attracted several researchers as a common technique for managing this data. It is an approach to searching for user interest based on the visual information present in an image. The requirement for high computation power and large memory limits the deployment of CBIR techniques in real-time scenarios.
Objective: In this paper, an advanced deep learning model is applied to CBIR on facial image data. We design a deep convolutional neural network architecture in which the activation of the convolution layer is used for feature representation, with max pooling as a feature reduction technique. Furthermore, our model uses partial feature mapping as the image descriptor, exploiting the property that facial images contain repeated information.
Methods: Existing CBIR approaches primarily consider colour, texture and other low-level features for mapping and localizing image segments. While deep learning has shown high performance in numerous fields of research, its application in CBIR is still very limited. The human face contains significant information that can be used in content-driven tasks and is applicable to various computer vision and multimedia applications. In this work, a deep learning-based model is presented for Content-Based Image Retrieval (CBIR). CBIR involves two important tasks: 1) classification and 2) retrieval of images based on similarity. For classification, a four-convolution-layer model is proposed; for similarity computation, the Euclidean distance between images is used.
Results: The proposed model is completely unsupervised, and it is fast and accurate in comparison to other deep learning models applied to CBIR on the facial dataset. The proposed method provided satisfactory experimental results and outperforms other CNN-based models such as VGG16, Inception V3, ResNet50 and MobileNet. Moreover, the performance of the proposed model has been compared with pre-trained models in terms of accuracy, storage space and inference time.
Conclusion: The experimental analysis on the dataset has shown promising results, with more than 90% classification accuracy.
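A minimal sketch of the two CBIR steps the abstract lists: a small convolutional feature extractor (here four convolution blocks with max pooling, untrained, as a stand-in for the authors' network) and Euclidean-distance retrieval over the resulting descriptors. The input size, filter counts and the random query/gallery images are assumptions; the partial feature mapping described in the paper is not reproduced.

```python
import numpy as np
import tensorflow as tf

# Four convolution blocks with max pooling, ending in a flat image descriptor.
def build_extractor(input_shape=(128, 128, 3)):
    inp = tf.keras.Input(shape=input_shape)
    x = inp
    for filters in (32, 64, 128, 128):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = tf.keras.layers.MaxPooling2D(2)(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    return tf.keras.Model(inp, x)

extractor = build_extractor()

# Toy gallery of 10 "face" images and one query (random stand-ins for real data).
gallery = np.random.rand(10, 128, 128, 3).astype("float32")
query = np.random.rand(1, 128, 128, 3).astype("float32")

gallery_feats = extractor.predict(gallery, verbose=0)
query_feat = extractor.predict(query, verbose=0)

# Euclidean-distance ranking: a smaller distance means a more similar image.
dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
print("most similar gallery indices:", np.argsort(dists)[:3])
```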
A Novel Approach to Discover Ontology Alignment
Authors: Archana Patel and Sarika Jain
Background: The rise of knowledge-rich applications has made ontologies a common reference point for linking legacy IT systems. The interoperability and integration of two disparate systems in the same domain demand the resolution of the heterogeneity problem, and the major source of heterogeneity lies in the classical representation scheme of ontologies.
Objective: Our objective is to present a novel approach to discovering ontology alignments by exploiting a comprehensive knowledge structure in which every entity is represented and stored as a knowledge unit.
Methods: We created the dataset ourselves using the Protégé tool, because no existing dataset is based on the idea of a comprehensive knowledge structure.
Results: The proposed approach always detects correct alignments and achieves optimal or near-optimal performance (in terms of precision) for the equivalence relationship.
Conclusion: The aim of this paper is not to build a full-fledged matching/alignment tool, but to emphasize the importance of the distinctive features of an entity when performing entity matching. The matchers are therefore used as black boxes and may be plugged in based on the user's choice.
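The abstract treats matchers as replaceable black boxes. Below is a deliberately simple example of such a black box: a lexical matcher that pairs entities from two ontologies by normalized label similarity and reports precision against a reference alignment. The toy ontologies, the threshold and the reference alignment are assumptions; the paper's knowledge-unit representation is not reproduced.

```python
from difflib import SequenceMatcher

def label_similarity(a, b):
    """Normalized string similarity between two entity labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def align(onto1, onto2, threshold=0.8):
    """Greedy equivalence alignment: pair each entity with its best match above the threshold."""
    alignment = set()
    for e1 in onto1:
        best = max(onto2, key=lambda e2: label_similarity(e1, e2))
        if label_similarity(e1, best) >= threshold:
            alignment.add((e1, best))
    return alignment

# Toy ontologies and a hand-made reference alignment.
onto1 = ["Person", "Publication", "Conference"]
onto2 = ["person", "publications", "meeting"]
reference = {("Person", "person"), ("Publication", "publications")}

found = align(onto1, onto2)
precision = len(found & reference) / len(found) if found else 0.0
print(found, "precision:", precision)
```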
Energy Efficient Clustering and Routing Algorithm for WSN
Authors: Mohit Kumar, Sonu Mittal and Amir K. Akhtar
This paper presents a novel Energy Efficient Clustering and Routing Algorithm (EECRA) for WSNs. It is a clustering-based algorithm that minimizes energy dissipation in wireless sensor networks, taking into consideration energy conservation of the nodes through its inherent architecture and load-balancing technique. In the proposed algorithm, inter-cluster transmission is not performed by the gateways; instead, a chosen member node of the respective cluster is responsible for forwarding data to another cluster or directly to the sink. The algorithm thus eases the load on the gateways by distributing the transmission load among chosen sensor nodes, which act as relay nodes for inter-cluster communication in that round. Extensive simulations show that EECRA is better than PBCA and other algorithms in terms of energy consumption per round and network lifetime.
Objective: The novelty of this research lies in its inherent architecture and load-balancing technique. The sole purpose of this clustering-based algorithm is to minimize energy dissipation in wireless sensor networks.
Methods: The algorithm is tested with 100 sensor nodes and 10 gateways deployed in a target area of 300 m × 300 m. A round is defined as in LEACH. The performance metrics used for comparison are (a) network lifetime of the gateways and (b) energy consumption per round by the gateways. Our algorithm gives superior results compared to LBC, EELBCA and PBCA.
Results: The simulation was performed in MATLAB version R2012b. The performance of EECRA is compared with existing algorithms such as PBCA, EELBCA and LBCA. The comparative analysis shows that the proposed algorithm outperforms the existing algorithms in terms of network lifetime and energy consumption.
Conclusion: The novelty of this algorithm lies in the fact that the gateways are not responsible for inter-cluster forwarding; instead, in every cluster some sensor nodes are chosen based on a cost function and act as relay nodes for data forwarding. Note that the algorithm does not address the hot-spot problem; our next endeavour will be to design an algorithm that takes the hot-spot problem into consideration.
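The evaluation the abstract describes rests on the first-order radio energy model used in LEACH-style analyses and on choosing one cluster member as a relay via a cost function. A sketch of both ideas follows: the energy constants are the commonly quoted LEACH values, while the cost function (residual energy over squared distance to the sink) and the toy cluster are illustrative assumptions rather than the paper's exact formulation.

```python
import math

# First-order radio model constants (commonly used LEACH values).
E_ELEC = 50e-9       # J per bit for the transceiver electronics
EPS_FS = 10e-12      # J per bit per m^2 (free-space amplifier)
EPS_MP = 0.0013e-12  # J per bit per m^4 (multipath amplifier)
D0 = math.sqrt(EPS_FS / EPS_MP)

def tx_energy(bits, d):
    """Energy to transmit `bits` over a distance of d metres."""
    if d < D0:
        return E_ELEC * bits + EPS_FS * bits * d ** 2
    return E_ELEC * bits + EPS_MP * bits * d ** 4

def rx_energy(bits):
    return E_ELEC * bits

def choose_relay(members, sink):
    """Pick the cluster member with the best residual-energy / distance-cost ratio."""
    def cost(node):
        d = math.dist(node["pos"], sink)
        return node["energy"] / (d ** 2 + 1e-9)
    return max(members, key=cost)

# Toy cluster: three member nodes and a sink at (150, 150).
members = [{"id": 1, "pos": (40, 60), "energy": 1.8},
           {"id": 2, "pos": (90, 90), "energy": 0.9},
           {"id": 3, "pos": (120, 130), "energy": 1.2}]
relay = choose_relay(members, sink=(150, 150))
print("relay node:", relay["id"],
      "energy to forward 4000 bits:",
      tx_energy(4000, math.dist(relay["pos"], (150, 150))))
```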
A Fast Parallel Classification Model for the Diagnosis of Cancer
Authors: Divya Jain and Vijendra Singh
Background: In this era of voluminous data, there is a need to process data quickly. It is also essential to reduce the dimensionality of the data and to apply parallel computation for classification. SVM is a prominent classification tool and is currently one of the most popular state-of-the-art models for solving various classification problems; parallel computation can be used to speed up its processing.
Objective: To develop a fast, promising classification system for the diagnosis of cancer, using an optimized SVM classifier with hybridized dimensionality reduction.
Methods: The proposed approach comprises two stages: the first stage is a hybrid approach to reduce the dimensionality of the cancer datasets, and the second stage is an efficient classification method that optimizes the SVM parameters and improves its accuracy. To reduce the execution time, the proposed approach uses GPUs to run different processes concurrently on machine workers.
Results: The proposed method, combining dimensionality reduction and parallel classification with an optimized SVM classifier, is found to give excellent results in terms of classification accuracy, number of selected features and execution time.
Conclusion: The experimental findings on benchmark datasets indicate that the proposed diagnostic model yields a significant improvement in execution time compared with the conventional approach. The proposed model can assist doctors and medical professionals in quickly selecting the most significant risk factors for the diagnosis and prognosis of cancer.
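A minimal sketch of the pipeline shape the abstract describes: dimensionality reduction followed by an SVM whose C and gamma are tuned with a parallel grid search. Here scikit-learn's `n_jobs` CPU parallelism and PCA stand in for the GPU workers and the hybrid feature-selection stage of the paper; the dataset, component count and parameter grid are illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("reduce", PCA(n_components=10)),      # stand-in for the hybrid reduction stage
    ("svm", SVC(kernel="rbf")),
])

param_grid = {"svm__C": [0.1, 1, 10, 100], "svm__gamma": [0.001, 0.01, 0.1]}
search = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1)   # parallel workers
search.fit(X_tr, y_tr)

print("best parameters:", search.best_params_)
print("test accuracy:", search.score(X_te, y_te))
```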
Test Case Prioritization Based on Early Fault Detection Technique
Authors: Dharmveer K. Yadav and Sandip Dutta
Background: In regression testing, changes made to an already tested program should not affect other parts of the program; when some part of the code is modified, the modified code must be re-validated. In the software development life cycle, regression testing is an important and expensive activity.
Objective: Our objective is to save regression testing time by reducing the cost and time of retesting the modified code and the affected components of the program, and to prioritize test cases for object-oriented programs based on early fault detection.
Methods: A new model using a fuzzy inference system is proposed for regression testing. This paper presents a fuzzy inference system based on inputs such as fault detection rate, execution time and requirement coverage. The proposed system allows the tester to prioritize test cases based on linguistic rules.
Results: Performance is measured using the Average Percentage of Fault Detection (APFD) metric. We analysed both prioritized and non-prioritized test suites using this metric.
Conclusion: We developed a fuzzy expert model that makes better decisions than other expert systems for regression testing. We conclude that prioritization of test cases decreases regression testing time.
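The abstract evaluates prioritized suites with the APFD metric. The standard formula, APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), where TFi is the position in the ordered suite of the first test that exposes fault i, n is the number of tests and m the number of faults, is easy to compute directly. The fault matrix below is a toy example, and the paper's fuzzy inference rules are not reproduced here.

```python
def apfd(order, fault_matrix):
    """APFD for a test order.  fault_matrix[t] is the set of faults test t detects."""
    n = len(order)
    faults = set().union(*fault_matrix.values())
    m = len(faults)
    first_pos = {}
    for pos, test in enumerate(order, start=1):
        for f in fault_matrix.get(test, set()):
            first_pos.setdefault(f, pos)
    # Faults never detected are conventionally charged position n + 1.
    tf_sum = sum(first_pos.get(f, n + 1) for f in faults)
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

# Toy fault matrix: which faults each test case exposes.
fault_matrix = {"T1": {1}, "T2": {1, 2}, "T3": {3}, "T4": set()}

print("unprioritized:", apfd(["T1", "T2", "T3", "T4"], fault_matrix))
print("prioritized:  ", apfd(["T2", "T3", "T1", "T4"], fault_matrix))
```

Running the toy example shows the prioritized order exposing all faults earlier and therefore scoring a higher APFD, which is the effect the paper's fuzzy prioritization aims for.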
Automation of Data Flow Class Testing Using Hybrid Evolutionary Algorithms
Authors: Neetu Jain and Rabins Porwal
Background: Software testing is a time-consuming and costly process. The increasing complexity of software has drawn researchers' attention towards automating the generation of test data.
Objective: This paper focuses on the structural testing of object-oriented software and proposes a hybrid approach to automate class testing using heuristic algorithms.
Methods: The proposed algorithm performs data-flow testing of classes, applying the all def-uses adequacy criterion by automatically generating test cases. A nested two-step methodology is applied, using a meta-heuristic genetic algorithm and its two variants (GA-variant1 and GA-variant2) to produce optimized method sequences.
Results: An experiment is performed applying the proposed algorithm to six test classes. The results suggest that the proposed approach with GA-variant1 is better than the other techniques in terms of average d-u coverage and average iterations.
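A stripped-down sketch of the genetic-algorithm loop such an approach builds on: chromosomes are method-call sequences for the class under test, fitness is the fraction of def-use pairs a sequence covers, and selection, crossover and mutation evolve the population. The method names, the coverage function (a stub) and the operator settings are hypothetical; the paper's nested two-step GA variants are not reproduced.

```python
import random

METHODS = ["push", "pop", "peek", "clear"]   # hypothetical methods of the class under test
SEQ_LEN, POP_SIZE, GENERATIONS = 6, 30, 50

def coverage(seq):
    """Stub fitness: fraction of def-use pairs covered by executing `seq`.
    A real implementation would instrument the class and trace definitions and uses."""
    pairs = {("push", "pop"), ("push", "peek"), ("push", "clear")}
    covered = {(a, b) for a, b in zip(seq, seq[1:]) if (a, b) in pairs}
    return len(covered) / len(pairs)

def crossover(p1, p2):
    cut = random.randrange(1, SEQ_LEN)
    return p1[:cut] + p2[cut:]

def mutate(seq, rate=0.1):
    return [random.choice(METHODS) if random.random() < rate else m for m in seq]

population = [[random.choice(METHODS) for _ in range(SEQ_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=coverage, reverse=True)
    parents = population[:POP_SIZE // 2]                      # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=coverage)
print("best sequence:", best, "coverage:", coverage(best))
```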
A Robust Real Time Object Detection and Recognition Algorithm for Multiple Objects
Authors: Garv Modwel, Anu Mehra, Nitin Rakesh and Krishna K. Mishra
Background: Object detection algorithms scan every frame of a video to detect the objects present, which is time-consuming. This becomes undesirable in real-time systems, which need to act within a predefined time constraint; a quick response requires reliable detection and recognition of objects.
Methods: To deal with this problem, a hybrid method is implemented that combines three algorithms to reduce the scanning work per frame. A Recursive Density Estimation (RDE) algorithm decides which frames need to be scanned; the You Only Look Once (YOLO) algorithm performs detection and recognition in the selected frames; and the detected objects are tracked in subsequent frames using the Speeded Up Robust Features (SURF) algorithm.
Results: Through an experimental study, we demonstrate that the hybrid algorithm is more efficient than two comparable individual algorithms, with high accuracy and low latency (which is necessary for real-time processing).
Conclusion: The hybrid algorithm detects objects with a minimum accuracy of 97 percent across all conducted experiments, and the time lag experienced is negligible, which makes it efficient enough for real-time applications.
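Of the three components, Recursive Density Estimation is the least widely known; the sketch below shows the recursive mean and scalar-product updates and the Cauchy-type density commonly used in RDE to flag frames whose content differs from what has been seen so far (a low density suggests the frame should be rescanned by the detector). The per-frame feature (a mean-colour stand-in), the threshold and the scene-change point are assumptions, not the authors' configuration.

```python
import numpy as np

class RecursiveDensityEstimator:
    """Recursive density estimation over a stream of frame feature vectors."""
    def __init__(self):
        self.k = 0
        self.mean = None
        self.scalar_product = 0.0   # running mean of ||x||^2

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.k += 1
        if self.k == 1:
            self.mean = x.copy()
            self.scalar_product = float(x @ x)
            return 1.0
        w = (self.k - 1) / self.k
        self.mean = w * self.mean + x / self.k
        self.scalar_product = w * self.scalar_product + float(x @ x) / self.k
        # Cauchy-type density: close to 1 for "typical" frames, small for novel ones.
        dist_sq = float(np.sum((x - self.mean) ** 2))
        variance = self.scalar_product - float(self.mean @ self.mean)
        return 1.0 / (1.0 + dist_sq + variance)

rde = RecursiveDensityEstimator()
DENSITY_THRESHOLD = 0.5   # assumed threshold for triggering a fresh detection pass

for frame_id in range(100):
    # Stand-in feature: mean RGB of the frame (a scene change occurs at frame 60).
    feature = [0.2, 0.3, 0.4] if frame_id < 60 else [0.9, 0.9, 0.9]
    density = rde.update(feature)
    if density < DENSITY_THRESHOLD:
        pass  # run the detector (e.g., YOLO) on this frame, then track with SURF
```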