Recent Advances in Computer Science and Communications - Volume 18, Issue 1, 2025
Graphical User Interface for Handwritten Mathematical Expression Employing RNN-based Encoder-decoder Model
Authors: Shruti Jain, Monika Bharti and Samanvaya Tripathi

Aim: Scientific, technical, and educational research domains all rely heavily on handwritten mathematical expressions. The extensive use of online handwritten mathematical expression recognition stems from the availability of powerful touch-screen computing devices and from recent advances in deep neural networks as superior sequence-recognition models.
Background: Further investigation and enhancement of these technologies are vital to tackle the challenges posed by the widespread adoption of remote learning and work arrangements following the global health crisis.
Objective: Handwritten document processing has gained increasing attention over the last decade due to notable developments in deep neural network-based computer vision and sequence recognition models, as well as the widespread proliferation of touch- and pen-enabled smartphones and tablets. People naturally write by hand in daily interactions.
Method: In this article, the authors implement handwritten mathematical expression recognition using an RNN-based encoder on the CROHME dataset. The proposed model is then validated against a CNN-based encoder and an end-to-end encoder-decoder technique, and is also validated on other datasets.
Results: The RNN-based encoder model yields 82.78% accuracy, while the CNN-based encoder model and the end-to-end encoder-decoder technique yield 81.38% and 80.73%, respectively.
Conclusion: An accuracy improvement of 1.4 percentage points was attained over the CNN-based encoder, and of 2.05 percentage points over the end-to-end encoder-decoder. The 2019 version of the CROHME dataset yields better accuracy than the other datasets.
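The encoder-decoder idea behind the recognizer can be illustrated with a toy sketch: an RNN encoder compresses a sequence of stroke feature vectors into a hidden state, and a decoder unrolls from that state to emit a symbol distribution per step. Everything here (layer sizes, vocabulary size, the plain tanh cell) is an invented minimal example, not the paper's actual architecture.

```python
import numpy as np

def rnn_encode(xs, Wx, Wh, h0):
    """Run a simple tanh RNN over the input sequence; return the final state."""
    h = h0
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

def rnn_decode(h, Wh, Wy, steps):
    """Unroll a decoder from the encoder's final state, emitting one
    softmax distribution over the symbol vocabulary per step."""
    outs = []
    for _ in range(steps):
        h = np.tanh(Wh @ h)
        logits = Wy @ h
        probs = np.exp(logits - logits.max())   # numerically stable softmax
        outs.append(probs / probs.sum())
    return outs

rng = np.random.default_rng(0)
d_in, d_h, vocab = 4, 8, 5                      # toy sizes, not from the paper
xs = [rng.normal(size=d_in) for _ in range(6)]  # 6 fake stroke feature vectors
Wx = rng.normal(scale=0.1, size=(d_h, d_in))
Wh = rng.normal(scale=0.1, size=(d_h, d_h))
Wy = rng.normal(scale=0.1, size=(vocab, d_h))

h = rnn_encode(xs, Wx, Wh, np.zeros(d_h))
dists = rnn_decode(h, Wh, Wy, steps=3)
print(len(dists), dists[0].shape)  # 3 (5,)
```

A trained model would learn the weight matrices by backpropagation and pick the argmax symbol at each decoder step; this sketch only shows the data flow.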
An Improved Aquila Optimizer with Local Escaping Operator and Its Application in UAV Path Planning
Authors: Jiahao Zhang, Zhengming Gao, Suruo Li and Juan Zhao

Background: With the development of intelligent technology, unmanned aerial vehicles (UAVs) are widely used in military and civilian fields. Path planning is the most important part of a UAV navigation system; its purpose is to find a smooth, feasible path from the start to the end.
Objective: To obtain a better flight path, this paper presents an improved Aquila optimizer combining opposition-based learning and the local escaping operator, named LEOAO, to deal with the UAV path planning problem in three-dimensional environments.
Methods: UAV path planning is modelled as a constrained optimization problem in which the cost function consists of one objective, path length, and four constraints: safe distance, flight height, turning angle, and climbing/diving angle. LEOAO is introduced to find the optimal path by minimizing the cost function, and a B-Spline is employed to represent a smooth path. The local escaping operator is used to enhance the search ability of the algorithm.
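The constrained cost function described above can be sketched as a path-length objective plus penalty terms for the four constraints. All thresholds and penalty weights below are illustrative assumptions, not values from the paper, and the path is a plain waypoint polyline rather than a B-Spline.

```python
import math

def _angle2d(u, v):
    """Angle between two 2-D vectors (horizontal projections of segments)."""
    dot = u[0] * v[0] + u[1] * v[1]
    nu, nv = math.hypot(*u), math.hypot(*v)
    if nu == 0 or nv == 0:
        return 0.0
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def path_cost(path, obstacles, d_safe=1.0, h_min=0.5, h_max=10.0,
              max_turn=math.radians(60), max_climb=math.radians(45),
              penalty=1e3):
    """Objective: total path length; each constraint violation adds a penalty."""
    cost = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    for p in path:
        if any(math.dist(p, o) < d_safe for o in obstacles):
            cost += penalty                       # safe-distance violation
        if not (h_min <= p[2] <= h_max):
            cost += penalty                       # flight-height violation
    for a, b, c in zip(path, path[1:], path[2:]):
        v1 = tuple(b[i] - a[i] for i in range(3))
        v2 = tuple(c[i] - b[i] for i in range(3))
        if _angle2d(v1[:2], v2[:2]) > max_turn:
            cost += penalty                       # turning-angle violation
    for a, b in zip(path, path[1:]):
        horiz = math.hypot(b[0] - a[0], b[1] - a[1])
        if math.atan2(abs(b[2] - a[2]), horiz) > max_climb:
            cost += penalty                       # climbing/diving violation
    return cost

# Straight, level path far from the single obstacle: cost equals its length.
path = [(0, 0, 2), (1, 0, 2), (2, 0, 2), (3, 0, 2)]
print(path_cost(path, obstacles=[(0, 5, 2)]))  # 3.0
```

An optimizer such as LEOAO would search over the waypoint (or B-Spline control point) coordinates to minimize this scalar cost.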
Results: To test the performance of LEOAO, two scenarios based on a basic terrain function are applied. Experiments show that the proposed LEOAO outperforms other algorithms, such as the grey wolf optimizer and the whale optimization algorithm, as well as the original Aquila optimizer.
Conclusion: The proposed algorithm combines opposition-based learning and the local escaping operator. Opposition-based learning accelerates convergence, while the local escaping operator effectively balances the exploration and exploitation abilities of the algorithm and improves the quality of the population. As a result, the improved Aquila optimizer obtains a better path.
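The opposition-based learning step mentioned in the conclusion has a simple standard form: for each candidate x in the box [lb, ub], compute the opposite point lb + ub - x and keep whichever of the pair has better fitness. The objective and bounds below are toy assumptions for illustration, not the paper's UAV cost function.

```python
def shifted_sphere(x):
    """Toy minimisation objective with its optimum at (1, 1)."""
    return sum((v - 1) ** 2 for v in x)

def opposition_step(population, fitness, lb, ub):
    """For each solution, keep the fitter of (x, opposite-of-x)."""
    new_pop = []
    for x in population:
        opp = [lb[i] + ub[i] - xi for i, xi in enumerate(x)]
        new_pop.append(min(x, opp, key=fitness))
    return new_pop

lb, ub = [-5, -5], [5, 5]
pop = [[4.0, 4.0], [-3.0, 1.0]]
better = opposition_step(pop, shifted_sphere, lb, ub)
print(better)  # [[4.0, 4.0], [3.0, -1.0]]
```

Here the opposite of [-3, 1] is [3, -1], which lies closer to the optimum and replaces it; [4, 4] beats its own opposite [-4, -4] and is kept. Evaluating both a point and its opposite is what gives the population-wide acceleration in convergence.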
- Computer and Information Science, Networking, Computer Science
Role of Artificial Intelligence in VLSI Design: A Review
Authors: Garima Thakur and Shruti Jain

Artificial intelligence (AI) technologies are increasingly employed across a range of industries to increase automation and improve productivity. Growing volumes of data and advancements in high-performance computing have led to a sharp rise in the application of these methods in recent years. AI has been widely applied in hardware design, notably in the design of digital and analogue integrated circuits (ICs), to address challenges such as the rising number of networked devices, aggressive time-to-market, and ever-increasing design complexity. However, very little attention has been paid to the issues and problems specific to integrated circuit design. The authors of this article review the state of the art in AI for circuit design and optimization. AI offers knowledge-based technologies that give design challenges a foundation and structure, enables machines to mimic human behaviour, and can process data in all formats, including unstructured, semi-structured, and structured. It is crucial to integrate all the features and levels of the many CAD programmes into a single, cohesive design environment. Consequently, AI-based automation helps to enhance the effectiveness and efficiency of CAD performance.
A Cost-Minimized Task Migration Assignment Mechanism in Blockchain Based Edge Computing System
Authors: Binghua Xu, Yan Jin and Lei Yu

Background: Cloud computing is usually introduced to execute computing-intensive tasks for data processing and data mining. As a supplement to cloud computing, edge computing is provided as a new paradigm to effectively reduce processing latency, energy consumption cost, and bandwidth consumption for time-sensitive or resource-sensitive tasks. To better meet such requirements during task assignment in edge computing systems, an intelligent task migration assignment mechanism based on blockchain is proposed, which jointly considers the factors of resource allocation, resource control, and credit degree.
Methods: In this paper, an optimization problem is first constructed to minimize the total cost of completing all tasks under constraints of delay, energy consumption, communication, and credit degree. Here, terminal nodes mine computing resources from edge nodes to complete task migration. A blockchain-based incentive method is provided to mobilize terminal nodes and edge nodes and to ensure the security of transactions during migration. The designed allocation rules ensure the fairness of rewards for successfully mined resources. To solve the optimization problem, an intelligent migration algorithm is proposed that utilizes a dual "actor-reviewer" neural network with inverse gradient updates, which makes the training process more stable and easier to converge.
Results: Compared with two existing benchmark mechanisms, extensive simulation results indicate that the proposed neural-network-based mechanism converges faster and achieves the minimal total cost.
Conclusion: To satisfy the delay and energy consumption requirements of computing-intensive tasks in edge computing scenarios, an intelligent, blockchain-based task migration assignment mechanism with joint resource allocation and control is proposed. To realize this mechanism effectively, a dual "actor-reviewer" neural network algorithm is designed and executed.
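The constrained assignment problem above can be sketched in a heavily simplified form: each task is migrated to the cheapest edge node that satisfies its delay, energy, and credit-degree constraints. The paper's actual mechanism adds blockchain incentives and learns the policy with a dual "actor-reviewer" network, none of which is reproduced here; all node and task numbers are invented for illustration.

```python
def assign(tasks, nodes):
    """Greedy feasible-minimum-cost assignment.
    tasks: dicts with max_delay, max_energy, min_credit.
    nodes: dicts with name, delay, energy, credit, cost."""
    plan, total = [], 0.0
    for t in tasks:
        feasible = [n for n in nodes
                    if n["delay"] <= t["max_delay"]
                    and n["energy"] <= t["max_energy"]
                    and n["credit"] >= t["min_credit"]]
        if not feasible:
            plan.append(None)                  # task cannot be migrated
            continue
        best = min(feasible, key=lambda n: n["cost"])
        plan.append(best["name"])
        total += best["cost"]
    return plan, total

nodes = [
    {"name": "edge-A", "delay": 5, "energy": 2, "credit": 0.9, "cost": 3.0},
    {"name": "edge-B", "delay": 2, "energy": 4, "credit": 0.6, "cost": 1.5},
]
tasks = [
    {"max_delay": 3, "max_energy": 5, "min_credit": 0.5},  # only edge-B fits
    {"max_delay": 6, "max_energy": 3, "min_credit": 0.8},  # only edge-A fits
]
plan, total = assign(tasks, nodes)
print(plan, total)  # ['edge-B', 'edge-A'] 4.5
```

A learned policy replaces this greedy rule when the cost landscape is dynamic, which is where the actor-reviewer training comes in.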
Extensive Review of Literature on Explainable AI (XAI) in Healthcare Applications
Artificial Intelligence (AI) techniques are widely used in the medical field for various applications, including diagnosis, prediction, and classification of diseases, drug discovery, etc. However, these techniques lack transparency in their predictions and decisions due to their black-box operation. Explainable AI (XAI) addresses these issues so that physicians can make better interpretations and decisions. This article explores XAI techniques in healthcare applications, including the Internet of Medical Things (IoMT). XAI aims to provide transparency, accountability, and traceability in AI-based healthcare systems, and it can help interpret the predictions or decisions made in medical diagnosis systems, medical decision support systems, smart wearable healthcare devices, etc. Nowadays, XAI methods are utilized in numerous medical applications over the Internet of Things (IoT), such as medical diagnosis, prognosis, and explanation of AI models; hence, XAI in the context of IoMT and healthcare has the potential to enhance the reliability and trustworthiness of AI systems.
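One of the simplest model-agnostic explanation techniques in the XAI family surveyed above is permutation importance: shuffle one feature at a time and measure how much the model's error grows. The "model" and data below are synthetic assumptions for illustration, not any specific method or dataset from the literature.

```python
import numpy as np

def permutation_importance(model, X, y, rng):
    """Error increase caused by shuffling each feature column in turn."""
    base = np.mean((model(X) - y) ** 2)        # baseline mean squared error
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])                  # break feature j's association
        scores.append(np.mean((model(Xp) - y) ** 2) - base)
    return np.array(scores)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3 * X[:, 0]                        # only feature 0 matters in this toy data
model = lambda X: 3 * X[:, 0]          # a model that has learned exactly that
imp = permutation_importance(model, X, y, rng)
```

Here `imp[0]` is large while `imp[1]` and `imp[2]` are zero, matching the data-generating process; in a clinical setting the same idea tells a physician which inputs actually drive a prediction.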
A Prospective Metaverse Paradigm Based on the Reality-Virtuality Continuum and Digital Twins
Authors: Abolfazl Zare and Aliakbar Jalali

Decades after the introduction of the concept of virtual reality, the expansion and significant advances of technologies and innovations such as 6G, edge computing, the Internet of Things, robotics, artificial intelligence, blockchain, quantum computing, and digital twins have put the world on the cusp of a new revolution. By moving through the three stages of digital twin, digital native, and finally surrealist, the metaverse has created a new vision of the future of human and societal life, such that we are likely to face the next generation of societies (perhaps society 6) in the not-too-distant future. Until then, however, the reality is that the metaverse is still in its infancy, perhaps where the internet was in 1990: there is still no single definition, few studies have been conducted, there is no comprehensive and complete paradigm or clear framework, and, owing to the large financial stakes of the technology giants, most studies have focused on profitable areas such as gaming and entertainment. The motivation and purpose of this article are to introduce a prospective metaverse paradigm based on the revised reality-virtuality continuum and to provide a new supporting taxonomy with the three dimensions of interaction, immersion, and extent of world knowledge, in order to develop and strengthen the theoretical foundations of the metaverse and help researchers. Since there is still no comprehensive and agreed-upon conceptual framework for the metaverse, the authors review the research literature, identify the important components of its technological building blocks, especially digital twins, and present a new concept called meta-twins, yielding a prospective conceptual framework based on the revised reality-virtuality continuum with a new supporting taxonomy.