IECE Transactions on Sensing, Communication, and Control, Volume 2, Issue 2, 2025: 66-74

Research Article | 15 April 2025
Smart Ground Robot for Real-Time Detection of Tomato Diseases Using Deep Learning and IoT Technologies
1 Electronic Engineering Department, Sir Syed University of Engineering and Technology (SSUET), Karachi, Pakistan
2 Department of Underwater Acoustic Engineering, Harbin Engineering University, Harbin 150001, China
3 Department of Electrical, Electronics and Computer Science Engineering, University of Catania, 95125 Catania, Italy
* Corresponding Author: Faizan Zahid, [email protected]
Received: 03 October 2024, Accepted: 26 March 2025, Published: 15 April 2025  
Abstract
This study presents an intelligent automated system for real-time detection and classification of tomato diseases using a Convolutional Neural Network (CNN) integrated within an Internet of Things (IoT) based unmanned ground vehicle (UGV). The CNN was trained and evaluated on a dataset of over 20,000 tomato leaf images spanning ten distinct diseases: Late Blight, Early Blight, Septoria Leaf Spot, Tomato Yellow Leaf Curl Virus, Bacterial Spot, Target Spot, Tomato Mosaic Virus, Leaf Mold, Spider Mites (Two-Spotted Spider Mite), and Powdery Mildew, plus a healthy-leaf class. The developed CNN architecture, optimized for lightweight deployment on edge devices such as the Raspberry Pi 4, achieved an overall accuracy of approximately 83%, with notable variations across classes in precision, recall, and F1-score. High precision scores (above 80%) were obtained for diseases such as Bacterial Spot, Late Blight, and Tomato Yellow Leaf Curl Virus, while moderate scores for diseases exhibiting subtle visual symptoms underscored areas for future refinement. The UGV autonomously navigates tomato fields, captures high-resolution images of leaves, and performs on-site real-time disease classification, significantly reducing the labor, human error, and time associated with traditional manual inspections. Comprehensive quantitative analyses, including confusion matrices and visual assessments of classified samples, validate the practical viability and robustness of the proposed system, although certain misclassifications highlight opportunities to enhance training data diversity and model generalizability in future work. The integration of deep learning and IoT technologies demonstrated in this study substantially advances precision agriculture, improving disease management practices and promoting sustainable agricultural productivity.

Keywords
automated systems
agricultural robotics
internet of things (IoT)
deep learning
tomato disease detection
Raspberry Pi 4

1. Introduction

The global agriculture industry is undergoing significant transformations driven by the rapid development of the Internet of Things (IoT) and advanced technological integration. IoT has enabled substantial improvements in agricultural efficiency, cost reductions, service accessibility, and operational management, even in remote or resource-limited regions [1]. Particularly in precision agriculture, IoT applications such as real-time monitoring, greenhouse automation, and predictive analytics have gained momentum, allowing for optimized and sustainable cultivation of diverse crops, including vineyards, bananas, olives, and corn [2]. However, despite the evident benefits, several challenges remain, especially related to network infrastructure, data management, and the adoption of open-source IoT technologies in agriculture [3].

The need for automation in agriculture has intensified due to rising global food demands, population growth, and urban migration, which have reduced available agricultural workforce and land area [4]. This scenario has propelled the agricultural sector towards increased reliance on artificial intelligence (AI) and robotic systems. Recent advancements in machine learning (ML), especially deep learning (DL), have significantly improved automation in tasks such as disease detection, weed-crop discrimination, fruit counting, and land cover classification [5]. Convolutional neural networks (CNNs), a particular DL architecture, have consistently demonstrated superior accuracy over traditional ML methods like Random Forest (RF) and Support Vector Machine (SVM) across various agricultural applications [6].

Despite these advancements, agricultural practices in developing countries remain heavily dependent on labor-intensive methods requiring continuous manual monitoring, highlighting a critical gap in technological adoption and automation capabilities. To address this gap, recent studies propose IoT-driven agricultural systems that automate critical processes based on environmental conditions, providing real-time feedback directly to farmers through smartphones and cloud platforms [7]. Moreover, the rise of IoT-enabled smart agriculture ecosystems, incorporating technologies like wireless sensor networks, big data analytics, and cloud computing, signifies a broader trend towards sustainable agricultural practices designed to address food security challenges arising from population growth, resource scarcity, and environmental unpredictability [8].

The urgency to mitigate the impacts of agricultural diseases, which substantially threaten global food production, further accentuates the need for technological solutions. Plant viruses alone account for significant economic losses and pose risks to environmental health, food security, and supply chain stability [9]. Consequently, developing autonomous, real-time disease detection systems employing advanced CNN models embedded in IoT-based robotic platforms presents a promising direction. These systems offer enhanced accuracy, operational efficiency, and scalability, making disease management practices more robust and sustainable, particularly for high-value and vulnerable crops like tomatoes [10].

Automated plant disease detection, particularly leaf disease identification, has increasingly emerged as an essential area of research due to its potential economic impact on agricultural productivity [11]. Techniques leveraging multispectral and hyperspectral imaging combined with advanced machine learning (ML) and deep learning (DL) algorithms, such as Convolutional Neural Networks (CNNs), ResNet, and VGG architectures, have proven effective in accurately identifying leaf diseases across various plant species. These computational methods typically outperform traditional ML classifiers, including Support Vector Machines (SVMs) and Random Forests (RF), offering significant accuracy improvements and demonstrating robust performance metrics like precision, recall, and F1-score [12].

Early and precise detection of crop diseases using DL methods has shown remarkable success in real-time agricultural applications. Notably, advanced CNN models, when integrated with powerful embedded hardware platforms, have achieved impressive real-world classification accuracies, validating their suitability for field deployment [13]. Despite these technological advancements, several open issues persist, including model generalizability across diverse crops, computational efficiency for real-time inference, and the scarcity of comprehensive, publicly available datasets. Addressing these gaps requires datasets that encompass various crop types, disease classes, and environmental conditions to enhance the robustness and effectiveness of disease detection models [14].

In response to the necessity for more efficient computational performance suitable for edge computing environments, researchers have developed lightweight CNN architectures. For instance, the VGG-ICNN model has demonstrated outstanding results. Its significantly reduced number of parameters positions this architecture as an optimal solution for real-time disease detection tasks in resource-constrained agricultural environments [15]. Concurrently, advancements in agricultural autonomous navigation technologies, tailored for the complexity and unpredictability of farming environments, underline the importance of integrating precise, efficient navigation systems into automated agricultural equipment. Future research in this domain highlights key trends such as multi-dimensional perception, selective autonomous navigation technologies, multi-agent cooperative systems, and fault diagnostic capabilities, all essential to enhancing the practicality and reliability of autonomous agricultural vehicles [16].

IoT-based solutions offer promising approaches to tackling these challenges through innovative applications such as smart irrigation, precision farming, crop health monitoring, pest management, agricultural drones, and supply chain management [17]. Nevertheless, widespread adoption of IoT in agriculture necessitates addressing fundamental issues related to connectivity, scalability, data privacy, cost management, and enhancing awareness among stakeholders. Effective collaboration between farmers, technology providers, academia, and policymakers is crucial to unlocking the full potential of IoT-driven agricultural practices, ultimately contributing to sustainable agricultural productivity and resilience amidst global challenges such as climate change and resource scarcity [18].

Table 1 Key hardware components and specifications.
Component | Specification | Purpose
Raspberry Pi 4 Model B | ARM Cortex-A72 Quad-Core 1.5 GHz CPU, 4GB RAM, Bluetooth 5.0, Dual-band Wi-Fi (2.4 GHz/5 GHz) | CNN inference, image capturing, and cloud communication
ESP32 Microcontroller | Dual-core, 240 MHz, Integrated Wi-Fi and Bluetooth (BLE) | Motor control, sensor interfacing, obstacle avoidance
Web Camera | A4Tech 925HD, Resolution: 1920×1080 pixels, 30 fps | Capturing high-resolution leaf images
Ultrasonic Sensor | Waterproof, Detection range: up to 60 cm | Real-time obstacle detection
8-Channel Relay Module | 8 separate relay switches, 5V DC operating voltage | Control and isolation of DC motors
DC Motors | 8 motors, Operating voltage: 12V DC | Robot mobility and navigation
Buck Converters | Input: 12V DC, Output: 5V DC, Max Current: 3A | Voltage regulation and power management
DC Battery | 12V, 7Ah rechargeable battery | Primary power source for field operations
Mechanical Frame and Tires | Custom-made frame, 8 robotic tires, and 2 shock absorbers | Robust navigation through agricultural fields

This study proposes an autonomous, intelligent system leveraging CNNs integrated with IoT devices (a Raspberry Pi 4-powered unmanned ground vehicle, UGV) for real-time tomato disease detection. The CNN model is trained on a publicly available dataset consisting of over 20,000 images capturing ten common tomato diseases and healthy plants [19]. The developed system achieves robust disease classification accuracy and near real-time performance suitable for practical agricultural deployment, addressing the previously mentioned limitations of traditional methods.

The contributions of this work include:

  • Designing a low-cost Unmanned Ground Vehicle (UGV) specifically for tomato disease detection tasks, enabling accessibility and practical feasibility for small-scale farmers.

  • Developing a computationally efficient and robust Convolutional Neural Network (CNN) model capable of accurately identifying and classifying major tomato leaf diseases.

  • Integrating IoT and edge-computing technologies by deploying the developed CNN model onto a Raspberry Pi 4-based UGV, enabling near real-time disease identification and classification with minimal computational resources.

  • Successfully identifying and classifying major tomato diseases, thus demonstrating the system's potential applicability in precision agriculture.

The remainder of this paper is structured as follows: Section 2 presents detailed methodology and system architecture; Section 3 reports experimental results and discussions, followed by concluding remarks and suggestions for future work in Section 4.

2. Methodology

This section details the systematic approach taken to develop, train, evaluate, and deploy the intelligent automated tomato disease detection system. The methodology covers hardware description and integration, dataset description, CNN model architecture, model training, performance evaluation metrics, and system implementation.

2.1 Hardware Description

A low-cost autonomous UGV was designed featuring a Raspberry Pi 4 as the computational core, an ESP32 microcontroller for motor and sensor control, ultrasonic distance sensors for obstacle avoidance, and a high-resolution web camera (A4Tech 925HD) for image capture. Table 1 summarizes the key hardware components and their specifications.

Figure 1 presents a block diagram showing the relationship between different hardware components. The block diagram depicts a robotic system integrating various sensors and control units. At the core, an ESP32 microcontroller connects to a relay that controls motors, facilitating motion. The ESP32 receives inputs from an ultrasonic sensor for distance measurement and two cameras for visual feedback. Power is supplied to the entire system, including the ESP32, cameras, and Raspberry Pi 4. The Raspberry Pi 4 processes additional camera data, enhancing visual and computational capabilities. This configuration allows the robot to navigate, capture real-time data, and interact with its environment through sensor feedback and motor control, ensuring precise operation.

fig1.jpg
Figure 1 Block diagram of the robot.
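
To make the control flow concrete, the following MicroPython sketch illustrates the kind of obstacle-avoidance loop the ESP32 could run: trigger the ultrasonic sensor, convert the echo pulse to a distance, and de-energize the drive relays when an obstacle is too close. The paper does not publish firmware, so the pin assignments and the 30 cm stopping threshold are assumptions for illustration.

```python
# Illustrative MicroPython sketch of the ESP32 obstacle-avoidance loop.
# Pin numbers and the 30 cm stop threshold are assumptions.
from machine import Pin, time_pulse_us
import time

TRIG = Pin(5, Pin.OUT)          # ultrasonic trigger pin (assumed)
ECHO = Pin(18, Pin.IN)          # ultrasonic echo pin (assumed)
DRIVE_RELAY = Pin(23, Pin.OUT)  # relay channel gating the drive motors (assumed)

def distance_cm():
    """Fire a 10 us trigger pulse and convert the echo width to centimeters."""
    TRIG.off()
    time.sleep_us(2)
    TRIG.on()
    time.sleep_us(10)
    TRIG.off()
    us = time_pulse_us(ECHO, 1, 30000)  # give up after 30 ms (no echo)
    return us / 58.0 if us > 0 else None

while True:
    d = distance_cm()
    # Cut motor power below 30 cm; the sensor's stated range is up to 60 cm.
    DRIVE_RELAY.value(0 if (d is not None and d < 30) else 1)
    time.sleep_ms(100)
```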

Figure 2 shows the detailed structural design of the robot, which is purpose-built for tomato disease detection in agricultural fields. The 8x8 robotic chassis emphasizes stability and adaptability, allowing the vehicle to negotiate field terrain with ease. Shock absorbers and dampers cushion the impact of uneven ground, enabling smooth mobility that protects the delicate onboard equipment vital for disease diagnosis. Small tires improve maneuverability and minimize soil compaction, allowing the robot to pass precisely between rows of tomato plants.

fig2.jpg
Figure 2 Structure of the robot.

2.2 Dataset Description

The CNN model was trained and evaluated on a publicly available Tomato Leaf Disease Classification dataset, comprising over 20,000 labeled images across ten distinct tomato diseases and one healthy class: Late Blight, Early Blight, Septoria Leaf Spot, Tomato Yellow Leaf Curl Virus, Bacterial Spot, Target Spot, Tomato Mosaic Virus, Leaf Mold, Spider Mites (Two-Spotted Spider Mite), Powdery Mildew, and Healthy Leaves. Images were captured under diverse environmental conditions, including both controlled laboratory settings and natural outdoor environments. Dataset images were resized to a uniform resolution of 128×128 pixels for computational efficiency and consistency in training and inference.

2.3 CNN Model Architecture

A lightweight CNN architecture was developed for computational efficiency suitable for deployment on resource-constrained edge devices. The model consists of four convolutional layers with increasing filter depths (32, 64, 128, and 256 filters, respectively), each followed by batch normalization and a ReLU activation. A max pooling layer after each convolutional block provides dimensionality reduction. A global average pooling layer then reduces model complexity and mitigates overfitting, followed by fully connected dense layers with dropout regularization and a final dense layer with softmax activation for multi-class classification into 11 categories. Figure 3 presents the detailed CNN model architecture summary, highlighting layer types, dimensions, and parameter counts.

fig3.jpg
Figure 3 Model summary.
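
For clarity, a minimal Keras sketch of the described architecture follows. The filter depths and layer order match the text; the kernel size, dense-layer width, and dropout rate are not reported in the paper and are assumed here.

```python
# A minimal Keras sketch of the described CNN. Kernel size (3x3), dense
# width (128), and dropout rate (0.5) are assumptions, not reported values.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes: int = 11) -> tf.keras.Model:
    model = models.Sequential([layers.Input(shape=(128, 128, 3))])
    # Four convolutional blocks: Conv -> BatchNorm -> ReLU -> MaxPool.
    for filters in (32, 64, 128, 256):
        model.add(layers.Conv2D(filters, 3, padding="same"))
        model.add(layers.BatchNormalization())
        model.add(layers.Activation("relu"))
        model.add(layers.MaxPooling2D())
    # Global average pooling keeps the head small and curbs overfitting.
    model.add(layers.GlobalAveragePooling2D())
    model.add(layers.Dense(128, activation="relu"))  # width assumed
    model.add(layers.Dropout(0.5))                   # rate assumed
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model
```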

2.4 Training and Evaluation Procedure

The dataset was split into 80% training and 20% validation subsets. The CNN was trained for 15 epochs using categorical cross-entropy loss and the Adam optimizer (learning rate 0.001). Model performance was evaluated using comprehensive quantitative metrics, including accuracy, precision, recall, F1-score, and a confusion matrix for detailed class-wise analysis. Inference speed was benchmarked against MobileNetV2 to validate real-time performance.
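
A hedged sketch of this training setup in TensorFlow/Keras is shown below. The 80/20 split, loss, optimizer, learning rate, and epoch count follow the text; the dataset path, batch size, and seed are assumptions, and build_model is the architecture sketch above.

```python
# Hedged sketch of the stated training setup: 80/20 split, categorical
# cross-entropy, Adam (learning rate 0.001), 15 epochs.
import tensorflow as tf

train_ds, val_ds = tf.keras.utils.image_dataset_from_directory(
    "tomato_leaves/",        # hypothetical directory of class subfolders
    validation_split=0.2,    # 80% training / 20% validation
    subset="both",
    seed=42,                 # assumed
    image_size=(128, 128),
    batch_size=32,           # assumed
    label_mode="categorical",
)

model = build_model(num_classes=11)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
history = model.fit(train_ds, validation_data=val_ds, epochs=15)
```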

2.5 System Operation Workflow

Figure 4 presents the operational workflow of the system. The system enables semi-autonomous navigation through agricultural fields, capturing high-resolution tomato leaf images, which are resized to 128×128 pixels onboard. A real-time CNN model performs disease classification onsite, with the results uploaded to Firebase cloud storage. These classification results are then displayed via user-friendly mobile or web interfaces, ensuring accessibility for end-users.

fig4.jpg
Figure 4 Operational flowchart.
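
The on-board loop might look like the following sketch, assuming OpenCV for capture, a saved Keras model, and the firebase-admin SDK for pushing results to the cloud. File names, the database path, the class-name ordering, and the capture cadence are illustrative assumptions, not the authors' code.

```python
# Illustrative on-board capture -> classify -> upload loop.
import time
import cv2
import numpy as np
import tensorflow as tf
import firebase_admin
from firebase_admin import credentials, db

# Order must match the class indices used at training time (assumed here).
CLASS_NAMES = [
    "Bacterial Spot", "Early Blight", "Healthy", "Late Blight", "Leaf Mold",
    "Powdery Mildew", "Septoria Leaf Spot",
    "Spider Mites (Two-Spotted Spider Mite)",
    "Target Spot", "Tomato Mosaic Virus", "Tomato Yellow Leaf Curl Virus",
]

cred = credentials.Certificate("serviceAccount.json")  # hypothetical key file
firebase_admin.initialize_app(
    cred, {"databaseURL": "https://example-project.firebaseio.com"}  # hypothetical
)

model = tf.keras.models.load_model("tomato_cnn.keras")  # hypothetical model file
cap = cv2.VideoCapture(0)  # A4Tech web camera

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    # Resize to the 128x128 input the CNN was trained on and normalize.
    img = cv2.cvtColor(cv2.resize(frame, (128, 128)), cv2.COLOR_BGR2RGB) / 255.0
    probs = model.predict(img[np.newaxis].astype("float32"), verbose=0)[0]
    label = CLASS_NAMES[int(np.argmax(probs))]
    # Push the classification result to Firebase for the mobile/web interface.
    db.reference("detections").push(
        {"label": label, "confidence": float(probs.max()), "ts": time.time()}
    )
    time.sleep(1.0)  # capture cadence assumed
```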

3. Results and Discussion

This section presents a detailed evaluation of the proposed intelligent automated tomato disease detection system, focusing on the developed CNN model's performance metrics and the robot's operational efficacy.

The proposed lightweight CNN model demonstrated strong classification capabilities, achieving an overall accuracy of approximately 83%. The performance was further comprehensively assessed using precision, recall, and F1-score, providing detailed class-wise insights into the model's effectiveness. Figure 5 shows the precision, recall, and F1-score for each class.

fig5.jpg
Figure 5 Precision, Recall, and F1 Score.

Specifically, the CNN model achieved precision exceeding 80% for diseases such as Bacterial Spot, Late Blight, and Tomato Yellow Leaf Curl Virus, demonstrating robust performance in distinguishing clearly visible diseases. However, relatively moderate performance was observed for diseases such as Spider Mites Two-Spotted Spider Mite and Target Spot, highlighting challenges in recognizing subtle visual symptoms. These results underscore the need for further dataset diversification and targeted model enhancements.

Figure 6 illustrates the training accuracy and loss curves over 15 epochs, showing stable convergence behavior. The model accuracy steadily increased, whereas the loss consistently decreased, indicating successful learning and minimal overfitting.

fig6.jpg
Figure 6 Training accuracy and loss.

To gain deeper insight into model performance, a normalized confusion matrix was evaluated (Figure 7). It revealed robust predictions for healthy leaves, with minimal misclassification; however, significant confusion was noted between visually similar disease classes (e.g., Leaf Mold and Target Spot), guiding areas for future improvement.

fig7.jpg
Figure 7 Confusion matrix.
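
Class-wise figures of this kind can be reproduced with scikit-learn, as sketched below (library choice assumed); labels and predictions are collected in a single pass over the validation set so their ordering stays aligned.

```python
# Sketch: normalized confusion matrix and per-class precision/recall/F1,
# reusing model, val_ds, and CLASS_NAMES from the sketches above.
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

y_true, y_pred = [], []
for images, labels in val_ds:
    y_true.append(np.argmax(labels.numpy(), axis=1))
    y_pred.append(np.argmax(model.predict(images, verbose=0), axis=1))
y_true, y_pred = np.concatenate(y_true), np.concatenate(y_pred)

# Row-normalized confusion matrix: each row sums to 1 over predicted classes.
cm = confusion_matrix(y_true, y_pred, normalize="true")
print(classification_report(y_true, y_pred, target_names=CLASS_NAMES, digits=3))
```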

The real-time applicability was evaluated by measuring the inference time per image. The proposed lightweight CNN model exhibited an average inference time of 0.101 seconds per image, outperforming the MobileNetV2 benchmark, which recorded 0.111 seconds per image. Hence, the designed model demonstrated approximately 9% faster inference, confirming its suitability for near real-time field deployment.
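
A per-image latency measurement of this kind can be obtained as follows; the warm-up step and sample count are assumptions, since the paper does not describe its exact timing protocol.

```python
# Sketch of a per-image inference-latency benchmark.
import time
import numpy as np

def mean_inference_time(model, n: int = 100) -> float:
    """Average seconds per single-image forward pass."""
    x = np.random.rand(1, 128, 128, 3).astype("float32")
    model.predict(x, verbose=0)  # warm-up (graph tracing, memory allocation)
    t0 = time.perf_counter()
    for _ in range(n):
        model.predict(x, verbose=0)
    return (time.perf_counter() - t0) / n
```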

Figure 8 presents sample visual classification outputs obtained from the CNN model. The model produced accurate classifications for visually distinct diseases, as shown, whereas subtle cases posed challenges, aligning closely with confusion matrix results.

fig8.jpg
Figure 8 Classified images.

The low-cost UGV platform demonstrated effective operational capabilities. Equipped with an 8-channel relay module for motor control and a waterproof ultrasonic sensor for obstacle detection, the robot navigated test environments smoothly, maintaining stable performance in controlled field trials.

The use of energy-efficient hardware, such as the Raspberry Pi 4 and buck converters, kept power consumption low during robotic operation [20], enhancing system sustainability and economic viability for small-scale farmers.

Despite promising results, limitations exist, including occasional misclassifications between visually similar disease classes. Future improvements involve enhancing dataset diversity, optimizing CNN architecture further, and conducting more extensive field trials for improved reliability and generalizability.

4. Conclusion

This study presented an intelligent, automated tomato disease detection system combining a lightweight Convolutional Neural Network (CNN) with a low-cost IoT-based unmanned ground vehicle (UGV). The developed CNN model was trained on a publicly available dataset of over 20,000 tomato leaf images and successfully classified ten common tomato diseases and healthy leaves with an overall accuracy of approximately 83%. Comprehensive quantitative evaluations, including precision, recall, F1-score, and confusion matrix analysis, demonstrated strong performance for diseases with distinct visual features (precision exceeding 80%), while highlighting classification challenges among diseases exhibiting subtle symptom variations. Real-time applicability was demonstrated by achieving an average inference time of 0.101 seconds per image, outperforming the benchmark MobileNetV2 model by approximately 9%. The proposed low-cost UGV design, integrating an 8-channel relay module and waterproof ultrasonic sensors for reliable obstacle avoidance and autonomous navigation, further enhances practical deployment feasibility. Despite promising results, limitations, such as occasional misclassification of visually similar diseases, were identified. Future improvements will include enhancing dataset diversity, refining CNN architectures, and conducting more extensive field evaluations to enhance reliability and generalizability. Overall, the integration of deep learning and IoT technologies demonstrated here significantly contributes toward practical precision agriculture solutions, promoting sustainable disease management practices and enhancing economic outcomes in agriculture.


Data Availability Statement
Data will be made available on request.

Funding
This work received no funding.

Conflicts of Interest
The authors declare no conflicts of interest.

Ethical Approval and Consent to Participate
Not applicable.

References
  1. Naseer, A., Shmoon, M., Shakeel, T., Ur Rehman, S., Ahmad, A., & Gruhn, V. (2024). A systematic literature review of the IoT in agriculture: Global adoption, innovations, security privacy challenges. IEEE Access, 12, 60986-61021.
  2. Dobre, A. E., Drăghici, B. G., Ciobanu, B., Stan, O. P., & Miclea, L. C. (2024). Smart agriculture: Farm management through IoT with predictive and precision monitoring. In 2024 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR) (pp. 1-6). IEEE.
  3. Kassim, M. R. M. (2020). IoT applications in smart agriculture: Issues and challenges. In 2020 IEEE Conference on Open Systems (ICOS) (pp. 19-24). IEEE.
  4. Al-Maruf, A., Pervez, A. K., Sarker, P. K., Rahman, M. S., & Ruiz-Menjivar, J. (2022). Exploring the factors of farmers' rural–urban migration decisions in Bangladesh. Agriculture, 12(5), 722.
  5. Qu, H. R., & Su, W. H. (2024). Deep learning-based weed–crop recognition for smart agricultural equipment: A review. Agronomy, 14(2), 363.
  6. Saleem, M. H., Potgieter, J., & Arif, K. M. (2021). Automation in agriculture by machine and deep learning techniques: A review of recent developments. Precision Agriculture, 22(6), 2053-2091.
  7. Tanveer, S. A., Sree, N. M. S., Bhavana, B., & Varsha, D. H. (2022). Smart agriculture system using IoT. In 2022 IEEE World Conference on Applied Intelligence and Computing (AIC) (pp. 482-486). IEEE.
  8. Quy, V. K., Van Hau, N., Van Anh, D., Minh Quy, N., Tien Ban, N., Lanza, S., Randazzo, G., & Muzirafuti, A. (2022). IoT-enabled smart agriculture: Architecture, applications, and challenges. Applied Sciences, 12(7), 3396.
  9. Ristaino, J. B., Anderson, P. K., Bebber, D. P., Brauman, K. A., Cunniffe, N. J., Fedoroff, N. V., ... & Wei, Q. (2021). The persistent threat of emerging plant disease pandemics to global food security. Proceedings of the National Academy of Sciences, 118(23), e2022239118.
  10. Hilaire, J., Tindale, S., Jones, G., Pingarron-Cardenas, G., Bačnik, K., Ojo, M., & Frewer, L. J. (2022). Risk perception associated with an emerging agri-food risk in Europe: Plant viruses in agriculture. Agriculture & Food Security, 11(1), 21.
  11. Shoaib, M., Shah, B., El-Sappagh, S., Ali, A., Ullah, A., Alenezi, F., ... & Ali, F. (2023). An advanced deep learning models-based plant disease detection: A review of recent research. Frontiers in Plant Science, 14, 1158933.
  12. Sarkar, C., Gupta, D., Gupta, U., & Hazarika, B. B. (2023). Leaf disease detection using machine learning and deep learning: Review and challenges. Applied Soft Computing, 145, 110534.
  13. Gajjar, R., Gajjar, N., Thakor, V. J., Patel, N. P., & Ruparelia, S. (2022). Real-time detection and identification of plant leaf diseases using convolutional neural networks on an embedded platform. The Visual Computer, 38, 2923-2938.
  14. Shafik, W., Tufail, A., Namoun, A., De Silva, L. C., & Apong, R. A. A. H. M. (2023). A systematic literature review on plant disease detection: Motivations, classification techniques, datasets, challenges, and future trends. IEEE Access, 11, 59174-59203.
  15. Thakur, P. S., Sheorey, T., & Ojha, A. (2023). VGG-ICNN: A lightweight CNN model for crop disease identification. Multimedia Tools and Applications, 82(1), 497-520.
  16. Xie, B., Jin, Y., Faheem, M., Gao, W., Liu, J., Jiang, H., Cai, L., & Li, Y. (2023). Research progress of autonomous navigation technology for multi-agricultural scenes. Computers and Electronics in Agriculture, 211, 107963.
  17. Dhanaraju, M., Chenniappan, P., Ramalingam, K., Pazhanivelan, S., & Kaliaperumal, R. (2022). Smart farming: Internet of Things (IoT)-based sustainable agriculture. Agriculture, 12(10), 1745.
  18. Kumar, V., Sharma, K. V., Kedam, N., Patel, A., Kate, T. R., & Rathnayake, U. (2024). A comprehensive review on smart and sustainable agriculture using IoT technologies. Smart Agricultural Technology, 100487.
  19. Motwani, A. (2022). Tomato leaves dataset. Kaggle. https://www.kaggle.com/datasets/ashishmotwani/tomato
  20. Mikołajczyk, T., Mikołajewski, D., Kłodowski, A., Łukaszewicz, A., Mikołajewska, E., Paczkowski, T., ... & Skornia, M. (2023). Energy sources of mobile robot power systems: A systematic review and comparison of efficiency. Applied Sciences, 13(13), 7547.

Cite This Article
APA Style
Farooq, F., Muneer, M. H., Babar, M. & Zahid, F. (2025). Smart Ground Robot for Real-Time Detection of Tomato Diseases Using Deep Learning and IoT Technologies. IECE Transactions on Sensing, Communication, and Control, 2(2), 66–74. https://doi.org/10.62762/TSCC.2024.593301


Publisher's Note
IECE stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions
Institute of Emerging and Computer Engineers (IECE) or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.