Article

Revolutionizing Coffee Farming: A Mobile App with GPS-Enabled Reporting for Rapid and Accurate On-Site Detection of Coffee Leaf Diseases Using Integrated Deep Learning

by Eric Hitimana 1,*, Martin Kuradusenge 1, Omar Janvier Sinayobye 1, Chrysostome Ufitinema 2, Jane Mukamugema 2, Theoneste Murangira 3, Emmanuel Masabo 1, Peter Rwibasira 2, Diane Aimee Ingabire 1, Simplice Niyonzima 1, Gaurav Bajpai 4, Simon Martin Mvuyekure 5 and Jackson Ngabonziza 6

1 Department of Computer and Software Engineering, University of Rwanda, Kigali P.O. Box 3900, Rwanda
2 Department of Biology, University of Rwanda, Kigali P.O. Box 3900, Rwanda
3 Department of Computer Science, University of Rwanda, Kigali P.O. Box 2285, Rwanda
4 Directorate of Grants and Partnerships, Kampala International University, Kansanga, Kampala P.O. Box 20000, Uganda
5 Crop Innovation and Technology Transfer, Traditional Export Crops Programme, Rwanda Agriculture Board, Kigali P.O. Box 5016, Rwanda
6 Bank of Kigali Plc, Kigali P.O. Box 175, Rwanda
* Author to whom correspondence should be addressed.
Software 2024, 3(2), 146-168; https://doi.org/10.3390/software3020007
Submission received: 22 February 2024 / Revised: 4 April 2024 / Accepted: 7 April 2024 / Published: 16 April 2024
(This article belongs to the Special Issue Automated Testing of Modern Software Systems and Applications)

Abstract: Coffee leaf diseases are a significant challenge for coffee cultivation. They can reduce yields, impact bean quality, and necessitate costly disease management efforts. Manual monitoring is labor-intensive and time-consuming. This research introduces a pioneering mobile application equipped with global positioning system (GPS)-enabled reporting capabilities for on-site coffee leaf disease detection. The application integrates advanced deep learning (DL) techniques to empower farmers and agronomists with a rapid and accurate tool for identifying and managing coffee plant health. Leveraging the ubiquity of mobile devices, the app enables users to capture high-resolution images of coffee leaves directly in the field. These images are then processed in real-time using a pre-trained DL model optimized for efficient disease classification. Five models (Xception, ResNet50, Inception-v3, VGG16, and DenseNet) were evaluated on the dataset. All models showed promising performance; however, DenseNet achieved high scores on all four leaf classes, with a training accuracy of 99.57%. The inclusion of GPS functionality allows precise geotagging of each captured image, providing valuable location-specific information. Through extensive experimentation and validation, the app demonstrates impressive accuracy rates in disease classification. The results indicate the potential of this technology to revolutionize coffee farming practices, leading to improved crop yield and overall plant health.

1. Introduction

Coffee is one of the most traded products worldwide. Sustainable coffee cultivation is crucial for both economic stability and environmental conservation. According to the International Coffee Organization’s 2020 report, during the 2019/2020 season, global coffee production reached an estimated 169.34 million bags, priced at approximately 169.25 US cents per pound. East Africa contributed 17.12 million bags, constituting 11% of the world’s coffee production. The global revenue from the coffee trade in 2016 was estimated to be around USD 30.7 billion [1]. The industry can bring in more than USD 100 billion globally, with USD 1.8 billion of that amount coming from sales in Africa. More than 120 million people are employed in the coffee industry; of these, 67–70% are smallholder farmers in developing nations [2]. According to the United Nations Conference on Trade and Development (UNCTAD) report, the coffee sector is the source of 25% of foreign exchange earnings in over 50 developing nations [1].
A significant quantity of Rwanda’s coffee production is destined for export, with the local market accounting for a mere fraction, representing less than 1% of the country’s total coffee output [3,4]. A total of 60% of Rwandan coffee goes to the European Union, 20% to the US, and the remaining portion goes to Asia [3]. Agricultural exports have notably increased from 2013/2014 to 2019/2020, and the National Agricultural Export Development Board (NAEB) has set lofty goals to treble exports by 2023/2024. Rwanda’s agricultural exports have increased, thanks in large part to the NAEB, whose efforts have seen revenue rise from USD 225 million in 2013–2014 to USD 516 million in 2017–2018. By 2024, they hope to export goods worth USD 1 billion, with coffee being a major contributor to this growth [3]. To achieve this, enhancements in both coffee production and quality ranging from 30% to 69% have been realized. This progress has been made possible through the establishment of numerous coffee washing stations across various regions of the country, as well as the provision of technical support to farmers by the NAEB and private investors [3].
Although various initiatives have been established in different countries to increase the production of this cash crop, production is greatly affected by pests and diseases. These may attack different parts of a coffee plant: root and trunk diseases weaken coffee trees overall, impairing their ability to take up water and minerals and interfering with the transfer of substances between the roots and the shoots [5]. There are also dieback diseases, in which numerous pathogens can infect young coffee branches, resulting in the withering and decline of the emerging stems responsible for carrying the next season’s crop [6]. Coffee berries can be affected by coffee berry disease (CBD), instigated by Colletotrichum coffeanum; it is an especially devastating ailment that targets developing coffee berries, causing them to decay or shed prematurely before the coffee beans form inside [7]. Lastly, foliage diseases are a group of diseases that primarily affect the leaves of coffee plants. They are caused by various pathogens, including fungi, bacteria, and viruses, and can lead to symptoms such as leaf spots, wilting, discoloration, and defoliation, which significantly impact the health and productivity of coffee trees. Common examples of coffee foliage diseases include coffee leaf rust and coffee leaf spot [8,9].
Coffee leaf rust (CLR) is the most significant threat to Arabica coffee, with this coffee species being the most vulnerable when compared to others. It affects both the quality and quantity of the coffee [10]. CLR is caused by the fungus Hemileia vastatrix. It is the most widespread and devastating coffee disease in Rwanda [11]. The CLR is identifiable through the appearance of small, yellow–orange to rust-colored spots or pustules on the upper leaf surfaces. This results in premature leaf drop, leading to defoliation. As the disease advances, it impairs the plant’s photosynthetic capabilities, potentially affecting its overall health and yield. Severe infections can stunt coffee plant growth, impacting both the quantity and the quality of coffee beans [12,13,14,15,16,17].
Another common foliage disease is coffee leaf spot, caused by the fungal pathogen Cercospora coffeicola. It is characterized by the appearance of small, dark, and irregularly shaped lesions on coffee leaves. These lesions can coalesce, leading to extensive damage to the foliage. Severe infections of Cercospora coffeicola can result in defoliation, reduced photosynthesis, and decreased coffee plant health and productivity [18,19].
Managing and preventing these diseases is essential for maintaining the health and yield of coffee crops. Coping with coffee leaf diseases, such as coffee leaf rust and coffee leaf spot, requires a combination of preventive measures and management strategies, such as fungicide application and good agricultural practices, including proper spacing between coffee plants to improve airflow, pruning and thinning of branches, planting disease-resistant coffee varieties, and nutrient management [14,20]. More importantly, regular monitoring of coffee plants for disease symptoms is essential, as early detection allows for timely treatment and the prevention of disease spread. Frequent inspection by farmers or other agricultural agents, however, is hindered by several factors. Inspecting each coffee plant requires skilled labor and can be time-consuming and costly, especially on large plantations. In addition, coffee plantations are often situated on challenging terrain, making it difficult to access and inspect all areas regularly. Skilled personnel are also needed to correctly identify disease symptoms, which may require training and expertise. Finally, recording and managing the data collected during monitoring can be cumbersome, especially when it is conducted manually, so efficient data reporting and management systems are needed.
The main aim of this research is to develop a machine learning-integrated mobile and web tool for reporting and managing coffee leaf diseases. The objective is to enable farmers and related users to report and manage large volumes of critical coffee leaf disease data in real time with minimal effort. The main contributions of this research are (1) to assess farmers’ needs and readiness for the solution; (2) to train different transfer learning deep learning models on the collected coffee leaf dataset; and (3) to develop a DL-integrated mobile and web application for farmers and related users. The rationale for selecting the DL models used in this study is based on their distinct architectural characteristics and performance trade-offs. Inception-v3 was chosen for its efficient utilization of computational resources through inception modules, while VGG16 offers a simple yet effective architecture with small convolutional filters, aiding interpretability. Xception was selected for its depth-wise separable convolutions, promoting efficient spatial and channel-wise correlations. ResNet was included due to its residual connections, which address the vanishing gradient problem and enable deeper networks. DenseNet was chosen for its dense connectivity patterns, which encourage feature reuse and alleviate vanishing gradient issues.
Building on data collection activities conducted in meetings with the respective coffee washing station cooperatives, the key outcome is a testable AI-based mobile and web application with which farmers can report coffee leaf issues themselves. Transmitting information from the field to decision-makers for timely responses can be challenging, particularly in remote or poorly connected areas.
The rest of the paper is organized as follows: Section 2 details the related work of the research; Section 3 discusses the methods and tools used; Section 4 and Section 5 elaborate the research findings and their discussions, respectively; and Section 6 concludes the research with future directions.

2. Related Works

To overcome the challenges associated with manual monitoring of coffee diseases, technologies are used not only to enable more efficient monitoring but also to assist in data analysis and decision-making, allowing for more precise and timely disease management in coffee plantations. Technologies used in coffee disease monitoring include remote sensing using drones and satellite imagery, which apply to large-scale coffee-growing areas. Mobile applications, the Internet of Things (IoT), DL, and cloud computing can also be used to monitor and manage coffee diseases. Recent research has been directed toward precision agriculture, harnessing technologies like artificial intelligence (AI), remote sensing, machine learning, IoT, and cloud computing to enhance crop yield and production quality [21].
The review of advancements in crop disease detection conducted by Tej et al. [22] examined progress in the detection of crop diseases, with a particular emphasis on the application of machine learning and deep learning techniques in conjunction with unmanned aerial vehicle (UAV) remote sensing. Their research highlighted the importance of sensors and image-processing methods in improving the accuracy of crop disease assessment using UAV imagery. The authors introduced a systematic classification system to structure and categorize existing research related to crop disease detection using UAV imagery. Furthermore, they assessed the effectiveness of machine learning and deep learning approaches in this specific domain.
Machiraju et al. presented an approach to reducing herbicide consumption by categorizing plant images into weeds and crops, allowing targeted herbicide application. The initial stage involves distinguishing between crops and weeds, achieved through the implementation of an image classification technique utilizing deep learning methods [23]. Multiple machine learning (ML) methods have been deployed for the identification of coffee leaf diseases. For instance, the application of a convolutional neural network (CNN) in Ethiopia achieved an impressive 99.49% accuracy [24], while a similar study [25] yielded a performance accuracy of 99.08%.
An innovative research project explored the potential of employing CNN on edge devices for coffee tree disease classification using leaf images. The primary aim of this endeavor was to integrate the CNN model into an affordable embedded system, enabling disease identification on coffee leaves at the source. The prototype demonstrated effectiveness as a self-contained system, operating on battery power without the need for internet connectivity and being accessible to individuals with limited tech expertise. Given its affordability and self-sufficiency, this technology could particularly help small-scale coffee producers with limited resources [26].
Many research efforts have focused on setting the foundation for the utilization of AI in coffee leaf disease detection. Various models were investigated, and the most efficient one has been selected for prospective applications. Deep learning models such as DenseNet, ResNet50, Inception-v3, Xception, and VGG16 are used in classifying five distinct classes of coffee plant leaf diseases [27]. The DenseNet model was observed to be simpler, mainly due to its reduced number of trainable parameters and lower computational intricacy. This attribute renders DenseNet exceptionally suitable for identifying coffee plant leaf diseases, particularly when incorporating new coffee leaf conditions not originally included in the training data, thus streamlining the overall training process.
Thi et al. introduced an innovative system designed to help in the identification and treatment of plant leaf diseases, with a particular focus on tomato plants in Vietnam. This practical framework has been put into action, providing web-based and mobile applications that empower farmers to automatically recognize tomato leaf diseases and receive treatment advice, whether automatically generated or from expert sources. Leveraging a cloud-based machine learning model, the system achieves a remarkable degree of accuracy and rapid disease detection and response [28].
Jafar A. et al. conducted a systematic review of the application of AI and modern technologies in detecting plant diseases, highlighting limitations and suggesting future directions involving IoT drones. However, no specific key solution was proposed to address the challenges identified [29]. Barman U. et al. developed a smartphone-based application for detecting tomato leaf diseases using Vision Transformer (ViT) and Inception V3-based deep learning models. Achieving 97.37% accuracy on a dataset comprising 10,010 images across 10 classes, ViT demonstrated high performance. Despite its deployment in a smartphone application, this research lacked certain features compared to our research. Additionally, there was no mention of a centralized reporting repository, and the trained model might lack precision in distinguishing leaves with complex backgrounds [30].
Jayshree A. et al. employed CNN techniques to classify coffee leaf rust with an accuracy of 98.8%, using a dataset of 1560 images of robusta coffee leaves. However, limitations included a relatively small dataset and a lack of comparative analysis with recent trends. Furthermore, the integration of the model into practical solutions was not considered [31]. Babatunde R. et al. developed a mobile application for early detection of habanero disease using a modified VGG16 deep transfer learning model, achieving 98% accuracy with 1478 healthy images and 997 infected images. However, the focus was solely on mobile-based features to support growers. The model’s performance may be affected by the limited dataset and the risk of overfitting. Moreover, the application lacked features to support farmers, and the integration of the model into the application was not assessed [32].
In this research, we have developed a machine learning-driven application for identifying, reporting, and managing coffee leaf diseases. Farmers can use their smartphones to capture and diagnose coffee leaves with the help of an AI-enabled application. Reporting is possible only once the detected disease has a confidence level greater than the set threshold. The report includes the type of disease, its image, the geolocation where the disease was found, and the identity of the reporter. The application supports two modes: reporting by an authenticated user or by a guest user. This authentication mechanism was established to ensure reporting accountability for farmers organized in cooperatives. The DL model is deployed in a hybrid way: on the device, to reduce the computation overhead caused by network issues, and in the cloud, for users with a strong internet connection. The application switches between the two to ensure system accessibility. Regarding the internet consumption involved in reporting multimedia data such as images, this work also observed that the available image dataset is still insufficient, so the solution is designed to accumulate a dataset over time. Compared with the surveyed work, the main contribution of this research is a solution whose features were suggested by the beneficiaries (farmers) and that guides the user in detecting, tracking, and reporting diseases and managing the resulting data.

3. Materials and Methods

To effectively manage and report plant diseases, it is crucial to promptly identify diseases in coffee leaves to assist farmers. This section describes the approaches employed in identifying the needs and readiness of farmers within our scope. It details the method used for gathering coffee leaves, the techniques used for testing various modeling methods, and the development of the solution. It delves into the data collection procedure and outlines the transfer learning algorithms used, aiming to pinpoint the most efficient model. Additionally, it covers the design and training process, as well as the mobile and cloud services used to build the solution.

3.1. Study Area

We conducted a survey and visited ten coffee washing stations situated across five districts, namely Ngoma, Rulindo, Gicumbi, Rutsiro, and Huye. These districts were chosen to represent all 27 districts in Rwanda, taking into consideration their sensitivity to climate variations [33]. Within each district, we sampled 30 farmers, resulting in a total sample size of 150. The purpose of the visits was to collaborate with agronomists and farmers to establish together the best way to support them in reporting disease issues. This collaboration also supported activities related to labeling coffee leaves and served to assess farmers’ ability to identify various coffee leaf diseases.
The visits were conducted during the harvesting season in March 2021 and the summer season in June and July 2021. The dataset of images was collected from four distinct provinces situated in the Eastern region (characterized by abundant sunlight, low altitude, and absence of hills), the Northern region (known for its cold climate and high altitude), the Southern region (experiencing a colder climate with varying altitudes), and the Western region (featuring cold, highland terrain with high altitudes). A combination of quantitative and qualitative methodologies was employed to examine the actual way of reporting the issue and response time, as well as the self-identification of coffee diseases by farmers.
Figure 1 details the farmers’ responses regarding who intervenes once a report is shared with higher authorities.
It shows that the key overloaded person in the cooperative is the agronomist, who must intervene once farmers report any problems. The agronomist is also the one who reports to the higher authorities (cooperative officer, SEDO, district, and NAEB officials), which may result in inaccurate data.
Figure 2 shows the time it takes to receive a response once the report is shared with the concerned users. This study shows that the quickest response arrives within a week. It was also observed that the response may simply state that no treatment measures are available, and that farmers sometimes wait an unpredictable amount of time for a response. The following conclusions were drawn together with cooperative officials: either the report was not received in the same form in which it was sent, or, due to an improper reporting channel, the receivers were overwhelmed and had no way to search a common repository and analyze the case against previous cases to reach a conclusion.
Figure 3 presents the readiness of farmers to use smart applications for identifying and reporting issues from the coffee plantations so that officials can access the reports directly in one searchable system. Figure 3a shows that 81% of respondents own phones, while 19% do not. The key consideration was phone ownership, so that we could target the infrastructure available to farmers.
After realizing that most of the farmers lack the modern smartphones needed for reporting, as shown in Figure 3b, we assessed the cooperative structure and found that, although cooperatives exist, they are divided into different zones. Each zone has a group leader who receives complaints and visits farmers’ fields individually before the agronomist intervenes. To take advantage of this leadership structure, each team leader must be given a smartphone and trained to use the application.

3.2. Dataset

Apart from the qualitative data collected from farmers and cooperatives in general, to apply scientific data analytics, the researchers compiled a dataset of 37,939 images in RGB format, equivalent to approximately 3.3 GB. The coffee images fell into four distinct categories, which formed the respective labels: rust, red spider mite, miner, and healthy. Among these classes, the infection distribution is 65% (20, 15, and 35 for rust, red spider mite, and miner, respectively), while the non-infected (healthy) images account for 35%. Before feeding the images into the CNN architectures, we performed preprocessing to ensure that the input parameters aligned with the CNN model’s specifications. Each input image was resized to dimensions of 224 × 224. To ensure uniformity in data representation, we subsequently applied normalization (i.e., dividing the image by 255.0), which enhanced training convergence and stability.
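A minimal sketch of this preprocessing step is shown below, assuming TensorFlow image utilities and JPEG files; the helper name is illustrative and not the authors' exact pipeline:

```python
import tensorflow as tf

IMG_SIZE = (224, 224)  # input dimensions expected by the pre-trained CNN backbones

def preprocess_image(path: str) -> tf.Tensor:
    """Load an RGB leaf image, resize it to 224x224, and scale pixel values to [0, 1]."""
    raw = tf.io.read_file(path)
    img = tf.image.decode_jpeg(raw, channels=3)   # decode as RGB
    img = tf.image.resize(img, IMG_SIZE)          # resize to the model input size
    img = tf.cast(img, tf.float32) / 255.0        # normalization by 255.0, as described above
    return img
```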
The preprocessing activity was conducted to avoid the model’s overfitting or underfitting due to unbalanced data. To address this issue, regularization methods were utilized, such as data augmentation post-preprocessing. To sustain data augmentation effectiveness, various alterations were applied to the preprocessed images in this research. These alterations comprised clockwise and counterclockwise rotations, horizontal and vertical flips, adjustments in zoom intensity, and rescaling of the initial images. This approach not only mitigated overfitting and minimized model loss but also bolstered the model’s resilience, leading to enhanced accuracy during testing with authentic coffee plant images.
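The augmentation pipeline described above could be expressed, for example, with Keras' ImageDataGenerator; the specific rotation and zoom ranges and the directory layout below are assumptions for illustration, since the paper does not report exact augmentation parameters:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rotations, horizontal/vertical flips, zoom adjustment, and rescaling,
# mirroring the alterations listed above (ranges are illustrative).
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    horizontal_flip=True,
    vertical_flip=True,
    zoom_range=0.2,
    validation_split=0.2,    # matches the 80/20 split used later for training/testing
)

train_gen = train_datagen.flow_from_directory(
    "coffee_leaves/",        # hypothetical folder with healthy/miner/red_spider_mite/rust subfolders
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",
    subset="training",
)
```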
Depending on the severity of the infection, a particular class may contain images exhibiting similar infections at different stages. This is intended so that, at a given stage of infection, the model is capable of tracking and categorizing the actual or approximate disease.

3.3. Deep Learning Algorithms

Deep learning techniques have demonstrated remarkable performance across various fields like image recognition, speech understanding, natural language processing, and even emotion detection when provided with a substantial amount of training data [34]. We evaluate the effectiveness of deep learning, particularly its cutting-edge application in digital image processing. Unlike conventional approaches that require explicit feature extraction from images before classification and prediction, convolutional neural networks (CNNs), also known as ConvNets, excel at processing data with grid-like structures, such as images and multi-dimensional data [35,36,37]. In comparison to networks relying on fully connected layers, CNNs exhibit advanced feed-forward engineering and exceptional generalization capabilities.
Considering the complexity of coffee leaf imagery as well as modeling complexity, a comparative analysis was carried out in [27] to select the model for the application. Five transfer learning versions of CNN were implemented and their evaluation measures discussed: Inception-v3, DenseNet, ResNet50, VGG16, and Xception. According to the outcomes, the accuracy score varies with the model and the image characteristics, with DenseNet showing a good score of 96.99%. Based on its proven feature extraction and classification on the tested dataset, DenseNet was chosen for deployment in the on-demand mobile application to support farmers.
Figure 4 shows the dense layers of the approved best model in the context of Rwandan Arabica coffee leaf disease detection and classification purposes.
As illustrated in Figure 4, DenseNet provides a pivotal advantage in automatic feature extraction. In the initial stage, the input data are introduced to a network specialized in extracting features. These extracted features are then transmitted to a classifier network. The feature extraction network is composed of multiple sets of convolutional and pooling layers. The convolutional layer employs a series of digital filters to perform convolution operations on the input data. Meanwhile, the pooling layer serves to decrease dimensionality and establish thresholds.
For this endeavor, we utilized Python 3.10 in conjunction with TensorFlow 2.9.1, along with libraries including numpy (version 1.19.2) and matplotlib (version 3.5.2) to manage dataset preparation and establish the development environment. These tools have demonstrated their efficacy in tasks related to data preprocessing and modeling [38,39]. The experiment integrated CNN deep learning models, specifically Inception-v3, Resnet50, VGG16, Xception, and DenseNet models. The hardware employed was an HP Z240 workstation equipped with two Intel(R) Xeon(R) Gold 6226R processors and a Tesla V100s 32GB memory NVIDIA GPU, providing a total of 64 cores. This configuration significantly accelerated the training process of deep neural networks. In the ensuing section, the performance score measurements used for this experiment will be comprehensively examined.
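A minimal transfer learning setup consistent with this environment might look as follows; the specific DenseNet variant (DenseNet121), the frozen backbone, and the classification head are assumptions for illustration, since the paper reports the architectures only at the level of Table 1:

```python
import tensorflow as tf
from tensorflow.keras.applications import DenseNet121

NUM_CLASSES = 4  # healthy, miner, red spider mite, rust

# ImageNet-pretrained backbone used as a frozen feature extractor,
# with a small classification head on top.
base = DenseNet121(include_top=False, weights="imagenet",
                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),                               # illustrative dropout
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),   # four-class output
])
```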

3.4. Performance Score Measurements

To assess the effectiveness of the transfer learning techniques, a range of metrics was considered, including accuracy and the precision–recall metrics, for evaluating classification performance. The classifier’s performance is evaluated using these metrics to identify the most effective model for subsequent use.

3.4.1. Precision–Recall Curve

The confusion matrix serves as a valuable tool for evaluating performance by comparing actual and predicted values. To adapt the precision–recall curve and compute average precision for multi-class or multi-label classification, it was essential to convert the output into binary form. While it is possible to create one curve for each label, an alternative approach involves constructing a precision–recall curve by treating each element of the label indicator matrix as a binary prediction.
Precision = TP / (TP + FP)        (1)
Recall = TP / (TP + FN)        (2)
where TP denotes true positives, FP false positives, and FN false negatives. Precision, also referred to as the positive predictive value, is defined in Equation (1). Recall, or the probability of detection, is calculated by dividing the number of correctly classified positive outcomes by the total number of actual positive outcomes (Equation (2)).

3.4.2. F1 Scores

F1 score = (2 × TP) / (2 × TP + FP + FN)        (3)
The F1 score ranges from 0 to 1, reaching its minimum value when there are no true positives (TP), indicating the misclassification of all positive samples. In contrast, the highest value is attained when there are no false negatives (FN) or false positives (FP), indicating perfect classification.
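These per-class scores can be reproduced from the confusion matrix, for instance with scikit-learn; the label encoding below is a hypothetical mapping, not taken from the paper:

```python
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

LABELS = [0, 1, 2, 3]  # hypothetical encoding: healthy, miner, red spider mite, rust

def class_wise_scores(y_true, y_pred):
    """Per-class precision, recall, and F1 corresponding to Equations (1)-(3)."""
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=LABELS, zero_division=0)
    cm = confusion_matrix(y_true, y_pred, labels=LABELS)
    return precision, recall, f1, cm
```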

3.5. Mobile and Cloud Computing Techniques

The merging of mobile and cloud computing for coffee leaf disease classification and detection combines the advantages of both technologies, resulting in a robust tool for farmers and agronomists. This integration facilitates precise, instantaneous disease identification, safeguards data privacy, and guarantees the application’s operation portability. Ultimately, this method enhances the overall efficacy and productivity of the management of coffee plant status through disease mapping.
Utilizing cloud computing for distributed processing has been employed to tackle the intensive computational requirements and resource limitations of deep learning models. Nonetheless, cloud computing encounters challenges like restricted data transfer bandwidth and substantial latency when dealing with substantial multimedia data [40,41]. To counteract these challenges, edge computing is emerging as a viable solution. In this approach, dedicated servers running deep learning models are positioned in the pipeline in closer proximity to the application server that runs the application programming interfaces (APIs) where image data originates. This enables data to be processed and analyzed first on the dedicated servers and then updated on the application (shared) servers [42,43]. In this research, the edge computing paradigm extends to deploying the model on portable mobile devices to limit computation latency and bandwidth costs.
Figure 5 details the architectural illustration showing the configuration of the development modules by leveraging each computation with enough resources to overcome the system response overhead. A dedicated server is equipped with robust Intel Xeon or AMD EPYC CPUs to efficiently manage demanding workloads. It boasts terabytes of RAM and offers SSD storage with capacities ranging from hundreds of gigabytes to multiple terabytes, ensuring smooth operations under Linux OS distributions like Ubuntu. For shared servers, we utilize Dell R430 or Dell R440 hardware with specifications including Dual Intel Xeon CPUs (E5-2660 v4 @ 2.00GHz or Xeon Gold 6140 2.3G), 256GB RAM, and RAID6 SSD. These servers host Apache, MySQL, PHP, Perl, and other services to support databases and web services, with MariaDB for databases and Laravel Framework 10.0 for web solutions. Additionally, a mobile application targeting Android users is developed in Java. The deployed DL models are containerized as microservices using Docker volumes.
The proposed configuration architecture, as shown in Figure 5, is composed of three main components. (1) The first encompasses the end-user perspective, where the user works with the authenticated web application. The solution serves to manage resources such as farmers, production, and coffee leaf disease reports. For prediction purposes, the user can invoke the deployed model through its endpoint, and the system records the model results. In parallel, farmer representatives and agronomists (the reporters) utilize a model-integrated mobile app: by taking leaf photos, the model can detect, classify, and report the disease together with the reporter’s geolocation. The best model was deployed to the mobile application for portability, and its update mechanism is configured to run automatically once an updated model version is released. (2) The second is the server configured for the APIs that manage cooperatives and the related coffee disease data, which runs on the shared server. The APIs sync data from the mobile and web applications to the backend database. The two servers work together to distribute resources and reduce the computation overhead of the deep learning models. (3) The third covers model retraining and evaluation, which are conducted on the dedicated server. A Docker volume and FastAPI services run in the Docker environment to ease updating the model whenever a newly evaluated model outperforms the current one.
In the study referenced in [44], the authors utilized freshly captured images taken in field conditions using mobile devices to establish their dataset, which contained both healthy and diseased plum images. They applied additional data augmentation techniques, resulting in the generation of 19 distinct versions for each image, ultimately culminating in a comprehensive dataset comprising 100,000 images. The authors then categorized these plum images into five classes: healthy, brown rot, nutrient deficiency, shot hole, and shot hole on the leaf. They employed four developed models—AlexNet, VGG16, Inception-v1, and Inception-v3—for classification. The findings reported in [40] indicated that the Inception models exhibited superior performance. The highest-performing model demonstrated an overall accuracy of 88.42% when evaluated using a test set consisting of 100 images. Our research findings proved to outperform this in terms of accuracy.

3.5.1. Docker and APIs

Docker is a revolutionary technology that has transformed the way applications are deployed and managed [45]. In this research context, it enabled us to package the applications and their dependencies into self-contained packages known as containers. These containers encapsulate everything needed for an application to run, including the code, libraries, and system settings. Docker’s lightweight nature and resource efficiency make it an ideal choice for modern software development workflows. To optimize model training and evaluation complexity, we dockerized all modules as services. This scheme allows the best model to be selected and served to the end users’ applications.
To adopt the Docker configuration for independent service management, a dedicated server was used. It gave us the flexibility to tailor the server’s hardware and software configurations to meet the specific requirements of our applications. With the ability to scale resources as needed, dedicated servers are well suited for demanding applications and high-traffic websites.
To manage the generated model and serve end users for prediction purposes, the application’s APIs were used to exchange data between users and servers. FastAPI and REST APIs were used in the implementation of this research. FastAPI is a modern Python web framework that stands out for its efficiency and performance [46]. It leverages Python’s standard type hints to enable fast and intuitive API development, and it supports asynchronous programming, which allows it to handle many concurrent connections with ease. Additionally, FastAPI’s combination of speed, scalability, and automatic documentation makes it an excellent choice for building robust APIs for deep learning [47]. The abovementioned tools helped create a powerful environment with DL computational capability while managing all software dependencies. The FastAPI model module acts as a container for these encapsulated functions, providing a structured and well-organized approach.
Within this module, Python functions can be triggered via HTTP requests directed at specific API endpoints. This underscores the significance of adhering to RESTful principles for effective API design, where API endpoints align with distinct functions, allowing users to interact with the Python model using HTTP methods [48]. The seamless incorporation of Python scripts into the FastAPI framework facilitates the development of robust and easily accessible web-based applications.
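A minimal sketch of such a prediction endpoint is given below, assuming a saved Keras model; the route name, model path, and class order are illustrative and not the authors' deployed API:

```python
import io

import numpy as np
import tensorflow as tf
from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI(title="Coffee leaf disease classifier")
model = tf.keras.models.load_model("models/densenet_coffee.h5")  # assumed model path
CLASSES = ["healthy", "miner", "red_spider_mite", "rust"]        # assumed class order

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    # Apply the same preprocessing used at training time: resize and scale to [0, 1].
    img = Image.open(io.BytesIO(await file.read())).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype="float32")[None, ...] / 255.0
    probs = model.predict(x)[0]
    idx = int(np.argmax(probs))
    return {"label": CLASSES[idx], "confidence": float(probs[idx])}
```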

3.5.2. System Flow with Sequence Diagram

The smart agriculture web and mobile applications orchestrate a seamless flow of information and actions, integrating user interfaces, data processing, and disease detection with classification and analytical insights. Figure 6 details the system actors for the proposed solution.
Figure 6 illustrates the applications from a user perspective. The manager, who has data management capability, can add, edit, and deactivate users, flag wrong data, and so on through the HTTP web services. Through the web application, collected coffee leaf images can be classified for reporting purposes. The diagram also details the re-training capability based on the accumulated dataset. A reporter selected from each farmers’ zone is registered to report using the mobile application. The mobile application has an integrated DL model for offline prediction, which avoids the execution overhead of loading the model from the cloud.
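The paper does not name the on-device runtime used for offline prediction; one common route for embedding a Keras model in an Android application is conversion to TensorFlow Lite, sketched below under that assumption (file names are illustrative):

```python
import tensorflow as tf

# Convert the trained Keras model to a compact on-device format for the Android app.
model = tf.keras.models.load_model("models/densenet_coffee.h5")   # assumed path
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]               # post-training quantization
tflite_model = converter.convert()

with open("coffee_leaf_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```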
Although previous studies have explored the utilization of DL networks and the development of basic mobile application prototypes for plant image classification [49,50], our methodology introduces a distinctive viewpoint. Our research’s investigative goals, the specific neural networks selected, and the evaluation criteria for our models, along with our emphasis on constructing a mobile-friendly model capable of identifying coffee leaf disease classes in a robust multi-label classification system, collectively represent an innovative approach in our field. Additionally, our system incorporates a range of comprehensive features that extend beyond image classification, further distinguishing our work, such as web management and reported disease localization on Google Maps.
The primary sources of reference for this research are citations [51,52]. Our specific objective is to enhance the model performance outcomes presented in [51]. This entails replicating the networks while incorporating the suggested optimized hyperparameters and adjusted base network retraining components. These modifications, developed through the transfer learning techniques detailed in [52], are aimed at elevating the accuracy levels of the models mentioned in the references.
In essence, our research acknowledges and draws inspiration from existing solutions. It combines established concepts with our novel methods and functionalities to deliver efficient and all-encompassing mobile and web application solutions tailored to the chosen coffee industry and target audience.

4. Results

This section details the CNN pre-trained models used with their respective measurements and the functional visual representation of the outcome of the research from the end user’s perspective.

4.1. Network Architecture Model

The choice of pre-trained network models was determined by their appropriateness for classifying plant diseases. Table 1 provides comprehensive details regarding the architecture of each model. These models utilize varying filter sizes to capture specific attributes from the feature maps. Filters are pivotal in this process of attribute extraction. Each filter, when applied to the input, identifies unique features, and the characteristics extracted from the feature maps are contingent on the filter values. This research experiment made use of the unaltered pre-trained network models, integrating the configurations of convolution layers and filter sizes employed in each model.
Table 1 offers a range of metrics for diverse network models, such as Inception-v3, Xception, ResNet50, VGG16, and DenseNet. These metrics encompass the overall count of layers, maximum pooling layers, dense layers, dropout layers, flattened layers, filter size, stride, and trainable parameters. These metrics are crucial for comprehending the structure and intricacy of each model. In this research, all models were standardized with a learning rate set at 0.01, a dropout rate of 2, and four output classes for classification purposes.
The dataset containing coffee leaves was split into training, testing, and validation sets. To train the models (Inception V3, VGG16, ResNet50, Xception, and DenseNet), 80% of the coffee leaf samples were employed, while 20% were used for testing purposes. Each model underwent ten epochs, and it was noted that all models began to converge with high accuracy after four epochs. The techniques used to enhance model performance, convergence speed, and efficiency encompass the use of adaptive moment estimation (Adam) for dynamically adjusting the learning rate throughout training. Additionally, regularization techniques were implemented to mitigate overfitting and enhance generalization. Stochastic gradient descent (SGD) was utilized to iteratively update model parameters by computing the gradients of the loss function concerning those parameters.
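Continuing the model and data generators from the earlier sketches, the training configuration described above (Adam with a 0.01 learning rate, ten epochs, an 80/20 split) could be expressed roughly as follows; the loss choice is an assumption:

```python
import tensorflow as tf

# Validation subset drawn from the same generator configuration as the training subset.
val_gen = train_datagen.flow_from_directory(
    "coffee_leaves/", target_size=(224, 224), batch_size=32,
    class_mode="categorical", subset="validation",
)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),  # learning rate reported above
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

history = model.fit(
    train_gen,                 # 80% training subset
    validation_data=val_gen,   # 20% held-out subset
    epochs=10,                 # ten epochs, as reported
)
```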
Given that the validation accuracy and precision, recall, and f1-score serve as a significant gauge of a model’s overall performance across its supported classes, it can be deduced that DenseNet demonstrated superior performance compared to the other models, based on the descriptions mentioned in Figure 7, Figure 8, Figure 9 and Figure 10.
Figure 7a showcases the validation accuracy of the Inception-v3 model, attaining a training accuracy of 99.34%. Figure 7b displays the class-wise measurements for the dataset used. It shows 0.7982, 0.7831, and 0.7012 for precision, recall, and F1-score, respectively, concerning healthy coffee leaves; 0.6002, 0.6912, and 0.7391 for miner disease; 0.6011, 0.8033, and 0.8033 for red spider mite; and 0.5043, 0.6912, and 0.7392 for rust.
Moreover, the ResNet50 model was subjected to training, utilizing 80% of the dataset, with 10% of the samples reserved for testing and an additional 10% employed for validation and further testing. Through the process of hyperparameter tuning, the findings depicted in Figure 8a reveal that this model emerged as the second-best performer, exhibiting an initial recognition accuracy of 96.00% within the first three epochs, subsequently demonstrating enhanced stability to achieve an accuracy of 98.70%. In contrast, Figure 8b illustrates the precision, recall, and f1-score, which recorded values of around 0.8023, 0.7732, and 0.7103, respectively, for healthy coffee leaves; 0.7813, 0.7121, and 0.6511 for miner disease; 0.6502, 0.8012, and 0.7724 for red spider mite; and 0.7914, 0.6013, and 0.6326 for rust.
The third model used in this experiment was VGG16. As depicted in Figure 9a, the recognition accuracy reached approximately 98.81% within the first four epochs and remained almost consistent up to the 9th epoch. Meanwhile, as shown in Figure 9b, the model’s class-wise metrics showed precision, recall, and F1-score values around 0.9007, 0.8763, and 0.8002, respectively, for healthy coffee leaves; 0.8201, 0.9607, and 0.8401 for miner disease; 0.8763, 0.6701, and 0.7902 for red spider mite; and 0.7502, 0.8263, and 0.7781 for rust.
The Xception model ranked fourth in performance within this study. As illustrated in Figure 10a, the highest performance was evident in epochs #1, 4, 7, and 10, boasting a training accuracy of approximately 99.40%. Additionally, Figure 10b depicts that the class-specific metrics for this model were in the vicinity of 0.8002, 0.8202, and 0.9111 for precision, recall, and F1-score, respectively, concerning healthy coffee leaves; 0.8132, 0.7721, and 0.6823 for miner disease; 0.7311, 0.6701, and 0.6711 for red spider mite; and 0.7824, 0.7221, and 0.5782 for rust.
According to Figure 11a, the final model in this experiment was DenseNet, with the training accuracy reaching its peak in the 10th epoch, achieving a training accuracy of 99.57%. Simultaneously, the best validation performance occurred in the 4th epoch, with an accuracy of 99.09%. As depicted in Figure 11b, the performance scores by data class (coffee leaves) were as follows: 0.9934, 0.9321, and 0.9621 for precision, recall, and F1-score, respectively, for healthy coffee leaves; 0.9824, 0.9815, and 0.9532 for miner disease; 0.8901, 0.9169, and 0.8901 for red spider mite; and 0.9111, 0.8957, and 0.9009 for rust.
Table 2 provides a comparison of different network models based on their training and validation performance.

4.2. Mobile Application as a Disease Detection, Classification, and Reporting Tool

A mobile app with an integrated DL model for leaf disease detection and classification represents a groundbreaking advancement in precision agriculture. By harnessing the capabilities of artificial intelligence, this app empowers farmers and gardeners to swiftly identify and address plant diseases. The app operates by utilizing a comprehensive database of leaf images, encompassing both healthy and diseased specimens, which serves as the foundation for training the DL model. Through this process, the model becomes adept at distinguishing between healthy and afflicted leaves based on a range of visual cues, including color variations, patterns, and textural irregularities.
Figure 12 details the mobile reporting tools. Figure 12a shows the localization functionalities with two language options. Figure 12c shows the reporter’s dashboard with the current information after authentication.
Figure 13 shows the mobile reporting features with different functionalities after logging in. To support usability, the menus are organized for ease of use by end users, as shown in Figure 13a. Figure 13b details information about the application and the data policy with which users must comply.
Figure 14 discusses model detection and disease identification. One of the most compelling advantages of this mobile app is its accessibility and ease of use. With a simple snap of their smartphone camera, users can capture an image of a leaf and upload it to the app, as shown in Figure 14a. The DL model then promptly processes the image, providing an accurate diagnosis in real-time, as indicated in Figure 14b. The data can be reported to the cloud with the geolocation information of the reporter. Figure 14c indicates how the reporter himself can view what he has been reporting, with the possibility to share information instantly with other users through email, SMS, or WhatsApp. This rapid response is instrumental in enabling early intervention, which can be pivotal in curbing the spread of diseases and mitigating potential crop losses.
The application has been piloted with 10 coffee cooperatives to report diseases for data management and intervention purposes. Overall, the integration of the DL model in a mobile app for leaf disease detection not only revolutionizes plant health management but also empowers a wider community of farmers and horticulturists with accessible, data-driven solutions.

4.3. Web Application as a Data Management and Visualization Tool

A web app with an integrated deep learning model for leaf disease detection and classification is a transformative tool in modern agriculture and horticulture [53]. This technology combines the power of artificial intelligence with the convenience of web-based accessibility. It operates by utilizing a vast database of leaf images, comprising both healthy and diseased specimens, as training data for the DL model. Through extensive training, the model becomes proficient at distinguishing between healthy and afflicted leaves based on a variety of visual features, including color variations, patterns, and textural irregularities [54].
Figure 15 shows the normal reporting feature with no deep learning model integrated. All diseases have been mapped with their respective images and descriptions to support reporting of similar findings. This module is intended to support regular reporting to district and national officials for intervention purposes.
On the other hand, the solution has another mechanism to detect, classify, and report the findings on an imported image, as shown in Figure 16. Based on the experimental analysis of DL model performance, we realized that all five models perform reasonably well, so we decided to integrate them all and allow the user to switch between them. To manage submissions, the report button is not activated when the confidence is below the set threshold. The confidence threshold was made editable for research purposes.
The most critical module gathers and manages the submissions from the distributed mobile applications of the different reporters. The key idea was to map the traced findings on Google Maps to allow visualization of the distribution of coffee disease occurrences nationally. Figure 17 details the Google Map with red pins marking the reported locations.
The system provides a way of exporting all compiled data for decision-making purposes, as shown in Figure 18.
One of the remarkable advantages of this web app is its wide accessibility. Farmers, researchers, and enthusiasts can access the app from any device with an internet connection, making it a versatile and powerful tool in plant health management.

5. Discussion

A mobile app with GPS-enabled reporting for on-site coffee leaf disease detection using an integrated DL model, as illustrated in Figure 1, is a game-changer for the coffee industry. This technology combines the power of artificial intelligence with the practicality of mobile devices and location tracking. The app functions by leveraging a robust DL model trained on a diverse dataset of coffee leaf images, encompassing both healthy and diseased samples. Through this process, the model becomes proficient at accurately distinguishing between healthy and afflicted coffee leaves based on visual features like discoloration, texture, and shape.
To develop this tool, a needs assessment phase was conducted to gather facts from the field, as presented in Figure 1, Figure 2 and Figure 3. It was observed that most interventions are conducted by agronomists investigating problems found in the field. Having one agronomist for an entire sector is overwhelming and forces farmers to monitor their own farms.
From the dataset acquired and insight from the farmers, we made a deep learning analysis for coffee leaf disease detection and classification. We surveyed five transfer-learning CNN algorithms and conducted a comparative study. Performance analysis was investigated and highlighted in Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11. The DenseNet model proved to be the best-performing model among others, and it was chosen to be deployed on-device (mobile application). Table 2 presents a comparative analysis of various models along with their respective accuracy scores. The training accuracy and loss metrics reflect the models’ performance on the training dataset, while the validation accuracy and loss metrics indicate their performance on previously unseen validation data. Among the models, DenseNet demonstrated the highest training accuracy (99.57%) and validation accuracy (99.09%), showcasing its remarkable capacity to learn and extrapolate from the provided data. Conversely, ResNet50 exhibited the lowest validation accuracy (97.80%), suggesting it may face some challenges in effectively generalizing to new data compared to the other models.
For the used dataset, Figure 7b, Figure 8b, Figure 9b, Figure 10b, and Figure 11b show the performance measurements of the transfer learning classifiers in terms of precision, recall, and F1 values. The precision for identifying healthy leaves is 0.9934 for DenseNet, which is higher than for Xception, Inception-v3, and VGG16, at 0.8002, 0.7982, and 0.9007, respectively. The recall for identifying healthy leaves is 0.9321 for DenseNet, which is higher than for Xception, Inception-v3, and VGG16, at 0.8202, 0.7831, and 0.8763, respectively. The F1-score for identifying healthy leaves is 0.9621 for DenseNet, which is higher than for Xception, Inception-v3, and VGG16, at 0.9111, 0.7012, and 0.8002, respectively.
Among the three disease classes used in this case, DenseNet scored lowest on rust disease due to the smaller dataset for that class compared to the others. Overall, DenseNet can be considered superior to the other tested models, providing higher performance measures; analysis of all bar charts shows that it classifies the data better than the other models. However, the other tested models could also be implemented in the deployed system, as they likewise exhibit high accuracy and promising performance.
One of the standout features of this mobile app is its ability to provide real-time, on-site analysis, as shown in Figure 12, Figure 13 and Figure 14. Coffee farmers and workers can use their smartphones or tablets to capture images of leaves directly in the field. The integrated DL model processes these images instantly, offering a rapid diagnosis of any coffee leaf disease present. The GPS-enabled reporting adds an extra layer of functionality, allowing users to document the location where the image was taken.
Users can simply upload a photo of a leaf, and the DL algorithm will rapidly analyze the image, providing an accurate diagnosis in real-time, as shown in Figure 15, Figure 16, Figure 17 and Figure 18. The integration of DL into a web app for leaf disease detection revolutionizes plant health management by providing an accessible, data-driven solution for a broad audience. This feature not only aids in creating a comprehensive record of disease prevalence but also facilitates targeted interventions and management strategies in specific areas, optimizing the response to outbreaks.
The significance of this research in agriculture, specifically in identifying and reporting plant diseases, is apparent. However, it is vital to recognize and tackle the challenges impeding the effectiveness of these models. Here, we highlight key limitations that reduce the efficiency of this solution, including issues such as image noise and background analysis, variations in image acquisition conditions like light intensity and blurriness, difficulties in identifying and isolating combined disease symptoms, and data imbalances across different diseases and those with similar symptoms.

6. Conclusions

The integration of GPS tracking with DL-based disease detection in a web and mobile application has been developed and tested in this research. It has the potential to revolutionize the way coffee farms and their respective diseases are detected, classified, and managed. To investigate the farmer’s problem, need assessment activities were conducted to acquire the basic intervention and the time it takes for feedback.
Five transfer learning CNN algorithms were experimented with on the captured Arabica dataset: Inception-v3, VGG16, Xception, ResNet, and DenseNet models were compared. DenseNet demonstrated the highest training accuracy (99.57%) and validation accuracy (99.09%), showcasing its remarkable capacity to learn and extrapolate from the provided data. On precision, recall, and F1-score, DenseNet performed better than the other models across all leaf classes (healthy, miner, rust, and red spider mite). Based on this, it was chosen for deployment in the developed mobile application for portable detection, classification, and reporting purposes. By aggregating location-specific data over time, decision-makers can identify patterns and trends in disease occurrence through the web application. This information is invaluable for making informed decisions about planting, harvesting, and disease control measures. It also enables proactive strategies for disease prevention, helping to safeguard the health of coffee crops and improve overall yield and quality.
This innovative mobile app with GPS-enabled reporting and an integrated DL model for coffee leaf disease detection is a significant step forward in modernizing coffee farming practices and ensuring sustainable, high-quality coffee production. In future work, challenges such as image noise and background clutter may be addressed with specialized CNN architectures tailored to coffee leaf images. Variations in image acquisition conditions, such as light fluctuations and blurriness, could be mitigated through robust preprocessing or data augmentation. Exploring advanced CNNs capable of identifying multiple diseases simultaneously could further improve detection accuracy. Overcoming data imbalance may require collecting more balanced datasets or using techniques such as data resampling or GAN-based augmentation. Finally, as the user base grows, scalability and interoperability will also need to be considered.
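As a sketch of the augmentation-based mitigation mentioned above, brightness, rotation, and zoom jitter could be introduced at training time; the specific transforms and ranges below are assumptions, not the settings used in this study.

```python
# Hypothetical augmentation pipeline to simulate lighting and viewpoint variation in field images.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    brightness_range=(0.6, 1.4),   # mimic light-intensity fluctuation in the field
    rotation_range=20,
    zoom_range=0.2,
    horizontal_flip=True,
    validation_split=0.2,
)

# train_gen = train_datagen.flow_from_directory(
#     "arabica_leaf_dataset/", target_size=(224, 224),
#     batch_size=32, class_mode="categorical", subset="training")
```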

Author Contributions

Conceptualization, E.H. and G.B.; methodology, E.H., E.M., M.K., S.M.M. and P.R.; software, E.H., M.K., D.A.I., S.N. and J.N.; validation, E.H., J.N. and M.K.; formal analysis, E.H., S.M.M. and O.J.S.; investigation, E.H., D.A.I., J.M., E.M., S.N., M.K. and T.M.; resources, E.H.; data curation, J.N.; writing—original draft preparation, E.H., M.K. and S.M.M.; writing—review and editing, E.H., G.B., M.K., C.U. and S.M.M.; visualization, E.H., D.A.I., S.N. and J.N.; supervision, G.B. and P.R.; project administration, E.H.; funding acquisition, O.J.S. and P.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded through a grant offered by the University of Rwanda in partnership with SIDA (Swedish International Development Agency) under the UR-Sweden program (UR-SIDA 2021-2024). The grant supported all research activities, including data collection, the purchase of equipment and materials, and fieldwork. The APC was also funded by the same grant.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study are available from the authors upon request.

Acknowledgments

The authors acknowledge the Rwanda Agriculture Board (RAB), the farming cooperatives operating in Rwanda, and the coffee washing stations for their support of this work.

Conflicts of Interest

Author Jackson Ngabonziza was employed by the company Bank of Kigali Plc. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. UNCTAD. Commodities at a Glance—Special Issue on Coffee in East Africa. 2018. Available online: http://creativecommons.org/licenses/by/3.0/igo/ (accessed on 20 October 2023).
  2. Belay, S.; Mideksa, D.; Gebrezgiabher, S.; Seifu, W. Factors affecting coffee (Coffea arabica L.) quality in Ethiopia: A review. J. Multidiscip. Sci. Res. 2016, 4, 27–33. [Google Scholar]
  3. Increasing Agri-Export. NAEB Strategic Plan, Kigali. 2019. Available online: https://naeb.gov.rw/fileadmin/documents/191126NAEBStrategy2019-2024_FINAL.pdf (accessed on 21 October 2023).
  4. Behuria, P. The Politics of Upgrading in Global Value Chains: The Case of Rwanda’s Coffee Sector October 2018; ESID Working Paper No. 108; The University of Manchester: Manchester, UK, 2018. [Google Scholar] [CrossRef]
  5. Waller, J.M. Control of Coffee Diseases. In Coffee; Springer: Boston, MA, USA, 1985; pp. 219–229. [Google Scholar] [CrossRef]
  6. Etana, M.B. A review on the status of coffee berry disease (Colletotrichum kahawae) in Ethiopia. Am. J. Food Technol. 2019, 76, 71–76. [Google Scholar]
  7. Nair, K.P.P. Coffee. In The Agronomy and Economy of Important Tree Crops of the Developing World; Elsevier: Burlington, MA, USA, 2010; pp. 181–208. [Google Scholar] [CrossRef]
  8. Eskes, A.B. Incomplete Resistance to Coffee Leaf Rust. In Durable Resistance in Crops; Lamberti, F., Waller, J.M., Van der Graaff, N.A., Eds.; Springer: New York, NY, USA; Boston, MA, USA, 1983; pp. 291–315. [Google Scholar] [CrossRef]
  9. Boa Sorte, L.X.; Ferraz, C.T.; Fambrini, F.; Goulart, R.D.R.; Saito, J.H. Coffee leaf disease recognition based on deep learning and texture attributes. Procedia Comput. Sci. 2019, 159, 135–144. [Google Scholar] [CrossRef]
  10. Bigirimana, J. Incidence and severity of coffee leaf rust and other coffee pests and diseases in Rwanda. Afr. J. Agric. Res. 2012, 7, 3847–3852. [Google Scholar] [CrossRef]
  11. Bigirimana, J.; Adams, C.G.; Gatarayiha, C.M.; Muhutu, J.C.; Gut, L.J. Occurrence of potato taste defect in coffee and its relations with management practices in Rwanda. Agric. Ecosyst. Environ. 2019, 269, 82–87. [Google Scholar] [CrossRef]
  12. Aristizábal, L.F.; Johnson, M.A. Monitoring Coffee Leaf Rust (Hemileia vastatrix) on Commercial Coffee Farms in Hawaii: Early Insights from the First Year of Disease Incursion. Agronomy 2022, 12, 1134. [Google Scholar] [CrossRef]
  13. Li, K.; Hajian-Forooshani, Z.; Vandermeer, J.; Perfecto, I. Coffee leaf rust (Hemileia vastatrix) is spread by rain splash from infected leaf litter in a semi-controlled experiment. J. Plant Pathol. 2023, 105, 667–672. [Google Scholar] [CrossRef]
  14. Gichuru, E.; Alwora, G.; Gimase, J.; Kathurima, C. Coffee Leaf Rust (Hemileia vastatrix) in Kenya—A Review. Agronomy 2021, 11, 2590. [Google Scholar] [CrossRef]
  15. Talhinhas, P.; Batista, D.; Diniz, I.; Vieira, A.; Silva, D.N.; Loureiro, A.; Tavares, S.; Pereira, A.P.; Azinheira, H.G.; Guerra-Guimarães, L.; et al. The coffee leaf rust pathogen Hemileia vastatrix: One and a half centuries around the tropics. Mol. Plant Pathol. 2017, 18, 1039–1051. [Google Scholar] [CrossRef]
  16. Avelino, J.; Gagliardi, S.; Perfecto, I.; Isaac, M.E.; Liebig, T.; Vandermeer, J.; Merle, I.; Hajian-Forooshani, Z.; Motisi, N. Tree Effects on Coffee Leaf Rust at Field and Landscape Scales. Plant Dis. 2023, 107, 247–261. [Google Scholar] [CrossRef] [PubMed]
  17. Koutouleas, D.B. Collinge, Coffee Leaf Rust Back with a Vengeance. 2022. Available online: www.bspp.org.uk (accessed on 24 December 2023).
  18. Nelson, S.C. Cercospora leaf spot and berry blotch of coffee. Plant Dis. 2008, PD-41, 1–6. Available online: http://www.ctahr.hawaii.edu/freepubs (accessed on 24 December 2023).
  19. Tembo, S.M. Cercospora Leaf Spot of Coffee: Cercospora Coffeicola; (Brown Eye Spot, Berry Blotch in English). PlantwisePlus Knowledge Bank: Beijing, China, 2023. [Google Scholar] [CrossRef]
  20. Luzinda, H.; Nelima, M.; Wabomba, A.; Kangire, A.; Musoli, P.; Musebe, R. Farmer awareness, coping mechanisms and economic implications of coffee leaf rust disease in Uganda. Uganda J. Agric. Sci. 2016, 16, 207–217. [Google Scholar] [CrossRef]
  21. Javaid, M.; Haleem, A.; Khan, I.H.; Suman, R. Understanding the potential applications of Artificial Intelligence in the Agriculture Sector. Adv. Agrochem 2023, 2, 15–30. [Google Scholar] [CrossRef]
  22. Shahi, T.B.; Xu, C.-Y.; Neupane, A.; Guo, W. Recent Advances in Crop Disease Detection Using UAV and Deep Learning Techniques. Remote Sens. 2023, 15, 2450. [Google Scholar] [CrossRef]
  23. Yashwanth, M.; Chandra, M.L.; Pallavi, K.; Showkat, D.; Kumar, P.S. Agriculture Automation using Deep Learning Methods Implemented using Keras. In Proceedings of the 2020 IEEE International Conference for Innovation in Technology (INOCON 2020), Bangalore, India, 6–8 November 2020; pp. 1–6. [Google Scholar] [CrossRef]
  24. Paulos, E.B.; Woldeyohannis, M.M. Detection and Classification of Coffee Leaf Disease using Deep Learning. In Proceedings of the 2022 International Conference on Information and Communication Technology for Development for Africa (ICT4DA 2022), Bahir Dar, Ethiopia, 28–30 November 2022; pp. 1–6. [Google Scholar] [CrossRef]
  25. Abuhayi, B.M.; Mossa, A.A. Coffee disease classification using Convolutional Neural Network based on feature concatenation. Inform. Med. Unlocked 2023, 39, 101245. [Google Scholar] [CrossRef]
  26. Yamashita, J.V.Y.B.; Leite, J.P.R. Coffee disease classification at the edge using deep learning. Smart Agric. Technol. 2023, 4, 100183. [Google Scholar] [CrossRef]
  27. Hitimana, E.; Sinayobye, O.J.; Ufitinema, J.C.; Mukamugema, J.; Rwibasira, P.; Murangira, T.; Masabo, E.; Chepkwony, L.C.; Kamikazi, M.C.A.; Uwera, J.A.U.; et al. An Intelligent System-Based Coffee Plant Leaf Disease Recognition Using Deep Learning Techniques on Rwandan Arabica Dataset. Technologies 2023, 11, 116. [Google Scholar] [CrossRef]
  28. Nguyen, T.H.; Ta, X.T.; Doan, D.; Nguyen, M.S. A Full Framework of Disease Treatment Assistant System for Precision Agriculture. In Proceedings of the 2022 International Conference on Advanced Computing and Analytics, Ho Chi Minh City, Vietnam, 21–23 November 2022; pp. 48–53. [Google Scholar] [CrossRef]
  29. Jafar, A.; Bibi, N.; Naqvi, R.A.; Sadeghi-Niaraki, A.; Jeong, D. Revolutionizing agriculture with artificial intelligence: Plant disease detection methods, applications, and their limitations. Front. Plant Sci. 2024, 15, 1356260. [Google Scholar] [CrossRef] [PubMed]
  30. Barman, U.; Sarma, P.; Rahman, M.; Deka, V.; Lahkar, S.; Sharma, V.; Saikia, M.J. ViT-SmartAgri: Vision Transformer and Smartphone-Based Plant Disease Detection for Smart Agriculture. Agronomy 2024, 14, 327. [Google Scholar] [CrossRef]
  31. Jayashree, A.; Suresh, K.P.; Raaga, R. Advancing Coffee Leaf Rust Disease Management: A Deep Learning Approach for Accurate Detection and Classification Using Convolutional Neural Networks. J. Exp. Agric. Int. 2024, 46, 108–118. [Google Scholar] [CrossRef]
  32. Babatunde, R.S.; Babatunde, A.N.; Ogundokun, R.O.; Yusuf, O.K.; Sadiku, P.O.; Shah, M.A. A novel smartphone application for early detection of habanero disease. Sci. Rep. 2024, 14, 1423. [Google Scholar] [CrossRef] [PubMed]
  33. Nzeyimana, I. Optimizing Arabica Coffee Production Systems in Rwanda: A Multiple-Scale Analysis. 2018. Available online: https://www.researchgate.net/publication/325615794_Optimizing_Arabica_coffee_production_systems_in_Rwanda (accessed on 26 October 2023).
  34. Ian, G.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Available online: http://www.deeplearningbook.org (accessed on 10 November 2023).
  35. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
  36. Patterson, J.; Gibson, A.; Loukides, M.; McGovern, T. Major architectures of deep networks. In Deep Learning A Practitioner’s Approach; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2017; pp. 117–164. Available online: https://www.academia.edu/37119738/Deep_Learning_A_Practitioners_Approach (accessed on 18 November 2023).
  37. Demilie, W.B. Plant disease detection and classification techniques: A comparative study of the performances. J. Big Data 2024, 11, 5. [Google Scholar] [CrossRef]
  38. Hitimana, E.; Bajpai, G.; Musabe, R.; Sibomana, L.; Kayalvizhi, J. Implementation of IoT Framework with Data Analysis Using Deep Learning Methods for Occupancy Prediction in a Building. Future Internet 2021, 13, 67. [Google Scholar] [CrossRef]
  39. Kuradusenge, M.; Hitimana, E.; Hanyurwimfura, D.; Rukundo, P.; Mtonga, K.; Mukasine, A.; Uwitonze, C.; Ngabonziza, J.; Uwamahoro, A. Crop Yield Prediction Using Machine Learning Models: Case of Irish Potato and Maize. Agriculture 2023, 13, 225. [Google Scholar] [CrossRef]
  40. Tzenetopoulos, A.; Masouros, D.; Koliogeorgi, K.; Xydis, S.; Soudris, D.; Chazapis, A.; Acquaviva, J. EVOLVE: Towards converging big-data, high-performance, and cloud-computing worlds. In Proceedings of the 2022 Design, Automation & Test in Europe Conference & Exhibition, Antwerp, Belgium, 14–23 March 2022; pp. 975–980. [Google Scholar]
  41. Niu, C.; Wang, L. Big data-driven scheduling optimization algorithm for Cyber-Physical Systems based on a cloud platform. Comput. Commun. 2022, 181, 173–181. [Google Scholar] [CrossRef]
  42. Wan, S.; Ding, S.; Chen, C. Edge computing enabled video segmentation for real-time traffic monitoring in the Internet of Vehicles. Pattern Recognit. 2022, 121, 108146. [Google Scholar] [CrossRef]
  43. Zhou, S.; Wei, C.; Song, C.; Pan, X.; Chang, W.; Yang, L. Short-term traffic flow prediction of the smart city using 5G internet of vehicles based on edge computing. IEEE Trans. Intell. Transp. Syst. 2022, 24, 2229–2238. [Google Scholar] [CrossRef]
  44. Ahmad, J.; Jan, B.; Farman, H.; Ahmad, W.; Ullah, A. Disease detection in plum using convolutional neural network under true field conditions. Sensors 2020, 20, 5569. [Google Scholar] [CrossRef] [PubMed]
  45. Senington, R.; Pataki, B.; Wang, X.V. Using docker for factory system software management: Experience report. Procedia CIRP 2018, 72, 659–664. [Google Scholar] [CrossRef]
  46. Mohammed, H.; Faraj, K. A Python-WSGI and PHP-Apache Web Server Performance Analysis by Search Page Generator (SPG). UKH J. Sci. Eng. 2021, 5, 132–138. [Google Scholar] [CrossRef]
  47. García, L. DEEPaaS API: A REST API for Machine Learning and Deep Learning models. J. Open Source Softw. 2019, 4, 1517. [Google Scholar] [CrossRef]
  48. Weber, A.S.; D’amato, D.; Atkinson, B.K. Python Regius. Herpetol. Rev. 2022, 53, 632. [Google Scholar] [CrossRef]
  49. Chethan, K.S.; Donepudi, S.; Supreeth, H.V.; Maani, V.D. Mobile application for classification of plant leaf diseases using image processing and neural networks. In Data Intelligence and Cognitive Informatics; Springer: Singapore, 2021; pp. 287–306. [Google Scholar] [CrossRef]
  50. Valdoria, J.C.; Caballeo, A.R.; Fernandez, B.I.D.; Condino, J.M.M. iDahon: An Android-based terrestrial plant disease detection mobile application through digital image processing using deep learning neural network algorithm. In Proceedings of the 2019 4th International Conference on Information Technology (InCIT), Bangkok, Thailand, 24–25 October 2019. [Google Scholar] [CrossRef]
  51. Syamsuri, B.; Negara, I. Plant disease classification using Lite pre-trained deep convolutional neural network on Android mobile device. Int. J. Innov. Technol. Explor. Eng. 2019, 9, 2796–2804. [Google Scholar] [CrossRef]
  52. Elgendy, M. Deep Learning for Vision Systems; Manning Publications: Shelter Island, NY, USA, 2020; pp. 240–262. ISBN 9781617296192. [Google Scholar]
  53. Tugrul, B.; Elfatimi, E.; Eryigit, R. Convolutional Neural Networks in Detection of Plant Leaf Diseases: A Review. Agriculture 2022, 12, 1192. [Google Scholar] [CrossRef]
  54. Shoaib, M.; Shah, B.; Ei-Sappagh, S.; Ali, A.; Ullah, A.; Alenezi, F.; Gechev, T.; Hussain, T.; Ali, F. An advanced deep learning models-based plant disease detection: A review of recent research. Front. Plant Sci. 2023, 14, 1158933. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Intervention is conducted when feedback is shared at an upper level.
Figure 2. Response time once the query is shared.
Figure 3. Farmers’s phone possession and category of the phones. (a) Farmers’ readiness on the phone possession. (b) Farmers with phone categories.
Figure 3. Farmers’s phone possession and category of the phones. (a) Farmers’ readiness on the phone possession. (b) Farmers with phone categories.
Software 03 00007 g003
Figure 4. DenseNet CNN detailed architecture.
Figure 5. Proposed cloud computing with integrated deep learning pipeline.
Figure 6. User perspective system flow diagram.
Figure 7. Inception-v3 model performance analysis using the collected dataset. (a) Model training; (b) class-wise metrics.
Figure 8. ResNet50 model performance analysis using the collected dataset. (a) Model training; (b) class-wise metrics.
Figure 9. VGG16 model performance analysis using the collected dataset. (a) Model training; (b) class-wise metrics.
Figure 10. Xception model performance analysis using the collected dataset. (a) Model training; (b) class-wise metrics.
Figure 11. DenseNet model performance analysis using the collected dataset. (a) Model training; (b) class-wise metrics.
Figure 12. Mobile reporting tool. (a) App localization functionality; (b) reporter authentication and authorization; (c) reporter's dashboard.
Figure 13. Reporter mobile application. (a) Application’s available functionalities; (b) about the application.
Figure 14. Mobile application’s functionalities: (a) Loading image; (b) DL-based detection and classification; (c) reported history.
Figure 15. Web-based application for coffee leaf disease reporting with no classification.
Figure 16. Web-based application with integrated DL model for disease detection and classification.
Figure 17. Web-based application: visualization of reported diseases on Google Maps.
Figure 18. Web-based application: aggregated reports generated on a monthly and quarterly basis.
Table 1. Trained Model Architecture.

| Parameters | Inception-v3 | Xception | ResNet50 | DenseNet | VGG16 |
|---|---|---|---|---|---|
| Total layers | 314 | 135 | 178 | 430 | 22 |
| Max pool layers | 4 | 4 | 1 | 1 | 5 |
| Dense layers | 2 | 2 | 2 | 2 | 2 |
| Dropout layers | - | - | 2 | - | 2 |
| Flatten layers | - | - | 1 | - | 1 |
| Filter size | 1 × 1, 3 × 3, 5 × 5 | 3 × 3 | 3 × 3 | 3 × 3, 1 × 1 | 3 × 3 |
| Stride | 2 × 2 | 2 × 2 | 2 × 2 | 2 × 2 | 1 |
| Trainable parameters | 23,905,060 | 22,963,756 | 25,689,988 | 8,091,204 | 15,244,100 |
Table 2. Comparative analysis of various network model performances for the coffee leaf dataset.

| Model Types | Training Accuracy (%) | Training Loss | Validation Accuracy (%) | Validation Loss |
|---|---|---|---|---|
| Inception-v3 | 99.34 | 0.0167 | 99.01 | 0.0306 |
| ResNet50 | 98.70 | 0.0565 | 97.80 | 0.0577 |
| Xception | 99.40 | 0.0140 | 98.84 | 0.0337 |
| VGG16 | 98.81 | 0.0291 | 97.53 | 0.0668 |
| DenseNet | 99.57 | 0.0135 | 99.09 | 0.0225 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
