Analysis and Diagnosis of Mango Leaf Diseases through Image Processing.

Abhishek Wadhwani
Apr 30, 2023 · 11 min read


Source: https://www.semanticscholar.org/paper/Expert-system-for-diagnosis-mango-diseases-using-Trongtorkid-Pramokchon/d51e3d51cdb956e7c3d518c8e91e7c022988442e

Introduction

Plant diseases can significantly reduce crop yields, and leaf diseases are particularly damaging because they interfere with photosynthesis and plant growth. Identifying and treating these diseases early is crucial for minimizing the economic impact on farmers and preventing further spread. Advanced technologies such as computer vision and machine learning now enable automated, efficient detection of leaf diseases from images of plant leaves: they can accurately identify and classify diseases, allowing farmers to act immediately to control an outbreak. Developing accurate and efficient methods for leaf disease detection is therefore critical for maintaining healthy crops and ensuring food security.

Dataset

The Mango Leaf Disease dataset on Kaggle contains 4,000 JPG photos of mango leaves at 240x320 resolution, covering healthy leaves and leaves affected by seven diseases: Anthracnose, Bacterial Canker, Cutting Weevil, Die Back, Gall Midge, Powdery Mildew, and Sooty Mould. Each of the eight categories, including the healthy one, has 500 photos. The images were collected from mango trees in four orchards in Bangladesh using a mobile phone camera. The dataset can be used to train machine learning models, such as convolutional neural networks (CNNs), to distinguish healthy from diseased leaves or to differentiate between the diseases, helping with early detection and treatment and ultimately leading to higher crop yields and improved food security. The dataset is published as the “MangoLeafBD Dataset” (Ali et al., 2022).

link: https://www.kaggle.com/datasets/aryashah2k/mango-leaf-disease-dataset
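
As a quick sanity check, the class folders and per-class image counts can be listed directly from the downloaded dataset (a minimal sketch, assuming the Kaggle download keeps one subfolder per class):

import os

# Assumed path to the extracted Kaggle dataset; adjust for your environment.
data_dir = '/kaggle/input/mango-leaf-disease-dataset'

# Each category (healthy plus the seven diseases) is assumed to sit in its own subfolder.
for class_name in sorted(os.listdir(data_dir)):
    class_path = os.path.join(data_dir, class_name)
    if os.path.isdir(class_path):
        num_images = len([f for f in os.listdir(class_path)
                          if f.lower().endswith(('.jpg', '.png'))])
        print(f'{class_name}: {num_images} images')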

Background

Convolutional Neural Network (CNN)

The dataset provided can be utilized to detect and classify mango leaf diseases using CNNs. By feeding high-resolution images of mango leaves into a CNN, features that differentiate healthy and diseased leaves can be identified. The convolutional layers in the CNN can detect patterns, such as discoloration, spots, or irregular shapes, that are characteristic of various mango leaf diseases.

The pooling layers can reduce the feature map dimensions while preserving essential information, and the fully connected layers can generate a probability distribution over the various disease classes. The model can be trained on the labeled dataset to accurately classify new mango leaf images based on their disease status, enabling early detection and treatment of diseases, leading to higher crop yields and improved food security.

Source: https://www.sciencedirect.com/science/article/pii/S2666285X21000303
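
A minimal sketch of such an architecture in Keras is shown below; the layer sizes and the 240x320 input shape are illustrative assumptions rather than the exact model used later in this post:

import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative conv -> pool -> dense stack with 8 output nodes for the 8 leaf categories.
model = models.Sequential([
    layers.Input(shape=(240, 320, 3)),              # height x width x RGB channels
    layers.Conv2D(32, (3, 3), activation='relu'),   # detects local patterns such as spots and edges
    layers.MaxPooling2D((2, 2)),                    # shrinks feature maps while keeping salient info
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(8, activation='softmax'),          # probability distribution over the disease classes
])
model.summary()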

Activation Functions

1. Rectified Linear Unit (ReLU) — ReLU is a popular activation function in deep neural networks. Its equation is f(x) = max(0, x): if the input is negative the function returns 0, and if it is positive it returns the input unchanged. ReLU is piecewise linear; its graph has a slope of 1 for inputs greater than or equal to 0 and a slope of 0 for negative inputs.

2. Tanh (Hyperbolic Tangent) — Tanh is another activation function frequently used in neural networks. Its formula is f(x) = (e^x - e^-x) / (e^x + e^-x). Tanh is zero-centered and maps input values to the interval [-1, 1], which helps the model learn. Its graph is an “S”-shaped curve with range [-1, 1].

3. Softmax — Softmax is a popular activation function for multi-class classification. Its formula is f(x_i) = e^(x_i) / sum(e^(x_j) for j = 1 to n). Softmax converts a vector of input values into a probability distribution over the n classes. Viewed for a single input with the others held fixed, its output follows a sigmoid-like curve that rises quickly and then flattens out as the input grows.

4. Sigmoid — The sigmoid activation function is also frequently used in neural networks. Its formula is f(x) = 1 / (1 + e^(-x)). Sigmoid maps input values to the range (0, 1), which lets them be interpreted as probabilities. Its graph is an “S”-shaped curve bounded by 0 and 1.

5. ELU (Exponential Linear Unit) — ELU is a more recent activation function designed to mitigate the vanishing gradient problem in deep neural networks. Its formula is f(x) = x for x > 0 and f(x) = alpha * (e^x - 1) for x <= 0, where alpha is a small positive constant. ELU maps input values to the range (-alpha, infinity). Unlike ReLU, which outputs exactly zero for negative inputs, ELU produces small negative values there, which helps keep gradients flowing.

Activation Functions: (a) ReLU (b) ELU (c) SELU (d) Sigmoid (e) Tanh. Image source: https://www.researchgate.net/figure/Activation-Functions-a-RELU-b-ELU-c-SELU-d-Sigmoid-e-Tanh_fig5_346467552
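
To make these formulas concrete, here is a small NumPy sketch of the five activations (a hypothetical helper for illustration, not code from the original notebook):

import numpy as np

def relu(x):
    return np.maximum(0, x)

def tanh(x):
    return np.tanh(x)  # equivalent to (e^x - e^-x) / (e^x + e^-x)

def softmax(x):
    e = np.exp(x - np.max(x))  # subtracting the max improves numerical stability
    return e / e.sum()

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))     # negative inputs become 0
print(sigmoid(x))  # values squashed into (0, 1)
print(elu(x))      # small negative outputs instead of hard zeros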

Random Forest Classifier

Random Forest is a popular machine learning algorithm for classification tasks such as identifying and diagnosing plant diseases from leaf images. Using the Mango Leaf Disease dataset, the algorithm can be trained on features extracted from the images, such as color, texture, and shape, to build decision tree models. Combining many decision trees into a random forest lowers the likelihood of overfitting and increases classification accuracy.

Because a Random Forest can evaluate a large number of images quickly and produce accurate predictions, it is a useful tool for distinguishing the various diseases that damage mango leaves. The model can also be tuned by adjusting hyperparameters, such as the number of decision trees, to further improve its performance.
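
A minimal scikit-learn sketch of this approach is shown below; the flattened-pixel features and the random placeholder data are assumptions for illustration, and the post's actual feature extraction may differ:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: (n_samples, n_features) per-image features (e.g. flattened, resized pixels)
# y: (n_samples,) integer class labels (0-7 for the eight categories)
X = np.random.rand(400, 64 * 64 * 3)   # placeholder data for illustration only
y = np.random.randint(0, 8, size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

rf = RandomForestClassifier(n_estimators=100, random_state=42)  # n_estimators = number of trees
rf.fit(X_train, y_train)
print('Validation accuracy:', accuracy_score(y_test, rf.predict(X_test)))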

Decision Tree Classifier

Decision tree classifiers can also be employed to identify these diseases based on their visual characteristics. Decision trees are a common classification technique that builds a tree-like model of decisions and their possible outcomes from the input features.

In the context of the Mango Leaf Disease dataset, a decision tree could be trained on features such as the color, shape, and texture of the leaves, along with any other traits that help distinguish the disease types. Once trained, the tree can predict the disease type for new photos of mango leaves, helping farmers spot potential outbreaks promptly and take appropriate action.

Image source: https://commons.wikimedia.org/wiki/File:Decision_Tree_vs._Random_Forest.png
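
Swapping the Random Forest in the sketch above for a single decision tree is a one-line change; the max_depth value here is an illustrative assumption used to limit overfitting:

from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

# Reuses X_train, X_test, y_train, y_test from the Random Forest sketch above.
dt = DecisionTreeClassifier(max_depth=20, random_state=42)
dt.fit(X_train, y_train)
print('Validation accuracy:', accuracy_score(y_test, dt.predict(X_test)))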

Implementation

Importing Libraries and Packages

We first import the libraries and packages used throughout the program: NumPy, PIL (Python Imaging Library), Pandas, OpenCV (cv2), TensorFlow, and scikit-learn, along with matplotlib and seaborn for data visualization and scikit-learn's classification_report, confusion_matrix, and accuracy_score for measuring the performance of the machine learning models.

import numpy as np
import pandas as pd
import PIL
from PIL import Image
import cv2
import os
import random
from random import shuffle
from tqdm import tqdm
import glob
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout, BatchNormalization

Data Visualization

The ‘visualize_dataset’ function accepts four parameters: the directory path to the dataset, the desired image height, the desired image width, and the number of samples to display.

The function first creates a matplotlib figure with the specified number of subplots. For each subplot, it picks a random class folder from the dataset, chooses a random image file from that folder, and builds the file path. If the file is an image, OpenCV loads it, resizes it to the required height and width, and converts the color space from BGR to RGB. Finally, the image is shown in the subplot with the axis disabled and the subplot title set to the folder name.

def visualize_dataset(data_dir, img_height, img_width, num_samples=10):
    fig, axes = plt.subplots(1, num_samples, figsize=(20, 5))
    # Each subfolder of the dataset directory corresponds to one class.
    folders = [folder for folder in os.listdir(data_dir) if os.path.isdir(os.path.join(data_dir, folder))]
    for i in range(num_samples):
        folder = random.choice(folders)
        file = random.choice(os.listdir(os.path.join(data_dir, folder)))
        file_path = os.path.join(data_dir, folder, file)
        if file.endswith(('.jpg', '.png')):
            # cv2.resize expects (width, height); cv2.imread returns BGR, so convert to RGB for display.
            img = cv2.cvtColor(cv2.resize(cv2.imread(file_path), (img_width, img_height)), cv2.COLOR_BGR2RGB)
            axes[i].imshow(img)
            axes[i].set_title(folder)
            axes[i].axis('off')
    plt.show()

data_dir = '/kaggle/input/mango-leaf-disease-dataset'
img_height, img_width, num_samples = 256, 256, 10
visualize_dataset(data_dir, img_height, img_width, num_samples)

Image Classification with a Pretrained Model: ResNet50

The code loads a pre-trained ResNet50 model for image classification using TensorFlow's ‘saved_model.load()’ function. The model predicts class probabilities for each image listed in the supplied CSV files (train.csv and test.csv), which contain filenames and their labels, and the predictions are stored in a new ‘prediction’ column of the corresponding CSV files. The classification report and accuracy score for the test set are then computed and printed.

The photos are split into training and validation sets using a 90/10 split, resized to a fixed height and width, and processed in batches of a fixed size. The model achieved very high performance, with F1-scores of 0.99 or 1.00 for every class and an overall accuracy of 1.00, which suggests it is highly effective at classifying mango leaves into their respective disease categories.
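
For readers who want to reproduce a comparable setup, here is a hedged sketch that uses the Keras ResNet50 application as a frozen feature extractor on top of tf.keras.utils.image_dataset_from_directory; the original post loads a SavedModel and reads CSV files instead, so the data pipeline, image size, and epoch count below are assumptions:

import tensorflow as tf

# Assumed directory layout: one subfolder per class under data_dir.
data_dir = '/kaggle/input/mango-leaf-disease-dataset'
img_height, img_width, batch_size = 224, 224, 32

train_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir, validation_split=0.1, subset='training', seed=42,
    image_size=(img_height, img_width), batch_size=batch_size)
val_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir, validation_split=0.1, subset='validation', seed=42,
    image_size=(img_height, img_width), batch_size=batch_size)

base = tf.keras.applications.ResNet50(include_top=False, weights='imagenet',
                                      input_shape=(img_height, img_width, 3), pooling='avg')
base.trainable = False  # keep the pretrained ImageNet weights frozen

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(img_height, img_width, 3)),
    tf.keras.layers.Lambda(tf.keras.applications.resnet50.preprocess_input),
    base,
    tf.keras.layers.Dense(8, activation='softmax'),  # eight leaf categories
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_ds, validation_data=val_ds, epochs=5)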

Experimenting with CNN Layers and Various Activation Functions

Here, we classify photos of mango leaves into their categories using a custom convolutional neural network (CNN). The model stacks several layers, including convolutional, max pooling, and dense layers. A Rescaling layer scales the input images' pixel values to the range [0, 1], and the output layer has one node per class (eight categories in this dataset). The model is compiled with a classification loss function and the Adam optimizer.

The model is trained with the ‘fit()’ method on the training set, and the validation set is used to assess performance. The ‘his_cnn_md1’ variable stores the training history, including accuracy and loss values for the training and validation data across epochs. These values are then plotted with Matplotlib to show how accuracy and loss change over the course of training.
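
A hedged sketch of what such a model might look like is given below, with the hidden-layer activation exposed as a parameter so the experiments that follow can be reproduced by changing a single argument; the layer sizes, image size, and epoch count are illustrative assumptions, and ‘his_cnn_md1’ simply mirrors the variable name mentioned above:

import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Rescaling, Conv2D, MaxPooling2D, Flatten, Dense

data_dir = '/kaggle/input/mango-leaf-disease-dataset'  # assumed dataset path
img_height, img_width, batch_size = 256, 256, 32

train_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir, validation_split=0.1, subset='training', seed=42,
    image_size=(img_height, img_width), batch_size=batch_size)
val_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir, validation_split=0.1, subset='validation', seed=42,
    image_size=(img_height, img_width), batch_size=batch_size)

def build_cnn(activation='relu', num_classes=8):
    # Conv/pool pairs extract features; Dense layers map them to class probabilities.
    return Sequential([
        Rescaling(1.0 / 255, input_shape=(img_height, img_width, 3)),
        Conv2D(32, 3, activation=activation),
        MaxPooling2D(),
        Conv2D(64, 3, activation=activation),
        MaxPooling2D(),
        Flatten(),
        Dense(128, activation=activation),
        Dense(num_classes, activation='softmax'),
    ])

model = build_cnn('relu')  # swap in 'tanh', 'softmax', 'sigmoid', or 'elu' for the experiments below
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
his_cnn_md1 = model.fit(train_ds, validation_data=val_ds, epochs=5)

# Plot training vs. validation accuracy over the epochs.
plt.plot(his_cnn_md1.history['accuracy'], label='train')
plt.plot(his_cnn_md1.history['val_accuracy'], label='validation')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()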

I then trained the same CNN with different activation functions to observe how the results vary.

CNN with ReLU Activation function:

Result from ReLU: The CNN model achieved an accuracy of 93% on the training data and 85% on the validation data after 5 epochs. The training and validation loss decreased over time.

CNN with Tanh Activation function:

Result: The model was trained for 5 epochs and achieved a maximum validation accuracy of 0.4625 with a validation loss of 1.5762. The training accuracy was 0.3703 with a training loss of 1.6351.

CNN with Softmax Activation function:

Result: Over five epochs of training and validation, the CNN model performed poorly in categorizing the photos, with a very low accuracy of 12.53 percent on both the training and validation sets. The consistently high loss values also indicated a poor model fit.

CNN with Sigmoid Activation function:

Result: Trained for 5 epochs on the mango leaf disease dataset, the CNN model reached only 11–12% accuracy on both the training and validation sets. The loss also remained high throughout training, indicating that the model did not effectively learn from the data.

CNN with ELU Activation function:

Result: The model achieved a training accuracy of 77.33% and a validation accuracy of 73.25% after 5 epochs. Over the epochs, the training loss decreased from 5.42 to 0.66 and the validation loss from 2.03 to 0.70.

Experimenting with Random Forest Classifier

Result: The model achieved an overall accuracy of 74%. Precision, recall, and F1-score were calculated for each class, and the macro-average F1-score was 0.74, indicating balanced performance across classes.

Experimenting with Decision Tree Classifier

Result: The classification model achieved an overall accuracy of 69% on a dataset with 8 classes. The precision and recall scores varied across the classes, with some classes performing better than others. The weighted average of precision, recall, and F1-score was 0.69.

Comparison of Various Activation Functions (CNN), Random Forest, and Decision Tree

A bar chart compares the validation accuracies of the different models: ResNet50, the custom CNN with ReLU, Tanh, Softmax, Sigmoid, and ELU activations, the Random Forest Classifier, and the Decision Tree Classifier.

ResNet50 had the highest accuracy, followed by the ReLU CNN, the Random Forest Classifier, and the ELU CNN; Sigmoid had the lowest accuracy.
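
The chart can be reproduced with Matplotlib from the validation accuracies reported in the sections above (the ResNet50 and Sigmoid values are approximate, taken from the rounded figures in the text):

import matplotlib.pyplot as plt

# Validation accuracies reported earlier in this post (approximate where only ranges were given).
models = ['ResNet50', 'ReLU', 'Tanh', 'Softmax', 'Sigmoid', 'ELU', 'Random Forest', 'Decision Tree']
val_acc = [1.00, 0.85, 0.4625, 0.1253, 0.12, 0.7325, 0.74, 0.69]

plt.figure(figsize=(10, 4))
plt.bar(models, val_acc, color='seagreen')
plt.ylabel('Validation accuracy')
plt.title('Comparison of models on the mango leaf disease dataset')
plt.xticks(rotation=30, ha='right')
plt.tight_layout()
plt.show()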

My Contribution

Here is my contribution to this work on the mango leaf disease image classifier.

  • I conducted an experiment comparing ResNet50, ReLU, Tanh, Softmax, Sigmoid, ELU, the Random Forest Classifier, and the Decision Tree Classifier, and I tweaked the model’s hyperparameters, specifically the convolutional and pooling layer pairs in the CNN.
  • I focused mainly on how different activation functions perform in my custom CNN model.
  • I determined which of the different machine learning algorithms and variations works best for the mango leaf dataset.
  • Finally, I created a hyperparameter-tuned version of the sequential CNN model.

Technical Challenges

During the development of this image classifier, I faced a few challenges. Here’s a brief list of those challenges along with the solutions I found to resolve them:

  • One of the main challenges was to understand and evaluate the different Machine Learning models used for training and validation of the data.
  • The dataset contained multiple versions of the same kinds of images, rotated relative to their edges, which made it difficult to merge the data into a single directory. Resolving this took a significant amount of time.
  • Another challenge was selecting the right ML models for my image classification app. After some research, I chose models that are well suited to the task and widely used for image classification.
  • Finally, training the models with images required a considerable amount of computation power for the CNN.

Submission Link: https://github.com/Abhismoothie/ImageClassifier-MangoLeafDiseaseDataset

References

Kaggle dataset — Mango Leaf Disease Images, link: https://www.kaggle.com/datasets/aryashah2k/mango-leaf-disease-dataset

GitHub Gist, Visualizing a Dataset with Matplotlib and OpenCV — https://gist.github.com/SebLague/ebc558e2346b75f5b57f6cd8d2e93f6a

PyTorch Tutorials, Image Classification — https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html

PyTorch, Convolutional Neural Networks — https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#convolutional-neural-network

Scikit-learn, Decision Tree Classifier — https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html

Random Forest Classification — https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html

IBM, Image Recognition and Classification — https://www.ibm.com/cloud/learn/image-recognition-and-classification

Medium, Wild Animals Image Classification Using Machine Learning — https://medium.com/@biraaj.ca/wild-animals-image-classification-using-machine-learning-multiclass-f9307368bdf6

Medium, Wild Life Image Classifier — https://medium.com/@bolajkiran_89403/wild-life-image-classifier-dc76cc28f4d4
