Measuring plant disease severity using the pliman R package


pliman (PLant IMage ANalysis) is an R package for the analysis of images of plants, mainly leaves, that offers a function for measuring plant disease severity. In this post I will demonstrate the use of pliman for determining severity on leaves affected by soybean rust as a case study.

Emerson Del Ponte


Two weeks ago I came across pliman, a new R package developed by Tiago Olivoto that provides a suite of functions for conducting several analyses on images of plants. For obvious reasons, I was greatly interested in testing a function that allows measuring plant disease severity - or the percentage leaf area affected.

More than two dozen software programs, most of them proprietary, have been used for measuring percent severity in phytopathometry research.[1] As far as I know, there is no specialized R package, or one with functions built particularly for plant disease severity measurement.

The availability of these image analysis tools is of great importance mainly for research purposes, or in situations where the most accurate severity measurement is necessary.[2] Examples include the development of standard area diagrams (sets of leaf images with known severity) as well as their validation, when an “actual” severity measurement is required. These images are usually obtained under standardized conditions (light and background) and analyzed individually. However, batch processing can automate and greatly speed up the process - which is of course beneficial in any research field.

In this post I will demonstrate how to use the symptomatic_area() function of pliman to measure severity in 10 leaves (batch processing) infected with soybean rust. I will further compare the measurements with those determined using the QUANT software in a previous work.[3]

What is healthy and diseased?

The most critical step is the first one: correctly defining the color palettes, which in pliman are separate images representing each of three classes, named background (b), symptomatic (s) and healthy (h). Getting this step right avoids garbage in, garbage out.

These reference image palettes can be made simply by manually sampling small areas of the image and producing a composite image. Of course, the results may vary significantly depending on how these areas are chosen, which is subjective and depends on the researcher's experience. Inspecting the processed masks is important for creating image palettes that are most representative of each class.

Here, I cut and pasted several sections of images representative of each class from a few leaves into a Google slide. Once each image palette was ready, I exported it as a separate PNG file (JPG also works). These were named: sbr_b.png, sbr_h.png and sbr_s.png.
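Alternatively, the sampling can be scripted. Below is a minimal sketch that crops a small region from a leaf image using standard array indexing (pliman images are EBImage objects) and writes it out as a palette file. The pixel coordinates are hypothetical and would need to be adjusted to the actual image.

```r
library(pliman)
library(EBImage)

# import a leaf image; pliman returns an EBImage Image object,
# so it can be subset with standard array indexing
leaf <- image_import("originals/img46.png")

# crop a small symptomatic region -- the pixel ranges below are
# hypothetical and must be chosen by inspecting the actual image
s_patch <- leaf[200:260, 300:360, ]

# write the patch out to be used as the symptomatic palette
writeImage(s_patch, "sbr_s.png")
```

Repeating this for healthy and background regions (and pasting several patches together, if desired) produces the same three palette files used below.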

Now that we have the image palettes, we can start by importing the image palettes into the environment for further analysis. Let’s create an image object for each palette named h (healthy), s (symptoms) and b (background).

# install.packages("pliman")
# note that pliman requires R >= 4.0 and the EBImage package
# install.packages("BiocManager")
# BiocManager::install("EBImage")
library(pliman)

h <- image_import("sbr_h.png")
s <- image_import("sbr_s.png")
b <- image_import("sbr_b.png")

We can visualize the imported images using the image_combine() function.

image_combine(h, s, b, ncol = 3)

Measuring severity

Single image

To determine severity in a single image (img46.png), the image file needs to be loaded and assigned to an object using the same image_import() function used to load the palettes for each of the predefined classes. We can then visualize the image, again using image_combine().

img <- image_import("originals/img46.png")
image_combine(img)

Now the fun begins with the symptomatic_area() function to determine severity. Four arguments are needed: the target image and each of the three color palette images. As the author of the package says, “pliman will take care of all details!”

symptomatic_area(img = img,
                 img_healthy = h,
                 img_symptoms = s,
                 img_background = b,
                 show_image = TRUE)

   healthy symptomatic
1 93.24391    6.756089

Lots of images

That was fun, but usually we don't have a single image to process but several, and processing each one with the procedure above would quickly become tedious.

To automate the process, pliman offers a batch processing approach. Instead of using the img argument, one can use img_pattern and define the prefix shared by the image file names. In addition, we also need to indicate the folder where the original files are located.

If the user wants to save the processed masks, the save_image argument needs to be set to TRUE and the directory where the images will be saved must also be provided. Check below how to process 10 images of soybean rust symptoms. The outcome is a data frame with the percent healthy and percent symptomatic area for each leaf.

pliman <- symptomatic_area(img_pattern = "img",
                 dir_original = "originals" ,
                 dir_processed = "processed",
                 save_image = TRUE,
                 img_healthy = h,
                 img_symptoms = s,
                 img_background = b,
                 show_image = FALSE)
Processing image img11 |===                            | 10% 00:00:00 
Processing image img35 |======                         | 20% 00:00:04 
Processing image img37 |=========                      | 30% 00:00:08 
Processing image img38 |============                   | 40% 00:00:12 
Processing image img46 |================               | 50% 00:00:14 
Processing image img5 |===================             | 60% 00:00:18 
Processing image img63 |======================         | 70% 00:00:22 
Processing image img67 |=========================      | 80% 00:00:26 
Processing image img70 |============================   | 90% 00:00:31 
Processing image img75 |===============================| 100% 00:00:34 
   sample  healthy symptomatic
1   img11 72.15811   27.841891
2   img35 33.98541   66.014586
3   img37 59.33331   40.666691
4   img38 80.79859   19.201408
5   img46 93.33026    6.669743
6    img5 20.88541   79.114593
7   img63 97.24194    2.758056
8   img67 99.84139    0.158607
9   img70 31.07596   68.924039
10  img75 92.35234    7.647665

With the argument save_image set to TRUE, the images are all saved in the folder with the standard prefix “proc.”
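To confirm the masks were written, we can list the contents of the processed folder with base R:

```r
# list the masks saved by symptomatic_area(); each file name
# carries the "proc" prefix mentioned above
list.files("processed", pattern = "^proc")
```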

How good are these measures?

These 10 images were previously processed in the QUANT software for determining severity. Let's create a tibble with the image code and respective “actual” severity, taking QUANT's measurements as the reference.

# tribble() comes with tibble; left_join() and ggplot() below
# need dplyr and ggplot2, all loaded via the tidyverse
library(tidyverse)

quant <- tribble(
  ~sample, ~actual,
   "img5",     75,
  "img11",     24,
  "img35",     52,
  "img37",     38,
  "img38",     17,
  "img46",      7,
  "img63",    2.5,
  "img67",   0.25,
  "img70",     67,
  "img75",     10
)

We can now combine the two dataframes and produce a scatter plot relating the two measures.

dat <- left_join(pliman, quant)

dat %>% 
  ggplot(aes(actual, symptomatic)) +
  geom_point() +
  geom_abline(slope = 1, intercept = 0) +
  labs(x = "Quant", 
       y = "pliman")

The concordance correlation coefficient (CCC) measures agreement between two observers or two methods, and here indicates how accurate the pliman measurements are compared with a standard. The coefficient is greater than 0.97 (1.0 is perfect concordance), suggesting excellent agreement!

library(epiR) # provides epi.ccc()
ccc <- epi.ccc(dat$actual, dat$symptomatic)
ccc$rho.c
        est    lower     upper
1 0.9832351 0.947739 0.9946877


The community of R users may enjoy using pliman as an alternative to proprietary software or other point-and-click open-source solutions such as ImageJ. The simplicity of the batch processing approach can greatly speed up assessments, and the user can set arguments to run the analysis in parallel for enhanced computational speed.
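A sketch of a parallel run is shown below, assuming the installed pliman version exposes parallel and workers arguments (check ?symptomatic_area for the version you have); otherwise the serial call above applies unchanged.

```r
library(pliman)

h <- image_import("sbr_h.png")
s <- image_import("sbr_s.png")
b <- image_import("sbr_b.png")

res <- symptomatic_area(img_pattern = "img",
                        dir_original = "originals",
                        img_healthy = h,
                        img_symptoms = s,
                        img_background = b,
                        parallel = TRUE, # assumption: argument name
                        workers = 4,     # assumption: number of cores
                        show_image = FALSE)
```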

The most critical step, as I mentioned, is the definition of the reference color palettes. A few preliminary runs on a few leaves may be needed to check, by visual judgement, whether the segmentation is being performed correctly. This is no different from any other color-threshold-based method, where the choices made by the user affect the final result and contribute to variation among assessors.[4] The drawbacks are the same as those of the direct competitors: images must be obtained under uniform and controlled conditions, especially with a contrasting background.

1. Del Ponte, E. M., Pethybridge, S. J., Bock, C. H., Michereff, S. J., Machado, F. J., & Spolti, P. (2017). Standard Area Diagrams for Aiding Severity Estimation: Scientometrics, Pathosystems, and Methodological Trends in the Last 25 Years. Phytopathology®, 107(10), 1161–1174.
2. Bock, C. H., Barbedo, J. G. A., Del Ponte, E. M., Bohnenkamp, D., & Mahlein, A.-K. (2020). From visual estimates to fully automated sensor-based measurements of plant disease severity: status and challenges for improving accuracy. Phytopathology Research, 2(1).
3. Franceschi, V. T., Alves, K. S., Mazaro, S. M., Godoy, C. V., Duarte, H. S. S., & Del Ponte, E. M. (2020). A new standard area diagram set for assessment of severity of soybean rust improves accuracy of estimates and optimizes resource use. Plant Pathology, 69(3), 495–505.
4. Bock, C. H., Cook, A. Z., Parker, P. E., & Gottwald, T. R. (2009). Automated Image Analysis of the Severity of Foliar Citrus Canker Symptoms. Plant Disease, 93(6), 660–665.





Text and figures are licensed under Creative Commons Attribution CC BY 4.0. Source code is available at, unless otherwise noted. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".