Assessing Ecosystem Carbon using Drones: The State of the Art

We use imagery from UAVs to calculate the heights and volume of individual mangrove trees, and can use this information to calculate the above-ground carbon stock within a given area.


While quantifying the actual value of nature may not always be possible or even desirable, the scientific community has long been developing methods to quantify ecosystem services in order to bolster the case for conserving those ecosystems. In our previous blogposts, we summarised methods used to estimate ecosystem carbon based on satellite imagery and field work. In this blogpost, the last in this series (for now), we examine potential methods of estimating carbon using Unmanned Aerial Vehicles (UAVs). With UAVs rapidly becoming more accessible, it is important for us to understand how they could be used to provide ecosystem carbon estimates that are more accurate and precise than those derived solely from satellite data.

We examine two broad approaches to assessing carbon using UAVs. The first is based on volumetric assessments of vegetation, and the second attempts to use UAV-based imagery to mimic LiDAR-based methods for carbon estimation. We found several studies (Messinger, 2016; Shin, 2018; Mlambo, 2017) that demonstrated the use of UAVs for the assessment of vegetation biomass; while none of the articles we found went on to estimate carbon, we think it's possible to adapt this method for carbon estimation as well.

A recent study by Warfield and Leon (2019) is a prime example of the first approach, assessing vegetation volumes using UAVs. They conducted a comparative analysis of UAV imagery and Terrestrial Laser Scanning (TLS) to capture the forest structure and volume of three mangrove sites. The data obtained from the UAV and TLS surveys were processed into point clouds and 3D models, from which the volume of the mangrove forest was estimated. A canopy height model (CHM) was created by subtracting a digital terrain model (DTM) from a digital surface model (DSM). This approach normalises object heights above ground, and as every pixel in the CHM represents vegetation height, the total volume is estimated by multiplying the canopy height value of each raster cell by its cell area and summing across the image. The UAV method produced lower height values in each patch of mangrove forest compared to the TLS surveying method, and its accuracy was found to be correlated with mangrove maturity. Identifying fine-scale gaps in dense forest is one of the primary limitations of using UAVs to calculate aboveground biomass, and the study highlighted the suitability of UAVs for calculating canopy volume in forests that are not very dense.
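To make the CHM-to-volume step concrete, here's a minimal Python sketch (using the rasterio and numpy libraries) of how we'd expect this calculation to look, assuming a DSM and DTM that are already co-registered on the same grid; the file names are placeholders, and this illustrates the general approach rather than the authors' actual processing chain.

import numpy as np
import rasterio

# Load a DSM and DTM that share the same grid (placeholder file names).
with rasterio.open("site_dsm.tif") as dsm_src, rasterio.open("site_dtm.tif") as dtm_src:
    dsm = dsm_src.read(1).astype("float64")
    dtm = dtm_src.read(1).astype("float64")
    cell_width, cell_height = dsm_src.res  # cell size in metres

# Canopy Height Model: object heights above ground; negative values are treated as ground.
chm = np.clip(dsm - dtm, 0, None)

# Total canopy volume: height of each cell multiplied by its area, summed over the image.
cell_area = cell_width * cell_height  # m^2
canopy_volume_m3 = np.nansum(chm) * cell_area
print(f"Estimated canopy volume: {canopy_volume_m3:.1f} m^3")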

Though carbon stock was not calculated in the Warfield and Leon (2019) study, it can theoretically be estimated from the values obtained for the canopy volume. As shown in a report published by the USDA (Woodall, 2011), biomass can be calculated using volume, density and mass relationships, as described in equation 1.

Bodw = Vgw * SGgw * W . . . (1)

where Bodw is the oven-dry biomass (lb) of wood, Vgw is the cubic volume (cubic feet) of green wood in the central stem, SGgw is the basic specific gravity of the wood (i.e. the ratio of its oven-dry mass to its green volume, relative to water), and W is the weight of one cubic foot of water (62.4 lb).

Converting this into the metric system is a trivial calculation, and the resulting value for dry biomass can then be substituted into equation (2) to calculate carbon stock.

Cp = Bodw * CF . . . (2)

where Cp is the carbon stock within a plot, Bodw is the dry biomass in the plot and CF is the species specific carbon fraction (Goslee, 2012).

In the case of mangroves, the value for carbon fraction would be in the 0.45 to 0.48 range; in a previous blogpost, we described how Bindu et al. (2018) use a factor of 0.4759 on the AGB to generate an estimate of carbon.
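As a rough illustration of equations (1) and (2), here's a short Python sketch; the green-wood volume, specific gravity and carbon fraction below are placeholder values chosen for the example, not measured ones.

WATER_LB_PER_CUBIC_FOOT = 62.4  # W in equation (1)
LB_TO_KG = 0.453592

def oven_dry_biomass_lb(green_volume_cuft, specific_gravity):
    """Equation (1): oven-dry biomass (lb) from green wood volume (cubic feet)."""
    return green_volume_cuft * specific_gravity * WATER_LB_PER_CUBIC_FOOT

def plot_carbon_kg(green_volume_cuft, specific_gravity, carbon_fraction):
    """Equation (2), after converting the dry biomass to kilograms."""
    biomass_kg = oven_dry_biomass_lb(green_volume_cuft, specific_gravity) * LB_TO_KG
    return biomass_kg * carbon_fraction

# Example: 100 cubic feet of green wood, specific gravity 0.7, mangrove carbon fraction 0.4759.
print(plot_carbon_kg(100, 0.7, 0.4759))  # ~943 kg of carbon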

The second approach for using UAVs to estimate carbon is based on a study conducted by Messinger et al. (2016) in the Peruvian Amazon. In this study, UAVs were used to create a 3D model of the forest, which was compared with data on the same forest obtained through a LiDAR survey conducted in 2009. In order to estimate carbon stocks in the forest, the authors used a formula and coefficients developed by Asner and Mascaro (2013), a method designed to estimate regional carbon stock using LiDAR data. For carbon estimation they used equation 3.

EACD = a * (TCH ^ b1) * (BA ^ b2) * (R ^ b3) . . . (3)

where EACD is estimated above ground carbon density, TCH is top of canopy height, BA is the regional average basal area, R is the regional average basal area-weighted wood density, and a, b1, b2, and b3 are coefficients estimated from the data.

Basal area is defined as the area within a plot that is occupied by tree trunks and stems, and can be calculated for each stem using equation 4 (with DBH in inches and BA in square feet).

BA = 0.005454 * DBH^2 . . . (4)
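To show how equations (3) and (4) fit together, here's a hedged Python sketch; the coefficients passed to eacd() below are made-up placeholders, not the values Asner and Mascaro (2013) fitted for Panama, so this illustrates the structure of the calculation only.

def basal_area_sqft(dbh_inches):
    """Equation (4): basal area (sq ft) of a single stem; 0.005454 = pi / (4 * 144)."""
    return 0.005454 * dbh_inches ** 2

def eacd(tch, ba, wood_density, a, b1, b2, b3):
    """Equation (3): estimated aboveground carbon density from top-of-canopy height (TCH),
    regional average basal area (BA) and basal-area-weighted wood density (R)."""
    return a * (tch ** b1) * (ba ** b2) * (wood_density ** b3)

# Illustrative call with placeholder coefficients (NOT the published ones).
print(eacd(tch=22.0, ba=28.0, wood_density=0.6, a=1.0, b1=1.0, b2=1.0, b3=1.0))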

We were unable to find a definitive formula to calculate the regional average basal area-weighted wood density. The paper by Asner and Mascaro (2013) uses coefficients based on fieldwork done by the authors and their team in Panama, and does not specify whether these coefficients are applicable to other forests. Since the coefficients are not claimed to be universal, it appears that one would have to conduct fieldwork to calculate the variables for this formula. This, coupled with the ambiguity in measuring and calculating the regional average basal area-weighted wood density, makes this study difficult to replicate.

The calculation of the Top of Canopy Height (TCH) also proved problematic in this study. In order to derive the TCH, one first has to create a DTM (Digital Terrain Model) and a CHM (Canopy Height Model). The method used in this paper requires LiDAR data in order to calculate canopy height. In their study, the GPS on the drone was not accurate enough to create a good estimate of the CHM, so the authors had to combine their UAV-based Structure-from-Motion model with LiDAR data to make this estimation. They state that this barrier can be overcome by the use of higher-precision GPS where the error in X, Y and Z is less than 1m, which is now possible using UAVs with real-time kinematic (RTK) or differential GPS corrections, such as the DJI Phantom 4 RTK.

In conclusion, we think that technology and research have advanced to a point where UAVs can be used to estimate carbon stocks, but a clear methodology for doing so is still not publicly available. There is a need to synthesise existing methods into the most effective workflow based on these studies and current technology. We believe that having a clear and accessible method that has been tested for accuracy is crucial to bridge the gap between science, policy and conservation, and we're going to be working on this over the next few months!

References

Asner, G. & Mascaro, J. (2013). Mapping tropical forest carbon: Calibrating plot estimates to a simple LiDAR metric. Elsevier. 

Bindu, G., Rajan, P., Jishnu, E. S., & Joseph, K. A. (2018). Carbon stock assessment of mangroves using remote sensing and geographic information system. The Egyptian Journal of Remote Sensing and Space Science.

Goslee, K., Walker, S., Grais, A., Murray, L., Casaraim, F., Brown, S. (2012). Module C-CS: Calculations for Estimating Carbon Stocks. Winrock International. 

Messinger, M., Asner, G., & Silman, M. (2016). Rapid Assessments of Amazon Forest Structure and Biomass Using Small Unmanned Aerial Systems. Remote Sensing, 8(8), 615.

Mlambo, R., Woodhouse, I., Gerard, F., & Anderson, K. (2017). Structure from Motion (SfM) Photogrammetry with Drone Data: A Low Cost Method for Monitoring Greenhouse Gas Emissions from Forests in Developing Countries. Forests, 8, 68.

Shin, P., Sankey, T., Moore, M., & Thode, A. (2018). Estimating Forest Canopy Fuels in a Ponderosa Pine Stand. Remote Sensing, 10, 1266.

Warfield, A., & Leon, J. (2019). Estimating Mangrove Forest Volume Using Terrestrial Laser Scanning and UAV-Derived Structure-from-Motion. Drones, 3, 32.

Woodall, C., Heath, L., Domke, G., & Nichols, M. (2011). Methods and Equations for Estimating Aboveground Volume, Biomass, and Carbon for Trees in the U.S. Forest Inventory, 2010. USDA (U.S. Forest Service).

On ghost gear and seaweed stirfry

For the last few months, we’ve been building up towards a project on abandoned, lost and otherwise discarded fishing gear, more commonly known as ghost gear. This began with a conversation with a conservation scientist working on the issue of marine debris in India. He had read an academic paper describing the creation of a map depicting the zones where there was a high probability of turtles getting entangled in ghost gear off the Australian coast. This map could then be used to target areas for ghost gear removal, and thus reduce the threat to the turtles. Could we do something similar for India? What do we know about ghost gear off the Indian coast? The fact that there does not seem to be enough data about marine debris off the Indian coast to begin answering these questions shaped our discussions, which concluded with us being brought on as researchers to answer one question: given the current state of technology, what is the best method to locate and detect ghost gear in Indian territorial waters?

Abandoned, lost or otherwise discarded fishing gear


Once we completed the contractual work with the NGO, we dove head first into the project. We were on a tight schedule, so we went looking for some external assistance; we were very lucky to be able to convince Gabriella D’Cruz, a lovely and passionate marine conservationist, to join us for this project. We began by structuring the workflow and assigning tasks to each member of the team. We would be conducting reviews of academic literature, non-academic material and interviews with experts, to ensure we covered the subject comprehensively, and would then be summarising the information, analysing it and submitting our recommendations to our client.

We took a beach-break on one of the days and found some ghost gear washed up on the shore.


Once the research itself began, the next few weeks just whizzed past. We spent all our energy looking for and reading through the available information on our subject. The work was very motivating, as the eventual conservation impact was evident, and there was also a steep learning curve; while we were familiar with the technology being considered, its application to ghost gear detection was new to us.

Working together as a team was lovely; we all bonded over discussions of food, Goa and conservation. One particular day was memorable, for Gabriella (who loves sea-weed, to say the very least) brought a few different varieties of seaweed seasoning to work. We ended up cooking stir-fry noodles that day with the sea-weed which was just delicious; but it didn’t stop there. A few hours later, tea-time consisted of black tea with seaweed, accompanied with cashewnuts roasted in butter and seaweed!

Seaweed stirfry


On another day, we had to take a break because one of us found a video recording of a conference where they’d live-streamed the proceedings over Facebook, but had mistakenly put the cat filter on; we couldn’t see our screens because of our tears of laughter.

Over the first weeks of this project, we collated all the information we could find on the topic, filtered and summarised it, and began making our preliminary assessments. In the last weeks, we brought all of this information together and prepared the report. We spent a few days reading through our information summaries, identifying patterns, concerns and potential recommendations, and then spent one long day at our whiteboard, discussing and condensing all the information we had on the topic.

Working things out on the whiteboard


It was lovely to be able to work through the information, have a structured debate (shout-out to Brooklyn 99 S06E12), construct flow-charts and discover what we agreed on as a team. It was a tiring but particularly rewarding day, because at the end we had identified a clear workflow, specific recommendations for our client and potential areas for innovation in the location and detection of ghost gear. Over the last few days, we completed writing and editing the report before sending it to our clients. We’re now working on our other projects, but know that we’re going to be doing more work on eliminating ghost gear from the oceans soon!

Assessing Carbon using Satellite Imagery: the Math and Maps behind Ecosystem Valuation.

As a geospatial data consultancy with a conservation focus, we’re developing workflows that we believe will have the most conservation impact, and one of the areas where we see the most promise is the rapid estimation of carbon stocks in a landscape. Carbon sequestration is one of the many ecosystem services that intact wildlife habitat provides, and valuing the carbon within such ecosystems can provide an added incentive to conserve them.

In a previous blogpost, we described the process of using vegetation indices to analyse satellite and drone imagery. We’re following that up with this blogpost, where we’re going to describe how carbon stocks in a landscape can be estimated by combining satellite imagery, vegetation indices and field data.

A caveat up front: we’re using this blogpost to summarise some methods, to teach ourselves this workflow and to familiarise ourselves with the entire process. We haven’t collected our own field data for this process, and the maps themselves were made using regression coefficients extracted from the papers we’ve mentioned. The maps are a lie(!) but they’re also reasonably accurate depictions of what actual maps would look like if we had field data; they definitely do not reflect actual carbon stock values for these landscapes.

The workflow for the estimation of Above Ground Carbon


Field Data Collection

Collecting field data to estimate carbon stocks is an essential step in the process; this information is used to develop regression equations that correlate Above Ground Biomass (AGB) as observed on the ground with information obtained from satellite imagery. As collecting field data for every pixel of an image isn’t possible, a statistical sampling process is used to obtain adequate information for the entire study area, and most papers we read described using a stratified random sampling technique. In this method, plots within the area are initially grouped into classes based on specific criteria such as vegetation density. Plots from each class are then selected at random, while also accounting for other possibly related factors like species variation. Once the plots for sampling are selected, laborious field work ensues, in which the biomass of individual trees within each plot is assessed. One of the most common methods to determine AGB used to be destructive sampling, where the trees randomly selected to be part of the sampling process were uprooted and their components physically weighed. This process is expensive, labour-intensive and time-consuming, and doesn’t really work when the overarching goal is the preservation of wildlife habitat. Fortunately, there’s an alternative: allometry, the science of measuring and studying the growth or size of a part in relation to an entire organism, can be used to obtain the required data non-destructively.

 

Allometry is a very useful way to estimate tree weight from measurements of its independent parts. However, different allometric equations traditionally had to be developed for each tree species and site, which makes the process complicated. The Pipe Model Theory (Shinozaki et al. 1964a,b) showed that the above-ground weight of a tree is a function of the squared diameter at the trunk base and the wood density. This allows for the development of common allometric equations that can be applied across species and geographical locations. Komiyama et al. (2005), for example, have developed common allometric equations for mangroves which have been used across mangrove species and areas for AGB estimation, making this process much less labour-intensive and much more uniform.

 

Most AGB studies measure parameters such as Diameter at Breast Height (DBH) and tree height, which are then used with allometric equations to estimate the biomass of any given tree. DBH is usually measured by wrapping a measuring tape around the primary trunk of the tree at a height of 4.5 feet (~137 cm) above the ground. The height of a tree is estimated using basic trigonometry along with tools such as a laser rangefinder or a clinometer.
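As an aside, the trigonometry involved in the height measurement is straightforward; below is a small Python sketch, assuming level ground, a clinometer reading to the tree top and a measured horizontal distance (the numbers are illustrative).

import math

def tree_height_m(horizontal_distance_m, angle_to_top_deg, eye_height_m=1.6):
    """Tree height = distance * tan(angle to the top) + observer eye height (level ground)."""
    return horizontal_distance_m * math.tan(math.radians(angle_to_top_deg)) + eye_height_m

print(tree_height_m(20.0, 35.0))  # ~15.6 m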

Carbon Stock Calculation from Vegetation Indices:

We’ve assessed some of the methods that we found in the literature and consider applicable to our work. This is not an exhaustive list and if you know of any methods that you believe are efficient or effective, please let us know! As we’ve mentioned earlier, we’ve not conducted field data collection for this blogpost and are going to be using coefficients from other papers in our calculation of carbon stock.

We acquired Landsat 8 imagery from December 2018 which was processed to obtain Top of Atmosphere (ToA) reflectance values. We analyzed these using vegetation indices as described in a previous blogpost and then used these index maps to replicate the carbon assessment workflows we found in the literature.

Bindu et al. (2018) calculated both above- and below-ground biomass for a mangrove forest in Kerala. They collected field data through stratified sampling and applied the common allometric equations for mangroves developed by Komiyama et al. (2005) (equation 1) to estimate biomass for these field sample plots. This ground-truthing is done to establish a relationship between AGB calculated from field observations and vegetation indices calculated from satellite imagery.

AGB = 0.251 * ρ * (DBH^2.46) . . . . . (1)
where ρ is the species-specific wood density.

The researchers used equation 2, developed by Myeong et al. (2006), to describe the relationship between Above Ground Biomass (AGB) and vegetation indices calculated from satellite imagery. They used their field data along with satellite data from the Indian Space Research Organisation’s LISS-IV sensor for their calculations.

AGB = a * e^(NDVI*b) . . . . . . (2)
where AGB is measured in kg per pixel, a and b are constants developed from the non-linear regression. 
In this case, a = 0.507 and b=9.933.

In mangrove forests, Below Ground Biomass (BGB) is also extremely important, and Bindu et al. (2018) found that the average ratio of BGB to AGB in their study area was 0.38. By relating AGB to BGB, they were able to estimate the total biomass and convert this into carbon values by multiplying it by 0.4759 (the carbon-to-biomass ratio used in this study).

To replicate their work, we’ve created a Normalised Difference Vegetation Index map from our Landsat 8 satellite imagery, and estimated carbon stock using their methods and regression coefficients. We’ve also converted the units from kg/pixel into tonnes/hectare to compare our results with other methods.
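For reference, here's a minimal Python version of that replication step, run on a tiny synthetic NDVI array; it assumes 30 m Landsat 8 pixels (0.09 ha each) and uses the Bindu et al. (2018) coefficients and ratios quoted above.

import numpy as np

ndvi = np.array([[0.2, 0.5], [0.7, 0.8]])  # synthetic NDVI values

a, b = 0.507, 9.933                              # regression coefficients from the paper
agb_kg_per_pixel = a * np.exp(b * ndvi)          # equation (2): AGB in kg per pixel
bgb_kg_per_pixel = 0.38 * agb_kg_per_pixel       # BGB:AGB ratio reported by the authors
carbon_kg_per_pixel = 0.4759 * (agb_kg_per_pixel + bgb_kg_per_pixel)

pixel_area_ha = (30 * 30) / 10_000               # 0.09 ha per Landsat 8 pixel
carbon_t_per_ha = (carbon_kg_per_pixel / 1000) / pixel_area_ha
print(carbon_t_per_ha)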

Estimating carbon from an NDVI map after Bindu et al. (2018), using their regression coefficients of a = 0.507 and b = 9.933.


Laosuwan et al. (2016) studied orchards in Thailand to estimate Above Ground Carbon directly. They used different allometric equations, since they were surveying agro-forestry plots rather than mangroves. They collected data on height and DBH for a number of trees and combined this field data with a Difference Vegetation Index (DVI) applied to Landsat 8 imagery to create regression equations for estimating carbon stock. The exact breakdown of the methodology they used to develop these regression equations was not explained clearly in the paper (and we’re planning to learn more about it!), but the final equation for carbon sequestration estimated using DVI can be seen in equation 3.

ABC = a * e^(b * DVI) . . . . . (3)
where ABC is measured in tonnes/rai; one rai (a Thai unit of area) is equivalent to 0.16 hectares.
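A short Python sketch of how equation (3) can be applied, assuming the coefficients shown in the map caption below and synthetic DVI values; the only extra step is the conversion from tonnes/rai to tonnes/hectare.

import numpy as np

dvi = np.array([[0.05, 0.15], [0.30, 0.45]])  # synthetic DVI values

a, b = 0.3184, 0.482                 # regression coefficients from Laosuwan et al. (2016)
abc_t_per_rai = a * np.exp(b * dvi)  # equation (3): carbon in tonnes per rai
abc_t_per_ha = abc_t_per_rai / 0.16  # 1 rai = 0.16 ha
print(abc_t_per_ha)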

 

Estimating carbon from a DVI map after Laosuwan et al. (2016), using their regression coefficients of a = 0.3184 and b = 0.482.


Situmorang et al. (2016) used a completely different workflow to estimate carbon stocks. Their field data included the species name, number of trees, tree height and diameter at breast height (DBH). They used these variables to calculate biomass using the following formula, derived from Chave et al. (2005).

W = 0.0509 * ρ * (DBH^2) * T
where W = the total biomass (kg)
DBH = diameter at breast height (cm)
ρ = wood density (g/cm3)
T = height (m)
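Put into code, the per-tree calculation (with the Brown et al. (1996) 50% carbon fraction mentioned below) might look like the following sketch; the tree measurements are illustrative only.

def tree_biomass_kg(wood_density_g_cm3, dbh_cm, height_m):
    """W = 0.0509 * rho * DBH^2 * T (Chave et al. 2005, as used by Situmorang et al. 2016)."""
    return 0.0509 * wood_density_g_cm3 * dbh_cm ** 2 * height_m

def tree_carbon_kg(wood_density_g_cm3, dbh_cm, height_m):
    """Carbon taken as 50% of biomass, after Brown et al. (1996)."""
    return 0.5 * tree_biomass_kg(wood_density_g_cm3, dbh_cm, height_m)

print(tree_carbon_kg(wood_density_g_cm3=0.7, dbh_cm=25, height_m=18))  # ~200 kg of carbon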

This was then extrapolated to the entire study area using a plot-to-area scaling formula given in the original paper.


After Brown et al. (1996), the carbon content is estimated to be exactly half (50%) of the biomass value.

Regression equations were then developed between two vegetation indices applied on satellite imagery, and carbon stock, in these sample areas to estimate Above Ground Carbon. Carbon stocks in the production forest of Lembah Seulawah district were calculated using the regression analysis equation y = 151.7x-39.7 on an EVI image and y= 204.3x-102.1 on a NDVI image. We attempted to use these regression equations to estimate carbon stock in our satellite image but these equations resulted in negative carbon stock values over most pixels in the satellite image we used. In fact, using the range of NDVI (0.17 - 0.85) and EVI (0.05 - 0.93) given by the researchers in their own study area, it seems possible that they would have also generated negative carbon values, which is a physical impossibility. We’re unsure if the researchers are using absolute values generated from their equation or if there are additional steps in the methodology not mentioned in the paper, but we were unable to recreate carbon stock estimates based on the methodology explained in their paper.
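A quick arithmetic check shows the issue: plugging the lower end of the index ranges the authors report into their own regression equations already gives negative values.

# y = slope * index + intercept, evaluated at the minimum index values reported in the paper.
regressions = {
    "EVI":  (151.7, -39.7, 0.05),
    "NDVI": (204.3, -102.1, 0.17),
}
for name, (slope, intercept, index_min) in regressions.items():
    print(name, round(slope * index_min + intercept, 1))  # EVI: -32.1, NDVI: -67.4 (both negative)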

Some scientists have been using LiDAR (light detection and ranging) to replace field-based surveys. However, LiDAR imagery remains very expensive and is, for the most part, inaccessible. Planet, a satellite operator and provider of very high resolution satellite imagery, recently used imagery from their RapidEye and Dove satellites to estimate carbon stocks in Peru instead of using LiDAR. Their algorithm explained 69% of the variation in their study area, and they found that spatial data with a resolution of 5 m or finer could be used for calculating biomass estimates at a national level.

Our next post on this topic is going to be about developing methods where Unmanned Aerial Vehicles (UAVs) can be used to replace the ground-based field work. If you have any questions or comments, get in touch with us at contact@techforwildlife.com.

Calculating a drone camera's image footprint OR How I learned to love tan again.

When we use drones for mapping purposes, we usually program them to fly autonomously along a pre-programmed flight path, collecting images with a specified front- and side- overlap. Once we have enough images covering the area to be mapped, we stitch them all together to create the final map. This really doesn’t require us to think a lot about the length and width of each image; the only two limiting factors we usually need to consider are the final map’s (or technically, orthorectified mosaic’s) spatial resolution and the drone’s maximum legal height.

However, one of the projects we’re currently working on is not so much a mapping project as it is ecological research, and to cut a long story short, ensuring that we can correctly apply a set of ecological statistical tools that account for double-counting and observer error requires us to be able to ascertain the length and width of each image at different drone altitudes. Also, as we work with a number of different drones (and thus drone cameras), I wanted to have a set of equations in place that we could use for a variety of situations. All of this required some high-school level trigonometry to work out; I was never a fan of trigonometry as a teenager, but using it to understand both the Triangular Greenness Index (as detailed in a previous post on vegetation indices) and the current problem was actually a lot of fun.

To break the problem down, the two fixed variables are a camera’s field of view (FoV), which is described as the angle which it can ‘see’ at any given instant, and its aspect ratio, which is the ratio between the length and width of its images. For example, from the technical descriptions, a Phantom 3 Advanced camera has a FoV of 94° and user-selectable aspect ratios of 4:3 and 16:9, while the Phantom 4 Pro 2.0 has an FoV of 84° and user-selectable aspect ratios of 3:2, 4:3 and 16:9. In combination with the height of the drone, these two camera-parameters determine the final image footprint. For more on aspect ratios, see this post which recommends using the native aspect ratio for any given camera.

Caveats:

! These equations assume the camera to be perpendicular to the ground and don’t account for lens distortion. For a far more complex solution (which I have to admit I barely understand), look up this post (StackExchange) where mountainunicycler (Github user) describes nesting a Python script within LaTeX (what) which then calculates the FoV of a drone-mounted camera and outputs a PDF with graphics (I can’t even.) !


x = Drone Height
θ = Field of View
r = Aspect Ratio

If the diagonal of the image is D, D/2 is the length of the base 
of the right-angled triangle with the two included angles as θ°/2
and (180°-θ°)/2 (as becomes clear when the FoV angle is bisected
to create two identical right-angled triangles).

D = 2 * x * tan(θ°/2)   --- (1)

If (A) and (B) are the sides of the image, then the aspect ratio 
(r) is equal to either A/B or B/A. We assume (A) to be the 
independent variable and (B) to be the dependent variable.

r = B / A
B = r * A   --- (2)

Using the equation of a right-angled triangle again,

D^2 = A^2 + B^2
D^2 = A^2 + (r * A)^2   - substituting the value of B from (2)
D^2 = A^2 * (1 + r^2)
A^2 = D^2 / (1 + r^2)       - flipping the terms of the equation

A = D / sqrt(1 + r^2)       --- (3)
B = r * D / sqrt(1 + r^2)   --- (4)

To express (A) and (B) only in terms of x, θ and r, we now 
combine equations (1) and (3), and (1) and (4).

A = (2 * x * tan(θ°/2)) / sqrt(1 + r^2) --- (5)

and

B = r * (2 * x * tan(θ°/2)) / sqrt(1 + r^2) - from (1) and (4)
B = 2 * r * x * tan(θ°/2) / sqrt(1 + r^2)   --- (6)

So, if we know the field of view (θ°), the aspect ratio (r) and the height of the drone (x), equations (5) and (6) allow us to determine the image footprint. To calculate the area of the image, the math is simply (A) * (B), which is the formula for the area of a rectangle.

To flip this around, the other equation we needed for this project was to determine the height at which we needed to fly the drone, given a specific image footprint, i.e. given (A), calculate (x). This is straightforward, since it means we just need to make (x) the subject, as opposed to (A), in equation (5).


A = (2 * x * tan(θ/2)) / sqrt(1 + r^2) --- (5)

2 * x * tan(θ/2)) = A * sqrt(1 + r^2)

x = A * sqrt(1 + r^2) / (2 * tan(θ/2)) --- (7)

In summary, equations (5) and (6) allow us to use the FoV, aspect ratio and height of any given camera to determine the image footprint, while equation (7) allows us to determine what height we should fly a drone at, given a desired horizontal image dimension.

I’ve used these equations to create a calculator in Excel; note that Excel uses radians as the default unit for angles. Using this calculator, I can determine that with a FoV of 94°, an aspect ratio of 4:3 and a drone height of 60m, the image footprint would be 102.9m * 77.2m, while with an FoV of 77°, an aspect ratio of 16:9 and the same drone height, the image footprint would be 83.1m * 46.8m. Similarly, if I wanted an image length of 50m with the first camera (FoV = 94° and aspect ratio = 4:3), I would need to fly the drone at a height of approximately 29.1m.
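For anyone who'd rather not use Excel, here's a small Python version of equations (5), (6) and (7); it takes the FoV in degrees, the aspect ratio as r = B/A (e.g. 3/4 for a 4:3 image) and the drone height in metres, and reproduces the first example above.

import math

def footprint(fov_deg, r, height_m):
    """Image footprint (A, B) in metres from equations (5) and (6)."""
    d = 2 * height_m * math.tan(math.radians(fov_deg) / 2)  # equation (1): image diagonal
    a = d / math.sqrt(1 + r ** 2)                           # equation (5)
    b = r * a                                               # equation (6)
    return a, b

def height_for_length(fov_deg, r, length_m):
    """Drone height needed for a desired image length (A), from equation (7)."""
    return length_m * math.sqrt(1 + r ** 2) / (2 * math.tan(math.radians(fov_deg) / 2))

print(footprint(94, 3 / 4, 60))          # ~(102.9, 77.2) m
print(height_for_length(94, 3 / 4, 50))  # ~29.1 m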

Let us know if you have any comments or find any errors in the math! We’re on Twitter at @techforwildlife, and you can mail us at contact@techforwildlife.com. As usual, comments on this post will be open for a few days.

Analysing Drone and Satellite Imagery using Vegetation Indices

A majority of our ecosystem monitoring work involves acquiring, analysing and visualising satellite and aerial imagery. Creating true-colour composites, using the Red, Green and Blue (RGB) bands, allows us to actually view the land areas we’re studying. However, this is only a first step; creating detailed reports on deforestation, habitat destruction or urban heat islands requires us to extract more detailed information, which we do by conducting mathematical operations on the spectral bands available from any given sensor. For example, we can extract surface temperature from Landsat 8 satellite data, as detailed in a previous blogpost.

A true-colour composite image created using data from Landsat 8 bands 2, 3 and 4.

As you may imagine, understanding how much vegetation is present in any given pixel is essential to many of our projects, and for this purpose we make use of Vegetation Indices. In remote sensing terms, a Vegetation Index is a single number that quantifies vegetation within a pixel. It is extracted by mathematically combining a number of spectral bands based on the physical properties of vegetation, primarily the fact that it absorbs more light in the red (R) than in the near-infrared (NIR) region of the spectrum. These indices can be used to ascertain information such as vegetation presence, photosynthetic activity and plant health, which in turn can be used to look at climate trends, soil quality, drought monitoring and changes in forest cover. In this blogpost, we’re going to provide a technical overview of some of the vegetation indices available for analysing both aerial and satellite imagery. We’ve included the basic formulae used to calculate the indices, using a bracketing system that allows the formulae to be copy-pasted directly into the Raster Algebra (ArcMap) and Raster Calculator (QGIS) tools; don’t forget to replace the Bx terms with the relevant band filenames when doing the calculations! We’ve also noted down the relevant band combinations for data from Landsat 8’s Operational Land Imager and the MultiSpectral Instruments on both Sentinel-2 satellites.

We’ve created maps for most of the vegetation indices described below, using data from Landsat 8 acquired over Goa, India on the 28th of December 2018. Each band was clipped to the area of interest and the Digital Numbers were rescaled to calculate Top-of-Atmosphere radiance values. All the index calculations were then executed on these clipped and corrected bands. We used a single min-max stretched red-to-green colour gradient to visualise each index. For actual projects, we’d then classify each image to provide our partners with meaningful information.

The Basic Vegetation Indices

Ratio Vegetation Index

One of the first Vegetation Indices developed was the Ratio Vegetation Index (RVI) (Jordan 1969), which can be used to estimate and monitor above-ground biomass. While the RVI is very effective for the estimation of biomass, especially in densely-vegetated areas, it is sensitive to atmospheric effects when the vegetation cover is less than 50% (Xue et al. 2017).

RVI = R / NIR

Sentinel 2: B4 / B8

Landsat 8: B4 / B5

 

Difference Vegetation Index

The Difference Vegetation Index (DVI) (Richardson et al. 1977) was developed to distinguish between soil and vegetation, and as the name suggests, is a simple difference equation between the red and near-infrared bands.

DVI = NIR - R

Sentinel 2: B8 - B4

Landsat 8: B5 - B4

Normalised Difference Vegetation Index

The Normalised Difference Vegetation Index (NDVI) (Rouse Jr. et al. 1974) was developed as an index of plant “greenness” and attempts to track photosynthetic activity. It has since become one of the most widely applied indices. Like the RVI and the DVI, it is also based on the principle that well-nourished, living plants absorb red light and reflect near-infrared light. However, it also takes into account the fact that stressed or dead vegetation absorbs comparatively less red light than healthy vegetation, bare soil reflects both red and near-infrared light about equally, and open water absorbs more infrared than red light. The NDVI is a relative value and cannot be used to compare between images taken at different times or from different sensors. NDVI values range from -1 to +1, where higher positive values indicate the presence of greener and healthier plants. The NDVI is widely used due to its simplicity, and several indices have been developed to replicate or improve upon it.

NDVI = ( NIR - R ) / ( NIR + R )

Sentinel 2: ( B8 - B4 ) / ( B8 + B4 )

Landsat 8: ( B5 - B4 ) / ( B5 + B4 )
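For completeness, the same calculation can be scripted instead of typed into a raster calculator; here's a minimal Python sketch using rasterio and numpy, with placeholder file names standing in for clipped, ToA-corrected Landsat 8 bands 4 and 5.

import numpy as np
import rasterio

with rasterio.open("landsat8_B4.tif") as red_src, rasterio.open("landsat8_B5.tif") as nir_src:
    red = red_src.read(1).astype("float64")
    nir = nir_src.read(1).astype("float64")
    profile = red_src.profile

# Avoid division-by-zero warnings where both bands are zero.
with np.errstate(divide="ignore", invalid="ignore"):
    ndvi = np.where((nir + red) == 0, 0, (nir - red) / (nir + red))

profile.update(dtype="float64", count=1)
with rasterio.open("ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi, 1)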

 

Synthetic NDVI


The Synthetic NDVI is an index that attempts to predict NDVI values using only the Red and Green bands. Hence it can be applied to imagery collected from any RGB sensor, including those used on consumer-level drones. Like the NDVI, its values also range from -1 to +1, with higher values suggesting the presence of healthier plants. However, it is not as accurate as the NDVI and needs to be calibrated using ground information to be truly useful. It is also known as the Green Red Vegetation Index (GRVI) (Motohka et al. 2010).

Synthetic NDVI = ( G - R ) / ( G + R )

Sentinel 2: ( B3 - B4 ) / ( B3 + B4 )

Landsat 8: ( B3 - B4) / ( B3 + B4 )

 

Visible Difference Vegetation Index

Similarly, the Visible Difference Vegetation Index (VDVI) (Wang et al. 2015) can also be calculated using information from only the visible portion of the electromagnetic spectrum. Some studies indicate that the VDVI is better at extracting vegetation information and predicting NDVI than other RGB-only indices.

VDVI = ( (2*G) - R - B ) / ( (2 * G) + R + B )

Sentinel 2:  ( ( 2 * B3 ) - B4 - B2 ) / ( (2 * B3 ) + B4 + B2 )

Landsat 8: ( ( 2 * B3 ) - B4 - B2 ) / ( ( 2 * B3 ) + B4 + B2 ) 

  

Excess Green Index

The Excess Green Index (ExGI) contrasts the green portion of the spectrum against red and blue to distinguish vegetation from soil, and can also be used to predict NDVI values. It has been shown to outperform other indices (Larrinaga et al. 2019) that work with the visible spectrum to distinguish vegetation.

ExGI = ( 2 * G ) - ( R + B )

Sentinel 2: ( 2 * B3) - ( B4 + B2 )

Landsat 8: ( 2 * B3 ) - ( B4 + B2 )

Green Chromatic Coordinate

The Green Chromatic Coordinate (GCC) is also an RGB index (Sonnentag et al. 2012) which has been used to examine plant phenology in forests.

GCC = G / ( R + G + B )

Sentinel 2: B3 / ( B4 + B3 + B2 )

Landsat 8: B3 / ( B4 + B3 + B2 )

One of the primary shortcomings of the NDVI is that it is sensitive to atmospheric interference, soil reflectance and cloud- and canopy- shadows. Indices have thus been developed that help address some of these shortcomings.

Indices that address Atmospheric (and other) Effects

Enhanced Vegetation Index

The Enhanced Vegetation Index (EVI) was devised as an improvement over the NDVI (Huete et al. 2002), to be more effective in areas of high biomass, where it is possible for NDVI values to become saturated; in remote sensing terms, a saturated index implies a failure to capture variation because the maximum values are registered for some pixels. The EVI attempts to reduce atmospheric influences, including aerosol scattering, and correct for canopy background signals.

EVI = 2.5 * ( ( NIR - R ) / ( NIR + (6 * R) - ( 7.5 * B ) + 1 ) )

Sentinel 2: 2.5 * ( ( B8 - B4) / ( B8 + ( 6 * B4) - ( 7.5 * B2 ) + 1) )

Landsat 8: 2.5 * ( ( B5 - B4) / ( B5 + ( 6 * B4) - ( 7.5 * B2 ) + 1 ) )

 

Atmospheric Reflection Vegetation Index

The Atmospheric Reflection Vegetation Index (ARVI) was developed specifically to eliminate atmospheric disturbances (Kaufman et al. 1992).  However, for a complete elimination of aerosols and the ozone effect, the atmospheric transport model has to be implemented, which is complicated to calculate and for which the data is not always easily available.  Without integrating this model into the calculation, the ARVI is not expected to outperform the NDVI in terms of accounting for atmospheric effects, but can still be useful as an alternative to it.

ARVI (w/o atmospheric transport model) = ( NIR - ( R * B ) ) / ( NIR + ( R * B ) )

Sentinel 2: ( B8 - ( B4 * B2 ) ) / ( B8 + ( B4 * B2 ) )

Landsat 8: ( B5 - ( B4 * B2 ) ) / ( B5 + ( B4 * B2 ) )

 

Green Atmospherically Resistant Index

The Green Atmospherically Resistant Index (GARI) was also developed to counter the effects of atmospheric interference in satellite imagery. It shows much higher sensitivity to chlorophyll content (Gitelson et al. 1996) and lower sensitivity to atmospheric interference.

GARI = ( NIR - ( G - ( γ * ( B - R ) ) ) ) / ( NIR + ( G - ( γ * ( B - R ) ) ) )

Sentinel 2: ( B8 - ( B3 - ( γ * ( B2 - B4 ) ) ) ) / ( B8 + ( B3 - ( γ * ( B2 - B4 ) ) ) )

Landsat 8: ( B5 - ( B3 - ( γ * ( B2 - B4 ) ) ) ) / ( B5 + ( B3 - ( γ * ( B2 - B4 ) ) ) )

In the formula above, γ is a constant weighting function that the authors suggested be set at 1.7 (Gitelson et al. 1996, p 296) but may have to be recalibrated in areas of complete canopy coverage. For this image, we used a γ value of 1.

 

Visible Atmospherically Resistant Index

The Visible Atmospherically Resistant Index (VARI) can be used to account for atmospheric effects in RGB imagery.

VARI = ( G - R) / ( G + R - B )

Sentinel 2: ( B3 - B4 ) / ( B3 + B4 - B2 )

Landsat 8: ( B3 - B4 ) / ( B3 + B4 - B2 )

Addressing Soil Reflectance

As in the case of atmospheric effects, indices were also developed to address the effects of varying soil reflectance.

Soil Adjusted Vegetation Index

The Soil Adjusted Vegetation Index is a modified version of the NDVI designed specifically for areas with very little vegetative cover, usually less than 40% by area. Depending on the type and water content, soils reflect varying amounts of red and infrared light. The SAVI accounts for this by suppressing bare soil pixels.

SAVI = ( ( NIR - R ) / ( NIR + R + L ) ) * ( 1 + L )

Sentinel 2: ( ( B8 - B4 ) / ( B8 + B4 + L ) ) * ( 1 + L )

Landsat 8: ( ( B5 - B4 ) / ( B5 + B4 + L ) ) * ( 1 + L )

In the above equations, L is a function of vegetation density, and calculating L requires a priori information about vegetation presence in the study area. It ranges from 0 to 1 (Xue et al. 2017), with values approaching 0 for higher vegetation cover and values closer to 1 where vegetation is sparse.

 

Modified Chlorophyll Absorption in Reflectance Index

The Modified Chlorophyll Absorption in Reflectance Index (MCARI) was developed as a vegetation status index. The Chlorophyll Absorption in Reflectance Index (Kim 1994) was initially designed to distinguish non-photosynthetic material from photosynthetically active vegetation. The MCARI is a modification of this index and is defined as the depth of chlorophyll absorption (Daughtry et al. 2000) in the Red region of the spectrum relative to the reflectance in the Green and Red-Edge regions.

MCARI = (Red-Edge - R ) - 0.2 * ( Red-Edge - G) * ( Red-Edge / Red )

Sentinel 2: ( B5 - B4) - 0.2 * ( B5 - B3) * ( B5 / B4)

 Landsat 8: No true equivalent

Structure Insensitive Pigment Index

The Structure Insensitive Pigment Index (SIPI) is also a vegetation status index, with reduced sensitivity to canopy structure and increased sensitivity to pigmentation. Higher SIPI values are strongly correlated with an increase in carotenoid pigments, which in turn indicates vegetation stress. This index is thus very useful in the monitoring of vegetation health.

SIPI = (800nm - 445nm) / (800nm - 680nm)

Sentinel 2: (B8 - B1) / (B8 - B4)

Landsat 8: (B5 - B1 ) /( B5 - B4)

Agricultural Indices

Some indices that were initially designed for agricultural purposes can also be used for the ecological monitoring of vegetation.

Triangular Greenness Index

The Triangular Greenness Index (TGI) was developed to monitor chlorophyll and indirectly, the nitrogen content of leaves (Hunt et al. 2013) to determine fertilizer application regimes for agricultural fields. It can be calculated using RGB imagery and serves as a proxy for chlorophyll content in areas of high leaf cover.

 TGI = 0.5 * ( ( ( λR - λB ) * ( R - G) ) - ( ( λR - λG ) * ( R - B ) ) )

Sentinel 2A: 0.5 * ( ( ( 664.6 - 492.4 ) * ( B4 - B3 ) ) - ( ( 664.6 - 559.8) * ( B4 - B2 ) ) )

Sentinel 2B: 0.5 * ( ( ( 664.9 - 492.1 ) * ( B4 - B3 ) ) - ( ( 664.9 - 559.0 ) * ( B4 - B2 ) ) )

Landsat 8: 0.5 * ( ( ( 654.59 - 482.04 ) * ( B4 - B3 ) ) - ( ( 654.59 - 561.41 ) * ( B4 - B2 ) ) )

In the above equations, λ represents the center wavelengths of the respective bands; the central wavelengths of Sentinel 2A and Sentinel 2B vary slightly.

 

Normalised Difference Infrared Index

The Normalised Difference Infrared Index (NDII) uses a normalised difference formulation instead of a simple ratio. It is a reflectance measurement that is sensitive to changes in the water content of plant canopies, and higher values of the index are associated with increasing water content. The NDII can be used for agricultural crop management, forest canopy monitoring, and the detection of stressed vegetation.

NDII = ( NIR - SWIR ) / (NIR + SWIR )

Sentinel 2 : ( B8 - B11 ) / ( B8 + B11 )

Landsat 8: ( B5 - B6) / ( B5 + B6 )

Green Leaf Index

The Green Leaf Index (GLI) was originally designed for use with a digital RGB camera to measure wheat cover. It can also be applied to aerial and satellite imagery.

GLI = ( ( G - R ) + ( G - B ) ) / ( ( 2 * G ) + ( B + R ) )

Sentinel 2: ( ( B3 - B4 ) + ( B3 - B2 ) ) / ( ( 2 * B3 ) + ( B2 + B4 ) )

Landsat 8: ( ( B3 - B4 ) + ( B3 - B2 ) ) / ( ( 2 * B3 ) + ( B2 + B4 ) )

 

Task-specific Vegetation Indices

As we can see, one index might be more appropriate than another based on the purpose of your study and the source of the imagery. The following section lists indices developed to meet the needs of specific research requirements.

Transformed Difference Vegetation Index

The Transformed Difference Vegetation Index (TDVI) was developed to detect vegetation in urban settings where NDVI is often saturated.

TDVI = 1.5 * ( NIR - R ) / sqrt( NIR^2 + R + 0.5 )

Sentinel 2: 1.5 * ( B8 - B4 ) / sqrt( B8^2 + B4 + 0.5 )

Landsat 8: 1.5 * ( B5 - B4 ) / sqrt( B5^2 + B4 + 0.5 )

The QGIS Raster Calculator and ArcMap’s Raster Algebra use different syntaxes for square roots: QGIS uses ‘sqrt’ while ArcMap uses ‘SquareRoot’.

Leaf Chlorophyll Index

The Leaf Chlorophyll Index (LCI) was developed to assess chlorophyll content in areas of complete leaf coverage.

LCI = ( NIR - RedEdge ) / ( NIR + R )

Sentinel 2: ( B8 - B5 ) / ( B8 + B4 )

Landsat 8: No true equivalent

Vegetation Fraction

The Vegetation Fraction is defined as the percentage of the ground area occupied by vegetation; since it’s calculated using values generated from an NDVI, it is subject to the same errors. It’s a comprehensive quantitative index in forest management and an important parameter in ecological models, and can also be used to determine the emissivity parameter when calculating Land Surface Temperature.

Vegetation Fraction: [ NDVI - NDVI(min) ] / [ NDVI(max) - NDVI(min) ]

In this blogpost, we’ve listed down and organised the vegetation indices that we’ve found while improving our ecological monitoring techniques. We make extensive use of both satellite and drone imagery, and will be using this blogpost internally as a quick reference guide to vegetation indices.

Find us on Twitter @techforwildlife if you have any questions or comments, or email us at contact@techforwildlife.com. We’ve also opened up the comments for a few days, so please feel free to point out any errors or leave any other feedback!

P.S.: Hat-tip to Harris Geospatial (@GeoByHarris) for a comprehensive list of vegetation indices, which can be found here.

P.P.S.: We’ll be updating this post with Sentinel-2A imagery in the next few days.

References

·      C. F. Jordan (1969) Derivation of leaf-area index from quality of light on the forest floor. Ecology, vol. 50, no. 4, pp. 663–666, 1969

·      Daughtry, C. S. T., Walthall, C. L., Kim, M. S., De Colstoun, E. B., & McMurtrey Iii, J. E. (2000). Estimating corn leaf chlorophyll concentration from leaf and canopy reflectance. Remote Sensing of Environment, 74(2), 229-239.

·      Gitelson, A., Y. Kaufman, and M. Merzylak. (1996) Use of a Green Channel in Remote Sensing of Global Vegetation from EOS-MODIS. Remote Sensing of Environment 58 (1996): 289-298.

·      Huete, A., et al. (2002) Overview of the Radiometric and Biophysical Performance of the MODIS Vegetation Indices. Remote Sensing of Environment 83 (2002): 195–213.

·      Hunt, E. Raymond Jr.; Doraiswamy, Paul C.; McMurtrey, James E.; Daughtry, Craig S.T.; Perry, Eileen M.; and Akhmedov, Bakhyt, (2013) A visible band index for remote sensing leaf chlorophyll content at the canopy scale. Publications from USDA-ARS / UNL Faculty. 1156.

·      J. Richardson and C. Weigand, (1977) Distinguishing vegetation from soil background information. Photogrammetric Engineering and Remote Sensing, p. 43, 1977.

·      Jinru Xue and Baofeng Su. (2017) Significant Remote Sensing Vegetation Indices: A Review of Developments and Applications, Journal of Sensors, vol. 2017, Article ID 1353691, 17 pages, 2017.

·      Kim, M. S. (1994). The Use of Narrow Spectral Bands for Improving Remote Sensing Estimations of Fractionally Absorbed Photosynthetically Active Radiation. (Doctoral dissertation, University of Maryland at College Park).

·      Larrinaga, A., & Brotons, L. (2019). Greenness Indices from a Low-Cost UAV Imagery as Tools for Monitoring Post-Fire Forest Recovery. Drones, 3(1), 6.

·      Motohka, T., Nasahara, K. N., Oguma, H., & Tsuchida, S. (2010). Applicability of green-red vegetation index for remote sensing of vegetation phenology. Remote Sensing, 2(10), 2369-2387.

·      Sonnentag, O.; Hufkens, K.; Teshera-Sterne, C.; Young, A.M.; Friedl, M.; Braswell, B.H.; Milliman, T.; O’Keefe, J.; Richardson, A.D. (2012) Digital repeat photography for phenological research in forest ecosystems. Agric. For. Meteorol. 2012, 152, 159–177

·      X. Wang, M. Wang, S. Wang, and Y. Wu. (2015) Extraction of vegetation information from visible unmanned aerial vehicle images. Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering, vol. 31, no. 5, pp. 152–159, 2015. 

·      Y. J. Kaufman and D. Tanré. (1992) Atmospherically Resistant Vegetation Index (ARVI) for EOS-MODIS. IEEE Transactions on Geoscience and Remote Sensing, vol. 30, no. 2, pp. 261–270, 1992.