Let’s open up the skies for drones

(Cross-posted from the Hindu Business Line, October 19th 2015)

Unmanned aerial vehicles are flying robots that provide some of the benefits of manned flight without its attendant risks and inconveniences. Commonly known as drones, they proved their worth on the battlefield during the 1973 Yom Kippur and 1982 Lebanon wars, after which numerous military forces began developing surveillance and weaponised drone programmes. Today, India is reported to have some 200 Israeli-made drones in service, and is in the process of developing indigenous ones for military use. Civilians, however, are banned from flying drones.

Drones are not just used for military purposes; they have also been used by civilians around the world for a diverse set of non-conflict use cases. These include assisting aid agencies during humanitarian crises, helping farmers with their fields, providing a new perspective to journalists, letting conservationists rapidly monitor wildlife and conduct anti-poaching patrols, as well as simple recreational activity; flying a drone can be a lot of fun.

Drones thus have commercial value; they provide a much cheaper alternative to manned flight and enable applications that were impossible earlier. Unfortunately, most new technologies come with their own dangers, and drones are no exception. They can occasionally crash, which matters most when the drone being flown is large and heavy, as a crash can damage property and harm people. Drones also occupy airspace that is used by manned aircraft, and an in-air collision, or even a near-miss, could be disastrous. These are dangers that could occur unintentionally. However, there is also the fear that drones could be used to cause harm deliberately.

For these reasons, the relevant regulatory bodies of some countries have limited the public use of drones until these concerns can be addressed. In India, the Directorate General of Civil Aviation (DGCA) completely banned their use by civilians as of October 7, 2014. However, the authorities in other countries haven’t gone as far; in the US, the Federal Aviation Administration (FAA) allows the civilian use of drones with caveats, while their commercial use is licensed. While the various countries of the European Union (EU) currently have multiple regulations covering drone flights, the European Aviation Safety Agency intends to create common drone regulations, with the intention of permitting commercial operations across the EU starting in 2016.

The regulatory authorities of these countries have understood that drones are here to stay, and that their use can be extremely beneficial to the economy. A report by the Association for Unmanned Vehicle Systems International (AUVSI), a non-profit trade organisation that works on “advancing the unmanned systems community and promoting unmanned systems”, states that by 2025, the commercial drone industry will have created over 100,000 jobs in the US alone, with an economic impact of $82 billion. Drones can also contribute to the export market. For example, in Japan, where commercial drones have been licensed since the 1980s, Yamaha Corp has been producing drones for agricultural aerial spraying which are now exported to the US, South Korea and Australia, generating $41 million in revenue for Yamaha in 2013-14. That’s small change compared to the current global market leader’s expected sales for 2015: SZ DJI Technology Co Ltd of Shenzhen, China, was only founded in 2006, but by 2015 it controlled 70 per cent of the global commercial drone market (and a higher share of the consumer drone market), with an estimated revenue of $1 billion.

These countries and companies have addressed the inherent dangers of drone technology by looking at technology- and policy-based solutions. The FAA and the UK’s Civil Aviation Authority (CAA) prohibit the flying of drones within five km of an airport or other notified locations, and drone manufacturers like DJI and Yamaha could enforce these rules by incorporating them into the drone’s control software, making a drone inoperable within these restricted zones. Outside these zones, drone misuse can be treated as a criminal offence. In the US, two individuals were recently arrested in separate drone-related incidents: in one, the operator’s drone crashed into a row of seats at a stadium during a tennis match; in the other, the operator flew his drone near a police helicopter.

In India, the DGCA’s October 2014 public notification states that due to safety and security issues, and because the International Civil Aviation Organisation (ICAO) hasn’t issued its standards and recommended practices (SARPs) for drones yet, civilian drone use is banned until further notice. One year later, there are still no regulations available from the DGCA; the ICAO expects to issue its initial SARPs by 2018, with the overall process taking up to 2025 or beyond. Meanwhile, the loss to India’s economy, and the threat to its national security, will be enormous. Today, it is still possible to import, buy, build or fly small drones in India, despite the DGCA’s ban. This means that drone-users in India currently exist in an illicit and unregulated economy, which is far more of a threat to the nation than regulated drone use could ever become.

Finally, flying drones safely in India will require research and development to understand how they can be best used in India’s unique landscape. Such R&D occurs best in a market-oriented environment, which will not happen unless civilian drone use is permitted. Building profitable companies around drone use can be complicated when the core business model is illegal.

Like civil aviation regulators in other countries, the DGCA should take a proactive role in permitting civilian use of drones, whether for commercial use or otherwise. Creating a one-window licensing scheme at the DGCA, where drone users apply to the ministries of defence and home affairs only in special circumstances, would be a useful first step. Setting up a go/no-go spatial database would allow the DGCA to demarcate where drones may and may not fly, and these zones could be mandatorily encoded into drone systems by their manufacturers. The DGCA should also discriminate between drones based on their size and weight: the smaller and lighter the drone, the less risk it poses, and regulation should recognise this.

Whether it is to assist fishermen with finding shoals off the Indian coastline or conducting rapid anti-poaching patrols in protected areas across the country, mapping refugee settlements in Assam and Bengal for better aid provision or assessing the quality of national highways, drones can transform the way we conduct operations in India. Thus, a blanket ban on civilian drones in India is more of a hindrance to development than a solution to a problem. Drones are here to stay and the sooner India’s civilians are allowed to use them, the faster we can put them to work.

[ This article was commissioned by the Centre for the Advanced Study of India at the University of Pennsylvania and is also available on their blog. ]

From the Space Shuttle to a block of wood

Creating a 3D model of the Nanda Devi Sanctuary using SRTM data and a CNC router

NandaDevi_Zoom.jpg

I’ve had Nanda Devi and the Sanctuary surrounding her in my thoughts for a very long time, and she seemed like a fitting subject for my first attempt at bringing spatial data out of the digital world and into reality. For the uninitiated, Nanda Devi is a mountain in the Indian Himalaya, and she’s always referred to as she: the goddess in the clouds. Surrounded by a protective ring of mountains, she towers over them all, and this space between the ring and the central peak is known as the Nanda Devi Sanctuary. Due to this ring, the first entry into the Sanctuary was only made in 1934, by Shipton, Tilman and their three porters, who entered via the gorge of the Rishi Ganga. The mountain herself was first summited in 1936 (see Nanda Devi: Exploration and Ascent by Shipton and Tilman).

The geography of the region is fascinating (and the history as well; there’s a nuclear-powered CIA device somewhere inside the Sanctuary!), and the heights and depths of the various relief features make it a joy to visualise. In this post, I’m going to describe, in brief, the steps I used to get from the data to the final model in wood. While I’m sure most of this can be done using open-source tools, my current University of Cambridge student status and my @cammakespace membership give me access to (extremely expensive) ESRI and Vectric software, which I’ve used liberally.

Relief map of the Nanda Devi Sanctuary and the Rishi Ganga gorge. The lighter the colour, the higher the elevation.


I have a repository of digital elevation data collected by the Space Shuttle Endeavour in 2000 (STS-99, the Shuttle Radar Topography Mission). It’s freely available from CGIAR-CSI (http://srtm.csi.cgiar.org/) and is not difficult to use. In QGIS, I clipped it down to my area of interest around Nanda Devi; I was looking for a rough crop that would include the peak, the ring and the Rishi Ganga gorge. This relief map was exported as a GeoTIFF, and opened up in ArcScene, which is ESRI’s 3D cartography/analysis workhorse. ArcScene allowed me to convert the raster image into a multipoint file; as the tool description states, it “converts raster cell centers into multipoint features whose Z values reflect the raster cell value.” For some reason, this required a lot of tweaking to accurately represent the Z-values, but I finally got the point cloud to look the way I wanted it to in ArcScene.
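The raster-to-multipoint step is conceptually simple, and can be sketched outside ArcScene too. Here is a minimal Python version (the function name and the toy grid are my own illustration, not any ESRI API) that does what the tool description says: each cell centre becomes a point whose Z is the cell value.

```python
import numpy as np

def raster_to_multipoint(grid, origin=(0.0, 0.0), cell_size=90.0):
    """Convert raster cell centres to (x, y, z) points.

    `grid` is a 2D elevation array; `origin` is the (x, y) of the
    raster's top-left corner and `cell_size` its resolution in metres
    (SRTM v4 is roughly 90 m). Mirrors the ArcScene tool: each cell
    centre becomes a point whose Z value is the raster cell value.
    """
    rows, cols = grid.shape
    x0, y0 = origin
    points = []
    for r in range(rows):
        for c in range(cols):
            x = x0 + (c + 0.5) * cell_size   # cell-centre easting
            y = y0 - (r + 0.5) * cell_size   # northing decreases down rows
            points.append((x, y, float(grid[r, c])))
    return points

# A toy 2x2 elevation grid standing in for the clipped SRTM raster
demo = np.array([[7000.0, 6500.0],
                 [6400.0, 5200.0]])
pts = raster_to_multipoint(demo, origin=(0.0, 1000.0), cell_size=90.0)
```

The real dataset is of course far larger, but the cell-centre arithmetic is the same.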

The point cloud (red dots), overlaid on the relief map in ESRI ArcScene


I then exported the 3D model of the point cloud in the .wrl format (‘wrl’ for ‘world’), which is the only 3D format ArcScene knows, and used MeshLab, an open-source Swiss-army knife for 3D formats, to convert the .wrl file into a stereolithography (.stl) file, which the next tool in the workflow, Vectric Cut3D, was very happy with. As a side note, Makerware was also satisfied with the .stl file, so it is 3D-print ready.
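For the curious, triangulating a heightfield into an STL isn’t magic either. Here is a minimal sketch (a hypothetical writer of my own, not what MeshLab actually does internally) that emits an ASCII .stl from a small elevation grid, two triangles per cell:

```python
def heightfield_to_stl(grid, cell=1.0, name="terrain"):
    """Triangulate a 2D heightfield into an ASCII STL string.

    Each raster cell contributes two triangles. Normals are written as
    (0, 0, 0); most tools (MeshLab included) recompute them anyway.
    """
    rows, cols = len(grid), len(grid[0])
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            # Corner vertices of this cell, with z taken from the grid
            a = (c * cell, r * cell, grid[r][c])
            b = ((c + 1) * cell, r * cell, grid[r][c + 1])
            d = (c * cell, (r + 1) * cell, grid[r + 1][c])
            e = ((c + 1) * cell, (r + 1) * cell, grid[r + 1][c + 1])
            tris.extend([(a, b, d), (b, e, d)])
    lines = [f"solid {name}"]
    for tri in tris:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for vx, vy, vz in tri:
            lines.append(f"      vertex {vx} {vy} {vz}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# A 2x2 grid yields a single cell, i.e. two triangles
stl = heightfield_to_stl([[0.0, 1.0], [2.0, 3.0]])
```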

The final model in Vectric Cut3D, ready to be sent to the CNC router for carving.


More tweaking in Cut3D to get the appearance right, and the toolpaths in order, and I was ready to actually begin machining. After an abortive first attempt where the router pulled up my workpiece and ate it, I spent some more time on the clamping for my second attempt. First, I used the router to cut out a pocket in a piece of scrap plywood to act as my job clamp; this pocket matched the dimensions of my workpiece exactly. After a bit of drilling, I had my workpiece securely attached to the job clamp, which was screwed into the spoilboard on the router.

The CNC router doing its thing


For the actual routing itself, I used two tools: a 4mm ballnose mill for the roughing and a 2mm endmill for the finishing. It took the CNC router about 45 minutes to create this piece. I love the machine, and am very grateful to the Cambridge Makespace for the access I have to it. In the near future, I’m going to try different CNC router tools and types of wood to make the final product look neater; specifically, a 1mm ballnose tool for the finishing toolpath would be very nice! I’m also going to try to make relief models of a few other interesting physical features.

The final product: A model of the Nanda Devi sanctuary in wood, based on data from the Space Shuttle and carved using a CNC router.

S696, Maps and Atlases: The Cambridge Map Room

I’ve re-entered the academic world as a student at the University of Cambridge in the United Kingdom, and one of the benefits I’m enjoying the most is near-unlimited access to one of the world’s largest repositories of recorded information: the Cambridge University Library. Commonly known as the UL, this is a copyright library, which means that, under British rules on legal deposit, the library has the right to request a copy of any work published in the UK free of charge. Currently, the UL has over 8 million items, including books, periodicals, magazines and, of course, maps.

The Map Room in the UL is a fascinating place; it functions as the reading room for the Map Department, which holds over a million maps (as the librarian told me; Wikipedia claims 1.5 million). It’s not a very large room, as reading rooms go, but it is a beautiful space and very well managed. Everything is catalogued efficiently in a filing-card system, with one card listing the name, date of publication and classmark (a unique identifier that doubles as coordinates) for each map. Visitors are not allowed to simply browse through the map collections; to refer to a map, one must fill out a request form with the appropriate details and submit it to the library assistants, who will then pull out the required map folio from its storage location. The title of this post comes from the fact that map holdings with classmarks beginning with ‘S696’, ‘Maps’ or ‘Atlases’ are held in the Map Room, in various drawers and cabinets.

The Map Room is a pen-free zone; if you’re writing something, use a pencil. Smartphones and hand-held cameras are allowed, but under UL policy photos cannot be taken of the building itself. With prior permission, however, it is possible to take images of material in the UL, which I did. The first series is from a map on display in the UL; titled “A map containing the towns villages gentlemen’s houses roads river and other remarks for 20 miles around London”, it was printed for a William Knight in 1710 and is a wonderful piece of cartography. The second series is from a map I requested using the card-index system; this map dates back to 1949 and beautifully illustrates the tea-growing regions of the Indian subcontinent.

If there’s a map in the UL you want an image of (for non-commercial or private-study purposes only!), I’d be happy to do what I can to help; I would actually be very grateful for an excuse to spend an afternoon looking at maps.

Using data from the Landsat 8 TIRS instrument to estimate surface* temperature

The Thermal InfraRed Sensor (TIRS) is a new instrument carried onboard the Landsat 8 satellite that is dedicated to capturing temperature-specific information.  Using radiation information from the two electromagnetic spectral bands covered by this sensor, it is possible to estimate the temperature at the Earth’s surface (albeit at a 100m resolution, compared to the 30m resolution of the other instrument, the Operational Land Imager).

The city-state of Delhi, India as visualised from band 10 of Landsat 8's TIRS instrument.


I used data from the TIRS to estimate the surface temperature in the city-state of Delhi, India as of the 29th of May, 2013. The relevant tarball containing the data was downloaded using the United States Geological Survey’s (USGS) EarthExplorer tool; the area of interest was encompassed by the scene at path 146, row 040 in the WRS-2 scheme. I’ll leave the specific explanations of WRS-2, path/row values and the other miscellaneous small data-management operations for a later post; for now, it’s enough to know that these matter when actually obtaining the data. When the tarball is unpacked fully, the bands from the TIRS instrument are bands 10 and 11; the relevant .tif files are [“identifier”_B10.tif] and [“identifier”_B11.tif], and these were clipped to the administrative boundary of Delhi. There’s also a text file containing metadata: [“identifier”_MTL.txt] is essential for the math we’re going to do on these two bands.
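As a taste of the data handling involved: the MTL metadata file is a plain-text set of KEY = VALUE lines grouped into GROUP/END_GROUP blocks, so pulling out the constants we need takes only a few lines of Python. The parser and the sample snippet below are my own illustration; check the layout of your actual file.

```python
def parse_mtl(text):
    """Parse a Landsat MTL metadata file into a flat dict.

    The format is simple 'KEY = VALUE' lines inside GROUP ... END_GROUP
    blocks; for the radiance/temperature math we only need the flat
    key-value pairs, so the group structure is ignored.
    """
    meta = {}
    for line in text.splitlines():
        line = line.strip()
        if "=" not in line or line.startswith(("GROUP", "END")):
            continue
        key, _, value = line.partition("=")
        meta[key.strip()] = value.strip().strip('"')
    return meta

# A hypothetical snippet of an MTL file, trimmed to the keys we need
sample = """
GROUP = RADIOMETRIC_RESCALING
    RADIANCE_MULT_BAND_10 = 3.3420E-04
    RADIANCE_ADD_BAND_10 = 0.10000
END_GROUP = RADIOMETRIC_RESCALING
"""
meta = parse_mtl(sample)
```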

Each pixel of processed Level 1 (L1) Landsat 8 data is stored as a Digital Number (DN) with a value between 0 and 2^16 − 1**. This means that when you visualise the two TIRS bands in your GIS/image-manipulation program of choice, you’re going to see a grey-scale image, where dark is cool and white is hot. The bands differ very slightly from each other; this is because each registers a different band of radiation, which is useful for correcting for atmospheric distortions.

To obtain actual surface temperature information, we need to (a) convert these DNs to top-of-atmosphere (ToA) radiance values, and then (b) convert the ToA radiance values to ToA brightness temperature in kelvin. The interesting thing is that the satellite stores information as radiance values; it’s converted to DNs when processed from L0 (raw data) to L1, so we’re essentially reconverting this information to its original format.

Since the math does get a bit technical, I've posted the details at the end of this blogpost.

Since the TIRS has two bands, I conducted the mathematical operation on both, and then averaged the results. I know that there are more sophisticated ways of using both bands to accurately estimate surface temperature; as the USGS comment on this post has noted, I’ve only derived ToA brightness temperature here, and another step is required to calculate the actual surface temperature. In fact, a fellow Landsat 8 data user, Valerio de Luca, has informed me that an equation of the split-window algorithm’s quadratic form can be used for this purpose; thank you! I’ll update this post again once I test that. Finally, the conversion from kelvin to anything else has been sufficiently well documented that I’m going to leave it out; I’ve converted this map into degrees Celsius.

AvgTemp_Delhi-854x1024.jpg

An optional step in the middle is to correct for atmospheric distortion and solar irradiance by converting the DNs to ToA planetary reflectance values and then converting these to ToA radiance values. This requires information such as Earth-Sun distance and the solar angle, some of which is in the metadata file. I’ll be experimenting with this later as well.
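A sketch of that optional step, assuming the REFLECTANCE_MULT_BAND_X and REFLECTANCE_ADD_BAND_X rescaling factors and the SUN_ELEVATION angle from the metadata file (the example numbers below are typical values, not taken from this particular scene):

```python
import math

def toa_reflectance(dn, mult, add, sun_elev_deg):
    """DN -> ToA planetary reflectance, corrected for the solar angle.

    `mult` and `add` are the REFLECTANCE_MULT_BAND_X and
    REFLECTANCE_ADD_BAND_X factors from the MTL metadata file;
    `sun_elev_deg` is the scene's SUN_ELEVATION in degrees.
    """
    rho = mult * dn + add  # reflectance without solar-angle correction
    return rho / math.sin(math.radians(sun_elev_deg))

# Typical OLI rescaling factors (2.0e-5 and -0.1) with a ~65 degree sun
rho = toa_reflectance(20000, 2.0e-5, -0.1, 65.0)
```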

Please pitch in if you have any suggestions on improving these techniques; I would definitely welcome points on better ways to use data from the TIRS instrument!

*Update 1 (2:45am IST on the 9th of August): Firstly, I’d like to thank the USGS Landsat team, who’ve commented on this post; I’m very grateful for their encouragement and have updated the post accordingly. Please read the map title as ToA brightness-temperature in Delhi; another step is required before it accurately depicts the actual surface temperature values.

**Update 2 (2105 GMT on the 2nd of November) – Landsat 8 ETM+ Digital Number range has been corrected to reflect the fact that the ETM+ sensor stores 16-bit data; thanks for the comment Ciprian!

***Update 3 (0110 IST on the 5th of November 2017) - This article has been cross-posted from geohackers.in

I’ve referred to the documents below while figuring the math and writing this blogpost; they’re very useful.

USGS equations re: Landsat 8 data

https://landsat.usgs.gov/Landsat8_Using_Product.php

Converting Digital Numbers to Kelvin

http://www.yale.edu/ceo/Documentation/Landsat_DN_to_Kelvin.pdf

Information re: ToA reflectance

http://www.pancroma.com/Landsat%20Top%20of%20Atmosphere%20Reflectance.html

GRASS methodology

http://grass.osgeo.org/grass64/manuals/i.landsat.toar.html

More information on the inverted Planck function

http://www.yale.edu/ceo/Documentation/ComputingThePlanckFunction.pdf

Markham et al. (2005): Processing data from the EO-1 ALI instrument

http://www.asprs.org/a/publications/proceedings/pecora16/Markham_B.pdf

Converting from spectral radiance to planetary reflectance

http://eo1.usgs.gov/faq/question?id=21

<math>

a)             Digital Numbers to ToA radiance values

Lλ = (ML * Qcal) + AL     – equation I

where:

Lλ        =          TOA spectral radiance

ML       =          Band-specific multiplicative rescaling factor

AL        =          Band-specific additive rescaling factor

Qcal        =        Quantized and calibrated standard product pixel values (DN)

Human-readable format:

Spectral Radiance = band-specific multiplicative rescaling factor times the Digital Number, plus the band-specific additive rescaling factor from the metadata file. From the Markham et al. (2005) paper on processing data from the EO-1 ALI instrument, I’m led to understand that the multiplicative and additive scaling factors result in ‘better preservation of the ALI’s precision and dynamic range in going from raw to calibrated data’; this probably holds true for the L8 instruments as well, but do correct me if I’m wrong.

In practice, this is:

[Spec_rad_B"X".tif] = RADIANCE_MULT_BAND_"X" * ["identifier"_B"X".tif] + RADIANCE_ADD_BAND_"X"

where “X” is the specific band number, the RADIANCE values are obtained from the metadata file and Spec_rad_B”X”.tif is the output spectral radiance file name (user-determined).

b)            ToA radiance values to ToA brightness-temperature in Kelvin

T = K2 / ln((K1 / Lλ) + 1)     – equation II

(The physics-inclined will realise that equation II above is the inverted form of the Planck function.)

where:

T          =          ToA brightness temperature (K)
Lλ        =          TOA spectral radiance
K1        =          Band-specific thermal conversion constant
K2        =          Band-specific thermal conversion constant

Human-readable format:

Temperature (kelvin) = band-specific thermal conversion constant K2, divided by the natural log of (band-specific thermal conversion constant K1 divided by the spectral radiance, plus 1)

In practice, this is:

[Temp_B"X".tif] = K2_CONSTANT_BAND_"X" / ln((K1_CONSTANT_BAND_"X" / [Spec_rad_B"X".tif]) + 1)

where “X” is the specific band number, the CONSTANT values are extracted from the metadata file and the Temp_B”X”.tif file is the output temperature file name (user-determined).
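The two equations can be chained in a few lines of Python. The constants below are band 10 values as found in a typical Landsat 8 MTL file; verify them against your own scene’s metadata before use.

```python
import math

# Band 10 rescaling factors and thermal constants, as found in a
# typical Landsat 8 MTL metadata file (check your own scene's values)
ML = 3.342e-4    # RADIANCE_MULT_BAND_10
AL = 0.1         # RADIANCE_ADD_BAND_10
K1 = 774.8853    # K1_CONSTANT_BAND_10
K2 = 1321.0789   # K2_CONSTANT_BAND_10

def dn_to_radiance(qcal):
    """Equation I: Digital Number -> ToA spectral radiance."""
    return ML * qcal + AL

def radiance_to_kelvin(l_lambda):
    """Equation II (inverted Planck): radiance -> ToA brightness temperature."""
    return K2 / math.log(K1 / l_lambda + 1)

# A single example pixel; in practice this runs over the whole band
t_kelvin = radiance_to_kelvin(dn_to_radiance(30000))
t_celsius = t_kelvin - 273.15
```

In a real workflow the same arithmetic is applied per-pixel to the clipped band 10 and band 11 rasters.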

 </math>

Landsat 8 data and maps of Delhi

This will be a relatively short post; I’ve been working with Landsat data for a few years now, and I find it absolutely fascinating. The new Landsat satellite, initially named the Landsat Data Continuity Mission and now known as Landsat 8, is actually only the seventh in the series to reach orbit; Landsat 6 never made it. When Landsat 8 was launched on the 11th of February 2013, I was anxious and excited, and when it reached orbit successfully, I was ecstatic. I downloaded my first set of Landsat data (path 146/row 040, covering the Indian city of Delhi) off the USGS EarthExplorer website last week, and have been tinkering with it ever since.

State_of_Delhi_L8_imagery-1024x724.jpg

The map above depicts various band combinations; each band covers a discrete section of the electromagnetic spectrum. While bands 2, 3 and 4 lie within the visible spectrum and correspond to the colours blue, green and red respectively, the others extend into the non-visible electromagnetic spectrum. A variety of mathematical operations can be conducted on different band combinations to obtain meaningful information about the Earth’s surface. For example, the following maps depict the surface temperature in Delhi, with a close-up of the Yamuna, on the 29th of May. A longer post on Landsat 8, its instrumentation and the various use cases will follow once I’ve had my fill of playing with this data.
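As an illustration of such band math (not something computed in this post), the classic example is NDVI, the Normalised Difference Vegetation Index, which for Landsat 8 combines the red band (4) and the near-infrared band (5):

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - red) / (NIR + red).

    For Landsat 8, NIR is band 5 and red is band 4. Inputs should be
    like-scaled (ideally reflectance) arrays; output lies in [-1, 1].
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Dense vegetation reflects strongly in NIR, so its NDVI is high;
# water absorbs NIR, so its NDVI is typically negative
veg = ndvi([0.5], [0.1])
water = ndvi([0.02], [0.05])
```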