Taylor Higgins

Analyzing Italy's Vulnerability to Sea Level Rise

Overall Goal:

To produce a vulnerability and resilience map of the coastal areas of Italy based on sea level rise predictions for RCP 4.5 and RCP 8.5, and to calculate how much cropland could be lost.

Datasets to use:

sea level rise sample point data - sample collection points for extreme sea level rise predictions for 2050 and 2100, each reported at three confidence levels. The collection points are spaced 20 meters apart around the coastline of Italy and the main Italian islands of Sardinia and Sicily.
https://data.jrc.ec.europa.eu/dataset/jrc-liscoast-10012#dataaccess (global coastline data)

elevation raster data - DEM/DTM rasters of topographical elevation at 30-meter resolution https://earthexplorer.usgs.gov/

watershed polygon area data - HydroBASINS watersheds of Italy at level 8 granularity https://www.hydrosheds.org/products/hydrobasins

crop raster data https://data.jrc.ec.europa.eu/dataset/15f86c84-eae1-4723-8e00-c1b35c8f56b9

NDVI raster data (Copernicus Global Land NDVI 300 m V2): https://land.copernicus.vgt.vito.be/manifest/ndvi300_v2_333m/manifest_cgls_ndvi300_v2_333m_latest.txt

CORINE land cover: https://land.copernicus.eu/pan-european/corine-land-cover
CORINE legend: http://clc.gios.gov.pl/doc/clc/CLC_Legend_EN.pdf

admin country boundary polygon data
https://www.naturalearthdata.com/downloads/10m-cultural-vectors/

Hardware and software used:

Text editor: Visual Studio Code
Device: Asus VivoBook
Processor: Intel(R) Core(TM) i5-1035G1 CPU @ 1.00GHz 1.19 GHz
Installed RAM: 20.0 GB (19.8 GB usable)
System type: 64-bit operating system, x64-based processor
OS Edition: Windows 10 Home
Version: 21H2

Data Processing Steps

  1. First I will convert the raster elevation data into vector data. I'll do this by overlaying a lattice of 500-square-meter hexagons on top of the DTM/DEM dataset and applying zonal statistics to get summary statistics (min, max and median) of the raster pixel heights within each hexagon, giving an approximation of the elevation for each hexagon. Then I will spatially join the watershed polygon file to the vectorized elevation file so that each hexagon row carries the watershed it falls within. A sketch of this step is below.
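
A minimal sketch of this step, assuming GeoPandas plus rasterstats, that the DEM mosaic and the HydroBASINS level-8 layer have already been reprojected to a metric CRS (EPSG:3035 here), and placeholder file names; `HYBAS_ID` is the HydroBASINS id field.

```python
# Sketch of step 1: hexagon lattice + zonal statistics + watershed join.
import numpy as np
import pandas as pd
import geopandas as gpd
from shapely.geometry import Polygon
from rasterstats import zonal_stats

DEM_PATH = "italy_dem_30m_3035.tif"       # placeholder DEM mosaic (metric CRS)
BASINS_PATH = "hybas_eu_lev08_3035.shp"   # placeholder HydroBASINS level-8 file

def hex_grid(bounds, area_m2=500.0, crs="EPSG:3035"):
    """Lattice of regular (flat-topped) hexagons of ~area_m2 covering bounds."""
    minx, miny, maxx, maxy = bounds
    s = np.sqrt(2 * area_m2 / (3 * np.sqrt(3)))   # side length for the target area
    dx, dy = 1.5 * s, np.sqrt(3) * s              # centre-to-centre spacing
    hexes, col, x = [], 0, minx
    while x < maxx + dx:
        y = miny + (dy / 2 if col % 2 else 0.0)
        while y < maxy + dy:
            hexes.append(Polygon([(x + s * np.cos(a), y + s * np.sin(a))
                                  for a in np.linspace(0, 2 * np.pi, 7)[:-1]]))
            y += dy
        x += dx
        col += 1
    return gpd.GeoDataFrame(geometry=hexes, crs=crs)

basins = gpd.read_file(BASINS_PATH)
grid = hex_grid(basins.total_bounds)   # in practice you would clip this to the coastal strip

# zonal statistics: min / max / median DEM value inside each hexagon
stats = pd.DataFrame(zonal_stats(grid, DEM_PATH, stats=["min", "max", "median"]))
grid["elev_min"], grid["elev_max"], grid["elev_med"] = stats["min"], stats["max"], stats["median"]

# attach the watershed each hexagon falls in, using the hexagon centroid
centroids = grid.copy()
centroids["geometry"] = centroids.geometry.centroid
joined = gpd.sjoin(centroids, basins[["HYBAS_ID", "geometry"]],
                   how="left", predicate="within")
hex_elev = grid.assign(HYBAS_ID=joined["HYBAS_ID"])
```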

  2. Second I'll join the extreme sea level rise (ESL) point data to the watershed polygon layer using a near function. Since I will use a left join, this will result in a polygon vector file where each row signifies a watershed, and the watersheds closest to the coast will have the closest ESL points appended in new columns. The near function will search for ESL points within a 5 km radius of the watershed and cap the number of nearest points at 5. We should expect null data for watersheds that are further than 5 km from a collection point, and the ESL point columns appended to the watershed rows should be averaged so that no duplicate watershed rows are created. A sketch of this join follows the note below.

Once that is done, we will have the most accurate predicted future sea level for each of the watersheds within the search radius.

NOTE: IT MATTERS HOW YOU DO THE NEAR FUNCTION JOIN. If we were instead to join the watersheds to the collection points, we would lose data about the watersheds; since we are interested in the watersheds, this is not ideal, and since there is no data in the watershed file that we would want to average, that direction of the join doesn't serve us either. Ultimately, the watershed file is the link between the ESL point data and the DEM raster data. We could have joined directly to the hexagonal vectorized DEM data and left the watersheds out, but taking the hydrological realities into account makes the flood and crop loss predictions more accurate.
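
A sketch of one way to do this near-function join with GeoPandas, under the assumption that the ESL point layer has prediction columns named like `esl_rcp45_2050_p50` (placeholder names) and that both layers are in a metric CRS so distances are in metres.

```python
# Sketch of step 2: up to 5 nearest ESL points within 5 km of each watershed, averaged.
import geopandas as gpd

SEARCH_M, MAX_PTS = 5_000, 5

esl = gpd.read_file("esl_points_italy.gpkg").to_crs("EPSG:3035")   # placeholder file
basins = gpd.read_file("hybas_eu_lev08_3035.shp")                  # placeholder file
esl_cols = [c for c in esl.columns if c.startswith("esl_")]        # assumed naming pattern

# every ESL point that falls within 5 km of a watershed polygon
buffered = basins[["HYBAS_ID", "geometry"]].copy()
buffered["geometry"] = buffered.geometry.buffer(SEARCH_M)
hits = gpd.sjoin(esl, buffered, how="inner", predicate="within")

# distance from each matched point to its (unbuffered) watershed
basin_geom = basins.set_index("HYBAS_ID").geometry
hits["dist"] = [pt.distance(basin_geom.loc[hid])
                for pt, hid in zip(hits.geometry, hits["HYBAS_ID"])]

# keep at most the 5 nearest points per watershed and average their ESL columns
nearest = (hits.sort_values("dist")
               .groupby("HYBAS_ID")
               .head(MAX_PTS)
               .groupby("HYBAS_ID")[esl_cols]
               .mean()
               .reset_index())

# the left join keeps every watershed; basins beyond 5 km of any point stay null
basins_esl = basins.merge(nearest, on="HYBAS_ID", how="left")
```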

  3. Third we will merge the two vector files (the watershed polygon vector with the nearest ESL points joined from step 2, and the hexagonal vectorized DEM elevation data from step 1) on the watershed id they both have in common. A sketch of this merge is below.
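
Step 3 then reduces to a plain attribute merge; a short sketch reusing the variable names from the two sketches above (the `HYBAS_ID` key and the output file name are assumptions).

```python
# Sketch of step 3: each elevation hexagon inherits its watershed's averaged ESL predictions.
hex_full = hex_elev.merge(basins_esl.drop(columns="geometry"),
                          on="HYBAS_ID", how="left")
hex_full.to_file("hexagons_elev_esl.gpkg", driver="GPKG")   # placeholder output name
```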

Analysis and Visualisation

At this point we are done processing the data and we can create new columns by subtracting the predicted sea level rise (in meters) from the elevation data. Since there are four ESL predictions (RCP 4.5 and RCP 8.5, for 2050 and 2100), each with 5%, 50% and 95% confidence values, we will create 16 resulting columns from the conservative mix of calculations shown in the table below. New columns that are negative or around zero imply the area will be underwater by 2050 or 2100, respectively, under the corresponding RCP prediction. A sketch of this calculation follows the table.

| 4.5 2050           | 8.5 2050           | 4.5 2100           | 8.5 2100           |
| ------------------ | ------------------ | ------------------ | ------------------ |
| elev med - esl 50% | elev med - esl 50% | elev med - esl 50% | elev med - esl 50% |
| elev min - esl 50% | elev min - esl 50% | elev min - esl 50% | elev min - esl 50% |
| elev min - esl 95% | elev min - esl 95% | elev min - esl 95% | elev min - esl 95% |
| elev min - esl 5%  | elev min - esl 5%  | elev min - esl 5%  | elev min - esl 5%  |
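
A sketch of how those 16 difference columns could be computed, reusing the placeholder ESL column names (`esl_<rcp>_<year>_p<percentile>`) assumed in the earlier sketches.

```python
# Build the 16 "elevation minus ESL" columns from the table above.
import geopandas as gpd

hex_full = gpd.read_file("hexagons_elev_esl.gpkg")   # output of the step-3 sketch

scenarios = ["rcp45_2050", "rcp85_2050", "rcp45_2100", "rcp85_2100"]
pairs = [("elev_med", "p50"), ("elev_min", "p50"),
         ("elev_min", "p95"), ("elev_min", "p5")]

for sc in scenarios:
    for elev_col, pct in pairs:
        hex_full[f"{elev_col}_minus_{sc}_{pct}"] = (
            hex_full[elev_col] - hex_full[f"esl_{sc}_{pct}"])

# negative or near-zero values flag hexagons expected to be at or below sea level
hex_full.to_file("hexagons_elev_esl.gpkg", driver="GPKG")
```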

Lastly I will visualize the processed data by creating maps from each of the new columns, highlighting the hexagonal areas of the coast that will be most affected by extreme sea level rise. By creating choropleth maps with an equal-interval classification of 5 classes, color coded accordingly, the areas underwater or close to sea level will be showcased. A minimal plotting sketch is below.
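
A minimal choropleth sketch with GeoPandas and Matplotlib (equal intervals, 5 classes; the `scheme` argument requires the mapclassify package). The column and file names are placeholders carried over from the earlier sketches.

```python
# Equal-interval choropleth of one of the difference columns.
import geopandas as gpd
import matplotlib.pyplot as plt

hex_full = gpd.read_file("hexagons_elev_esl.gpkg")

ax = hex_full.plot(column="elev_min_minus_rcp85_2100_p95",
                   scheme="EqualInterval", k=5, cmap="RdYlBu",
                   legend=True, figsize=(10, 12))
ax.set_axis_off()
ax.set_title("Elevation minus ESL (m), RCP 8.5, 2100")
plt.savefig("esl_rcp85_2100.png", dpi=200, bbox_inches="tight")
```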

To be continued...

  1. Adding the crop data to see how much crop revenue could be lost to sea level rise, and ultimately how much cropland would become too degraded to be fertile under current agricultural practices, even if it is above sea level.

  2. Expanding to the entire Euro-Mediterranean region.


Caveats

Since sea level rise has above-ground and below-ground effects, especially when considering soil salinity levels, understanding its impact isn't as simple as subtracting the predicted sea level rise from the elevation. Due to capillary action and soil seepage, how far rising seawater can travel underground is an important consideration when evaluating a region's vulnerability to sea level rise.

Problems I encountered and how I solved them

Memory issues: with the merged file being close to 15 GB, I manually adjusted the page file limit in the advanced system settings, but the workflow ultimately needed to run on other, less powerful computers (mine has 32 GB of RAM), so instead of that loophole we segmented the raster mosaic into 24 pieces. That in turn proved to be a problem, since pulling the TIFF files from the ASTER sensor for the entire Euro-Mediterranean region was cumbersome and time consuming even with a script. I ultimately split the satellite image pull by country and used a script calling the USGS EarthExplorer API after setting up a bounding box for each country of interest. A sketch of the tiling step is below.
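
For the tiling step, a sketch with rasterio, assuming the mosaic is a single large GeoTIFF with a placeholder name; it writes the raster out as a grid of windows so no single piece has to fit in memory at once.

```python
# Split a large raster mosaic into an N x M grid of tiles.
import rasterio
from rasterio.windows import Window

N_COLS, N_ROWS = 6, 4   # 24 pieces, as in the post

with rasterio.open("euromed_mosaic.tif") as src:   # placeholder mosaic name
    tile_w, tile_h = src.width // N_COLS, src.height // N_ROWS
    for i in range(N_ROWS):
        for j in range(N_COLS):
            # let the last row/column absorb any leftover pixels
            w = src.width - j * tile_w if j == N_COLS - 1 else tile_w
            h = src.height - i * tile_h if i == N_ROWS - 1 else tile_h
            win = Window(j * tile_w, i * tile_h, w, h)
            profile = src.profile.copy()
            profile.update(width=w, height=h,
                           transform=src.window_transform(win))
            with rasterio.open(f"mosaic_tile_{i}_{j}.tif", "w", **profile) as dst:
                dst.write(src.read(window=win))
```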

Installing GDAL is always a bit cumbersome: before you can use pip install on Windows, you have to download the matching pre-built GDAL wheel file first and make sure it's on your path.
