Taylor Higgins

Analyzing Italy's Vulnerability to Sea Level Rise After Applying Feedback

This is a continuation of a previous post, picking up at the analysis and visualisation stage. For some context: after presenting the maps to the Geographic Society of Italy, a couple of suggestions and improvements were offered, so this post is the result of applying that feedback to make the analysis and visualisation more useful to the academic research community.

Recap

First we loaded in the files from the data processing. This includes:

- the elevation data with max, min, and median elevations by hexagon after the zonal stats
- pivot files of sea level rise by hydrobasin for each of the four projections (4.5 2050, 4.5 2100, 8.5 2050, 8.5 2100)
- the shapefile with all geometric data corresponding to the hexagons we used for the zonal stats
- the country border of Italy to use as a buffer

Then we make sure they are cleaned the way we want by dropping and renaming some columns, and finally we merge the first three files together so we have one geodataframe containing the elevation data (max, min, med), the eslr data by hydrobasin, and the geometric data of each hexagon. We will later use the country border to clip that merged file to remove isolates, so we leave that file separate.
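In code, the recap boils down to something like the sketch below. This is a minimal reconstruction rather than the exact notebook: the file names and the `hex_id`/`hybas_id` merge keys are assumptions.

```python
import geopandas as gpd
import pandas as pd

# Load the outputs of the data-processing step (file names are hypothetical)
elev = pd.read_csv("elevation_zonal_stats.csv")  # max/min/median elevation per hexagon
eslr = pd.read_csv("eslr_45_2050_pivot.csv")     # one pivot file per projection
hexagons = gpd.read_file("hexagons.shp")         # hexagon geometries from the zonal stats
italy = gpd.read_file("italy_border.shp")        # kept separate for clipping later

# Light cleaning: rename columns to the names used below (illustrative only)
elev = elev.rename(columns={"min": "elev_min", "median": "elev_med", "max": "elev_max"})

# Merge the first three files into one geodataframe; the join keys are assumptions
merged = hexagons.merge(elev, on="hex_id").merge(eslr, on="hybas_id")
```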

Analysis and Visualisation

Applying the feedback from the presentation means that we will end up with more maps, but they will be more focused and some of the data will be abstracted to support ease of interpretation.

For each of the four projections (4.5 2050, 4.5 2100, 8.5 2050, 8.5 2100), we will make three maps based on the degree of severity for each projection (i.e. the 5th, 50th, and 95th percentiles).
In each of those maps, there will be three color-coded layers that can be toggled on and off, corresponding to the degree of vulnerability (red being the most severe at level 3, orange the second most severe at level 2, and green the third most severe at level 1).

We arrive at those 36 layers by doing the nine subtractions below for each of the four projections. As an example, here are the nine subtractions for projection 4.5 2050:

| Layer | Subtraction |
| --- | --- |
| 4.5 2050 5% min | elev min - eslr projection point 5% |
| 4.5 2050 5% med | elev med - eslr projection point 5% |
| 4.5 2050 5% max | elev max - eslr projection point 5% |
| 4.5 2050 50% min | elev min - eslr projection point 50% |
| 4.5 2050 50% med | elev med - eslr projection point 50% |
| 4.5 2050 50% max | elev max - eslr projection point 50% |
| 4.5 2050 95% min | elev min - eslr projection point 95% |
| 4.5 2050 95% med | elev med - eslr projection point 95% |
| 4.5 2050 95% max | elev max - eslr projection point 95% |
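These columns can be produced in a few lines of pandas. A minimal sketch, assuming the merged geodataframe carries `elev_min`/`elev_med`/`elev_max` columns and one eslr column per percentile (the `eslr_4550_*` input names are hypothetical; the `smi`/`sme`/`sma` output names match the ones used below):

```python
# Build the nine subtraction columns for projection 4.5 2050
elev_subs = merged.copy()
for pct in ("5", "50", "95"):
    eslr_col = merged[f"eslr_4550_{pct}"]                        # hypothetical column name
    elev_subs[f"smi4550{pct}"] = merged["elev_min"] - eslr_col   # min - eslr
    elev_subs[f"sme4550{pct}"] = merged["elev_med"] - eslr_col   # med - eslr
    elev_subs[f"sma4550{pct}"] = merged["elev_max"] - eslr_col   # max - eslr
```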

Then, based on the following logic, we arrive at three levels of severity for each projection's percentile (note that the sets are nested: every red hexagon also satisfies the orange and green conditions):

RED 4.5 2050 5%: Elevation hexagons where all 5% subtractions (max, med, min) are below zero.

```python
v45_50_5_red = elev_subs[(elev_subs.smi45505 <= 0.0) & (elev_subs.sme45505 <= 0.0) & (elev_subs.sma45505 <= 0.0)]
```

ORANGE 4.5 2050 5%: Elevation hexagons where both the med and min 5% subtractions are below zero.

```python
v45_50_5_orange = elev_subs[(elev_subs.smi45505 <= 0.0) & (elev_subs.sme45505 <= 0.0)]
```

GREEN 4.5 2050 5%: Elevation hexagons where the min 5% subtraction is below zero.

```python
v45_50_5_green = elev_subs[(elev_subs.smi45505 <= 0.0)]
```

RED 4.5 2050 50%: Elevation hexagons where all 50% subtractions (max, med, min) are below zero.

```python
v45_50_50_red = elev_subs[(elev_subs.smi455050 <= 0.0) & (elev_subs.sme455050 <= 0.0) & (elev_subs.sma455050 <= 0.0)]
```

ORANGE 4.5 2050 50%: Elevation hexagons where both the med and min 50% subtractions are below zero.

```python
v45_50_50_orange = elev_subs[(elev_subs.smi455050 <= 0.0) & (elev_subs.sme455050 <= 0.0)]
```

GREEN 4.5 2050 50%: Elevation hexagons where the min 50% subtraction is below zero.

```python
v45_50_50_green = elev_subs[(elev_subs.smi455050 <= 0.0)]
```

RED 4.5 2050 95%: Elevation hexagons where all 95% subtractions (max, med, min) are below zero.

```python
v45_50_95_red = elev_subs[(elev_subs.smi455095 <= 0.0) & (elev_subs.sme455095 <= 0.0) & (elev_subs.sma455095 <= 0.0)]
```

ORANGE 4.5 2050 95%: Elevation hexagons where both the med and min 95% subtractions are below zero.

```python
v45_50_95_orange = elev_subs[(elev_subs.smi455095 <= 0.0) & (elev_subs.sme455095 <= 0.0)]
```

GREEN 4.5 2050 95%: Elevation hexagons where the min 95% subtraction is below zero.

```python
v45_50_95_green = elev_subs[(elev_subs.smi455095 <= 0.0)]
```

Finally we visualise this data with folium's Choropleth function, and remove the legend by accessing the map's child object and deleting all children whose keys start with 'color_map'. For each of the 12 maps, there will be 3 layers corresponding to red, orange, or green severity. Each map will only have data for one projection and percentile (i.e. 4.5 2050 5%, 8.5 2100 50%, and so on).
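Here is a minimal sketch of that step for one map (4.5 2050 5%). The layer names, the `hex_id` join key, and the base-map settings are assumptions; the legend-removal loop is the part described above:

```python
import folium

m = folium.Map(location=[41.9, 12.5], zoom_start=6, tiles="CartoDB positron")

for layer_gdf, name, palette in [
    (v45_50_5_green, "Level 1 (green)", "Greens"),
    (v45_50_5_orange, "Level 2 (orange)", "Oranges"),
    (v45_50_5_red, "Level 3 (red)", "Reds"),
]:
    choro = folium.Choropleth(
        geo_data=layer_gdf,
        data=layer_gdf,
        columns=["hex_id", "smi45505"],      # hypothetical join key + value column
        key_on="feature.properties.hex_id",
        fill_color=palette,
        name=name,
    ).add_to(m)
    # Remove the auto-generated legend by deleting children keyed 'color_map...'
    for key in list(choro._children):
        if key.startswith("color_map"):
            del choro._children[key]

folium.LayerControl().add_to(m)  # lets readers toggle the three severity layers
```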

Problems I Encountered and How I Solved Them

Each map has isolates: hexagons that are not surrounded by other hexagons. There was a bit of a debate amongst the team on how to resolve them, since they could be relevant due to how the river network works in Italy. Ultimately, we decided to clip them with a buffer (of varying size) from the coastline, tailored for each projection percentile, to only remove the most drastic isolates.

The most time-consuming way would have been to select each isolate in QGIS, export that file as a CSV, and then remove the polygons by id from the geodataframe.

Since we had 36 geodataframes, that was not worth the time. An option that would be exciting to explore once more engineering resources are devoted to this project is to write a script that iterates through each geodataframe's rows of polygons and, using the geometry column, checks whether each polygon is near another row's geometry. Some useful functions to explore here would be geopandas' touches and distance methods, or a nearest-neighbour query (coming from shapely): the script could remove any polygon that doesn't touch another polygon, that is more than 600 metres (two hexagon widths) away from another polygon, or whose nearest neighbour is more than a certain distance away. A sketch of this idea follows.
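As a rough illustration of that future option, the sketch below drops any hexagon whose nearest other hexagon is farther than a threshold. It is a naive O(n²) pass over hypothetical inputs, assuming the geodataframe is in a metric CRS so distances are in metres; a real implementation would want geopandas' spatial index instead:

```python
import geopandas as gpd

def drop_isolates(gdf: gpd.GeoDataFrame, max_dist: float = 600.0) -> gpd.GeoDataFrame:
    """Keep only hexagons that touch, or sit within max_dist metres of, another hexagon."""
    keep = []
    for idx, geom in gdf.geometry.items():
        others = gdf.geometry.drop(idx)
        # distance() is 0.0 for touching polygons, so this covers both cases
        if (others.distance(geom) <= max_dist).any():
            keep.append(idx)
    return gdf.loc[keep]
```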

In the interest of time, and since the first approach (merging all the hexagons and then filtering out, with the overlay function and `how='intersection'`, any hexagons that didn't intersect with the coastline) took more time than we had planned for, I decided to wait on exploring this more accurate option and instead used a buffer to clip the hexagons in a rather rudimentary way, by trial and error, along the lines of the sketch below.
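A minimal sketch of that buffer clip, assuming the layers and the Italy border are reprojected to a metric CRS (EPSG:32632 here) and using 5000 m as a placeholder for the per-layer, trial-and-error buffer size:

```python
import geopandas as gpd

# Buffer the coastline (the border's boundary) and keep only hexagons inside it
coast_buffer = italy.to_crs(epsg=32632).boundary.buffer(5000)   # 5 km placeholder
clipped = gpd.clip(v45_50_5_red.to_crs(epsg=32632), coast_buffer)
```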

The reason the first approach took more time than I allotted was ultimately because I initially didn't understand well what the unary union function from shapely would output. I hoped it would output groupings of polygons, so that we ended up with a core group of polygons around the coast, but instead it merged all polygons, even those that were not touching, so all polygons became one multipolygon. Part of that geometry was of course intersecting with the coastline, so overlaying by an intersection didn't produce a useful result.
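The pitfall is easy to reproduce: `unary_union` returns a single geometry even when the inputs are disjoint, as this self-contained snippet shows:

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

# Two squares that do not touch
a = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
b = Polygon([(5, 5), (6, 5), (6, 6), (5, 6)])

merged = unary_union([a, b])
print(merged.geom_type)  # MultiPolygon -- disjoint inputs still become one geometry
```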
