Exposure, i.e., the description of people and assets at risk, is one of the main components of risk assessment. While exposure at the country scale is often well-defined, fine-grained exposure datasets are key to making risk assessments more detailed, both in terms of resolution and in identifying which people and assets are most at risk.
Models and their input and output datasets range from raster-based descriptions of population distribution or built-up area to complex datasets that describe people’s characteristics (e.g., gender, age, and education) and detailed asset information (e.g., building material, number of floors, road types). Some models are local implementations, close to the ground truth and of high resolution, while others cover continents or even have a global reach. Some find their origins in grassroots activities, such as OpenStreetMap-based exposure models, while others rely on big data, through remote sensing and AI-driven methods, and are often created by larger organisations (e.g., Google Open Buildings, WorldPop, or collaborative efforts like Overture). The broad landscape of exposure is reflected in the wide variety of stakeholders, ranging from the insurance industry to local and national governments, research institutes, the tourism sector, and NGOs.
In this session we welcome submissions addressing (1) geospatial methods and tools for the creation of exposure models, such as Volunteered Geographic Information or Earth observation and AI models; (2) assessment of the quality or completeness of the data sources of exposure models, such as remote sensing, crowd-sourced, or official registry datasets; (3) exposure models for single-hazard, multi-hazard, or hazard-independent contexts; (4) comparison, validation, and analysis of exposure models; (5) exposure models and datasets from the catastrophe-modelling, insurance, government, and financial sectors; and (6) innovative applications of exposure models.
Danijel Schorlemmer