AI and the Blind Spots of Disaster Risk Knowledge
Technical Presentations · Open Access


Soenke Ziesche

Abstract

A critical examination of AI's limitations in disaster risk reduction (DRR), focusing on training data skewed toward the Global North, the opacity of deep learning models, and unresolved governance challenges. Proposes mitigation approaches including inclusive data governance, participatory model development, and systematic bias audits.


The presentation "AI and the Blind Spots of Disaster Risk Knowledge" acknowledged the potential of AI in DRR but focused on its limitations, arguing that significant blind spots remain due to biased data, opaque models, and unresolved governance challenges.

A central concern is bias in training data. AI systems used for disaster risk knowledge are typically trained on sources such as remote sensing data, insurance and governmental datasets, English-language scientific literature, and digitized historical disaster records. These sources concentrate on regions that are heavily monitored and well documented, most often in the Global North. Meanwhile, many forms of locally grounded knowledge, including documents in local languages, oral histories, ecological indicators, and indigenous forecasting practices, remain largely absent from machine-readable datasets. As a result, AI models tend to highlight hazards that can be instrumentally measured and digitally recorded. This imbalance risks reproducing existing inequalities and increasing vulnerability in already marginalized communities.
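To make the coverage imbalance concrete, the following minimal Python sketch counts how a corpus of disaster records is distributed across regions. The records and region labels are hypothetical placeholders for illustration, not data from the presentation.

```python
# Minimal sketch of a dataset-coverage check, assuming a hypothetical
# list of disaster records tagged with a region label. All names and
# figures are illustrative.
from collections import Counter

# Hypothetical records: (event_id, region) pairs.
records = [
    ("ev1", "Global North"), ("ev2", "Global North"),
    ("ev3", "Global North"), ("ev4", "Global South"),
]

counts = Counter(region for _, region in records)
total = sum(counts.values())

# Report each region's share of the corpus; a large skew toward
# heavily monitored regions is the imbalance described above.
for region, n in counts.most_common():
    print(f"{region}: {n} records ({n / total:.0%} of corpus)")
```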

The presentation also highlighted the opacity of modern AI systems, particularly deep learning models. Drawing on observations by researchers such as Yudkowsky, Soares and Yampolskiy, it was noted that contemporary AI systems are often "grown" rather than engineered in a transparent manner. Developers train models with large datasets and computational power, but the resulting internal reasoning processes remain largely inscrutable. When AI-generated risk assessments cannot be clearly explained, it becomes difficult to evaluate, challenge or appeal their conclusions, creating serious concerns for accountability and trust in disaster risk predictions and governance.
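One common, though limited, way to probe an otherwise opaque model is a post-hoc attribution check such as permutation importance. The sketch below applies scikit-learn's permutation_importance to a toy risk regressor; the model, feature names, and synthetic data are assumptions for illustration, not the systems discussed in the presentation.

```python
# A minimal sketch of probing a black-box model with permutation
# importance. The regressor and synthetic data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # hypothetical features, e.g. rainfall, slope, soil moisture
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure the drop in score;
# features whose shuffling hurts most matter most to the model.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["rainfall", "slope", "soil_moisture"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

A check like this does not make the model's internal reasoning transparent; it only ranks inputs by influence, which is part of why the presentation treats full explainability as unresolved.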

To address these issues, the presentation proposed governance-based mitigation approaches. These include inclusive data governance that integrates local and indigenous knowledge, participatory model development involving communities and practitioners, and systematic bias audits of datasets and model outputs. However, the presentation concluded that full transparency and explainability remain unresolved challenges.
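As one illustration of what a systematic bias audit of model outputs could involve, the sketch below compares per-region error rates on a held-out evaluation set. The rows and region labels are hypothetical, and a real audit would cover many more dimensions than this single disparity check.

```python
# A minimal sketch of one bias-audit step: comparing a model's error
# rate across regions. Predictions, labels, and group tags are
# hypothetical placeholders for a real evaluation set.
from collections import defaultdict

# (region, true_label, predicted_label) for a held-out evaluation set.
eval_rows = [
    ("Global North", 1, 1), ("Global North", 0, 0),
    ("Global South", 1, 0), ("Global South", 0, 0),
]

errors, totals = defaultdict(int), defaultdict(int)
for region, truth, pred in eval_rows:
    totals[region] += 1
    errors[region] += int(truth != pred)

# A persistent gap in per-region error rates is one concrete signal
# such an audit could flag for follow-up.
for region in totals:
    print(f"{region}: error rate {errors[region] / totals[region]:.0%}")
```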
