Beyond one-size-fits-all: a path toward region-specific flash drought monitoring and management by Gesualdo & Hadjimichael (2025)

Gabriela Gesualdo, Pennsylvania State University

At the end of 2025, Drs. Gabriela Gesualdo and Antonia Hadjimichael, researchers at Pennsylvania State University, published an interesting paper on flash drought detection in Environmental Research Water. We had the pleasure of discussing this research with Gabriela, the lead author. Here’s an inside look at our fascinating conversation.

Please introduce yourself.

I’m Gabriela Gesualdo, and I work at the intersection of hydrology, climate, and decision-making, trying to understand how extreme events shape water systems and people’s lives. My path started in environmental engineering in Brazil and evolved into studying complex hydroclimatic risks across scales. Today, as a postdoc at Penn State, I focus not only on detecting hazards like flash droughts, but also on what they mean for water security, vulnerability, and adaptation. What drives me is turning science into something usable, helping connect physical processes, impacts, and real-world decisions under uncertainty.

What were the key characteristics of the flash drought events you selected for analysis?

First, I think we need to define flash droughts. They are rapid-onset events that develop over weeks (not months, like conventional droughts) and intensify quickly. They often emerge from a combination of drivers, such as lack of rain and high atmospheric demand, that rapidly deplete soil moisture. In our study, we focused on capturing this rapid onset and intensification using six indicators. What stood out is that these events are not just fast; they are also hard to detect consistently, which is exactly what makes them so challenging for monitoring and management.
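To make "rapid onset" concrete, here is a minimal sketch of one common percentile-based criterion used in the flash drought literature: flag an onset when soil moisture falls from at or above the 40th percentile to at or below the 20th percentile within a few weeks. This is an illustration, not the paper's method; the thresholds, window length, and synthetic data are all assumptions.

```python
def detect_flash_drought(percentiles, start_thresh=40, end_thresh=20, max_weeks=4):
    """Flag time steps where the soil-moisture percentile drops from
    >= start_thresh to <= end_thresh within max_weeks steps.
    Illustrative criterion only; thresholds are assumptions."""
    onsets = []
    for t, p in enumerate(percentiles):
        if p >= start_thresh:
            # Look ahead up to max_weeks steps for a crossing below end_thresh.
            window = percentiles[t + 1 : t + 1 + max_weeks]
            if any(w <= end_thresh for w in window):
                onsets.append(t)
    return onsets

# Synthetic weekly soil-moisture percentiles: wet conditions, then a rapid crash.
series = [55, 48, 45, 30, 18, 12, 10, 25, 42, 50]
print(detect_flash_drought(series))
```

Note that adjacent weeks ahead of the same crash all satisfy the criterion, so a real monitoring system would typically merge consecutive flags into a single event.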

What criteria guided the selection of these regions as case studies, and how representative are they for broader flash drought dynamics?

We started broad, analyzing the entire U.S. to capture diverse hydroclimatic conditions. Then we zoomed in on two contrasting cases where flash droughts clearly occurred. Montana represented an agricultural system under stress, while Connecticut highlighted impacts on water supply. These cases helped us move beyond theory and ask: what actually works in practice? They show that flash drought is not one phenomenon, but many, shaped by local climate, infrastructure, and sectoral vulnerability.

What criteria guided the selection of flash drought indicators, and how do these choices affect the robustness and consistency of event detection?

We selected six widely used indicators that represent different ways of “seeing” flash drought: through soil moisture, atmospheric demand, precipitation, and combined metrics. Each captures a different piece of the system. But what we found is that these choices matter a lot: indicators rarely agree, even when based on similar variables. That inconsistency is not a flaw; it reflects the fact that flash drought is multidimensional. The challenge is not picking one “best” indicator but understanding what each one is telling us.
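One simple way to see how rarely indicators line up is to compute, at each time step, the fraction of indicators flagging drought conditions. The sketch below is my illustration of that diagnostic, not the paper's analysis; the indicator names and binary flags are hypothetical.

```python
def indicator_agreement(detections):
    """detections: dict mapping indicator name -> equal-length list of 0/1
    drought flags. Returns the fraction of indicators flagging each step."""
    names = list(detections)
    n_steps = len(detections[names[0]])
    return [
        sum(detections[name][t] for name in names) / len(names)
        for t in range(n_steps)
    ]

# Hypothetical weekly flags from three indicators over six weeks.
flags = {
    "soil_moisture": [0, 0, 1, 1, 1, 0],
    "evap_demand":   [0, 1, 1, 1, 0, 0],
    "precip_def":    [0, 0, 0, 1, 1, 1],
}
print(indicator_agreement(flags))
```

In this toy case the three indicators fully agree in only one week, even though each one individually detects a multi-week event: exactly the kind of disagreement that makes single-indicator monitoring fragile.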

In your paper, different combinations of indicators were required to detect flash drought events. How scalable is this approach to regions with limited monitoring infrastructure and fewer available datasets?

One goal was to make this approach widely applicable. We relied on reanalysis datasets like ERA5, which provide global coverage, so the framework can be used almost anywhere. The idea is not to prescribe a fixed set of indicators, but to offer a flexible way to test combinations based on what data are available. In regions with limited monitoring, this becomes even more valuable: you can still build meaningful detection systems using globally available data and adapt them to local needs. For anyone interested in applying or extending this work, the full database and code are openly available through the paper’s data and code repository (https://data.msdlive.org/records/n8ynk-9wn13).

Drawing from the Montana (2017) and Connecticut (2022) cases, what gaps in monitoring and early warning systems may have limited preparedness, and what lessons can be generalized for improving flash drought management?

In both cases, the drought was already underway before official responses began. In Montana, impacts started weeks before declarations; in Connecticut, detection depended heavily on choosing the “right” indicator. This shows two key gaps: first, existing systems are too slow for rapid events; second, they are not tailored to regional conditions. Many monitoring systems miss flash droughts entirely or detect them too late. The lesson is clear: we need faster, more flexible tools, and better alignment between what we monitor and what actually impacts people.

What were the most challenging aspects of this research?

The hardest part was realizing there is no single answer to “what is a flash drought?” or how to detect it. We expected differences, but the level of disagreement between indicators was striking. That makes validation incredibly difficult: how do you know which method is right? This challenge pushed us toward a different perspective: instead of searching for one perfect method, we need ways to evaluate indicators based on impacts and context. That shift is now shaping my current work.

In what ways could your findings be translated into actionable improvements in drought monitoring, early warning systems, and water management policies in the United States?

Our findings suggest moving away from one-size-fits-all systems. Instead, monitoring should be tailored to regions and sectors: agriculture, water supply, and ecosystems may each need different indicators. Policies can also benefit from earlier, impact-based signals, not just hazard thresholds. By combining multiple indicators carefully and validating them against real impacts, we can design early warning systems that are faster and more actionable. Ultimately, it is about making flash drought information more relevant to decisions, not just more precise scientifically.

Considering the uncertainties associated with flash drought detection, what are the priority directions for advancing the validation and benchmarking of flash drought indicators?

The key priority is linking indicators to real-world impacts. Right now, most methods are validated against other datasets, not against what actually happened on the ground. We need impact-based databases, using sources like reports, media, and sectoral data, to evaluate which indicators matter where. Another priority is testing indicators across regions and sectors, because performance is not transferable. Finally, developing flexible frameworks that allow adaptation, rather than fixed thresholds, will be essential for advancing flash drought science.

To what extent do you plan to extend this line of research, and what are the key priorities for your future work in this area?

This work opened more questions than answers, which is exciting. I am now focusing on validating flash drought events using impact data, trying to understand when and where these events truly matter. I am also interested in building databases that connect physical signals with societal responses, moving toward impact-based monitoring systems. In the long term, I want to help develop decision-support tools for more resilient water management. For those interested in following the next steps of this work, I will be discussing this in more detail in my invited EGU talk on Friday, May 8 (https://meetingorganizer.copernicus.org/EGU26/EGU26-8499.html).
