Data-Driven Resource Allocation Decisions: FEMA's Disaster Recovery Centers

by

Julia N. Moline
B.S. Civil Engineering, B.A. Economics
Columbia University, 2008

SUBMITTED TO THE TECHNOLOGY AND POLICY PROGRAM IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

MASTER OF SCIENCE IN TECHNOLOGY AND POLICY
AT THE
MASSACHUSETTS INSTITUTE OF TECHNOLOGY

June 2014

©2014 Massachusetts Institute of Technology. All rights reserved.

The author hereby grants to MIT permission to reproduce and to distribute publicly paper and electronic copies of this thesis document in whole or in part in any medium now known or hereafter created.

Signature of Author:

Signature redacted
Technology and Policy Program
Engineering Systems Division
May 9, 2014

Certified by:

Signature redacted
Jarrod Goentzel
Director, Humanitarian Response Laboratory
Thesis Supervisor

Certified by:

Signature redacted
Erica Gralla
Assistant Professor of Engineering Management and Systems Engineering
George Washington University
Thesis Supervisor

Accepted by:

Signature redacted
Dava Newman
Professor of Aeronautics and Astronautics and Engineering Systems
Director, Technology and Policy Program


Data-Driven Resource Allocation Decisions: FEMA's Disaster Recovery Centers

by Julia N. Moline

Submitted to the Technology and Policy Program on May 9, 2014 in Partial Fulfillment of the Requirements for the Degree of Master of Science in Technology and Policy

ABSTRACT

Resource allocation decisions in post-disaster operations are challenging because of situational dynamics, insufficient information, organizational culture, political context, and urgency. We propose a methodology to create a data-driven decision process for post-disaster resource allocation that enables timely, transparent, and consistent decision making during crisis. Our methodology defines the decisions that must be made; identifies relevant historical, initial, and trending data sources; and develops numerical thresholds, quantitative relationships, and optimization models to support decision making. The general process also offers flexibility to consider non-quantitative factors and spans multiple review periods.

We apply this methodology to the Federal Emergency Management Agency's (FEMA) program for establishing and managing Disaster Recovery Centers (DRCs) after a disaster. A detailed case study of one disaster response and relevant historical data provide the basis for DRC decision making thresholds, relationships, and optimization models. We then apply the newly developed process to several recent disaster response scenarios and find that FEMA could have reduced cost by 60-80% while providing sufficient capacity for survivors. Finally, we discuss the generalizability of the methodology to other post-disaster programs along with limitations and potential future work.

Thesis Supervisor: Jarrod Goentzel
Title: Director, Humanitarian Response Laboratory

Thesis Supervisor: Erica Gralla
Title: Assistant Professor of Engineering Management and Systems Engineering, George Washington University


Acknowledgements

I wish to thank the many people who helped me to this point. This thesis is the product of your support, guidance, criticism, and patience over the last two years. I extend my deepest gratitude:

To Dr. Jarrod Goentzel of the MIT Humanitarian Response Laboratory, who gave me the opportunity to work in the Humanitarian Response Lab, to see aspects of post-disaster operations in the field and in the office, and to engage with the humanitarian community in Boston and in New York; who encouraged me to think critically about everything I encountered; who pushed me to consider new ideas and elements at every turn; who always forced me to think harder and go deeper than I ever had before.

To Dr. Erica Gralla of the George Washington University Department of Engineering Management and Systems Engineering, who offered measured and thoughtful guidance not just on my thesis; who took the time to explain lots of things I should have known; whose criticism never came without clear action; who grounded me when I tried to do too much and pushed me when I tried to get away with too little.

To Gregg Hogan of the Surveillance Systems Group at Lincoln Laboratory, who provided meaningful and practical support and guidance through many draft research questions; who offered encouragement to try new things; who introduced me to ideas, systems, and concepts I had never considered.

To Paul Breimeyer of the Surveillance Systems Group and Adam Norige of the Information and Decision Support Group, also at Lincoln Laboratory, who took time out of their incredibly busy days to demonstrate systems and applications; who offered detailed feedback on papers and presentations; who showed up to events and presentations just to have my back. To all three of you, and to the groups you represent, who funded this work and my education.

To Carlos Davila of the Recovery Business Management Division at FEMA, who mentored and tor-mentored me constantly; who encouraged and supported me every day; who reassured me and pushed me; who opened doors and made sure I had the background, data, and connections I needed to do meaningful work; who managed to navigate the bureaucracy over and over again to make sure I had the tools I needed.

To Zach Usher and James Thach of the Recovery Directorate and the Disaster Recovery Center Team at FEMA, and to all the members of their team, who took more time than they had to talk me through the nuances of DRC operations, to validate and correct my assumptions, and to offer explanations or questions that I'd missed; who made sure I had access to the best, newest, and most complete data that existed.

To Keith Turi, Ashley Zohn, Steve Saunders, James Berlus, Chris Vaughan, Ted Okada, Karole Johns, Nate Gallen, Jennifer Bible, and all of the other FEMA staff who let me interview them, sent me data, asked tough questions, and offered critical input.

To Barb DeLaBarre, Frank Field, and the rest of the TPP administration, who offered support and guidance and who made sure I did what I needed to do to graduate.


Contents

Acknowledgements

1 Introduction

2 Literature and Background
2.1 DRCs and DRC Resource Allocation Decisions
2.1.1 The Literature: FEMA and DRCs
2.1.2 Background Interviews: Disaster Recovery Centers
2.2 Decision Making in Post-Disaster Operations
2.2.1 Characterizing Decision Making in Post-Disaster Operations
2.2.2 General Challenges Associated with Decision Making under Uncertainty
2.2.3 Disaster-Specific Challenges Associated with Decision Making
2.2.4 Evidence-Based Decision Making
2.3 Refining Post-Disaster Decisions over Time

3 Research Design
3.1 Research Question
3.2 Approach
3.2.1 Decision Points for DRC Resource Allocation
3.2.2 Development of Relationships and Thresholds
3.2.3 Decision Process
3.3 Outcomes of this Work
3.4 Data

4 Decision Process
4.1 Overview
4.1.1 The Role of Time in the Decision Process
4.2 Notation
4.3 Data and Methodology
4.3.1 Analysis 1: Does Expected Demand Justify Opening a DRC?
4.3.2 Analysis 2: Determine the Number and Types of DRCs Required in Each County
4.3.3 Analysis 3: Determine the Registration Equipment Required at Each DRC
4.3.4 Analysis 4: Determine Staffing and Hours for Each DRC
4.3.5 Analysis 5: Is Trending Demand at or above Minimum Capacity?
4.4 Decision Process, Detailed and Annotated
4.4.1 The Initial Decision
4.4.2 The First Two Weeks (or More): Daily Review
4.4.3 Subsequent Operations: Weekly Review
4.5 Some Notes on the DRC Process
4.6 Future Work
4.6.1 Geospatial Analysis
4.6.2 Visitor Tracking Systems
4.6.3 Staffing by Skill Level

5 Revisiting DRC Decisions and Policy Implications
5.1 Revisiting Decisions
5.1.1 DR 4116: Illinois Flash Floods
5.1.2 DR 4145: Colorado Flash Floods
5.1.3 DR 4157: Illinois Tornado
5.2 Policy Implications
5.2.1 County and Disaster Level Decisions
5.2.2 Preliminary Damage Assessments
5.2.3 Overtime Memoranda
5.2.4 State/Local Cost Share

6 Discussion
6.1 Post-Disaster Resource Allocation in General
6.1.1 The Advantages of a Pre-Defined, Data-Driven Decision Process
6.1.2 Interagency Coordination
6.1.3 Applications
6.2 Keys to Success
6.3 Limitations
6.4 Identifying Alternatives

7 Conclusion

References

Appendix A: Data Sources
Disaster-Specific Data
Census Data

Appendix B: Observations from Colorado Analysis
Registrations
Visitor Rates
Repeat Visitor Rates
Other Services
Costs
Staffing

Appendix C: Cost Estimates for Past and Proposed Decisions
Summary and Overview
DR 4116-IL
Actual DRC Costs
Proposed DRC Costs
Disaster-Level Costs
Cost Summary
DR 4145-CO
Actual DRC Costs
Proposed DRC Costs
Disaster-Level Costs
Cost Summary
DR 4157-IL
Actual DRC Costs
Proposed DRC Costs


1 Introduction

Every disaster is different. Every storm surge, every tremor, every funnel cloud causes a different kind of damage to a different set of buildings. Every political climate, every incident command structure, and every demographic distribution is unique. So it can be extraordinarily difficult to follow standard procedures and processes in ill-defined, dynamic, and urgent situations. As a result, very few such processes have been developed, and those that have been developed are rarely trusted.

However, in the wake of several large-scale disasters and tightening government budgets, it has become increasingly important to develop exactly such processes, to ensure that resource allocation decisions in particular result in efficient provision of services to disaster survivors and are cost-effective. In this thesis, we propose a methodology to develop processes that account for commonalities in resource allocation decisions across disasters but provide flexibility to account for disaster-specific factors. We describe a methodology that begins with defining key decisions, and we develop a framework for identifying relevant historical, initial, and trending data to inform those decisions. We develop a process by which decisions should be made, including identifying review periods.

We use the United States Federal Emergency Management Agency's (FEMA) Disaster Recovery Center (DRC) program as a lens through which to develop the methodology and also as a case study. FEMA sets up DRCs, typically in schools or community centers in disaster-affected communities, for individuals to meet with representatives from FEMA's Individual Assistance and Hazard Mitigation programs as well as several other federal, state, local, and voluntary agency programs. We chose DRCs as our primary program of study for three reasons. First, they represent a direct provision of government assistance to disaster-affected individuals and so are a critical piece of FEMA's post-disaster operations. Second, DRCs require a complex set of resource allocation decisions involving facilities, staff, and equipment. Third, the DRC program staff was eager to participate and remained engaged throughout the project.

In the following sections, we describe the methodology we used and the process we developed for DRC resource allocation decisions, ultimately applying the process to several past disasters. We find that our DRC process could result in a much more efficient use of resources and a 60-80% cost savings to FEMA. We propose the use of mini DRCs to establish a FEMA presence where full DRCs are not warranted. We emphasize the importance of considering other, non-quantitative factors in decision making, understanding that the role of our process is to supplement, not replace, experience and common sense. We identify areas for future work, including a heavier reliance on geospatial technologies. We also discuss the generalizability of our methodology and identify other programs within FEMA and other organizations that could benefit from a similar process. We explore the ways our methodology could improve decision making, including improving outcomes and facilitating information sharing and interagency coordination. We identify factors necessary for success, including a nuanced understanding of the program and the engagement of program staff in the development and implementation of the process. We conclude with the key contributions of our work to DRCs and to post-disaster resource allocation in general.


2 Literature and Background

We undertook two simultaneous efforts to understand the problem space and to begin to frame our research. The first effort was a review of the relevant literature. We began with literature related to decision making in post-disaster operations, which helped to identify the broad challenges facing decision makers in chaotic disaster response efforts. We then turned to more general literature related to decision making in uncertain environments and began to identify strategies used to reduce uncertainty and promote effective decision making. As we identified the use of indicators as a decision-making tool, we looked at methodologies used in various sectors, including humanitarian response, for developing effective relationships and thresholds for decision making. Finally, we turned to literature specifically relating to FEMA to understand both post-disaster challenges and important background information.

The other effort we undertook was a series of semi-formal interviews with FEMA staff and, in particular, members of the DRC program team. These interviews resulted in a detailed understanding of the decision making challenges specific to DRCs, including large scale coordination and daily, operations-level challenges.

Before we embarked on data analysis, we turned to the literature and to subject matter experts to try to answer our research question. We examined FEMA policies and procedures and FEMA-specific literature, supplemented with input from FEMA's DRC Team, to understand DRCs and DRC resource allocation decisions. We searched for information on how post-disaster decisions are made and found that although the challenges in post-disaster decision making are well documented, few papers or tools exist to facilitate it. Finally, we studied mechanisms that exist to refine decisions over time and found much material related to monitoring and evaluating program outcomes but little related to refining programmatic decisions. We discuss each of these three questions in detail in the following sections.

2.1 DRCs and DRC Resource Allocation Decisions

In this section, we examine the literature related to FEMA and to DRCs. We also summarize background gained from a series of semi-structured interviews with the DRC Team.

2.1.1 The Literature: FEMA and DRCs

We began with the literature to understand DRCs, reviewing two sets of literature related to the specific application: (1) FEMA and (2) DRCs.

The first set of literature included statutes, regulations, policies, reports, and reviews issued by and about FEMA. These include the Robert T. Stafford Disaster Relief and Emergency Assistance Act (PL 93-288), which is the law authorizing FEMA, and 44 CFR § 206, the set of regulations that defines how FEMA implements its statutory authority (Federal Disaster Assistance, 2011; Robert T. Stafford Disaster Relief and Emergency Assistance Act, as Amended, 2013). They also include various policies and online references describing certain aspects of FEMA's operations; these are cited as necessary in the sections to follow.


This set also includes a report prepared by the Congressional Research Service. In the report, McCarthy (2011) describes the declaration process, including preliminary damage assessments and requirements for Public Assistance and Individual Assistance declarations. The declaration process is described, and this report is cited extensively, in Section 3 of this thesis. In addition, McCarthy calls out a recent Government Accountability Office (GAO) report that "raised concerns about FEMA's ability to manage its day-to-day resources and the lack of information on how FEMA's resources are aligned with its operations." The need for better resource allocation decision processes is therefore clear, and has been clear for several years, at every level of the organization (McCarthy, 2011).

The second and much smaller set of FEMA-specific literature relates directly to indicators of FEMA's performance in post-disaster operations; although several post-disaster after-action reports exist, few publications look at specific indicators for decision making. Christophe (2009) identified response time, duration of operation, number of people served, and unit cost of service as indicators of success and collected data on these indicators through a qualitative survey of disaster survivors. She finds that, in FEMA's operations, funding was a significant determinant of the unit cost of service and that coordination was a determinant of the number of people served (Christophe, 2009).

Two works relate directly to Disaster Recovery Centers. The first describes the methodology used by a group of students to apply spatial analysis to locate DRCs in a particular county. This article was very useful in identifying criteria for opening a DRC. In addition to FEMA's standard criteria for DRCs (which include adequate space, accessibility, air conditioning, connectivity, restroom facilities, and adequate parking), Dekle et al. identified adequate road access and reasonable travel times as two criteria for DRCs. They used a maximum travel distance of 20 miles, which was useful in developing our own criteria for geographic area served. In addition, they set analysis objectives that were useful in determining the objective functions described in Section 4. These included minimizing travel distance to DRCs and minimizing the total number of DRCs needed (Dekle, Lavieri, Martin, Emir-Farinas, & Francis, 2005).

The second, a recent article in Wired Magazine, identifies community-driven DRCs as a way to improve DRC operations by empowering communities to help themselves. Specifically, it discusses "franchise DRCs": partnerships with national retailers who could use FEMA-provided kits to set up DRCs. The concept of a DRC operated outside of a traditional FEMA facility was very useful in formulating the decision process; in addition to empowering the community, a "franchise DRC" (or a mini DRC, as we refer to it below) could help FEMA to establish a presence in a community whose expected DRC demand might not justify opening a full, traditional DRC (Vanhemert, 2014).
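The site-selection logic Dekle et al. describe can be sketched as a small covering problem: open the fewest DRCs such that every affected community is within the maximum travel distance. The sketch below uses a simple greedy heuristic and entirely hypothetical site names and distances; it is an illustration of the kind of model involved, not the authors' actual formulation.

```python
# Illustrative covering sketch in the spirit of Dekle et al. (2005):
# open the fewest DRC sites so that every community lies within the
# maximum travel distance. All site/community data are hypothetical.

MAX_MILES = 20  # maximum travel distance criterion from Dekle et al.

# distances[site][community] in miles (hypothetical)
distances = {
    "school_A":  {"town1": 5,  "town2": 18, "town3": 40},
    "center_B":  {"town1": 25, "town2": 8,  "town3": 12},
    "library_C": {"town1": 15, "town2": 30, "town3": 19},
}

def greedy_cover(distances, max_miles):
    """Greedy approximation: repeatedly open the site that covers the
    most still-uncovered communities within max_miles."""
    communities = set()
    for covered in distances.values():
        communities |= set(covered)
    uncovered, opened = set(communities), []
    while uncovered:
        site = max(distances,
                   key=lambda s: sum(1 for c, d in distances[s].items()
                                     if c in uncovered and d <= max_miles))
        newly = {c for c, d in distances[site].items()
                 if c in uncovered and d <= max_miles}
        if not newly:  # some community cannot be covered at all
            break
        opened.append(site)
        uncovered -= newly
    return opened, uncovered

opened, uncovered = greedy_cover(distances, MAX_MILES)
```

A greedy heuristic will not always find the true minimum number of sites; the thesis's optimization models in Section 4 would solve the exact problem, but the covering objective is the same.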

2.1.2 Background Interviews: Disaster Recovery Centers

In addition to the literature review, we conducted a series of semi-structured and informal interviews with the DRC Team to gain a more nuanced understanding of the DRC program and how it fits into FEMA's Recovery operations. Here, we use "DRC Team" to mean the FEMA employees based at FEMA Headquarters who oversee the DRC program, including developing and implementing policies for DRC operations, coordinating with disaster field offices, and fitting DRC operations into the broader disaster response. The following sections summarize the DRC program, drawing mainly from the qualitative interviews in addition to the literature.

2.1.2.1 FEMA Assistance for Disaster-Affected Individuals

FEMA gets involved in disasters when state governments request and the federal government awards a presidential disaster declaration (McCarthy, 2011; 44 CFR § 206 Subpart B). Once a declaration has been made, FEMA plays a role in two broad categories of activities. The first category, response activities, includes activities to "save and sustain lives, minimize suffering, and protect property" immediately after a disaster. These include search and rescue operations, evacuation, and other incident containment activities ("Response Directorate," n.d.). The second category, recovery activities, includes longer-term activities whose goal is to help disaster-affected communities and individuals begin to rebuild their lives ("Recovery Directorate," n.d.). Note that Response and Recovery were originally two separate directorates within FEMA; in 2011, they were brought together under the Office of Response and Recovery (The State of FEMA, 2012).

Recovery activities are mostly in the form of financial assistance and are undertaken through two major programs. The Public Assistance (PA) program includes financial assistance to state and local governments for the purposes of recouping emergency management costs and rebuilding damaged infrastructure and facilities (McCarthy, 2011). The Individual Assistance (IA) program includes financial assistance to disaster-affected individuals and is awarded in two ways:

* Housing Assistance, which provides both rental or shelter assistance for immediate housing needs and financial assistance to rebuild disaster-affected homes; and
* Other Needs Assistance, which provides financial assistance for other disaster-related costs such as medical or funeral expenses ("Assistance to Individuals and Households Fact Sheet," n.d.).

For applicants to be eligible for financial assistance, FEMA must issue a declaration for specific types of assistance (44 CFR § 206.40(a)), which include IA and various categories of PA. These declarations are made on a county-by-county basis (44 CFR § 206.40(b)). Therefore, in order for a local government to be eligible for PA, it must receive a PA declaration, and in order for individuals to be eligible to receive assistance through IA, their county of residence must receive an IA declaration. Note that FEMA often issues PA declarations at the state level so that state governments can be eligible for PA (44 CFR § 206.222), but there is no equivalent state declaration for IA.
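The county-by-county gating described above amounts to a simple lookup rule, which can be sketched as follows; the county names and declaration sets below are hypothetical and do not reflect any actual disaster declaration.

```python
# Hypothetical sketch of county-by-county eligibility gating under an
# IA/PA declaration regime (44 CFR § 206.40). Counties and declaration
# data are made up for illustration.

declarations = {
    "Boulder": {"IA", "PA"},   # county declared for both programs
    "Larimer": {"PA"},         # PA only: residents are not IA-eligible
}

def individual_eligible(county: str) -> bool:
    # Individuals may apply for IA only if their county of residence
    # received an IA declaration.
    return "IA" in declarations.get(county, set())

def local_government_eligible(county: str) -> bool:
    # A local government may apply for PA only under a PA declaration.
    return "PA" in declarations.get(county, set())
```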

FEMA considers a number of factors in deciding whether to issue declarations. One of the key information sources is the Preliminary Damage Assessment (PDA), a rapid assessment of impacted structures in the affected area (McCarthy, 2011). Until very recently, PDAs were completed by sending teams of individuals to the disaster-affected area to conduct a pen-and-paper count of the number of structures affected, damaged, and destroyed. Since Hurricane Sandy in 2012, there has been a movement to use aerial imagery to conduct faster, more extensive PDAs.

Once an IA declaration has been issued, FEMA must connect disaster survivors with assistance. To receive assistance, a survivor must first register with FEMA's system, the National Emergency Management Information System (NEMIS), providing information that determines whether or not the survivor will be eligible for particular programs. Once the survivor has registered, he must apply for each type of assistance. In most cases, disaster-specific algorithms built into NEMIS are used to determine whether a survivor is eligible for each type of assistance, and problems or disputes are handled by staff at FEMA's National Processing Service Centers (NPSC).

FEMA provides three ways for survivors to complete this process. The first way is to interface directly with the NPSCs via phone or internet. Survivors can call a FEMA hotline, which is typically well advertised after a disaster, and speak with an NPSC employee who will walk them through the registration process and, eventually, the application process. They can also register and apply online through disasterassistance.gov. FEMA receives the majority of registrations and applications through direct interface with the NPSCs.

The second way survivors can access FEMA services is through Disaster Recovery Centers, which are physical locations set up in disaster-affected communities. Disaster Recovery Centers are the subject of this research and are described in detail in Section 2.1. The third way survivors can access FEMA services is through Disaster Survivor Assistance (DSA) Teams, which are groups of FEMA personnel who canvass disaster-affected neighborhoods. DSA team members can walk survivors through the registration process using tablets or other mobile devices and refer survivors to the NPSCs or to the nearest DRC for further assistance.

2.1.2.2 Disaster Recovery Centers

A Disaster Recovery Center (DRC) is a physical location set up and operated within a disaster-affected community. DRCs can be open for as short as a week or as long as 18 months, and they can take one of two forms. A "fixed" DRC is set up inside a public building such as a local community center or a school. A "mobile" DRC can take a variety of forms, most frequently a tent or a trailer-sized vehicle. Fixed DRCs must be compliant with the Americans with Disabilities Act (42 U.S.C. § 12182(a)) and can typically house more services than mobile DRCs. Mobile DRCs have the benefit of flexibility but do not have sufficient room for survivors to wait indoors, so they are not always usable in inclement weather. Mobile DRCs also often have connectivity challenges beyond what fixed DRCs experience.

When a disaster survivor visits a DRC, his first interaction is with a greeter, a FEMA employee who interviews him to identify which services he might need. If he has not already gone through the formal FEMA registration process, he is directed to a bank of phones or computers by which he can register. After registration, he proceeds to each of the service stations (e.g. Individual Assistance) identified by the greeter and has one-on-one conversations about his needs and his options at each station. The entire process can take minutes or hours depending on the situation. In addition, when DRCs are busy, there may be a long wait before entering.
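Because visit times vary so widely, reasoning about DRC capacity (the subject of the staffing and demand analyses later in the thesis) reduces to simple throughput arithmetic. The sketch below is a hypothetical back-of-the-envelope version; the station counts, hours, and service times are illustrative assumptions, not FEMA planning factors.

```python
# Back-of-the-envelope DRC daily capacity: staffed service stations times
# the number of visitors each station can serve per day. All parameter
# values below are hypothetical illustrations.

def daily_capacity(stations: int, hours_open: float, avg_service_min: float) -> int:
    """Visitors a DRC can serve per day if every station stays busy."""
    return int(stations * (hours_open * 60) / avg_service_min)

# e.g. 6 stations, open 10 hours, 45 minutes per one-on-one conversation
cap = daily_capacity(stations=6, hours_open=10, avg_service_min=45)
```

If trending visitor counts fall well below a figure like this, the capacity is idle, which is exactly the condition the decision process uses to question keeping a DRC open.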

2.1.2.3 DRC Goals and Challenges

DRCs have two goals:

* Connect survivors with disaster assistance.

* Establish a presence in the disaster-affected community.

The first goal, connecting survivors with disaster assistance, aligns with the description in Section 2.1. DRCs provide a portal through which survivors can register and apply for assistance. Note that, unlike the NPSCs, DRCs serve to connect survivors with disaster assistance beyond what IA offers. DRCs aim to offer a one-stop shop for the government disaster assistance available to individual disaster survivors; as a result, they often are co-operated by the state government. They nearly always include the following federal services: Individual Assistance, Small Business Administration (which offers low-interest home loans to individuals), and Mitigation Assistance (which offers guidance on protecting a home from future damage). They often also include services such as crisis counseling, tax assistance, legal assistance, and other location- and disaster-specific services.

The second goal, establishing presence, is a much more nebulous concept. In opening a DRC, FEMA aims to communicate to the community that the government is there to help. Often, this is closely linked to media coverage, political pressures, and public perception. As a result, decision making around opening and closing DRCs can be contentious and focused on matters of perception rather than on efficient and effective resource allocation.

There is no detailed regulatory basis for DRCs. The relevant regulation occurs in the description of the responsibilities of the Federal Coordinating Officer (the federal representative with primary coordinating responsibility after a disaster):

"(2) In coordination with the [State Coordinating Officer], establish field offices and Disaster Application Centers as necessary to coordinate and monitor assistance programs, disseminate information, accept applications, and counsel individuals, families and businesses concerning available assistance (44 CFR § 206.42(a)(2))."

A Disaster Application Center is defined as "a center established in a centralized location within the disaster area for individuals, families, or businesses to apply for disaster aid" (44 CFR § 206.32(c)). This lack of specificity allows for a good deal of flexibility in DRC operations; it also means that there is no clear requirement for accountability, information tracking, or decision making.

In addition, although FEMA opens and operates DRCs, it does so at the request of the state or tribal government. The nature of the process by which the decision is made to open DRCs is highly dependent on the disaster and the states, tribes, and individuals involved; therefore it can be difficult to characterize the DRC decision making process and to understand, even in the midst of post-disaster operations, how specific decisions were made and by whom. Recognizing this challenge, we have developed the decision processes described below as a way for DRC Group Supervisors within FEMA to advise disaster-specific decision makers on the best approach to DRC decision making.

Finally, unlike most other federal post-disaster services, DRCs carry no cost-share requirement for the state or local government. Since state and local governments do not have to pay for DRCs, they have no incentive to close them when DRC visits fall.


2.2 Decision Making in Post-Disaster Operations

The next part of our research question is, how are decisions made in post-disaster operations? To answer this question, we first characterized post-disaster decision making based on the literature. We turned to the literature about decision making to understand general challenges associated with decision making under uncertainty and then examined disaster-specific challenges. Finally, we identified programs and tools to facilitate decision making, finding that although the need for evidence-based decisions and decision support systems has been documented, few efforts to define specific processes for evidence-based decision making have been published.

2.2.1 Characterizing Decision making in Post-Disaster Operations

Post-disaster decisions are characterized by distributed decision making, uncertainty, and high-pressure environments. Turoff et al. (2011) note that in many post-disaster environments, multiple entities are involved in decision making, though ultimately one person must make the final decision. This can lead to disagreement and to political, organizational, and personal disputes (Turoff, White, & Plotnick, 2011). Kowalski-Trakofler and Vaught (2003) note that post-disaster operations are highly stressful, as operations often involve the welfare of individuals. Contributing to the stress, post-disaster operations are dynamic and therefore subject to high levels of uncertainty. Under high stress, decision makers often fail to gather the right kinds of information, "which prevents them from making appropriate responses" (Kowalski-Trakofler & Vaught, 2003).

We examine these characteristics through the lens of general decision making and then identify specific challenges in post-disaster operations.

2.2.2 General Challenges Associated with Decision making under Uncertainty

The literature related to decision making under uncertainty is a very large body of work covering decades of behavioral and psychological research. The following articles highlight the aspects of this literature most relevant to this work, focusing on the nature of uncertainty and how decision makers cope with it.

Lipshitz and Strauss (1997) identify three types of uncertainty: inadequate understanding, incomplete information, and undifferentiated alternatives. They also outline five coping strategies decision makers use to address uncertainty:

* Reducing uncertainty, which can include collecting more information, deferring the decision until more information becomes available, and extrapolating from existing information.

* Assumption-based reasoning.

* Weighing pros and cons of competing alternatives.

* Suppressing uncertainty, which can include ignoring uncertainty, acting on the basis of "intuition", and taking a gamble.

* Forestalling, including improving readiness, preempting, and avoiding irreversible action (Lipshitz & Strauss, 1997).


Kahneman (2003) provides further insight into the behavior of decision makers, offering an alternative to the rational actor theory. He says, "The central characteristic of agents is not that they reason poorly but that they often act intuitively. And the behavior of these agents is not guided by what they are able to compute, but by what they happen to see at a given moment" (Kahneman, 2003). Other articles call out the role of emotions in decision making (Loewenstein & Lerner, n.d.).

Vessey (1994) discusses information presentation, finding that in the face of information overload, decision-makers tend to rely on gut instinct over data. She cites several studies in finding that providing more information and more flexibility to decision makers can actually hinder the decision making process, noting that "decision makers are prepared to forego some accuracy for substantial reduction in effort (Vessey, 1994)."

These findings helped to shape the ultimate outcome of this project: A clear, implementable decision making process for resource allocation. Resource allocation decisions in post-disaster operations must be based on incomplete information and are riddled with uncertainty. The goal of this project is to provide a basis for decision making that reduces uncertainty in productive ways without overloading the decision-maker.

2.2.3 Disaster-Specific Challenges Associated with Decision making

A general review of literature relating to post-disaster operations reveals common challenges relating to information collection, synthesis, and sharing in disaster response. Several works highlight the challenges of information sharing and sense making in the midst of chaotic post-disaster environments. Many point to both lack of information and information overload as major impediments to decision making. Many others describe the various technical and political challenges associated with information sharing.

Darcy et al (2013) apply much of the general literature to the humanitarian space, making the following key observations:

" "When faced with complex problems or incomplete information, rather than undertake taxing calculations, people tend to resort to simple educated guesses, 'rule-of-thumb' thinking or personal intuition." They note that overcoming these highly individualized heuristics very difficult.

* Decision-making is highly impacted by personality traits (Darcy, Stobaugh, Walker, & Maxwell, 2013).

Day et al (2009) summarize impediments to information flow which lead to uncertainty in decision making. Three particular challenges were relevant to our work:

" Inaccessibility, including the inability to obtain data or information that is known or assumed to exist;

* Inadequate stream of information, including too little or too much data/information available to an organization; and


* Source identification difficulty, including not knowing where to get wanted data or information (Day, Junglas, & Silva, 2009).

Aligne and Mattioli (2011) call out the importance of understanding the context of information, saying that the timeliness and relevance of information is as important as the information itself (Aligne & Mattioli, 2011). Kowalski-Trakofler and Vaught (2003) also highlight the importance of balancing reasoning and judgment in post-disaster situations, noting that reasoning-based decision making and intuition are both critical to success (Kowalski-Trakofler & Vaught, 2003).

Other authors expand on some of these challenges to suggest decision-support systems that "ensure that improvisation or creativity is allowed" (Turoff et al., 2011). They outline characteristics of a Highly Reliable Organization, noting that the structure and nature of an organization influences its ability to produce high-quality decisions in a post-disaster context (Van de Walle & Turoff, 2007). Although the nature of the organization making decisions is out of the scope of this thesis, this point is an important consideration in developing processes intended for use in an emergency context. This is one of several reasons that we included other factors in the final decision making process described below.

2.2.4 Evidence-Based Decision Making

Much of the literature related to measuring decisions concerns measuring the performance of programs; in other words, it focuses on the outcomes of decisions rather than on how to make the decisions themselves. Much of this literature focuses on the development of indicators for performance measurement. The development of indicators combines many of the strategies discussed in both the disaster literature and the decision literature. It requires some assumption-based reasoning in identifying a metric or a relationship that can describe a particular phenomenon. It also requires significant prioritization, as some information is used to calculate indicators and some is not. In addition, it reduces a potentially large body of disparate information into manageable pieces.

Dijkzeul et al (2013) summarize methods for collecting disaster-specific evidence, noting that all methods focus on some or all of the following goals:

1. Determining whether a need exists,

2. Determining whether a response will be effective,

3. Determining whether a particular form of a response is most appropriate, and

4. Understanding the goals, norms, and values associated with humanitarian crises (Dijkzeul, Hilhorst, & Walker, 2013).

Pedraza-Martinez et al (2013) expand on the challenges associated with using academic methods to collect data and to develop indicators for humanitarian practice. They call out the differences in the approaches that academics and practitioners typically take, noting that practitioners' goal is to solve the problem in front of them whereas academics' goal is to further knowledge and understanding. This difference often causes difficulties collaborating in the field. Pedraza-Martinez et al point to the combination of quantitative and qualitative methodologies, most notably collecting and applying the input of practitioners in the development of academic approaches, as an important way to address this challenge. In developing our decision process, we were careful to follow this recommendation, consulting with program staff throughout the project (Pedraza-Martinez, Stapleton, & Van Wassenhove, 2013).

Other fields offer insights that are highly instructive to post-disaster indicator development. Mainz (2003) outlines the key characteristics of an ideal indicator for clinical healthcare, calling out clarity, relevance, and a strong evidence base (Mainz, 2003). Hagan and Whitman (2006) call out the top reasons indicators for biodiversity fail, many of which mirror the post-disaster challenges described above. In particular, they focus on lack of clarity, relevance, or immediate applicability as points of failure (Hagan & Whitman, 2006).

Only recently has there been a shift toward evidence-based approaches to facilitate making post-disaster decisions themselves in addition to understanding the outcomes of decisions.

Bradt (2009) describes the beginnings of evidence-based decision making in medicine and in public health, showing that evidence-based decision making, including gathering and prioritizing data inputs, has improved those fields and could be beneficial in humanitarian assistance. He notes an important difference: "However, evidence-based medicine affirms the ascendancy of evidence-based judgments over personal judgments regardless of how eminence-based they may be. By contrast, humanitarian assistance continues to rely heavily on eminence-based decisions." He further quotes the Humanitarian Response Review of 2005: "the international humanitarian coordination system works by goodwill and consensus and depends too often on the authority and skills of HCs [humanitarian coordinators]. While its role has to be maintained and reinforced, there is also a need to make progress in designing a more explicit model where sector operational accountability will be clearly identified at the level of a designated organization, following standards to be agreed upon. Responsibilities to be covered under such a model are: (a) planning and strategy development, (b) standard-setting, (c) implementation and monitoring, (d) advocacy (Bradt, 2009)."

Darcy et al (2013) echo these issues, noting that the pressure to show that decisions in humanitarian response are grounded in evidence has grown in recent years. They add that information presentation is crucial: information must be presented in clear, concise ways to be usable and useful, and one way to overcome individual biases is to create a clear, evidence-based case for decision making (Darcy et al., 2013).

2.3 Refining Post-Disaster Decisions over Time

Program monitoring and evaluation has long been a component of implementation. Crawford and Bryce (2003), Maxwell and Watkins (2003), and Roberts and Hofmann (2004) offer representative descriptions of programs that can be used to assess whether a program is meeting the needs of its intended beneficiaries (Crawford & Bryce, 2003; Maxwell & Watkins, 2003; Roberts & Hofmann, 2004). Many of these works describe the use of indicators and evidence to track the progress of a program. However, this body of work does not address the act of revising decisions based on new or changing information. In fact, very little literature is available to inform the way decisions should be refined over time in post-disaster operations.


3 Research Design

We set out to address the challenges identified above by developing a clear, data-based decision process. Our goal was to develop a process that would supplement field experience and provide decision-makers with a basis upon which to make decisions. This section describes the approach we took to do so, beginning with our research question.

A note: throughout this thesis, we use the term "methodology" to refer to the approach we took, including the frameworks and iteration process it included. We use the term "process" to refer to the outcome of the methodology, i.e. the step-by-step procedure we recommend for decision-making.

3.1 Research Question

Having established the need for better decision-making for DRCs, the central research question of this thesis is as follows:

How can available historic, initial, and trending data be leveraged to make and refine DRC resource allocation decisions over time?

This question has four key parts. First, we begin with "DRC resource allocation decisions." For DRCs, the three primary resource considerations are facilities, staff, and equipment. Facilities are the physical structures in which DRCs are housed, including buildings such as schools and community centers as well as parking lots and other locations at which tents and RVs can be set up. Staff are the FEMA personnel responsible for running the DRCs and interacting with disaster survivors. Equipment includes the signage, IT equipment, and communications equipment required to operate the facility; only the phones and computers required to complete registration are included in the resource optimization below, though the costs of all equipment are accounted for in cost determinations. Note that our question focuses on DRCs only; however, the methodologies we describe could be equally useful for resource allocation decisions for other post-disaster programs.

The second part of the question is how decisions should be made. We needed to develop ways to include quantitative analyses as part of the existing, qualitative decision process. This included identifying specific quantitative relationships to estimate demand or capacity, developing numerical thresholds against which to compare the resulting indicators, and developing optimization models for more complex questions. In this work, we use "quantitative relationship" to mean a relationship estimated from empirically observed data.

The third part of the question is critically important: "to refine those decisions over time." Every disaster is different, and it is impossible to make one decision immediately after the disaster occurs that will be appropriate for the remainder of operations. The situation changes rapidly and information evolves and improves over the course of operations. Therefore any useful post-disaster decision process must involve the flexibility to revise and refine decisions over time.

Finally, we look to available historic, initial, and trending data as a basis for making and refining these decisions. We are interested in making use of the best available data for decision making, including data from past disasters, data available immediately after the disaster occurs, and data collected in the course of post-disaster operations. We discuss the historic-initial-trending (HIT) framework in more detail below.

Our goal in answering this question is to make efficient decisions. For the purposes of this project, we define an efficient outcome as one that aligns the available DRC capacity to the demand (DRC visitors) as closely as possible. We make the assumption that adequate capacity is equivalent to an adequate level of service for visitors. As we discuss in several places throughout this research, there are many future data collection and research efforts that could improve our understanding of capacity and level of service.
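To make this definition of efficiency concrete, one simple alignment measure would be the mean absolute gap between daily capacity and daily visitors. The following sketch illustrates the idea; the function name and all numbers are hypothetical, not drawn from FEMA data or policy:

```python
def capacity_demand_gap(capacity, demand):
    """Mean absolute gap between daily DRC capacity and daily visitor counts.

    Smaller values indicate capacity more closely aligned to demand.
    Inputs are parallel lists of daily values; all figures here are
    hypothetical illustrations, not FEMA data.
    """
    if len(capacity) != len(demand):
        raise ValueError("series must cover the same days")
    return sum(abs(c - d) for c, d in zip(capacity, demand)) / len(capacity)

# Capacity held flat at 200 visitors/day against declining demand:
gap = capacity_demand_gap([200, 200, 200], [240, 180, 90])
```

A measure like this rewards both scaling up when demand spikes and scaling down as demand declines, which is the behavior the decision process aims to produce.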

3.2 Approach

To answer this question, we used a five-step approach. Although we discuss each of these steps individually, the process was iterative and required us to revisit each step at least once. The five steps, each discussed in more detail below, are as follows:

1. Identify decision points for DRC resource allocation;

2. Develop quantitative relationships to describe demand or effectiveness for each decision;

3. Develop numerical thresholds to describe the target outcomes of decisions;

4. Develop optimization models for complex questions;

5. Develop the process by which the relationships, thresholds, and models should be applied to make decisions.

Note that steps 2, 3, and 4-developing relationships, thresholds, and models-were closely linked. Therefore, we discuss those three steps together below.

We relied heavily on analysis of historical disasters to complete and revisit each step. We used data from recent disasters (see Section 3.4) to evaluate the outcomes of historical decisions, to identify trends and gaps, and to determine relationships and thresholds. Many of the results of this analysis are included in Section 4; many others were interesting but did not bear directly on our final process and so are discussed in Appendix B.

One key part of the process of analyzing historical outcomes was reviewing the outcomes with the DRC Team. We interpreted each analysis to reach conclusions about why certain things happened the way they did, or to infer the existence of relationships between two things. It was important to ground these conclusions and inferences in field experience and deep program knowledge. In many cases the DRC Team offered richer inferences or more realistic explanations for various phenomena because they could compare the trends and outcomes we presented with their own experience in the decision-making process. This partnership between our academic analysis and the DRC Team's collective experience was critical to effectively interpreting and applying the results of the historical analysis.

3.2.1 Decision Points for DRC Resource Allocation

The critical first step in identifying indicators, including relationships, thresholds, and optimization models, was to identify the decisions that needed to be made. Starting with the decisions is important because it required us to look at every piece of information and every particular indicator and ask, "How will this help make decisions?" The decisions are the lens through which all analysis and all indicator development was completed.

Through discussions and qualitative interviews with DRC team staff, we initially identified the following five decision points for DRC resource allocation:

1. Should DRCs be opened?

2. What services should be available within DRCs?

3. How many and which types of DRCs should be opened?

4. How should DRCs be staffed?

5. When should DRCs be closed?

We analyzed each decision individually, beginning with the outcomes in historical disasters and then going deeper into trends and relationships. As we began to understand the data, we made the following observations:

* The five services that are, by policy, always offered inside a DRC (Registration, FEMA Housing, Other Needs Assistance, Hazard Mitigation, and Small Business Administration) are also the most popular services.

* Other services vary widely from disaster to disaster and from DRC to DRC. Very little information was available about which other services were offered inside DRCs and how those other services were staffed.

These two observations led us to reconsider our approach to the second decision we identified. Because our analysis showed that the five services that are always available are also the most popular, we did not need to spend time developing decision points for them (although there is likely some causality between the fact that the services are always available and the fact that they are the most popular). And without additional information about the other services, we did not have much to work with to develop relationships and thresholds for those services. Therefore we proceeded with the assumption that at least the five policy-mandated services would be available and focused our efforts on the other four decisions.

Additionally, as we began to identify relationships and thresholds for each of these decisions, we began to realize that many of the same indicators could be used to inform multiple decisions. As we proceeded through our approach, it became clear that the five decisions we initially identified coalesced into two general decisions:

1. Opening and closing DRCs

2. Staffing and equipping DRCs

The decisions to open and close DRCs rely on comparing some estimate of demand (i.e., the expected number of visitors) to some threshold below which it is not worth operating a DRC. These decisions therefore require the development of a threshold and a way to estimate demand, initially and over time.
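The structure of this comparison can be sketched in a few lines. The function name and the default threshold are purely illustrative, not FEMA policy; an actual threshold would be derived from historical data as described in Section 4:

```python
def should_operate_drc(expected_daily_visitors, min_visitors_threshold=50.0):
    """Return True if estimated demand justifies operating a DRC.

    The default threshold of 50 visitors/day is a placeholder;
    a real threshold would be set from historical data.
    """
    return expected_daily_visitors >= min_visitors_threshold
```

The same comparison applies at both ends of the lifecycle: opening when expected demand first clears the threshold, and closing when trending demand falls below it.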


The decisions to staff and equip DRCs rely on an understanding of the throughput capacity of a DRC staffer or a piece of equipment (computer or telephone) in addition to an understanding of expected demand. These decisions therefore require demand estimates similar to those for the first set of decisions. They also require some way to estimate how many visitors one staffer can assist in a given period of time.
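Given a per-staffer throughput estimate, the staffing question reduces to a ceiling division. A minimal sketch follows; the per-staffer throughput would in practice be estimated from historical visit-duration data (such as the Hurricane Sandy visit records), and the example values here are placeholders:

```python
import math

def staff_required(expected_daily_visitors, visitors_per_staffer_per_day):
    """Minimum staffers whose combined daily throughput covers expected demand.

    The throughput parameter is a placeholder; real values would be
    estimated from historical visit-duration data.
    """
    if visitors_per_staffer_per_day <= 0:
        raise ValueError("throughput must be positive")
    return math.ceil(expected_daily_visitors / visitors_per_staffer_per_day)

# e.g., 300 expected visitors at 25 visitors per staffer per day -> 12 staffers
```

The same calculation applies to registration equipment by substituting the throughput of a phone or computer for that of a staffer.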

3.2.1.1 Decisions and Decision Makers

One challenge we faced in identifying these decisions was the nature of the decision making process in operations. As we noted in Section 2.1, the individual(s) making DRC-related decisions varies from disaster to disaster, and different individuals or entities could be responsible for different decisions. Over the course of many discussions with the DRC Team, we determined that, although the decision makers themselves may differ, the nature of the decisions related to DRC resource allocation is the same from disaster to disaster. In other words, regardless of whether it is the FCO, the SCO, or someone else making the decision to open DRCs, the decision whether to open DRCs must be made. The decisions are independent of the individual making the decision; therefore the decision process should be independent of the individual, organization, or disaster.

Further, in operations, there are often FEMA analysts at headquarters, in a Regional office, or in a Joint Field Office (JFO) making recommendations to the FCO or SCO. In the DRC CONOPs, this person is referred to as the DRC Group Supervisor. The DRC Group Supervisor does not, himself, make decisions; however he provides analysis and recommendations to the decision maker. We therefore decided to develop this process for use by a DRC Group Supervisor, allowing us to identify and analyze decisions independent of the decision maker.

3.2.2 Development of Relationships and Thresholds

As we analyzed decisions, we identified data points that we could use both to evaluate decision outcomes and to make decisions. We identified demand-side data points (how many visitors do we expect?) as well as supply-side data points (how many visitors can we serve?). These data points enabled us to identify indicators, in particular those that could help us estimate the expected number of visitors and those that could help us understand the throughput capacity of staff in a DRC.

Once we had identified relationships, we could ask the questions that led to thresholds. These included minimum and maximum capacity thresholds: What is the minimum number of visitors that justifies opening a DRC? What is the maximum number of visitors a DRC can serve before we must staff up or open another DRC?

As we identified and developed these relationships and thresholds, it became clear that the data available for decision making falls into three broad categories:

* Historical data is based on past disasters or existing data sets (such as census data). Historical data is considered static during a post-disaster operation but is continually revised between disasters.

* Initial data is data collected once at the onset of a disaster (such as PDAs).

* Trending data is data collected over the course of a post-disaster operation (such as daily DRC visits).


This Historical-Initial-Trending (HIT) framework proved very useful in laying out indicators, including relationships, thresholds, and optimization models, and in understanding which analyses could be completed in advance and which must be disaster-specific. We used historical data to set thresholds and define relationships. We used initial and trending data in those relationships to determine the values that we would compare against the thresholds.

3.2.3 Decision Process

Once we developed the indicators themselves, we began to fit the decisions into an overarching decision process. This process describes how to apply the relationships and thresholds developed above, including the specific data points needed to calculate each indicator. Critically, the process also includes review periods, in other words the frequencies with which each decision should be revisited. This is important for several reasons. First, initial data is often incomplete and difficult to come by; therefore initial decisions must be revisited as new and better information becomes available. Second, post-disaster circumstances often change rapidly in the first few weeks, and new developments could impact the way decisions should be made. Third, there is no guarantee that the first decision or decisions will be "right"; therefore there must be a mechanism by which wrong decisions are corrected.

We formulated a process and reviewed it in detail with the DRC Team to ensure that the resulting process was (1) feasible in real-world disasters and (2) reflective of the timeline of post-disaster operations. This involved identifying points at which other factors might come into play or even supersede quantitative decisions and building enough flexibility into the model to allow for that. The resulting decision process is described in Section 4.

Section 4 is also designed to inform the development of a decision tool. A preliminary tool might be based in Microsoft Excel with potential future development in a standalone platform.

3.3 Outcomes of this Work

The outcome of this project is a data-driven process for making decisions related to opening, staffing, and closing DRCs. "Data-driven" here means based on the available historical, initial, and trending data. The process is based on a series of indicators in the form of relationships, thresholds, and models. The indicators and the decision process are intended to be used to make initial decisions and then to revise those decisions over time.

It is not the intent of this work to suggest that data or indicators can replace experience and instinct. Rather, the purpose of this work is to provide a meaningful basis for decision making that enhances experience and instinct. For this reason, all analysis was done in close coordination with subject matter experts and stakeholders.

3.4 Data

We were interested in analyzing data from a range of disasters; we also wanted our reviews with the DRC Team to include some disaster-specific discussions. Therefore we selected recent disasters with available data.


After DR-4145, the September 2013 Colorado flash floods, FEMA opened 26 DRCs and collected the best and most complete DRC data in recent memory; therefore Colorado was the primary disaster for analysis. Similar but less complete data existed for the following disasters:

* May 2013 tornadoes in Moore, Oklahoma (DR-4117)

* May 2013 Yukon River floods in Alaska (DR-4122)

* May 2013 floods in Illinois (DR-4116)

* December 2013 tornadoes in Illinois (DR-4157)

We used data from these disasters to supplement and enhance indicator development as possible. Prior to 2013, DRC data was collected in FEMA's RIMS system, which was cumbersome and did not allow for easy access to DRC-specific data. Therefore we focused on disasters occurring in 2013, as recent improvements in data collection allowed us to access and analyze data.

We used the following disaster-specific data sets:

* DRC Identifiers: Daily reports in Microsoft Excel including DRC types, locations, IA staff numbers, total visitors, and other identifying information.

* DRC Daily Activity Logs: Daily tallies in Microsoft Excel listing the number of visitors to each service within each DRC.

* NEMIS Registrations: Outputs from NEMIS listing the daily registrations received by county.

* PDAs: GIS files showing the number and locations of affected structures.

* RIMS Data: Historic outputs from the old system used to track DRC data. Outputs included a list of DRCs opened during past disasters and the durations open. Outputs also included the duration of individual DRC visits during Hurricane Sandy.

* Census Data: Publicly available information from the US Census including population density, household size, population size, and county classification.

The following table summarizes the data we used for each disaster.

Table 3-1: Data Used

Disaster   Daily Activity Logs   DRC Identifiers   Registration Data by County (days)
4122-AK            28                  13                      29
4145-CO            95                 117                      81
4116-IL            57                  30                      30
4157-IL            42                  65                     218
4117-OK            35                  39                      29
TOTAL             257                 264                     191

In addition, we had nearly 100,000 data points for DRC visits over 6 months after Hurricane Sandy. Detailed information about each data set is included in Appendix A.


4 Decision Process

In this section, we lay out the DRC resource allocation decision process. This section comprises the primary outcome of this project: A decision process for opening, staffing, and closing DRCs. We begin with a flow chart that provides an overview of the process (4.1), and revisit the HIT framework presented in Section 3. After defining the notation we use throughout the rest of the section (4.2), we present the details of each analysis that comprises the process, including the data and methodologies used (4.3). We present a step-by-step application of the decision process (4.4), then provide some additional notes (4.5) and a discussion of future work (4.6).

4.1 Overview

We introduce our decision process with a flowchart that includes five primary decision points. The process includes two types of decisions: (1) comparisons of relationships against thresholds (denoted by diamonds) and (2) optimizations (denoted by rectangles). Decisions are made at the county level (denoted by blue), meaning the decision is made for the entire county, or at the DRC level (denoted by orange), meaning the decision is made for each DRC. Each of the following five analyses is discussed individually and in detail in Section 4.4:

1. Analysis 1: Does expected demand justify opening DRCs?

2. Analysis 2: Determine the number and types of DRCs required in each county.

3. Analysis 3: Determine the registration equipment required in each DRC.

4. Analysis 4: Determine staffing and hours required at each DRC.

5. Analysis 5: Is trending demand at or above minimum capacity?

In addition, the process requires a review of other factors at two points. These points allow for the review of qualitative considerations, disaster-specific considerations, and other factors not otherwise included in the process. The consideration of other factors is discussed in Section 4.3.6.
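To show how the pieces fit together, one county-level pass through the process can be sketched as follows. The function, its parameters, and the numbers in the test values are simplified stand-ins for the full analyses of Section 4.4; the "other factors" review points appear only as comments:

```python
import math

def run_review(expected_daily_visitors, min_visitors_threshold,
               visitors_per_staffer_per_day):
    """One simplified county-level pass through the decision process.

    A stand-in for Analyses 1-5; the real process uses the
    relationships, thresholds, and optimization models of Section 4.
    """
    decision = {"open_drcs": False, "staff": 0}
    # Analysis 1 (initial pass) / Analysis 5 (later passes): does
    # expected or trending demand meet the minimum threshold?
    if expected_daily_visitors < min_visitors_threshold:
        return decision  # subject to a review of other factors
    decision["open_drcs"] = True
    # Analyses 2-4 (simplified): size staffing to expected demand.
    decision["staff"] = math.ceil(expected_daily_visitors
                                  / visitors_per_staffer_per_day)
    return decision

# Later passes repeat the review with trending data in place of initial
# estimates, mirroring the daily/weekly iteration shown in the flowchart.
```

In the full process, each pass also chooses among DRC types and equipment levels, and the review of other factors can override a purely quantitative outcome.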


[Figure 4-1: Decision Framework for DRC Resource Allocation Decisions. Flowchart (not reproduced here) with two entry points, "Initial Decision: Start Here" and "Daily/Weekly Review: Start Here," showing the county-level and DRC-level decisions, including determining staffing and hours for each DRC and checking whether trending demand is at or above the minimum.]

4.1.1 The Role of Time in the Decision Process

Figure 4-1 includes two temporal elements. First, it specifies one starting point for the initial decision and a second starting point for daily and weekly review. Second, it specifies iteration, in other words the need to revisit decisions multiple times. In this section, we discuss the role of time in the decision process as it relates to the availability of data and the frequency with which decisions should be revisited.

As we discuss in Section 3, three types of data are available during post-disaster operations: historic data from past disasters, initial data from the first day or days, and trending data over the course of operations. The availability and nature of these three data sets necessitate the consideration of three decision periods: the initial decision, the first two (or more) weeks, and subsequent weeks.

Initial data is the only basis we have to make decisions in the immediate aftermath of a disaster, and we can combine it with historical data to estimate needs. However, initial and historical data form an imperfect basis for decisions. Historical data may not be directly relevant to the type and location of the disaster in question, and therefore the relationships developed based on historical data may not be highly accurate. Refining and revising the historic relationships over time will help to improve this issue, but all disasters are unique, and therefore historical relationships will never tell the whole story. Initial data is gathered with the goal of speed rather than accuracy. As a result, it can paint a skewed or incomplete picture of the disaster. We must allow for the incorporation of more complete trending data into the decision process. Therefore, we must distinguish between the initial decision and later decisions.

In addition to the distinction between the initial decision and later decisions, we must distinguish between decisions made in the first weeks of operation and decisions made afterward. We do this for two reasons. First, the situation changes rapidly in the first weeks after a disaster: an incident might continue to unfold, or there might be a second incident such as an aftershock or another storm; survivors' patterns or behaviors may change, particularly if shelters are involved; and political or organizational factors might change the nature of the response. Second, initial estimates, again gathered with the goal of speed rather than accuracy, could vastly under- or over-estimate the number of affected individuals or the scale of the damage. To account for this uncertainty, we specify that decisions should be revisited and refined on a daily basis for the first weeks of operation. Based on our analysis of historical disasters, DRC operations tend to stabilize after the first two weeks, so we take two weeks as the basis for the daily review period. However, two weeks is a heuristic, not a rule; many factors may keep operations from stabilizing within the first two weeks. Therefore we specify that the second review period is the first two weeks (or more), and judgment should be used to determine whether this period should extend beyond two weeks.

After the first two (or more) weeks, when operations stabilize, daily reviews are no longer necessary. Decisions should be reviewed regularly, but on a weekly basis rather than a daily basis.
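The cadence described above can be expressed as a simple scheduling rule. The sketch below is illustrative only: the function name, parameters, and two-week default are our own framing of the heuristic, not part of any FEMA procedure, and the `operations_stable` flag stands in for the judgment call the text describes.

```python
from datetime import date

def review_cadence(start_date: date, current_date: date,
                   operations_stable: bool = True,
                   daily_review_weeks: int = 2) -> str:
    """Illustrative rule for how often DRC allocation decisions are revisited.

    Day 0 is the initial decision, based on initial and historical data.
    For the first two weeks (or longer, if operations have not stabilized),
    trending data supports a daily review; afterward, review is weekly.
    """
    days_elapsed = (current_date - start_date).days
    if days_elapsed == 0:
        return "initial decision"
    # The two-week threshold is a heuristic; judgment extends the daily
    # period when the situation remains dynamic (e.g., a second storm).
    if days_elapsed <= daily_review_weeks * 7 or not operations_stable:
        return "daily review"
    return "weekly review"

# Ten days after the disaster, still inside the daily-review window:
print(review_cadence(date(2014, 5, 1), date(2014, 5, 11)))  # → daily review
```

Encoding the rule this way makes the heuristic explicit and auditable: the threshold is a named parameter rather than an unstated assumption, and the stability override preserves the role of judgment.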

Trending data allows us to revisit our assumptions and revise decisions over time. In the first two weeks (or more, if the situation is particularly complex or dynamic), evaluating trending data on a daily basis is necessary.

