Concept Paper

A Pathway to Linking Risk and Sustainability Assessments

by Stephen H. Linder 1,*,† and Ken Sexton 2,†
1 Institute for Health Policy, Division of Management, Policy and Community Health, The University of Texas School of Public Health, 1200 Pressler, E-1023, Houston, TX 77030, USA
2 Division of Epidemiology, Human Genetics and Environmental Sciences, The University of Texas School of Public Health, Brownsville Regional Campus, Fort Brown Road, RAHC, Brownsville, TX 78520, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Toxics 2014, 2(4), 533-550; https://doi.org/10.3390/toxics2040533
Submission received: 22 November 2013 / Revised: 20 August 2014 / Accepted: 14 October 2014 / Published: 28 October 2014
(This article belongs to the Special Issue Risk Assessment of Environmental Contaminants)

Abstract

The US National Research Council recently released a report promoting sustainability assessment as the future of environmental regulation. Thirty years earlier, this organization (under the same senior author) had issued a similar report promoting risk assessment as a new method for improving the science behind regulatory decisions. Tools for risk assessment were subsequently developed and adopted in state and federal agencies throughout the US. Since then, limitations of the traditional forms of risk assessment have prompted some dramatic modifications toward cumulative assessments that combine multiple chemical and non-chemical stressors in community settings. At present, however, there is little momentum within the risk assessment community for abandoning this evolved system in favor of a new sustainability-based one. The key question is, how best to proceed? Should sustainability principles be incorporated into current risk assessment procedures, or vice versa? Widespread recognition of the importance of sustainability offers no clear guidance for the risk assessment community, especially in light of institutional commitments to sustainability tools and definitions that appear to have little in common with cumulative risk notions. The purpose of this paper is to reframe the sustainability challenge for risk assessors by offering analytical guidance to chart a way out. We adopt a decision analysis framework to overcome some conceptual barriers separating these two forms of assessment, and thereby, both escape the either/or choice and accept the inevitability of sustainability as a central regulatory concern in the U.S.

1. Introduction

The paradigm for assessment and management of environmental health risks has evolved since the early 1980s in response to newer, more complicated problems, advancements in scientific knowledge, and enhanced analytical methodologies. This evolution can be traced through a series of influential public-sector reports, starting with the 1983 National Research Council’s (NRC’s) report, Risk Assessment in the Federal Government: Managing the Process (known as the “Red Book”), which established the foundation for the risk assessment-risk management (RA-RM) paradigm as we know it today [1]. At approximately the same time, an independent evolution was taking place in the concept of “sustainable development” (or “sustainability”) and its application to decision making. Expansion of the RA-RM paradigm to include cumulative risk and inclusion of sustainability in policy decision-making have proceeded along parallel but separate tracks in the U.S. But in 2011, the NRC published Sustainability and the U.S. EPA (known informally as the Green Book), which proposed an operational structure for integrating sustainability into decision making at the U.S. Environmental Protection Agency (USEPA), and examined how the conventional RA-RM paradigm could be integrated into this new sustainability framework [2]. While there have been scattered attempts in the past to reconcile environmental risk management and sustainability as complementary or overlapping concepts [3,4,5,6,7,8], including efforts to develop new approaches and processes [9,10,11,12,13,14,15,16,17,18], there are few examples of applications that juxtapose the distinct methodologies for sustainability evaluation and risk assessment/management [19,20,21,22].
Strategies for bringing risk and sustainability assessments together have so far specified three approaches: let the risk assessment process subsume sustainability as a subsidiary concern (the unofficial USEPA position); have a sustainability-centered process incorporate risk as an added concern (the NRC version); or keep the two separate and accommodate their divergent results in management-focused deliberations (typical assessment practice). We argue for a fourth alternative: that the two forms of assessment—for risk and for sustainability—are, for parity’s sake, best combined into a joint methodology that is both transparent and relatively simple to operate. The plausibility of such a joint-assessment strategy is bolstered by several recent developments (occurring in parallel) that, arguably, have brought the two forms closer together.
On the risk side, assessment methodology has been extended to include cumulative impacts on human health, whose characterization relies on sets of empirical indicators [23,24,25,26]. The role of indicators in sustainability assessment is well established and is growing more prominent as an orthodox approach, especially in community settings [27,28,29,30]. Although the particular indicators selected as contributors to health risks vary widely, they are typically drawn from across the three domains that are common concerns in sustainability considerations—the economic, the social and the environmental. Secondly, the once solid separation of risk assessment from risk management considerations—a key provision in the RA-RM paradigm—has been worn away in practice and subject to recent weighty criticism by the NRC [2]. Part of the impetus for this change was the incorporation of environmental justice into federal regulatory decision-making [31]. An environmental justice movement in the U.S. has coalesced around “bottom-up” input into agency decision making, mobilizing communities to be involved in both management and assessment. The sustainability literature has a parallel notion of integrated assessment, where top-down expertise can be combined with bottom-up, or local experiential, input from those potentially affected [27,32]. These are substantial areas of agreement, despite their separation in different scholarly and professional communities.
Certainly, differences remain. Sustainability considerations are typically framed by Three Pillars (economic, social and environmental) and assume that tradeoffs among them are an essential feature of every assessment. A prominent family of assessment methods employs multi-criteria decision analysis (MCDA) to structure and quantify these tradeoffs [33,34,35,36]. In contrast, MCDA has played little or no role in cumulative risk assessment. More importantly, the basic logic of tradeoffs for sustainability is replaced by the idea of risk thresholds and ordering principles, as we will see. Accordingly, the issue of how and when (or whether) to employ tradeoffs represents a key challenge to developing a common method of assessment.
Our overall intent is to help achieve parity between the assessments of risk and sustainability by supporting their eventual consolidation into a joint method. We begin by taking on some of the existing conceptual barriers that have kept them separate. Given the magnitude of this task, we limit our attention to three. First, the current definition of sustainability endorsed by U.S. federal agencies and the NRC frames assessments in a relatively narrow way. A broader approach, spanning more content domains and admitting qualitative metrics, can better accommodate the range of considerations in cumulative risk assessment. Second, although cumulative risk assessment continues to evolve, it does so in a fragmentary way, without benefit of a unifying decision-analytical framework. We propose an adaptation of MCDA as a means of bridging risk and sustainability assessments. Third, the use of indicator sets in sustainability assessment offers a useful direction for not only expanding measures of cumulative risk, but also establishing shared measures. We consider each of these in turn.

1.1. Expanding the Definition of Sustainability

There is a substantial literature on the various meanings of “sustainability” and how these influence its operational definition and assessment. The UN’s Earth Summit in Rio de Janeiro in 1992 set the stage with Agenda 21, a foundational text informing much of the subsequent discussion [37]. A wide spectrum of approaches appear there in nascent form. At one end of the spectrum, the definitional issue is taken as settled. The US EPA’s treatment of sustainability [38] is a case in point: “A sustainable approach is a system-based approach that seeks to understand the interactions which exist among the three pillars (environment, social, and economic) in an effort to better understand the consequences of our actions.” The assessment phase can then proceed to optimize these three pillars and make tradeoffs, without a look back at alternative definitions or less formalized approaches. The Brundtland Commission of 1987 is invoked, and the three pillars notion from the Johannesburg Declaration in 2002, built on Agenda 21, structures the content.
At the other extreme, stipulated definitions of sustainability are viewed skeptically, as unremittingly political and, therefore, contestable. Each definitional choice is understood as a marker for unspoken value commitments and political or economic world views [39]. From this standpoint, rather than being settled ahead of time, defining the meaning of sustainability remains a critical task to be accomplished anew in each problem context through a consensus-building process of stakeholder negotiation. The specifics of what gets assessed and how the assessment is accomplished, then, depend on the outcome of a deliberative process aimed at reaching definitional agreement. This effectively moves stakeholder interaction from the implementation to the formulation phase—at odds with the stakeholder role suggested in the Green Book [2] and implicit in the US EPA definition—and opens more parameters to scrutiny and local input.
This contrast extends beyond whether defining sustainability comes at the beginning or closer to the end of the assessment process. It includes: what the indicators and their metrics will measure, and who defines them; whether tradeoffs will be made to reach an overall value for each option under consideration; and what subject domains will be represented. For our purposes, we will sketch two general families of sustainability approaches that fall on opposite ends of the definitional spectrum. The first, more traditional one, is associated with the three pillars metaphor and is well-represented among federal agencies in the US. The second assumes a reflexive approach and effectively builds a definition through the assessment process. Early work on this approach was supported by the Canadian Environmental Assessment Agency [40]. A more-elaborate, contemporary version comes from the Cities Program of the UN’s Global Compact, an initiative for enhancing social accountability [27,41]. The Cities Program replaces the three pillars with the notion of thematic circles to capture a wider range of locally-defined concerns [42]. We argue that this latter approach has closer parallels to current practices in cumulative risk assessment.

1.1.1. Three Pillars

The Three Pillars notion of sustainability traces its origins to the international development community and its intent to usher in a new era of post-colonial sensibility regarding human and natural capital. From the rationale employed, it was understood to serve a reparative purpose—as a corrective against past failings—and did so by introducing an alternate set of social and ecological goals designed to counter adverse economic impacts. The attempt, on the surface, was to establish parity among these goals, establishing each as a necessary “pillar” and requiring all three (social, ecologic and economic) to support more balanced forms of development. The first institutional setting, beyond the UN, where this version of sustainability and its compensatory framing played a central role was the World Bank, which has supported the refinement and application of these ideas in a series of papers published throughout the 1980s [43].
The practical application of the Three Pillars definition can be found in the sustainability accounting reports of Executive Branch agencies in the U.S. (e.g., Department of Energy [44]) and more generally in the guidance documents issued by the Global Reporting Initiatives Project [45]. This definitional account ties sustainability to fiduciary notions of responsible accounting, emphasizing values of conservation and conscientious management, as opposed to the peremptory pursuit of ecological or social protections. All three domains (ecological, social and economic) are present but their contents change, as does the focus. Rather than overlapping collections of distinctive kinds of activities, functions or relations, domains become figurative locations for stocks of resources to be managed, conserved and controlled. Sustainability shifts from signifying an inclusive or protective status to being a register of managerial efficiencies.
In place of discrete systems within domains, we have a superordinate system that includes all three domains as constituent parts. Each domain is valued for its instrumental contribution. That is to say, its stocks are to be utilized. And if these stocks are utilized in well-managed ways, producing little waste, then overall the system’s production value is optimized. Although conventional models of economic production influence the metaphors used here, the values in this version of sustainability reinforce the ethical import of how ecological resources should best be managed relative to other resources. The valued products are those whose production, use and disposal incur the least wasted resources across the system. Once again, monetary valuations reappear; but this time, their purpose is not to foster compensatory adjustments and tradeoffs. These valuations are assigned to various components to permit their aggregation as net cost savings and returns on investment.
The federal government’s accounting version moves the focus from plans, options and impacts down to the level of individual products. Sustainability, then, becomes an attribute of the overall “production, utilization and recovery” process or its products, rather than a set of social objectives. The gains in resource accounting and modeling capability derived from such an integrated, product-focused assessment appear to be offset by a potential loss of the engagement with stakeholder aspirations favored by other definitions. Ultimately, this social accounting approach depends upon money, and not achievement, as its key register of value. Sustainability, in this context, confers a judgment that enough money will be saved or earned to justify the extension of production and managerial controls to non-traditional kinds of capital and resources.

1.1.2. Objectives and Orderings

The basis of an alternative approach was developed for the Canadian government, motivated in part by the claims of social and political movements that were pressing for greater ecological awareness and concern [40]. This version promotes goal setting and the forming of collective commitment. Here, the emphasis shifts from backward-looking reparation to forward-looking prioritization. Sustainability is not so much about the inclusion of non-economic values in societal decision-making, as it is about treating ecological considerations as primary, or even as peremptory. There is an implicit mistrust of the compensatory practices supported by the Three Pillars definition. From this viewpoint, ecological issues are systematically undervalued by the analytical methods employed in framing tradeoffs; these methods more accurately reflect the subsidiary status of non-economic issues than the concepts of inclusion, described earlier. Hence, the decision logic of this goal-oriented version of sustainability relies on rankings and seeks to maximize performance on at least one dimension. In other words, adverse impact on ecological processes and conditions should not only be assessed first, but, more generally, ecological integrity should serve as a key objective to be satisfied before all other considerations. Environmental impact assessment practices in Australia [46] and the Endangered Species Act in the U.S. both draw on the logic of this version of sustainability.
A more recent variation of the non-compensatory version places social rather than ecological concerns at the center of sustainability determinations [47]. Values from this perspective seek to ensure material capacity for meeting future needs and build resilience and adaptability into communities. As with ecological sustainability, the performance metrics for judging improvements in this instance depend upon multiple indicators. Unlike the ecological case, however, these social values admit degrees of difference; some communities evidence more resiliency and adaptive capacity than others. Still, the relative ordering that results, consistent with satisfying multiple goals in a piecemeal way, does not pit the social against the ecological or the economic. Rather, it takes the social as first in priority and considers decision options based on their relative performance across multiple criteria without benefit of a common measurement scale. Other considerations that cannot be captured by indicators are left to be treated in subsequent rounds.
The advantage of making the social domain a key feature of sustainability assessment is twofold. First, it frames the pursuit of sustainability in positive terms, as objectives to be met. And second, it moves from one-dimensional decision rules, common in efforts to limit adverse ecological impact, to rules that focus on multiple objectives at the same time. Sustainability is now defined to reside in the satisfaction of as many, possibly incommensurable, objectives as stakeholders find legitimate. Satisfaction, in this context, could be a threshold along any given criterion or a point of relative comparison across planning options or public investments. The important idea is that the objectives do not compete with one another, and thus no tradeoffs are built into the comparisons. By the middle of the 1990s, a multi-objective procedure was introduced by statute as an alternative planning system in England. Sustainability appraisals then were framed in terms of stakeholder aspirations, and targets replaced thresholds as markers of achievement [48].
One of the best local implementations of the objectives and indicators approach in the U.S. began in Seattle after the UN Rio Summit in 1992. The Sustainable Seattle project developed objectives through extensive public engagement and derived a set of 40 indicators for tracking their progress toward these sustainability objectives [28]. Their definition of sustainability expanded the domains to ensure wider coverage of local concerns than under the Three Pillars approach: “We define sustainability as long-term health and vitality—cultural, economic, environmental and social. Understanding sustainability means considering all the connections between various elements of a healthy society and thinking in longer time spans than we may be accustomed to [28].” Along with social, environmental and economic indicators, they included ones on civic and cultural life, community, youth, education and health. Their periodic assessments focus on improvement in performance on each indicator relative to its prior level. They characterize trends as moving toward sustainability, away from it, or remaining unchanged for each of their domains. There are no tradeoffs across domains; no weights assigned to indicators; and no compensatory effects whereby high performance on one indicator offsets low performance on another. Summary judgments remain qualitative.
Substantial progress along these lines has been made by the Cities Program of the UN Global Compact [41,42]. Theirs is an objectives and indicators approach that admits four domains as subsidiaries of the social: the ecologic, the economic, the political and the cultural. Here, an effort is made to accommodate the underpinnings of sustainability in local power relations by making them explicit and working out practical objectives unique to each setting. In place of the Three Pillars notion, they present thematic circles as a means of avoiding premature closure when it comes to the characterization of objectives or their indicators. It is process intensive, not unlike the cumulative risk-screening efforts that have been community-based, and attempts to blend both expert and local knowledge in a negotiation over priorities and local practices.
The assessments that result are displayed distinctively as concentric circles, divided into quarters by the four domains. Judgments about relative goal-achievement correspond to positions moving away from the center, with the outermost circle signifying excellence. Similar displays apply to judgments organized by seven or more social themes, for example, belonging-mobility or inclusion-exclusion; again, the performance ordering corresponds to distance from the center and placement on the outermost circle represents optimal performance. The circles display represents a radical departure from the rectangular matrix form that indicator-based assessments typically employ (especially those adopting MCDA methods). Conventional numerical metrics are replaced with positional orderings that create an array of profiles across themes and domains. Qualitative judgments play a more significant role here than for Sustainable Seattle, as do ethical and political values. Table 1 summarizes the principal differences among the three approaches to assessment.
Table 1. Three views of sustainability and its assessment.

Concepts of Sustainability | Bases of Assessment | Examples
Three Pillars (e.g., [49]) | Aggregate valuation: tradeoffs to maximize overall value, with a focus on quantifiable improvement | Sustainable energy decision-making (e.g., multi-criteria weighting methods, benefit and cost-value analyses)
Social Accounting (e.g., [45]) | Production-related analyses: prospective/retrospective, with a focus on commercial products | Reporting templates: ISO 26000, GRI G4 (e.g., life-cycle assessment, carbon footprint)
Objectives & Orderings (e.g., [28,40]) | Indicator-by-indicator orderings: objectives-oriented analysis, with a focus on targets and thresholds | Urban planning: Sustainable Seattle (e.g., ecological, human development indicators, political and cultural impact)
Including the necessary components of a cumulative risk assessment in this last variation would involve more than a marginal addition to the ecology or politics domain. It would require some initial structuring of judgments so that the regulatory science bearing on exposures could be included. Metrics would be required early on, as well, even if relative positions served as a summary later in the process. Still, there could be plenty of room for local deliberations and support for non-compensatory judgment with this form of assessment.
We turn now to the second barrier to bringing the two kinds of assessments closer together—the absence of a common framework. While sustainability assessment developed around comparative decisions over projects and investments, and has found the tools of decision analysis very useful [13,35,36], human risk assessment grew out of toxicological models of dose-response. Decisions, typically about compliance or legal limits, were relegated to management and legal proceedings, at least according to the guidelines [1]. As noted earlier, the recent orientation to cumulative risks, especially in community settings, has ushered in a more inclusive notion of health risk (arising from both chemical and non-chemical sources) and highlighted the importance of local input. In these circumstances, a decision analysis framework becomes more useful as a way to organize risk assessments.

2. Bringing MCDA to Cumulative Risk

In a recent review of the application of multi-criteria decision analysis (MCDA) across the environmental field, Huang and her colleagues identified over 300 studies published between 2000 and 2009 [34]. Clearly, MCDA has made a lasting imprint on environmental problem-solving, and most notably on efforts to enhance sustainability [34,36,49]. Although some of the computational procedures vary across these studies, they appear to share at least four primary characteristics. Consider them as four basic steps in a sequential process. Step (1): an environmental challenge, say, improving the sustainability of an energy system [13,49], is framed as a rational choice among well-defined, alternative courses of action—a decision problem. Alternatives might include locations, investments, project options, scenarios, energy systems, or, as we saw with Sustainable Seattle, current vs. baseline profiles from prior years. Step (2): there are criteria for comparing the alternatives, typically based on their relative performance with respect to the decision-maker’s goals. As noted above, the Three Pillars definition of sustainability rests on three goals—economic, social and environmental improvements—with corresponding criteria for comparing how well each alternative achieves them.
In practical terms, one or more metrics (or indicators) will be selected for each criterion, so that relative performance can be represented numerically. For sustainability in energy systems, for example, criteria might include technical (safety and reliability), economic (net present value), environmental (CO2 emissions), and social factors (job creation). For untested alternatives, prospective performance may be modeled rather than measured or based on expert judgment, for instance, in estimating the dollar amount of net present value as a performance metric. Step (3): the criteria are then weighted, so that performance on more important criteria will have greater impact on the overall value of an alternative. Weighting also establishes how a performance level on one criterion translates into a corresponding level on another; in other words, it establishes the rate of exchange (or equivalent value) between criteria. If technical factors are weighted more heavily than economic ones, an alternative with high safety scores but low net present value may be viewed more favorably overall than an alternative with moderate safety but high net present value. In effect, one is trading off net present value for safety.
Most weighting schemes employ subjective estimates that are typically normalized to sum to one. The key assumption in this case is that the appropriate criteria and their associated performance metrics (or indicators) have been included, since the relative values of the weights will change with each addition or deletion. The effect of applying a weight to a given indicator will also be sensitive to the appropriateness of the scale used and how comparable the scale values are across metrics. As we will see, an approach that stops short of step 3 avoids these pitfalls. In that instance, no direct cross-criterion comparisons are made. Assessment proceeds one criterion at a time across the available alternatives, rather than one alternative at a time across the assembled criteria. Hence, there is no requirement for a common measurement scale (or for quantitative metrics for that matter). The final decision is based on detectable differences between pairs of alternatives on at least some criteria. In the Sustainable Seattle example, a set of indicators was chosen and revised over time to represent objectives linked by citizen, expert and stakeholder opinion to sustainability enhancements. Progress was judged based on a profile of annual improvements across these indicators. In effect, the profile from prior years served as a baseline for comparing each subsequent year’s achievements. Some years, there were positive gains on a majority of indicators. Areas in need of attention, where values fell below levels of achievement in prior years, could be identified for remedial action [28].
In the final step of MCDA, component values get tallied. Step (4): the weighted scores for each alternative are added together, across criteria, to form an aggregate measure of value. The rational choice, then, corresponds to the alternative with the largest summary value. Clearly, the simplest way to combine scores is to add them up, and addition is the default combination rule in all of the studies reviewed by Huang, Keisler and Linkov [34]. This summing operation has several important implications. Low scores on some criteria can be compensated for by high scores on others, and vice versa. This feature puts tradeoffs into action. Further, there needs to be a common scale of counts, amounts or ratings across criteria to permit adding scores together. And finally, all of the methods that use an overall value or sum to order alternatives from best to worst are sensitive to which alternatives have been included and, by implication, which ones have been excluded. Omitting an important criterion, as noted above, can have the same effect. Nevertheless, there are approaches within the MCDA family, known as “outranking methods” [50,51], intended to counter this sensitivity by supporting a partial rather than full ordering of alternatives, and permitting some criteria to escape weighting. In other words, steps 3 and 4 are still essential, but a few alternatives or criteria (depending on the method) may be excluded from the necessary comparisons.
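To make the aggregation logic concrete, a minimal sketch of the four steps follows, written in Python. The alternatives, criteria, performance scores and weights are invented for illustration only and are not drawn from any of the studies cited above.

```python
# Minimal sketch of the four MCDA steps described above, using invented
# energy-system alternatives, criteria, normalized scores, and weights.

# Step 1: alternatives under consideration (hypothetical).
alternatives = ["wind_farm", "natural_gas_plant", "solar_array"]

# Step 2: criteria and each alternative's performance on a common 0-1 scale
# (higher is better; these scores are illustrative only).
scores = {
    "wind_farm":         {"technical": 0.8, "economic": 0.5, "environmental": 0.9, "social": 0.7},
    "natural_gas_plant": {"technical": 0.9, "economic": 0.8, "environmental": 0.3, "social": 0.5},
    "solar_array":       {"technical": 0.7, "economic": 0.6, "environmental": 0.8, "social": 0.6},
}

# Step 3: subjective weights, normalized to sum to one. Changing these weights
# changes the implied rate of exchange between criteria.
weights = {"technical": 0.30, "economic": 0.25, "environmental": 0.30, "social": 0.15}

# Step 4: aggregate by weighted sum; the rational choice is the maximum.
def overall_value(alt):
    return sum(weights[c] * scores[alt][c] for c in weights)

ranked = sorted(alternatives, key=overall_value, reverse=True)
for alt in ranked:
    print(f"{alt}: {overall_value(alt):.3f}")
```

Changing the weights, or adding or dropping a criterion or alternative, can reorder the results, which is precisely the sensitivity noted above.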
We are persuaded by the evidence that the 4-step MCDA approaches work remarkably well under the conventional Three Pillars definition of sustainability, where achievements on one criterion may require sacrifices on another, and different weightings for criteria are a central feature. That is to say, it appears to work best when criteria are potentially conflicting and the resulting tradeoffs need to be well-defined and quantified, so that decision-makers can make an informed choice. For cumulative risk assessment, however, where the criteria are more likely to be classes of stressors, such as chemical exposures and economic disadvantage (rather than, say, desirable characteristics), requiring a preference ordering across criteria or tradeoffs to improve overall value makes little sense. Fortunately, the alternative approach to sustainability assessment, discussed earlier under the rubric of Objectives & Orderings, shares these same reservations. In place of the 4-step methods connected with Three Pillars, an Objectives & Orderings assessment typically employs only the first two steps [40,42,46]. This 2-step constraint will bring us substantially closer to finding common ground between risk and sustainability assessments with MCDA providing the pathway.
Relatively recent attempts to develop methods for assessing cumulative health risks [52,53,54] emphasize the use of indicators as a robust and reliable way to capture the broad range of influences on health. These include chemical, social and behavioral factors implicated in adverse health outcomes within communities and among certain vulnerable populations. This turning point for risk assessment was prompted, in large part, by concerns for racial disparities in toxicant exposures and by US EPA’s experience with the complexities of assessing “Superfund” toxic waste sites. Not only would non-chemical risks be considered relevant, but the focus of these risks would shift to populations, their characteristics, and where they lived. As had long been the case with sustainability assessments, the selection of indicators is now a key step in assessing cumulative risk. Luckily, there are a large number of possible indicators. The Federal government’s Health Indicators Warehouse, for example, lists 73 indicators of risk with data on hand [55]. Sustainable Seattle eventually agreed on 40 core indicators to measure factors that were considered crucial to local sustainability. The UN Commission on Sustainable Development spent two decades specifying a set of indicators that would be valid cross-nationally, have data available, and be meaningful for local scale assessments [56]. Table 2 below is their final list.
Table 2. UN indicators of sustainable development [57].

Social | Environmental
Education | Freshwater/groundwater
Employment | Agriculture/secure food supply
Health/water supply/sanitation | Urban
Housing | Coastal zone
Welfare and quality of life | Marine environment/coral reef protection
Cultural heritage | Fisheries
Poverty/income distribution | Biodiversity/biotechnology
Crime | Sustainable forest management
Population | Air pollution and ozone depletion
Social and ethical values | Global climate change/sea level rise
Role of women | Sustainable use of natural resources
Access to land and resources | Sustainable tourism
Community structure | Restricted carrying capacity
Equity/social exclusion | Land use change

Economic | Institutional
Economic dependency/indebtedness/ODA | Integrated decision-making
Energy | Capacity building
Consumption and production patterns | Science and technology
Waste management | Public awareness and information
Transportation | International conventions and cooperation
Mining | Governance/role of civil society
Economic structure and development | Institutional and legislative frameworks
Trade | Disaster preparedness
Productivity | Public participation
Reliance on indicators introduces an important source of compatibility with MCDA. The next consideration is how an assessment process that parallels the indicator-by-indicator comparisons common to our Objectives & Orderings approach (see Table 1) might work for cumulative risk.

2.1. An Indicator-by-Indicator Scenario

Consider a hypothetical assessment of cumulative risk in a community setting. Studies based on the stressor model [23,53] employ at least three types of indicators. One type consists of metrics for chemical exposures: these may be modeled cancer risks from the National-scale Air Toxics Assessment [58], measured ambient concentrations of toxicants, and proximity to waste sites or petrochemical production facilities. The second type consists of non-chemical stressors that reflect social and economic disadvantage and are linked to adverse health effects, such as poverty, unemployment, crime rates, and racial discrimination. The third type includes indicators of overall population health. The prevalence of chronic conditions and risk factors, such as diabetes, hypertension, smoking and physical inactivity, contributes to adverse health outcomes and increases premature mortality risk; these factors might also compound the negative impact of the stressors included in the other two types. The question is: how does one fully characterize these multiple risks and then use this characterization to either identify those burdened with the highest relative risk or pinpoint risk sources that should receive priority for mitigation efforts?
First, assume the simplest case: all indicators can be scaled to reflect rank orders of adverse effects. Assessment proceeds on an indicator-by-indicator basis. To assess cumulative risk, it is necessary to construct a profile, say, of each neighborhood’s rank orderings, and then sort the profiles to identify the poorest performers across the most indicators. In rare cases, there may be a neighborhood that stands out as worst on every indicator. More commonly, the worst case will vary across the indicators. Comparisons, then, will involve judgment (or stakeholder involvement) to separate the worst profiles from the rest. This implicitly assumes that each indicator makes an equivalent contribution to health risk, in the absence of more discriminating information. This is a common approach in health resource planning, for example, in the World Health Organization’s project, Urban HEART (Health Equity Assessment and Response Tool) [59].
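As a rough illustration of this profile-building step, the sketch below ranks hypothetical neighborhoods indicator by indicator and counts how often each is the worst performer; the indicators, values, and neighborhood labels are invented, and every indicator is assumed to be scaled so that higher means more adverse.

```python
# Illustrative sketch of an indicator-by-indicator comparison of neighborhood
# profiles. Values are hypothetical; higher always means a worse (more adverse)
# condition, so every indicator already reflects a rank order of adverse effects.

indicators = {
    "modeled_cancer_risk": {"A": 40, "B": 55, "C": 70},   # hypothetical, per million
    "poverty_rate":        {"A": 0.18, "B": 0.31, "C": 0.26},
    "unemployment_rate":   {"A": 0.07, "B": 0.12, "C": 0.10},
    "diabetes_prevalence": {"A": 0.09, "B": 0.14, "C": 0.11},
}

neighborhoods = ["A", "B", "C"]

# For each indicator, rank neighborhoods from worst (rank 1) to best.
profiles = {n: [] for n in neighborhoods}
for name, values in indicators.items():
    worst_first = sorted(neighborhoods, key=lambda n: values[n], reverse=True)
    for rank, n in enumerate(worst_first, start=1):
        profiles[n].append(rank)

# Treating every indicator as an equivalent contributor, a crude screen simply
# counts how often each neighborhood is the single worst performer.
worst_counts = {n: profiles[n].count(1) for n in neighborhoods}
print(profiles)      # rank profile per neighborhood, one entry per indicator
print(worst_counts)  # {'A': 0, 'B': 3, 'C': 1} under these invented data
```

No weights or tradeoffs are involved; separating the worst profiles from the rest would still fall to judgment or stakeholder deliberation.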
At the next level of complexity, one can add order to the indicators themselves, where some are given priority over others. One could designate “poverty” and “exposure to high ambient concentrations of diesel exhaust” as more important contributors to cumulative risk than, say, the prevalence of hypertension or unemployment rates. Identifying a few indicators for priority consideration (either considering them first or using them to break ties) reduces necessary effort and supports consistent application of participants’ priorities. Decision theorists term this a lexicographic ordering, since the assignment of relative standing among indicators is not made by numerical means [60,61]. These indicator-ordered assessments are still non-compensatory by design and rule out tradeoffs between indicators. This feature can be an advantage in stakeholder-mediated assessments, where, to some groups, tradeoffs can signal unacceptable political concessions. Using ordered indicators is a familiar strategy in the health impact assessments (HIAs) advocated by the U.S. Centers for Disease Control and Prevention [16,17,62].
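A lexicographic ordering can be sketched in the same spirit. The priority list and neighborhood data below are hypothetical; lower-priority indicators matter only when higher-priority ones are tied, and no weights are assigned.

```python
# Hedged sketch of a lexicographic ordering: a few indicators are given priority
# by list position (not by weights), and lower-priority indicators are consulted
# only to break ties. All names and values are invented for illustration.

priority_order = ["poverty_rate", "diesel_pm_concentration", "hypertension_prevalence"]

neighborhood_data = {
    "A": {"poverty_rate": 0.31, "diesel_pm_concentration": 1.8, "hypertension_prevalence": 0.33},
    "B": {"poverty_rate": 0.31, "diesel_pm_concentration": 2.4, "hypertension_prevalence": 0.29},
    "C": {"poverty_rate": 0.18, "diesel_pm_concentration": 2.9, "hypertension_prevalence": 0.41},
}

# Sorting by the tuple of prioritized indicator values yields a lexicographic
# ordering: A and B tie on poverty, so diesel exposure breaks the tie, and
# hypertension would only matter if the first two indicators were also tied.
ranked_by_burden = sorted(
    neighborhood_data,
    key=lambda n: tuple(neighborhood_data[n][ind] for ind in priority_order),
    reverse=True,  # highest (most adverse) values first
)
print(ranked_by_burden)  # ['B', 'A', 'C'] under these invented numbers
```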

2.2. Reference Values for Indicators: Thresholds, Ceilings and Benchmarks

Every indicator-by-indicator assessment involves sequential judgment—each indicator is considered, one at a time, whether or not the indicators are pre-ordered. Within each indicator, we make relative judgments about the performance of our alternatives. In the case of cumulative risk, we may compare neighborhoods and conclude that some are high on chemical stressors. If all of the neighborhoods appear high, we may have no way to know how important this fact is for health effects without some point of reference. Ceilings, such as the maximally permissible exposure limits for toxicants and unit risk estimates for carcinogens, can take the place of a relative ranking across neighborhoods. The comparisons, then, are between each indicator’s value for a given neighborhood and the corresponding standard value. In other words, we move from grading-on-the-curve to a pass-fail system. Currently, human risk assessments that define a cutoff for maximally acceptable risk of adverse health effects operate on a reference-value basis, as do ecological risk assessments that set maximum levels for contaminants that are systemic threats. The reference value may also come from performance in prior years that serves as a benchmark for documenting progress. Sustainable Seattle assesses whether the city’s performance is better, worse or about the same as the last time [28]. Cumulative risk assessments in Port Arthur use a similar approach to benchmarking their progress in reducing health risks, as a means of concentrating their efforts on problem areas that lag behind [25,63].
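The benchmark logic can be illustrated with a small sketch that compares current indicator values against prior-year values and classifies each trend as better, worse, or about the same. The indicators, values, and tolerance below are assumptions for illustration only and are not taken from the Sustainable Seattle or Port Arthur assessments.

```python
# Simple sketch of benchmarking against prior performance, in the spirit of the
# Sustainable Seattle and Port Arthur examples. Indicator values are invented;
# lower_is_better records the desired direction of movement for each indicator.

prior_year   = {"asthma_er_visits": 420, "adult_smoking_rate": 0.22, "high_school_completion": 0.78}
current_year = {"asthma_er_visits": 390, "adult_smoking_rate": 0.23, "high_school_completion": 0.78}
lower_is_better = {"asthma_er_visits": True, "adult_smoking_rate": True, "high_school_completion": False}

def trend(indicator, tolerance=0.01):
    """Classify movement relative to the prior-year benchmark as better, worse, or about the same."""
    change = current_year[indicator] - prior_year[indicator]
    if abs(change) <= tolerance * abs(prior_year[indicator]):
        return "about the same"
    improved = change < 0 if lower_is_better[indicator] else change > 0
    return "better" if improved else "worse"

for ind in prior_year:
    print(ind, "->", trend(ind))
# asthma_er_visits -> better; adult_smoking_rate -> worse;
# high_school_completion -> about the same (under these invented numbers)
```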
There are two well-known indicator-based procedures, described in the decision literature [60,64,65,66], for using thresholds as reference values rather than ceilings or benchmarks; these are called conjunctive and disjunctive. A conjunctive procedure establishes thresholds on all indicators that serve as minimal values for acceptable performance, and holds that all must be satisfied. Not meeting any one represents inadequate performance overall, regardless of how many other thresholds have been met. A disjunctive procedure is similar but requires only a single threshold to be exceeded for adequate performance overall. None of the applications using indicator-by-indicator methods would permit a single indicator, as in a disjunctive procedure, to be determinative. On the other hand, the use of thresholds with a conjunctive procedure is not uncommon. For a cumulative risk assessment intended to identify the communities at greatest risk, such thresholds might represent baseline expectations for indicators, such as the proportion of the population at or below 100% of the Federal Poverty Level or a minimum level of educational attainment (e.g., high school or GED). Cumulative cancer risk exceeding 1 case per 100,000 population could be used as a threshold for chemical stressors. Here, the conjunctive rule identifies communities with a disproportionate burden of cumulative risk as those exceeding all of the thresholds. To be successful, it is critical to include a wide variety of indicators tapping into different sources of risk and to establish the thresholds independently of the communities under assessment. This logic is found in multi-indicator systems serving as diagnostics for environmental injustice, as in the U.S. EPA’s C-FERST (Community-Focused Exposure and Risk Screening Tool) and EJSEAT (Environmental Justice Strategic Enforcement Tool) [67,68], for example. Table 3 briefly summarizes these similarities and contrasts; a short sketch of the conjunctive and disjunctive rules follows the table.
Table 3. Reference values in indicator-by-indicator assessments.

  | Threshold Values | Benchmark Values | Ceiling Values
Aims | Minimize negative effects | Maximize positive effects | Minimize negative effects
Assessment Process | Compare to thresholds for minimally acceptable values | Compare to benchmarks for goals or prior performance | Compare to cutoffs for maximally tolerable values
Applications | US EPA’s C-FERST | UN Global Compact Cities Program | US EPA’s EJSEAT
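The conjunctive and disjunctive rules lend themselves to a brief sketch. The thresholds, community names, and indicator values below are hypothetical and are not taken from C-FERST or EJSEAT.

```python
# Sketch of a conjunctive screening rule as described above: a community is
# flagged as bearing a disproportionate cumulative burden only if it exceeds
# the threshold on every indicator. A disjunctive rule, shown for contrast,
# would flag on any single exceedance. Thresholds and data are illustrative.

thresholds = {
    "poverty_share":        0.20,   # share of residents at/below 100% of the Federal Poverty Level
    "no_hs_diploma_share":  0.25,   # share of adults without a high school diploma or GED
    "cancer_risk_per_100k": 1.0,    # modeled cumulative cancer risk
}

communities = {
    "Eastside": {"poverty_share": 0.28, "no_hs_diploma_share": 0.31, "cancer_risk_per_100k": 1.6},
    "Westside": {"poverty_share": 0.12, "no_hs_diploma_share": 0.34, "cancer_risk_per_100k": 2.1},
}

def conjunctive_flag(values):
    """True only if every threshold is exceeded (a non-compensatory rule)."""
    return all(values[k] > t for k, t in thresholds.items())

def disjunctive_flag(values):
    """True if any single threshold is exceeded; rarely determinative in practice."""
    return any(values[k] > t for k, t in thresholds.items())

for name, values in communities.items():
    print(name, "conjunctive:", conjunctive_flag(values), "disjunctive:", disjunctive_flag(values))
# Eastside is flagged under the conjunctive rule; Westside is not, even though
# it exceeds two of the three thresholds.
```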
Many of the same indicators may be relevant whether risk or sustainability is at issue. Moreover, both kinds of “no-tradeoffs” assessments may call on indicators for the presence of collective assets that exert a positive rather than negative influence on both sustainability and risk mitigation. The important point is that once reference values are supported by the admission of indicator-by-indicator (non-compensatory) rules, there is no analytical reason preventing the consolidation of both cumulative risk and sustainability assessments into a single, more comprehensive process. This suggestion reframes the question of whether one kind of assessment should be a subsidiary of the other or, by default, whether each should remain independent. With this analytical opening, the next task is to refine these indicator-based approaches so that both sustainability and cumulative risk can be assessed together. This kind of assessment will likely rely on a shared set of core indicators relevant to communities and community-based projects. Specifying such a core indicator set becomes a viable next step. Moreover, the experience gained from designating reference values, and recasting the thresholds and ceilings among them in positive and negative terms, could help to structure stakeholder interactions and enhance their meaningfulness to non-expert participants.

3. Prospects

The need to undertake assessments for both cumulative risk and sustainability in support of informed policy decisions is now generally appreciated. The question is, how should the two analyses be undertaken, given the current separation between them, both politically and administratively? Logically, there are four plausible outcomes. One outcome, consistent with the status quo in the US, is that sustainability concerns are added onto the results of existing risk assessment practices, with sequencing intended to preserve the integrity of risk assessments and to keep sustainability evaluations as a supportive, but subsequent, analytical exercise. Of course, their respective value to the exercise will be reflected in the size of allotted resources (historically, in the context of regulatory decision making—more for risk assessment-management and less for sustainability evaluation) and their order of relative priority (for regulatory purposes, risk-based decision making predominates and subsumes sustainability analysis). Under this regime, sustainability’s advocates are left to screen risk assessment results in an ex post fashion. Their concerns operate more as constraints on choices across already-framed decision options, than as objectives to be pursued. In effect, appraisals of sustainability would be nested inside the management portions of the risk analysis process. From there, they would serve as a final check on decisions reached largely on other grounds.
A second outcome, proposed by the NRC [2] (see Figures 3-1 and 3-2, pages 37–38) adopts sustainability as the governing principle for decision making, with RA-RM embedded in the overall decision-making structure. According to the NRC [2], the “sustainability assessment and management” step in the proposed sustainability decision framework “can be viewed as representing the risk paradigm expanded and adapted to address sustainability goals”. In the NRC schema, risk assessment is part of a toolbox for sustainability appraisal, which includes other methods such as life-cycle assessment, benefit-cost analysis, ecosystem services valuation, integrated assessment models, sustainability impact assessment, environmental justice analysis, and present/future scenario evaluation. This is an influential blueprint that has been taken seriously, at least within the US EPA, but as yet remains only one possible future.
A third outcome is to pursue parity through separation. Both risk and sustainability appraisals are acknowledged as critical sources of evidence for regulatory decisions, and they are kept separate, but operate in parallel processes rather than in tandem. The more independent each is from the other, the less likely one is to be nested as subsidiary to the other, but the more likely they are to compete for resources, relevance and scientific status. Accordingly, some political and rhetorical weight will need to be added to the newcomer’s side (sustainability for the US EPA), which begins at a procedural and familiarity disadvantage. The major unresolved issue with independent arrangements is when (or where) the two modes of appraisal come together to make a mutually reinforcing contribution to regulatory decision making. This leads us back, full circle, to the status quo of one mode being subordinated to the other.
A fourth outcome, as supported by the argument presented here, involves merging the assessment of risk and sustainability into one, seamless, joint procedure that does not treat risk or sustainability as separate and distinct (neither independent nor subordinate) from the other. Today, given the obvious obstacles, this prospect may seem outlandish, but the possibility begins to appear more plausible if one focuses primarily on the conceptual and analytical issues. As discussed earlier, there are certain similarities not only at abstract and methodological levels but also in the developmental paths followed by risk assessment and sustainability evaluation. If one looks closely, there are lines of convergence that lend credence to the idea of fashioning a joint method for assessing both risk and sustainability in the context of regulatory decision-making. The goal must be to conduct holistic assessments that provide decision makers with timely information necessary to choose a sustainable future.

Acknowledgments

Partial funding was provided by the U.S. Environmental Protection Agency under a contract to The Scientific Consulting Group, Inc. Thanks to Lawrence Martin and Tim Barzyk of the U.S. EPA for scholarly exchanges on this topic and for constructive comments on an earlier version of this manuscript. All of the views expressed herein remain our responsibility.

Author Contributions

S.H.L. and K.S. jointly conceived and developed the structure and arguments for the paper. S.H.L. wrote the first draft of the manuscript. S.H.L. and K.S. made critical revisions and contributed to the final version of the manuscript, agree with the manuscript’s results and conclusions, and reviewed and approved the final submittal.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. NRC (National Research Council). Risk Assessment in the Federal Government: Managing the Process; National Academy Press: Washington, DC, USA, 1983. [Google Scholar]
  2. NRC (National Research Council). Sustainability and the U.S. EPA; National Academies Press: Washington, DC, USA, 2011. [Google Scholar]
  3. Anastas, P.T. Fundamental Changes to EPA’s Research Enterprise: The Path Forward. Environ. Sci. Technol. 2012, 46, 580–586. [Google Scholar] [CrossRef] [PubMed]
  4. Briggs, D.J. A Framework for Integrated Environmental Health Impact Assessment of Systematic Risks. Environ. Health 2008, 7, 61. [Google Scholar] [CrossRef] [PubMed]
  5. Mehta, M.D. Risk Assessment and Sustainable Development: Towards a Concept of Sustainable Risk. Risk Health Saf. Environ. 1997, 8, 137–154. [Google Scholar]
  6. Missimer, M.; Robert, K.H.; Broman, G.; Sverdrup, H. Exploring the Possibility of a Systematic and Generic Approach to Social Sustainability. J. Clean. Prod. 2010, 18, 1107–1112. [Google Scholar] [CrossRef]
  7. Pope, J.; Annadale, D.; Morrison-Saunders, A. Conceptualizing Sustainability Assessment. Environ. Impact Assess. Rev. 2004, 24, 595–616. [Google Scholar] [CrossRef]
  8. Wennersten, R.; Fidler, J. Methods for Risk Assessment within the Framework of Sustainable Development; Department of Industrial Ecology, Royal Institute of Technology: Stockholm, Sweden, 2008. [Google Scholar]
  9. Davies, J.C. Comment on “Solutions-Focused” Risk Assessment. Hum. Ecol. Risk Assess. 2011, 17, 788–789. [Google Scholar] [CrossRef]
  10. Finkel, A.M. Solution-Focused Risk Assessment: A Proposal for the Fusion of Environmental Analysis and Action. Hum. Ecol. Risk Assess. 2011, 17, 754–787. [Google Scholar] [CrossRef]
  11. Goldstein, B.D. The Cultures of Environmental Health Protection: Risk Assessment, Precautionary Principle, Public Health, and Sustainability. Hum. Ecol. Risk Assess. 2011, 17, 795–799. [Google Scholar] [CrossRef]
  12. Hope, J.K. A Commentary on Dr. Finkel’s Proposal for Solutions-Focused Risk Assessment. Hum. Ecol. Risk Assess. 2011, 17, 790–794. [Google Scholar] [CrossRef]
  13. Ness, B.; Urbel-Piirsalu, E.; Anderberg, S.; Olsson, L. Categorizing Tools for Sustainability Assessment. Ecol. Econ. 2007, 60, 598–608. [Google Scholar] [CrossRef]
  14. Omenn, G.S. Making Credible Scientific Judgments about Important Health and Ecological Risks and Ways to Efficiently Reduce Those Risks. Hum. Ecol. Risk Assess. 2011, 17, 800–806. [Google Scholar] [CrossRef]
  15. Paustenbach, D.J. Comments on Dr. Finkel’s Paper on Solutions Focused Risk Assessment (SFRA). Hum. Ecol. Risk Assess. 2011, 17, 807–812. [Google Scholar] [CrossRef]
  16. Sennor, R.G.B.; Colonell, J.M.; Isaacs, J.K.; Davis, S.K.; Ban, S.M.; Bowers, J.P.; Erikson, D.E. A Systematic But Not-Too-Complicated Approach to Cumulative Effects Assessment. In Proceedings of the 22nd Annual Conference of the International Association for Impact Assessment, The Hague, The Netherlands, 15–22 June 2002.
  17. Sennor, R. Assessing the Sustainability of Project Alternatives: An Increasing Role for Cumulative Effects Assessment. Paper presented at IAIA Calgary. November 2008. Available online: http://www.iaia.org/IAIA08Calgary/documents/Assessing%20the%20Sustainability%20of%20Project%20Alternatives.pdf (accessed on 15 August 2014).
  18. Sexton, K.; Linder, S.H. Integrated Assessment of Risk and Sustainability in the Context of Regulatory Decision Making. Environ. Sci. Technol. 2014, 48, 1409–1418. [Google Scholar] [CrossRef] [PubMed]
  19. Alvarez-Flores, C.M.; Heide-Jorgensen, M.P. A Risk Assessment of the Sustainability of the Harvest of Beluga in West Greenland. ICES J. Mar. Sci. 2004, 61, 274–286. [Google Scholar] [CrossRef]
  20. Evans, R.; Brereton, D. Risk Assessment as a Tool to Explore Sustainable Development Issues: Lessons from the Australian Coal Industry. Int. J. Risk Assess. Risk Manag. 2007, 7, 607–619. [Google Scholar] [CrossRef]
  21. Smith, D. Understanding Risk and Sustainability—The Future for Risk Managers? 2007. Available online: http://riskmatters.com.au/doc/Understandingriskandsustainability.pdf (accessed on 23 May 2012).
  22. Vairavamoorthy, K.; Khatri, K.B. Risk Assessment for Sustainable Urban Water Management. UNESCO-IHP. 10 August 2007. Available online: http://www.delftcluster.nl/website/files/risk_assessment_for_sustainable_urban_water_management.pdf (accessed on 24 May 2012).
  23. Gee, G.; Payne-Sturges, D. Environmental Health Disparities: A Framework Integrating Psychosocial and Environmental Concepts. Environ. Health Perspect. 2004, 112, 1645–1653. [Google Scholar] [CrossRef] [PubMed]
  24. Morello-Frosch, R.; Pastor, R.; Sadd, M. Environmental Justice and the Southern California Riskscape. Urban Aff. Rev. 2001, 36, 551–578. [Google Scholar] [CrossRef]
  25. Prochaska, J.; Kelley, H.; Linder, S.H.; Sexton, K.; Sullivan, J.; Nolen, A.B. Health Inequities in Environmental Justice Communities. Int. J. Equity Health 2012, 11, A7–A8. [Google Scholar] [CrossRef]
  26. Sexton, K.; Linder, S.H. Cumulative Risk Assessment for Combined Health Effects from Chemical and Non-Chemical Stressors. Am. J. Pub. Health 2011, 101, S81–S88. [Google Scholar] [CrossRef]
  27. Magee, L.; Scerri, A.; James, P.; Thom, J.; Padgham, L.; Hickmott, S.; Deng, H.; Cahill, F. Reframing Social Sustainability Reporting: Towards an Engaged Approach. Environ. Dev. Sustain. 2013, 15, 225–243. [Google Scholar] [CrossRef]
  28. Sustainable Seattle. Indicators of Sustainable Community: A Status Report on Long-Term Cultural, Economic, and Environmental Health. 1997. Available online: http://sustainableseattle.org/images/indicators/1995/1995indicators.pdf (accessed on 20 July 2014).
  29. Tatham, E.K.; Eisenberg, D.; Linkov, I. Sustainable Urban Systems: A Review of How Sustainability Indicators Inform Decisions. In Sustainable Cities and Military Installations; Linkov, I., Ed.; Springer Science: Dordrecht, The Netherlands, 2014. [Google Scholar]
  30. Turcu, C. Rethinking Sustainability Indicators: Local Perspectives of Urban Sustainability. J. Environ. Plan. Manag. 2013, 56, 695–719. [Google Scholar] [CrossRef]
  31. USFR (United States Federal Register). Executive Order 12898: Federal Action to Address Environmental Justice in Minority Populations and Low-Income Populations. Fed. Regist. 1994, 59, 7629–7634. [Google Scholar]
  32. Hunsberger, C.; Gibson, R.; Wismer, S. Citizen Involvement in Sustainability-Centered Environmental Assessment Follow-Up. Environ. Impact Assess. Rev. 2005, 25, 609–627. [Google Scholar] [CrossRef]
  33. Collier, Z.A.; Wang, D.; Vogel, J.T.; Tatham, E.K.; Linkov, I. Sustainable Roofing Technology under Multiple Constraints: A Decision Analytical Approach. Environ. Syst. Decis. 2013, 33, 261–271. [Google Scholar] [CrossRef]
  34. Huang, I.B.; Keisler, J.; Linkov, I. Multi-criteria Decision Analysis in Environmental Sciences. Sci. Total Environ. 2011, 409, 3578–3594. [Google Scholar] [CrossRef] [PubMed]
  35. Linkov, I.; Satterstrom, F.K.; Kiker, G.; Batchelor, C.; Bridges, T.; Ferguson, E. From Comparative Risk Assessment to Multi-Criteria Decision Analysis and Adaptive Management: Recent Developments and Applications. Environ. Int. 2006, 32, 1072–1093. [Google Scholar] [CrossRef] [PubMed]
  36. Merad, M.; Dechy, N.; Marcel, F.; Linkov, I. Multiple-Criteria Decision-Aiding Framework to Analyze and Assess the Governance of Sustainability. Environ. Syst. Decis. 2013, 33, 305–321. [Google Scholar] [CrossRef]
  37. UNCED (United Nations Conference on Environment and Development). Agenda 21. 1992. Available online: http://sustainabledevelopment.un.org/content/documents/Agenda21.pdf (accessed on 20 July 2014).
  38. USEPA. Sustainability Primer.v.9. 2013. Available online: http://www.epa.gov/ncer/rfa/forms/sustainability_primer_v9.pdf (accessed on 20 July 2014). [Google Scholar]
  39. Bond, A.J.; Dockerty, T.; Lovett, A.; Riche, A.; Haughton, A.; Bohan, D.; Sage, R.; Shield, I.; Finch, J.; Tuner, M.; et al. Learning How to Deal With Values, Frames and Governance in Sustainability Appraisal. Reg. Stud. 2011, 45, 1157–1170. [Google Scholar] [CrossRef]
  40. Gibson, R.B. Specification of Sustainability-Based Environmental Assessment Decision Criteria and Implications for Determining “Significance” in Environmental Assessment; Canadian Environmental Assessment Agency EN 105–67/2001E: Ottawa, ON, Canada, 2001. [Google Scholar]
  41. UNGCCP (United Nations Global Compact Cities Program). Accounting for Sustainability, Briefing Paper 1. 2008. Available online: http://citiespro.pmhclients.com/images/country_flags/Draft_Accounting_for_Sustainability_Briefing_Paper.pdf (accessed on 20 July 2014).
  42. UNGCCP. Circles of Sustainability: An Integrated Approach. 2010. Available online: http://citiespro.pmhclients.com/images/uploads/Indicators_-_Briefing_Paper.pdf (accessed on 20 July 2014).
  43. Dixon, J.; Fallon, L. The Concept of Sustainability. Soc. Natl. Res. 1989, 2, 73–84. [Google Scholar] [CrossRef]
  44. USDOE (United States Department of Energy). Strategic Sustainability Performance Plan, 2010. Available online: http://energy.gov/downloads/2010-doe-strategic-sustainability-performance-plan-report-white-house-council (accessed on 12 July 2011).
  45. UNGRI (United Nations Global Reporting Initiative). Sustainability Reporting Framework. 2014. Available online: http://www.globalreporting.org/reporting/reporting-framework-overview/pages/default.aspx (accessed on 20 July 2014).
  46. Thomas, I. Environmental Impact Assessment in Australia, 3rd ed.; The Federation Press: Sydney, Australia, 2001. [Google Scholar]
  47. Dalal-Clayton, B.; Sadler, B. Strategic Environmental Assessment: A Sourcebook and Reference Guide to International Experience; Earthscan: New York, NY, USA, 2005. [Google Scholar]
  48. Brown, C.; Duhr, S. Understanding Sustainability and Planning in England: An exploration of the Sustainability Content of Planning Policy at the National, Regional and Local Levels. In Planning in the UK: Agenda for the New Millennium; Rydin, Y., Thornley, A., Eds.; Ashgate: Burlington, VT, USA, 2002; pp. 257–278. [Google Scholar]
  49. Wang, J.J.; Jing, Y.; Zhang, C.; Zhao, J. Review of Multi-Criteria Decision Analysis Aid in Sustainable Energy Decision-Making. Renew. Sustain. Energy Rev. 2009, 13, 2263–2278. [Google Scholar] [CrossRef]
  50. Brans, J.P.; Vincke, P. Note—A Preference Ranking Organization Method. Manag. Sci. 1985, 31, 647–656. [Google Scholar] [CrossRef]
  51. Roy, B. The Outranking Approach and the Foundations of Electre Methods. Theory Decis. 1991, 31, 49–73. [Google Scholar] [CrossRef]
  52. Callahan, M.A.; Sexton, K. If Cumulative Risk Assessment Is the Answer, What Is the Question? Environ. Health Perspect. 2007, 115, 799–806. [Google Scholar] [CrossRef]
  53. Linder, S.H.; Sexton, K. Conceptual Models for Cumulative Risk Assessment. Am. J. Public Health 2011, 101, S74–S81. [Google Scholar] [CrossRef] [PubMed]
  54. USEPA (United States Environmental Protection Agency). Framework for Cumulative Risk Assessment; Risk Assessment Forum: Washington, DC, USA, 2003.
  55. NCHS (National Center for Health Statistics). Health Indicators Warehouse. 2014. Available online: http://www.healthindicators.gov (accessed on 15 August 2014). [Google Scholar]
  56. UNCmSD (United Nations Commission on Sustainable Development). Report of the Commission on Sustainable Development on the 20th Session. UN CSD-20 E/CN.17/2013/4. 2013. Available online: http://www.un.org/ga/search/view_doc.asp?symbol=E/CN.17/2013/4&Lang=E (accessed on 20 July 2014).
  57. UNCmSD (UN Commission on Sustainable Development). Indicators of Sustainable Development: Guidelines and Methodology. 2001. Available online: http://www.un.org/esa/sustdev/publications/indisd-mg2001.pdf (accessed on 15 July 2014).
  58. USEPA. NATA (National-scale Air Toxics Assessment), 2005. Available online: http://www.epa.gov/ttn/atw/nata2005 (accessed on 20 July 2014).
  59. WHO (World Health Organization), Centre for Health Development. Urban HEART: Urban Health Equity Assessment and Response Tool. Available online: http://www.who.int/kobe_centre/measuring/urbanheart/en/index.html (accessed on 15 September 2008).
  60. Einhorn, H.J. The Use of Nonlinear, Noncompensatory Models in Decision Making. Psychol. Bull. 1970, 73, 221–230. [Google Scholar] [CrossRef] [PubMed]
  61. Keeney, R.L.; Raiffa, H. Decisions with Multiple Objectives: Preferences and Value Tradeoffs; John Wiley & Sons: New York, NY, USA, 1976. [Google Scholar]
  62. CDC (Centers for Disease Control and Prevention). Healthy Places: Health Impact Assessment. 2006. Available online: http://www.cdc.gov/healthyplaces/hia.htm (accessed on 20 August 2011). [Google Scholar]
  63. Prochaska, J.; Nolen, A.B.; Kelley, H.; Sexton, K.; Linder, S.H.; Sullivan, J. Social Determinants of Health in Environmental Justice Communities. Hum. Ecol. Risk Assess. 2014, 20, 980–994. [Google Scholar] [CrossRef] [PubMed]
  64. Coombs, C.H. A Theory of Data; John Wiley & Sons: New York, NY, USA, 1960. [Google Scholar]
  65. Coombs, C.H.; Kao, R.C. Non-Metric Factor Analysis; Engineering Research Bulletin No. 38; University of Michigan Press: Ann Arbor, MI, USA, 1955. [Google Scholar]
  66. Elrod, T.; Johnson, R.D.; White, J. A New Integrated Model of Non-compensatory and Compensatory Decision Strategies. Organ. Behav. Hum. Decis. Process. 2004, 95, 1–19. [Google Scholar] [CrossRef]
  67. USEPA. C-FERST (Community-Focused Exposure and Risk Screening Tool). Available online: http://www.epa.gov/heasd/c-ferst (accessed on 15 May 2012).
  68. USEPA. EJSEAT (Environmental Justice Strategic Enforcement Tool). Available online: http://www.epa.gov/compliance/ej/resources/policy/ejseat.html (accessed on 14 December 2012).
