9. EVALUATION OF ALTERNATIVES
The use of the DSS and Scenario Generator (SG) discussed in this document could lead to the generation of large amounts of information. However, more information does not necessarily lead to better decisions (Dent et al., 2000). The belief that all managers need more information to make better decisions misses the critical aspect of information interpretation (Senge et al., 1995). In many cases more information complicates the decision-making process: bombarding the decision maker or model user with large volumes of irrelevant and difficult-to-interpret information can lead to misinformation and, ultimately, to poor decisions. Sterman (1989) has shown that knowledgeable and experienced decision makers filter information through non-systematic mental models before making a decision. Large volumes of information make this filtering process more difficult, as the decision maker must sift out the relevant information needed to reach a decision.
It is therefore critical to refine the information into quantities that can be used and interpreted easily by model users and decision makers. In terms of modelling and the development of the DSS, the evaluation process requires specific sets of information that decision makers can interpret with relative ease. It is thus important to identify, in consultation with decision makers, the critical information requirements they need to make particular sets of decisions.
The model developer and DSS designer can then summarise these requirements from the large amounts of information generated, filtering out unnecessary and irrelevant information before it is presented to the decision maker. This summarised information is referred to as indicators: a small set of the information necessary to make particular decisions. The philosophy of this approach is to use accurate, complex modelling and computer systems, built on sophisticated mathematical algorithms, to produce simple and easily understandable outputs (indicators) on which a model user can base decisions with confidence.
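As an illustration of this philosophy, the hypothetical sketch below condenses two raw model time series into a handful of summary indicators. The variable names, the indicators chosen and the example figures are assumptions made purely for illustration; the actual indicator set would be agreed with decision makers.

```python
# Hypothetical sketch: condensing raw scenario output into a few indicators.
# The series names and indicators used here are illustrative only.

from statistics import mean


def summarise_run(daily_supply, daily_demand):
    """Reduce two daily time series to three summary indicators."""
    shortfalls = [d - s for s, d in zip(daily_supply, daily_demand) if d > s]
    return {
        "mean_daily_supply": mean(daily_supply),
        "days_in_deficit": len(shortfalls),
        "mean_deficit": mean(shortfalls) if shortfalls else 0.0,
    }


# Example: a ten-day run with a short period of deficit.
supply = [12.0, 11.5, 11.0, 10.2, 9.8, 9.5, 10.0, 11.0, 12.0, 12.5]
demand = [10.0] * 10
print(summarise_run(supply, demand))
```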
Essentially this comes down to a problem of visualisation, and model developers must identify both the critical indicators and an appropriate display method. Extensive consultation with stakeholders, water resource managers and other decision makers involved in the CMAs is necessary to determine the types of indicators that will be needed to make water-related decisions within a WMA. Identifying the individual indicators falls outside the mandate of this design; it is, however, possible to hypothesise about the types of indicators that will be needed and the display format that will be most appropriate.
Visualisation can take three main forms, namely:
The types of indicators that may be used for the SEA process can be divided into several categories, each with its own set of indicators that may aid the decision-making process. The list provided below is not exhaustive and reflects only the author's own thoughts on the matter, which need to be investigated in far more detail with the SEA team, stakeholders and water managers alike.
Once the different indicators have been identified or derived and the best method of visualising them has been selected, the decision maker must use the information to make a decision. The problem the decision maker now faces is how to compare the information produced by the different scenarios being tested. The decision maker must also base decisions on indicators drawn from vastly different disciplines, and must therefore attempt to weigh these indicators against one another on an equal footing. A further complication is that some social and environmental data are ordinal and subjective, while other indicators are derived through scientific means and represent measured quantities such as water use.
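One way of placing such mixed indicators on an equal footing is to normalise each onto a common scale before comparison. The sketch below shows a minimal example of this idea; the 0-1 scale, the ordinal class labels and the example values are assumptions and do not form part of the DSS design.

```python
# Hypothetical sketch of bringing mixed indicators onto a common 0-1 scale
# so that ordinal ratings (e.g. a river condition class) and measured
# quantities (e.g. water use) can be compared. Scales and labels are assumed.

ORDINAL_SCORES = {"poor": 0.0, "fair": 0.33, "good": 0.67, "excellent": 1.0}


def normalise_quantity(value, worst, best):
    """Linear (min-max) rescaling; works whether 'best' is high or low."""
    return (value - worst) / (best - worst)


def normalise_ordinal(rating):
    """Map a subjective class rating to a score on the same 0-1 scale."""
    return ORDINAL_SCORES[rating.lower()]


# Example: water use (lower is better) and river health (ordinal class).
water_use_score = normalise_quantity(420.0, worst=600.0, best=300.0)
river_health_score = normalise_ordinal("fair")
print(round(water_use_score, 2), river_health_score)
```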
It may hence be necessary to have an objective means of ranking and scoring the different indicators so that they can be transformed into commensurate scores for comparing scenarios. This can be achieved through several methods, such as Multi-Criteria Decision Analysis (MCDA), the Analytic Hierarchy Process (AHP) and Cost-Benefit Analysis, to name a few (Stewart et al., 1997). Each method assesses the alternatives by translating variables (indicators) into quantities that can be compared on an equal footing. Once this scoring has been collated, the decision maker can analyse the different options produced by the scenarios being tested, weigh up the options available and understand the trade-offs that result from each.
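A minimal illustration of the weighted-sum form of multi-criteria scoring referred to above is sketched below. The scenario names, indicator names, weights and normalised scores are illustrative assumptions only; in practice the weights would be elicited from the decision makers themselves.

```python
# Minimal weighted-sum scoring sketch: combine normalised (0-1) indicator
# scores into one overall score per scenario. All names and values are
# illustrative assumptions.

WEIGHTS = {"water_use": 0.4, "river_health": 0.35, "social_impact": 0.25}

SCENARIOS = {
    "scenario_A": {"water_use": 0.60, "river_health": 0.33, "social_impact": 0.80},
    "scenario_B": {"water_use": 0.45, "river_health": 0.67, "social_impact": 0.55},
}


def weighted_score(indicator_scores, weights):
    """Combine normalised indicator scores into a single overall score."""
    return sum(weights[name] * score for name, score in indicator_scores.items())


for name, scores in SCENARIOS.items():
    print(name, round(weighted_score(scores, WEIGHTS), 3))
```

The resulting scores allow the scenarios to be ranked, while the individual indicator values remain available for examining the trade-offs behind each ranking.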
Working within a structured framework with appropriate indicators will filter out irrelevant information and provide the decision maker with the information needed to reach a responsible decision. The assessment of alternatives is a difficult process to automate, as it tends to include many subjective criteria. While it is the responsibility of the DSS tools and models to provide the necessary information (indicators) to aid the assessment process, it is not their responsibility to provide the ultimate output and decision. The DSS should instead provide a framework that supports the assessment of alternatives, with extensive input from the user in the form of weighting and scoring criteria. It may also be necessary to outline the methodology to follow in assessing different alternatives. This, however, falls outside the scope of this component of the project and will need to be pursued when the DSS is developed.
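As one example of how such user weighting input might be structured, the sketch below derives criterion weights from AHP-style pairwise judgements supplied by the user, using the row geometric-mean approximation rather than the full eigenvector calculation. The criteria and judgement values are illustrative assumptions, not part of the DSS specification.

```python
# Sketch of deriving criterion weights from user-supplied pairwise
# judgements (AHP-style). The row geometric-mean approximation is used
# in place of the full eigenvector method; all values are illustrative.

from math import prod

CRITERIA = ["water_use", "river_health", "social_impact"]

# pairwise[i][j] = how many times more important criterion i is than j.
PAIRWISE = [
    [1.0, 2.0, 3.0],
    [1 / 2, 1.0, 2.0],
    [1 / 3, 1 / 2, 1.0],
]


def derive_weights(matrix):
    """Approximate AHP priorities via normalised row geometric means."""
    geo_means = [prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]


for criterion, weight in zip(CRITERIA, derive_weights(PAIRWISE)):
    print(criterion, round(weight, 3))
```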
Once the information on the different alternatives has been generated and assessed, new problems may be identified and a feedback loop, as shown in Figure 1, could occur in which an iterative approach to problem solving is taken as more information becomes available and can be assessed.