
A Group Multicriteria Approach

  • Guy Camilleri
  • Pascale Zaraté
Living reference work entry

Abstract

Group decision support systems are developed to support groups engaged in decision-making processes. These processes can also be viewed from a multicriteria perspective. In this chapter, we present a group support system called GRoUp Support (GRUS) that is based on a multicriteria paradigm. The GRUS system has been tested and used in several contexts: experiments were conducted in a European project called RUC-APS, in a French national project called VGI4Bio, and with Master and PhD students in three different countries.

Keywords

Group multicriteria decision analysis (G-MCDA) · Group decision support systems (GDSS) · Group support systems (GSS) · GRoUp Support (GRUS) · Group decision-making

Introduction

Making decisions with several stakeholders remains a very complex task. This complexity is linked to the number of persons, the chosen working rule, the influence of some agents inside the group, the agents’ behavior, etc. (Dong et al. 2018). The more complex these factors are, the more likely the group is to make subjective decisions and to enter into conflict. The main objective of this work is to present a group support system based on a multicriteria approach and to show how this approach allows the group to generate more objective and consensual solutions (see also “Group Support Systems: Past, Present and Future”; “Group Decision Support Systems: A Case Study”; “Group Support Systems: Experiments with an Online System and Implications for Same-Time/Different Places Working”).

An important motivation of this work is to design a multicriteria decision approach (see “MCDA Methods for Group Decision Processes: An Overview”; “Analytic Hierarchy Process and Group Decision Support”; “Group Decision Support using the Analytic Hierarchy Process”) embedded in the general philosophy of group support systems. Our proposal is based on sharing criteria and alternatives, which are generated by the group. Moreover, each decision-maker can have her/his own criteria (called private criteria), and she/he evaluates all alternatives according to both the public (shared) criteria and her/his private criteria. The group evaluation is computed from the individual evaluations. This approach is formalized by a multicriteria decision analysis workflow, and some collaborative tools are proposed to support the designed workflow.

This chapter is organized as follows. The section “Related Works” describes related works in two sub-parts: group support systems and multicriteria decision-making group decision support systems. The section “Group MCDA Workflow (G-MCDA-W)” presents the group multicriteria decision analysis workflow. The section “GRoUp Support (GRUS)” describes the GRoUp Support (GRUS) system in three sub-parts: an introduction, the facilitator in GRUS, and the multicriteria evaluation tool. The section “Use Cases” presents three experiments, one per sub-part. The first experiment was conducted with Master and Doctorate students in three different countries; this first sub-part describes the use case, then the process used, and finally the sessions held in the three countries. The second sub-part describes the experiments conducted with partners of a European Union-funded project (enhancing and implementing knowledge-based ICT solutions within high-risk and uncertain conditions for agriculture production systems – RUC-APS). The third sub-part presents the last experiment, conducted within a French research project (VGI users and data-centered methods for the analysis of farmland biodiversity indicators: a participative SOLAP (Spatial OnLine Analytical Processing) approach for opportunistic data – VGI4Bio). A final, fourth sub-part of the section “Use Cases” presents the main results of these experiments. The section “Conclusions and Perspectives” concludes the chapter and presents some perspectives on future work.

Related Works

This section introduces the group support systems (GSS) and the multicriteria decision-making group decision support systems research domains.

Group Support Systems (GSS)

The original purpose of group support systems (GSS), also called group decision support systems (GDSS), is to exploit the opportunities that information technology tools can offer to support group work (See “Group Support Systems: Past, Present and Future”). In the early 1980s, many studies started to explore how collaboration technologies (such as email, chat, teleconferencing, etc. – see “Collaboration Engineering for Group Decision and Negotiation”) can be used to improve the efficiency of group work. Most of these studies focused on collaborative group decision-making and problem-solving activities (See “Group Support Systems: Experiments with an Online System and Implications for Same-Time/Different Places Working”; “Group Decision Support Systems: A Case Study”). Researchers have proposed several definitions of GSS. DeSanctis and Gallupe (1987) defined a GSS as a system which combines communication, computer, and decision technologies to support problem formulation and solution in group meetings. For Sprague and Carlson, a GSS is a combination of hardware, software, people, and processes that enables collaboration between groups of individuals (Sprague and Carlson 1983). These definitions (and many others) point out four important aspects: devices (computers, communication network, etc.), software (decision technologies, communication software, etc.), people (meeting participants, etc.), and group processes (such as the nominal group technique).

GSS can be used in decision rooms or from different locations through the Internet (See “Group Support Systems: Experiments with an Online System and Implications for Same-Time/Different Places Working”). In decision rooms, all participants are in the same location (a room) and have a terminal (personal computer) connected to a local area network. They have a private space on their terminal allowing them to carry out personal tasks (such as consulting documents or making calculations), and they can send their contributions to a public space (often a large screen located in front of them). Participants can communicate through electronic messages or directly, verbally. In the configuration where participants may be distributed (different locations), the GSS is usually a web application (not standalone software) accessible through a web site. These web applications integrate communication in the form of electronic messages and can also be used with web conferencing facilities. Today, the majority of GSS are web applications. Ackermann provides an overview of the development of GSSs.

Many studies have shown that GSS can improve group productivity by increasing the information flow between participants, by generating a more objective evaluation of information, and by creating synergy inside the group (see Nunamaker et al. 1996; De Vreede et al. 2003). Nunamaker and his colleagues (Nunamaker et al. 1996) identified two kinds of benefits: tangible and intangible. Tangible benefits refer to money savings through greater productivity, reduced time to reach decisions, an increased number of higher-quality ideas in brainstorming, etc. Intangible benefits comprise a higher level of group cohesiveness, improved problem definition, a higher quantity and greater quality of solutions, and stronger commitment to these solutions. De Vreede (2014) also confirmed these results in two case studies in which he introduced GSS into two organizations.

Group facilitation is defined as a process in which a person who is acceptable to all members of the group intervenes to help it improve the way it identifies and solves problems and makes decisions (Schwarz 2002; see also Franco). For Bostrom et al. (1993), facilitation is the set of group activities that a facilitator carries out before, during, and after a meeting to help the group achieve its own outcomes. Facilitation can also be viewed as a dynamic process that involves managing relationships among people, tasks, and technology, as well as structuring tasks and contributing to the effective accomplishment of the meeting’s outcomes (Den Hengst and Adkins 2007).

Since groups using GSS are often more effective than those not supported by GSS, an important question remains: why are GSS not widely used in organizations? Briggs et al. (2003) noted that the situation can be even worse: in some cases where GSS facilities are successfully utilized in organizations (i.e., they produce measurable economic benefits), their use tends to self-extinguish, while in other cases it tends to flourish. The facilitator plays an important role in sustaining the use of GSS facilities.

Many researchers have attempted to address this problem of the self-extinguishing of GSS. We identify three main streams of work. The first stream tries to integrate elements of automated facilitation inside GDSS tools (Limayem and DeSanctis 2000; Zhao et al. 2002; Wong and Aiken 2003; Alabdulkarim and Macaulay 2005; Adla et al. 2011). These works show the value of automating some facilitation elements. Studies such as those of Wong and Aiken (2003) and Limayem et al. (2006) further demonstrate that integrating automated facilitation in GDSS can be as effective as using the same tools with skilled human facilitation. It also enhances a faithful appropriation of the technology. Of course, the ultimate goal of the automation approach is to replace human facilitation by machine facilitation. Today this goal is not yet reached; automation simplifies some facilitation activities but does not replace them (see “Group Support Systems: Experiments with an Online System and Implications for Same-Time/Different Places Working”). The second stream refers to the works of Helquist et al. (2008), who developed an approach called participant-driven GSS (PD-GSS). In this approach, all participants play a full role in the collaborative process, providing the work and effort necessary to guide the group in its collaborative activities (in the group process). The role of the facilitator is reduced; she/he is responsible for configuring, initializing, and managing the process.

The third stream, the collaboration engineering (CE) approach (See “Collaboration Engineering for Group Decision and Negotiation”), is the most mature and developed from both a theoretical and a practical point of view. De Vreede (2014) defines collaboration engineering as “an approach for designing collaborative work practices for high-value recurring tasks and deploying those designs for practitioners, who are domain experts, to execute for themselves without ongoing support from professional facilitators.” According to de Vreede and Briggs (2018), for a recurring specific task, CE introduces two new roles: the collaboration engineer and the practitioner. The collaboration engineer is a collaboration expert who designs a collaborative process that she/he can transfer to practitioners. Being an engineering approach, the CE framework has developed methods and theories to guide the collaboration engineer in her/his work on the design and deployment of collaborative processes. Usually, a collaborative process is a sequence of activities performed by a group in order to reach a joint goal. In CE, collaborative processes are designed from “thinklets,” a core concept of the CE framework. Briggs et al. (2001) defined a “thinklet” as “the smallest unit of intellectual capital required to create one repeatable, predictable pattern of thinking among people working toward a goal.” Thinklets capture and operationalize collaborative techniques used by professional facilitators in many situations. A thinklet was originally composed of a tool (the specific version of collaborative technology used to create a pattern of thinking), a configuration (the tool configuration), and a script (the sequence of events and instructions given to the group to create the pattern of thinking) (see Briggs and De Vreede 2009). Briggs et al. (e.g., Briggs et al. 2001; de Vreede and Briggs 2018) proposed six patterns of collaboration which are frequently observed in collaborative activities. These patterns are defined in (de Vreede and Briggs 2018) in the following way:

  • Generate: To move from having fewer concepts to having more concepts.

  • Reduce: To move from having many concepts to having a focus on fewer concepts deemed worthy of further attention.

  • Clarify: Moving from less to more shared meaning of the concepts under consideration.

  • Organize: To move from less to more understanding of the relationships among the concepts, e.g., by sorting a set of ideas into categories.

  • Evaluate: To move from less to more understanding of the value of concepts toward a goal.

  • Build Consensus: To move from having more to having less disagreement among stakeholders on proposed courses of action.

Multiple Criteria Decision-Making GDSS

In a survey of group decision support systems, Zaraté et al. (2013) analyzed 14 tools with regard to a set of 12 functionalities. Some of these tools are commercial systems, and others were developed in research laboratories. The authors conclude this survey by noting that there are tools on the market that address certain steps of the collaborative decision-making process. These GDSS tools are collaborative tools offering functionalities that support all, or an important part, of decision-making processes. They are proven tools, most of them commercial products used by large companies and universities.

Nevertheless, the authors conclude that the effective use of these tools requires facilitators who master them and who are able to bring the decision-making team to its goals through a given process (See “Group Support Systems: Experiments with an Online System and Implications for Same-Time/Different Places Working”; “Group Support Practice: Decision Support ‘As it Happens’”). These facilitators must also be supported in order to adapt the methodology and tune these tools depending on the type of group and the context of the situation.

Multicriteria decision-making is widely developed and used for a single decision-maker. Several systems have been developed based on different kinds of methodologies, like PROMETHEE (Silva Oliveira and de Almeida-Filho 2018), for example, but all of them are devoted to one stakeholder. These MCDM methodologies and systems are not often used for group decision-making. Marttunen et al. (2017) have shown that problem structuring methods (PSMs) for multicriteria decision analysis (MCDA) can be efficient. PSMs have proved to be an effective means for skilled facilitators to support groups facing decision-making challenges (Rosenhead and Mingers 2001). Nevertheless, GSS are not frequently used through an MCDM approach.

Mareschal et al. (1998) proposed a methodology including the use of the PROMETHEE multicriteria decision analysis GDSS in a group decision-making context. They propose that every decision-maker fills in her/his own individual preferences in a performance matrix. Then a global evaluation of each alternative is performed using weighted sum aggregation techniques. The decision-makers can have the same weight or different weights, which is useful for conducting a sensitivity analysis among the stakeholders. Nevertheless, the decision-makers have no possibility to share their preferences with the other participants or to co-build a decision. In this chapter, we propose a methodology for aggregating the decision-makers’ preferences in a collaborative way, i.e., allowing participants to co-build the decision: they exchange their viewpoints, trying to design a common representation of the problem at hand and then to reach an agreement or a consensus. This does not imply that all decision-makers must share all criteria, preferences, and weights, in other words all parameters of the decision. In our approach, the decision-makers agree on several criteria, called collective criteria, but they can also defend individual criteria that are personal to each stakeholder.

Nevertheless, even if the group agrees to use a multicriteria approach, all of these tools require all the stakeholders to agree on the criteria to be used. We propose a methodology able to take into account collective criteria, shared by everyone, as well as private criteria, defined by one participant or some of them. The approach is a flexible methodology that offers more facilities to participants involved in a problem-solving process.

Group MCDA Workflow (G-MCDA-W)

Our MCDA methodology is based on the sharing of alternatives and public criteria between decision-makers. In addition, a decision-maker can use private criteria which are unknown to other decision-makers.

The group workflow of our MCDA methodology is composed of four steps (see Fig. 1):
  1. Importance of participants and scales definition. During this step the facilitator assigns a weight to each participant (representing the expertise of the participant on the decision subject) and two scales: one for criteria and the other for alternatives.

  2. Criteria and alternatives generation. Each decision-maker (participant in the decision) generates and shares criteria and alternatives. They can also define some private criteria. A private criterion is only visible and accessible to the decision-maker who provides it.

  3. Multicriteria evaluation. In this step, all decision-makers individually apply a private multicriteria analysis (described later).

  4. Results presentation. This step displays individual and group rankings of the alternatives according to the previous multicriteria analysis. The group ranking is a weighted average of the individual scores. Only the group ranking is shared between decision-makers; individual rankings remain private.

    Fig. 1

    Group MCDA Workflow
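As a concrete illustration of step 4, the group ranking (a weighted average of individual scores) can be sketched as follows. This is an illustrative sketch, not GRUS code; the function and variable names are hypothetical.

```python
def group_ranking(individual_scores, participant_weights):
    """Sketch of step 4: aggregate individual alternative scores into a
    group ranking, weighting each decision-maker by the importance the
    facilitator assigned in step 1.

    individual_scores: {participant: {alternative: score}}
    participant_weights: {participant: weight}
    """
    total_weight = sum(participant_weights.values())
    group = {}
    for participant, scores in individual_scores.items():
        w = participant_weights[participant]
        for alternative, score in scores.items():
            group[alternative] = group.get(alternative, 0.0) + w * score
    # Normalize by the total participant weight and sort best-first.
    return sorted(((alt, s / total_weight) for alt, s in group.items()),
                  key=lambda kv: kv[1], reverse=True)

# Two decision-makers, dm1 judged twice as important as dm2.
ranking = group_ranking(
    {"dm1": {"A": 0.8, "B": 0.4}, "dm2": {"A": 0.6, "B": 0.9}},
    {"dm1": 2, "dm2": 1},
)
```

Only the resulting group ranking would be shown to all decision-makers; each individual score dictionary would stay private to its owner.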
A common way to implement this workflow is to statically integrate it into a system, as is usually done in MCDA works. In this case, the way of performing each step of this workflow cannot be changed or adapted to the decision context. A tool with one or several graphical user interfaces is designed for each step, and the workflow achievement comes out of the sequential execution of these step tools (the Importance of participants and scales definition tool, then Criteria and alternatives generation, and so on; see Fig. 2). An important consequence of this implementation strategy is that it implicitly embeds some elements of the group process. For example, one way to design a tool for the Criteria and alternatives generation step could be to build one user interface in which users can submit criteria and alternatives into two different lists (one for criteria and the other for alternatives). However, this choice would strongly reduce the usability of the system because it freezes a process into the overall workflow. Indeed, in a situation where a decision session involves many decision-makers (e.g., more than 50) on an open subject, the group would likely generate many alternatives and/or criteria. Two lists containing all alternatives and criteria (e.g., over 30) would not be manageable by users because this would increase the cognitive load on each user (participants would risk not reading, not remembering, etc. all alternatives or criteria). A more efficient way to carry out this Criteria and alternatives generation step could be to use a collaborative technique such as the FreeBrainstorm thinklet (see Briggs and De Vreede 2009). In summary, this thinklet uses several pages containing a limited number of contributions (criteria/alternatives). At any given time, one page is assigned to one participant, in which she/he can submit or comment on only one more contribution. Then pages are reassigned to participants randomly. In this way, this thinklet reduces the number of contributions to read and to remember at each time (only one page).
Fig. 2

Example of common G-MCDA-W implementation

In this work, we follow the philosophy of GSS development by dissociating the group process and the tools as much as possible. This is why we designed only some specific collaborative tools dedicated to our group MCDA workflow. These tools are:
  • The parameters tool. This tool is used for defining the weight (importance) of each participant (facilitator included).

  • The criteria and alternatives generation tool, in which decision-makers can submit at the same time criteria and alternatives.

  • The multicriteria evaluation tool. This is the main tool, which applies our multicriteria analysis method individually (for each decision-maker).

  • The consensus tool, which computes and presents the results from evaluations of the multicriteria evaluation tool.

The main advantage of this choice is that the facilitator can define a group process adapted to a particular decision context, by, for example, choosing appropriate collaborative techniques (as thinklets) and tools (see Fig. 3). Moreover, this choice is consistent with the collaboration engineering approach, and therefore it can be directly applied to our multicriteria methodology and tools. In this way, a facilitator can be guided and can benefit from all works on collaboration engineering (see de Vreede and Briggs 2018).
Fig. 3

Example of G-MCDA-W with collaborative techniques and tools

GRoUp Support (GRUS)

In this section, the GRUS system is presented with the collaborative tools designed for supporting our G-MCDA-W.

Introduction

The GRUS (GRoUp Support) system is a group support system (GSS) in the form of a web application. GRUS can be used for organizing collaborative sessions (meetings) in synchronous and asynchronous modes. In synchronous mode, all participants are connected to the system at the same time, while in asynchronous mode participants use the system at different times. It is also possible to use GRUS in a mixed mode, synchronously and asynchronously, by performing, for example, a collaborative process with some activities in synchronous mode and others in asynchronous mode. As GRUS is a web application, users can conduct sessions in distributed (participants are not all in the same location) and non-distributed situations (all participants are in the same room). The only prerequisites are an Internet connection and a web browser.

GRUS is an open source project, developed on the Grails framework (which is also open source). It integrates classical functionalities of multiuser applications: sign in/sign out and management of users (list of users, remove a user, change the password of a user, etc.).

A user in GRUS can participate in several meetings in parallel. For some meetings, a user can have the role of a standard participant, and for others she/he can have the role of facilitator. In GRUS, the facilitator of a collaborative process can always participate in all activities of her/his process.

The GRUS system proposes several collaborative tools, with the following main tools:
  • Electronic brainstorming tools: these tools allow participants to submit contributions (ideas) to the group. A contribution is composed of a title and a short description. A brainstorming tool dedicated to multicriteria meetings is also available.

  • Clustering tool: with this tool, the facilitator defines a set of clusters and puts items inside these clusters. Participants discuss verbally and can only see in their interface the actions performed by the facilitator.

  • Vote tools: this class of tools refers to voting procedures; currently, voting methods like the Borda count and the Condorcet method are implemented. In the Borda procedure, each decision-maker orders the candidates according to her/his total preference order, which gives each candidate a number of points; the candidate with the maximum number of points is elected. This voting procedure is very well known and can be used to select several candidates. The Condorcet procedure is based on pairwise comparison. Each decision-maker orders the candidates according to her/his preference order, and every possible pairwise comparison is considered: for each pair, we determine the number of decision-makers preferring one or the other candidate. If one candidate wins all pairwise comparisons, she/he is the winner. See also “Single-Winner Voting Systems”; “Multi-Winner Voting Systems”.

  • Multicriteria evaluation tool: with this tool, users can evaluate alternatives according to criteria.

  • Consensus tool: this tool displays statistics on the multicriteria evaluation outcomes.

  • Miscellaneous tools: reporting (automatic report generation), feedback (participant questionnaire for evaluating the meeting quality), conclusion (for integrating conclusions of the meeting), direct vote (the facilitator directly assigns a value to items), etc.

All these collaborative tools can be used for different collaborative activities. For example, the voting tool can be used for achieving a Reduce pattern (by selecting the ten best candidates, and in this way, this will reduce the number of elements to consider) or can be used for an Evaluation pattern (by ranking all candidates).
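The two voting procedures mentioned above can be sketched as follows, using their textbook definitions. This is an illustrative sketch, not the GRUS implementation.

```python
def borda(ballots):
    """Borda count: each ballot is a list of candidates, best first; a
    candidate in position i of an n-candidate ballot receives n-1-i points."""
    scores = {}
    for ballot in ballots:
        n = len(ballot)
        for i, candidate in enumerate(ballot):
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - i)
    return scores

def condorcet_winner(ballots):
    """Return the candidate who wins every pairwise majority comparison,
    or None if no such candidate exists."""
    candidates = set(ballots[0])
    for candidate in candidates:
        others = candidates - {candidate}
        # candidate wins a pair when a majority of ballots rank it earlier.
        if all(sum(b.index(candidate) < b.index(other) for b in ballots)
               > len(ballots) / 2
               for other in others):
            return candidate
    return None

# Three decision-makers ranking three candidates, best first.
ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]
```

Here candidate A both collects the most Borda points and beats B and C in every pairwise majority, so the two procedures agree; in general they may not.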

Facilitator Tools in GRUS

A collaborative session carried out with GRUS requires one facilitator. A meeting with GRUS is roughly composed of the following steps:
  1. Meeting creation (see Fig. 4). A user defines the topic of the meeting, gives a short description, and chooses the user who will be the facilitator (in GRUS, it is possible to create a meeting for another user), the duration (which is optional), and the participants (it is also possible to add or remove participants during the meeting). The user has to choose the collaborative process, either by reusing an existing process or by defining a new one (see step 2). Let us note that in GRUS, all processes are shared by all users of the system.
    Fig. 4

    Meeting creation

  2. Process definition (see Fig. 5). In cases where a user wants to define a new process, she/he has to give a name (which should be unique) and select the sequence of tools to apply.
    Fig. 5

    Process creation

  3. Meeting achievement. During the meeting, the facilitator can dynamically manage the meeting thanks to an icon bar (see Fig. 6). With this icon bar, she/he can start/stop the meeting, invite/remove participants, change the process by adding/moving/removing tools that are not yet used, and go to the next tool (next activity in the process). Standard participants do not have this icon bar (see Fig. 7). The facilitator and the participants have a progress bar indicating the current activity, the activities already done, and the remaining activities. They also have a timing bar which helps them to be aware of the meeting duration (see Camilleri et al. (2011) for more details). The part below the timing bar is dedicated to the current collaborative tool and changes according to the tool used.
    Fig. 6

    GRUS meeting – facilitator interface

    Fig. 7

    GRUS meeting – standard participant interface

Multicriteria Evaluation Tool

The participants here have the possibility to give their individual preferences based on a multicriteria approach (Sibertin-Blanc and Zaraté 2014). In a first step, they fill in the preference matrix, which means that they give one mark to each alternative on each criterion. At this stage, if the number of criteria and alternatives is high, the matrix will be very large, and filling in all matrix values is quite heavy work for the decision-makers. The marks given are based on the scale chosen at the beginning of the process by the facilitator.

In the second step, they give the importance of each criterion, also called the criterion weight, as well as the suitability function. This function is determined by three thresholds: a minimum number, a maximum number, and a desired number. The minimum number represents a veto threshold: every alternative that obtains a mark lower than this minimum number will be eliminated. The maximum number represents the preferred maximum. The desired number is the indifference threshold: in the suitability function, between the desired point and the maximum point, there is no difference for the end user. If the decision-maker does not have any indifference threshold, the desired number and the maximum number are the same.
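One plausible reading of this three-threshold suitability function can be sketched as follows. The exact shape used in GRUS is not specified in this chapter, so the linear rise between the minimum and the desired point is an assumption; the veto and indifference behaviors follow the description above.

```python
def suitability(mark, minimum, desired, maximum):
    """Hypothetical suitability function built from the three thresholds:
    below `minimum` the alternative is vetoed (suitability 0); suitability
    rises (linearly, by assumption) from `minimum` to `desired`; from
    `desired` up to `maximum` the decision-maker is indifferent
    (suitability 1). With no indifference threshold, desired == maximum."""
    if mark < minimum:
        return 0.0  # veto threshold: the alternative is eliminated
    if mark >= desired:
        return 1.0  # indifference zone between desired and maximum
    return (mark - minimum) / (desired - minimum)
```

For thresholds (3, 7, 10) on a ten-point scale, a mark of 2 is vetoed, a mark of 5 is halfway suitable, and any mark from 7 to 10 is fully suitable.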

In the final step, the decision-maker gives the relationships that she/he considers to exist between pairs of criteria. If she/he sees that one criterion is linked to another, she/he notifies this element in the relations matrix and quantifies the relation on a scale from 1 to 3 by answering the question: how strongly is this criterion linked to the second one (see Fig. 8)?
Fig. 8

Criteria relations matrix in the multicriteria process

This last matrix is used in only one aggregation operator, the Choquet integral (see Bottero et al. 2018; Choquet 1987). The two previous matrices are used in all aggregation operators.
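For reference, the discrete Choquet integral of a score vector with respect to a capacity (fuzzy measure) can be computed as below. How GRUS derives the capacity from the criteria relations matrix is not detailed in this chapter, so here the capacity is simply an input; all names are illustrative.

```python
def choquet(scores, capacity):
    """Discrete Choquet integral.

    scores: {criterion: value}, values assumed normalized to [0, 1].
    capacity: {frozenset of criteria: weight}, a monotone set function
              with the capacity of the full criteria set equal to 1.
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending values
    total, previous = 0.0, 0.0
    for i, (_, value) in enumerate(items):
        # Criteria whose score is at least the current value.
        remaining = frozenset(criterion for criterion, _ in items[i:])
        total += (value - previous) * capacity[remaining]
        previous = value
    return total

# Illustrative non-additive capacity over two criteria: the singleton
# weights sum to more than 1, expressing an interaction between criteria.
cap = {
    frozenset({"price", "quality"}): 1.0,
    frozenset({"price"}): 0.5,
    frozenset({"quality"}): 0.6,
}
value = choquet({"price": 0.4, "quality": 0.8}, cap)
```

With an additive capacity, this reduces to the weighted sum; the non-additive weights are precisely what lets the operator account for dependencies between criteria.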

Collaborative Tools

In this section, we briefly present some tools specifically designed to support our defined workflow (parameters, criteria and alternatives generation, and consensus tools).

Parameters Tool

This tool aims at defining important parameters of our MCDA methodology. Only the facilitator can use it. In the first part (see Fig. 9), the facilitator assigns a weight to each participant. Weights are given on a five-point scale (very weak, weak, moderately important, strong, very strong). In Fig. 9, the participant “part1” is judged strongly important (strong), the facilitator “fac1” weakly important, and so on. In the second part, the facilitator chooses the scales for criteria and alternatives (3-, 5-, 10-, 20-, and 100-point scales are available). In Fig. 9, a five-point scale has been chosen for the criteria and a ten-point scale for the alternatives. In the last part, the facilitator authorizes (or not) the use of private criteria. It is important to note that this parameter is required for the use of the multicriteria evaluation tool.
Fig. 9

Parameters tool

Criteria and Alternatives Generation Tool

The criteria and alternatives generation tool allows participants to generate criteria and alternatives at the same time; that is, they can generate a criterion, then an alternative, and so on. It is often convenient to generate criteria and alternatives together because an alternative may suggest a new criterion and vice versa. In Fig. 10, the facilitator interface is presented. The facilitator can choose whether the generation will be anonymous (as in Fig. 10) or not. Only the facilitator can define and change the anonymity mode; therefore, this part is absent from the standard participant interface. The criteria and alternatives parts are identical: participants can submit a criterion/alternative and can add a comment. Using this tool is not mandatory for the multicriteria evaluation, and it can be replaced by other tools according to the decision context.
Fig. 10

Criteria and alternatives generation tool

Another specificity of the GRUS system is that end users can define two kinds of criteria – public and private – by clicking on a check box (see Fig. 10). For a public criterion, all the stakeholders must agree on its definition and scale: in the first step of the global process, each decision-maker proposes her/his own criteria, and in a second step, all of them must agree on the semantics. For private criteria, each stakeholder proposes her/his own criteria, which will not be seen by the other members of the group. These private criteria are nevertheless taken into account in the global calculation by the system. This functionality offers group members the possibility to express their preferences even when these are not shared with the others. It is one way to avoid conflict even if the decision model is not shared by all stakeholders.
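An illustrative data model (not the GRUS schema) for this public/private distinction: each criterion records its owner and a privacy flag, and a decision-maker sees the public criteria plus her/his own private ones, while all criteria still feed the global calculation.

```python
# Hypothetical criterion records; field names are illustrative.
criteria = [
    {"name": "price",   "owner": "dm1", "private": False},  # public, shared
    {"name": "comfort", "owner": "dm1", "private": True},   # private to dm1
    {"name": "safety",  "owner": "dm2", "private": True},   # private to dm2
]

def visible_criteria(criteria, decision_maker):
    """Criteria shown to a given decision-maker: all public criteria
    plus her/his own private criteria."""
    return [c["name"] for c in criteria
            if not c["private"] or c["owner"] == decision_maker]
```

Under this model, dm1 sees price and comfort while dm2 sees price and safety, yet the evaluations on all three criteria would enter the group computation.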

Consensus Tool

The objective of this tool is to compute group scores and to display some statistics. From the individual scores of all participants provided by the multicriteria evaluation tool, two weighted averages are calculated, one for each method (weighted sum and Choquet integral; see below). In the first part of this tool (see Fig. 11), statistics on criteria scores (average, standard deviation, etc.) are given. Each participant's own criterion scores are also provided so they can compare them to the average. Participant scores are private and thus change according to the participant interface. The second part presents, for each alternative, the group averages and the individual scores of the weighted sum and the Choquet integral methods. In Fig. 11, only one criterion (PRICE) and one alternative (WIKO) are shown; the tool displays all criteria and all alternatives in the same way. For example, in Fig. 11, the participant has weighted the criterion PRICE at 4 while the group average is 3.35. For the WIKO alternative, the group average for the Choquet integral is about 0.294 and the participant score is 0.32; for the weighted sum, the group average is about 0.3675 and the participant value is 0.4.
Fig. 11 Consensus tool
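As an illustration of the kind of computation the consensus tool performs, the sketch below computes a participant-weighted group average for one alternative. The formula and all numeric values are assumptions for illustration, not the exact GRUS implementation:

```python
def group_score(individual_scores, participant_weights):
    """Participant-weighted average of individual scores for one alternative.

    individual_scores: dict participant -> score for the alternative
    participant_weights: dict participant -> importance weight
    (hypothetical aggregation; the exact GRUS formula is not given here)
    """
    total_weight = sum(participant_weights[p] for p in individual_scores)
    return sum(participant_weights[p] * s
               for p, s in individual_scores.items()) / total_weight


# Hypothetical Choquet scores for one alternative, with participant
# importance weights as set in the parameters tool (also hypothetical):
scores = {"part1": 0.32, "part2": 0.28, "fac1": 0.25}
weights = {"part1": 4, "part2": 3, "fac1": 2}
avg = group_score(scores, weights)  # (4*0.32 + 3*0.28 + 2*0.25) / 9
```

A participant whose individual score lies above `avg` would see, as in Fig. 11, that they rated the alternative more favorably than the group.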

Two aggregation operators are implemented: weighted sum and Choquet integral.

The weighted sum is a very simple aggregation operator to understand. It is generally used for quantitative evaluations. It requires a weight for each criterion to reflect that criterion's degree of importance in the decision problem. Weighted-sum aggregation is defined by
$$ \psi (a)=\psi \left({a}_1,\dots, {a}_n\right)=\sum \limits_{i=1}^n{w}_i{a}_i $$
where a = (a1, …, an) represents the vector of (normalized) performance scores of alternative a and, for i = 1, 2, …, n, wi ∈ [0, 1] is the weight assigned to criterion i, with
$$ {\sum}_{i=1}^n{w}_i=1 $$

The weighted sum is a very simple operator and is widely used in daily life. Nevertheless, it is very limited because it does not consider any dependencies or relationships among criteria. Moreover, the preferences of the decision-maker are included in a simplistic way, through the fixed weight assigned to each criterion.
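The weighted sum defined above can be sketched in a few lines; the criterion names and scores here are hypothetical:

```python
def weighted_sum(scores, weights):
    """psi(a) = sum_i w_i * a_i, with the weights summing to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * scores[c] for c in scores)


# Two hypothetical criteria with normalized performance scores:
a = {"price": 0.6, "quality": 0.8}
w = {"price": 0.4, "quality": 0.6}
psi = weighted_sum(a, w)  # 0.4*0.6 + 0.6*0.8 = 0.72
```

Note that each criterion contributes independently: no pair of criteria can reinforce or cancel each other, which motivates the Choquet integral below.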

Aggregation operators such as the weighted sum and OWA (ordered weighted average) (Yager 1988) cannot model interactions because they rely on weight vectors. In order to take dependencies among criteria into account, a nonadditive function is needed that defines a weight not only for each criterion but also for each subset of criteria. Such nonadditive functions can model both the importance of criteria and the positive and negative synergies between them. A suitable aggregation operator can be based on the Choquet integral (Choquet 1987), which is used for quantitative evaluations.

Definition: A fuzzy measure μ on N is a function μ: 2^N → [0, 1] that is monotone in the sense that μ(S) ≤ μ(T) whenever S ⊆ T and that satisfies the limit conditions μ(∅) = 0 and μ(N) = 1.

Fuzzy measures are used in aggregation operators because the interactions within a subset of criteria are represented by the weight of that subset relative to the weights of its own subsets. Several classes of fuzzy integrals exist; one of the most representative and simplest is the Choquet integral.

The Choquet integral is defined as follows: Let μ be a fuzzy measure on N. The Choquet integral of x ∈ Rn with respect to μ is defined by:

$$ {C}_{\mu}(x) := \sum_{i=1}^{n} {x}_{(i)}\left[\mu\left({A}_{(i)}\right)-\mu\left({A}_{(i+1)}\right)\right] $$
where (·) denotes the permutation of the components of x = (x1, …, xn) such that x(1) ≤ … ≤ x(n), A(i) = {(i), …, (n)}, and A(n+1) = ∅.

The Choquet integral makes it possible to compute an interaction index between criteria and the global importance of each criterion, called the Shapley value. For more information, see Choquet (1987).
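A minimal sketch of the definition above, assuming the fuzzy measure is supplied explicitly as one weight per subset of criteria (the criterion names and measure values are hypothetical):

```python
def choquet(x, mu):
    """Choquet integral of scores x with respect to fuzzy measure mu.

    x:  dict criterion -> score
    mu: dict frozenset of criteria -> weight in [0, 1], monotone,
        with mu(full set) = 1; mu(empty set) defaults to 0.
    """
    order = sorted(x, key=x.get)              # x_(1) <= ... <= x_(n)
    total = 0.0
    for i in range(len(order)):
        a_i = frozenset(order[i:])            # A_(i) = {(i), ..., (n)}
        a_next = frozenset(order[i + 1:])     # A_(i+1); empty at the end
        total += x[order[i]] * (mu[a_i] - mu.get(a_next, 0.0))
    return total


# Hypothetical measure with positive synergy between the two criteria:
# mu({price, quality}) = 1.0 > mu({price}) + mu({quality}) = 0.7
mu = {frozenset({"price"}): 0.3,
      frozenset({"quality"}): 0.4,
      frozenset({"price", "quality"}): 1.0}
score = choquet({"price": 0.6, "quality": 0.8}, mu)  # 0.6*0.6 + 0.8*0.4 = 0.68
```

With an additive measure (subset weights equal to sums of singleton weights), this reduces exactly to the weighted sum.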

Use Cases

In this part, three groups of experiments are presented. The first involved Master and doctoral students in different countries; the second took place in the European project RUC-APS; and the last in the French project VGI4Bio. This section finishes with a brief description of the main outcomes of these experiments.

Students Use Cases

Context

The case study decision problem was presented to each group and is described below.

“You are member of the Administrative Committee of the Play-On-Line Company. This company develops Software Games. It includes 150 collaborators represented as follows:
  • 80% Computer Engineers

  • 15% Business Staff

  • 5% Administrative Staff.

During a previous meeting, the Board decided to buy new mobile phones for all collaborators (the whole company). The use of the phones will not be the same for the three groups of collaborators. The computer engineers need to test the software as it is developed, on every operating system (Android, iPhone, etc.); the business staff will demonstrate the software to potential clients (and need large screens, for example). The administrative needs are simpler and more basic, such as communication (email, text, telephone, etc.).

The aim of today’s meeting is to make together a decision about the best solution for Play-on-Line. The budget is strictly limited, so costs must be minimized. In order to satisfy the requirements of all stakeholders, your group must think up several solutions or scenarios but you must remember that company survival, from a financial point of view, is mandatory.

You can, for example, decide to buy the same Smartphones for everybody, or you can buy different models of smartphones for different collaborators, including some to be used only for testing. The technical characteristics and prices of five preselected Smartphones are given in the attached documents.

First, you have to define the set of criteria to be used (4–5) to solve this problem, and identify several alternatives (4–5). One alternative is defined as a combination of several smartphones, for example: 80% of Type A + 20% of Type B. You will be guided by the facilitator, and then you will enter in the GRUS system your own preferences used for calculating the group decision.”

Used Process

In this experimentation, it was very likely that the group would produce many alternatives, and possibly many criteria, even though participants were asked to generate only 4–5 of each. Therefore, for step 2 (criteria and alternatives generation) of the Group MCDA Workflow (G-MCDA-W), we used the generate collaborative pattern followed by reduce/clarify patterns, so as to generate ideas while ending up with only a few alternatives and criteria. The G-MCDA-W implementation used is presented in Fig. 3. This process is completed by a vote procedure for selecting one alternative from the multicriteria analysis (e.g., 10% of smartphones of type A, 40% of type B, and 50% of type C), plus other classical GSS tools.

This process was instantiated in GRUS in the following way:
  1. The parameters tool.

  2. The criteria and alternatives generation tool for generating criteria and alternatives. This tool was configured in anonymous mode (decision-makers' inputs are anonymous).

  3. The alternative reduction tool to reduce the number of alternatives to 4 or 5. The facilitator conducts this step orally: each decision-maker expresses their own views about the categorization of alternatives, and the facilitator then assigns each alternative to a category.

  4. The criteria reduction tool to reduce the number of criteria to 4 or 5. The same collaborative technique as in step 3 is used.

  5. The multicriteria evaluation tool, in which each decision-maker gives their own assessment, on a scale of 0 to 20, of the performance of each alternative on each criterion, the weight of each criterion, and a suitability function reflecting the interpretation of each criterion (i.e., an indifference threshold as well as the pair-by-pair dependencies among criteria).

  6. The direct vote tool, where all preferences given by all users are combined using two techniques, the weighted sum and the Choquet integral. During this step, the facilitator shows the results of the multicriteria evaluation. All alternatives are then ranked according to the two techniques, producing two total orders. A discussion is then initiated by the facilitator in order to classify all alternatives into three categories: saved, possible, and removed.

  7. The conclusion tool, in which the facilitator proposes a conclusion for the meeting – the set of saved alternatives. If the group must decide on one specific alternative, it is still possible to go back to the multicriteria evaluation step in order to refine the solution.

  8. The report tool, with which the facilitator generates a report of the meeting as a PDF file.

Three Different Countries

On the Play-On-Line problem, a first set of tests was conducted at Toulouse Capitole 1 University. One Master-level computer science class of 14 students was selected to participate. Three groups were created, with 4, 4, and 6 participants, respectively. Each group worked independently in a single 90-min meeting session.

Each group was required to find four to five criteria and four to five alternatives in order to keep each session within 90 min. Had the number of criteria and alternatives been left to the group, we would not have been able to control the duration of each session (Zaraté et al. 2017).

The second set of tests (again on the Play-On-Line problem) was conducted at Wilfrid Laurier University and the University of Waterloo in Waterloo, Canada. A group of 15 persons, mostly PhD students and visiting researchers, was selected to participate in the test.

Three groups of five participants each were created. Each group worked within a meeting session of 60 min (Zaraté et al. 2016).

The third set of tests (Play-On-Line problem) was conducted in the Postgraduate Program of Production Engineering (Management Engineering) at Universidade Federal de Pernambuco, Recife, Brazil (http://ppgep.org.br/). A group of 15 Master students attending a course on MCDA was selected to participate in the test. The students were divided into four groups, and each group simulated the decision-making process for 90 min.

Using the GRUS system, the same process as in Toulouse and Waterloo was applied, except for steps six, seven, and eight. These last three steps were not conducted for lack of time: the reduction steps took so long that it was not possible to complete the whole process. The report was generated based on the criteria and alternatives generation and reduction steps.

For the three countries, after the decision process, each participant filled in a questionnaire composed of seven questions: five about common versus private criteria (Research Questions 1 and 2) and two about facilitation (Research Question 3) (the questionnaire will be included in an Annex).

Implementing Knowledge-Based ICT Solutions Within High-Risk and Uncertain Conditions for Agriculture Production Systems (RUC-APS)

The project was about enhancing and implementing knowledge-based ICT solutions within high-risk and uncertain conditions for agriculture production systems. It aims to enhance decision-making throughout the agriculture value chain.

Uncertainty in agriculture is not new. Since the 1970s, agricultural economics has primarily focused on seven main topics: agricultural environment and resources; risk and uncertainty; food and consumer economics; prices and incomes; market structures; trade and development; and technical change and human capital. These topics still remain open in terms of required support and solutions. Uncertainty on these topics is, most of the time, the cause of ineffective decision-making processes for farmers and the other participants in the agriculture value chain. Indeed, there is a large range of uncertainties to uncover.

From the genetic design of seeds to planting and harvest-related processes, covering farmers' desired productivity as well as the expected end-customer service level, the RUC-APS project aims to advance knowledge in agriculture-based decision-making through high-impact research. This research integrates real-life agriculture value chain requirements, land management alternatives at a variety of scales, unexpected weather and environmental conditions, and innovation in the development of agriculture production systems and their impact on end users, under participatory ICT developments.

This project allowed us to test the developed system in roughly 15 tests. These tests included academics but also nonacademic persons working in the agriculture domain. Most of the time, the multicriteria process was used, but a vote process was used as well.

The same multicriteria GRUS process as in Toulouse, Waterloo, and Recife was applied; for some tests, we also modified this process by removing the reduction steps (the alternative and criteria reduction tools). In these tests, few criteria and alternatives were generated, so it was not useful to group them into clusters (the purpose of the reduction tools); the alternatives and criteria were used directly in the multicriteria evaluation tool. A quite simple vote process was also designed, composed of a generation step (brainstorming tool) and a vote step (vote tool).

VGI4Bio

The VGI4Bio project (VGI users and data-centered methods for the analysis of farmland biodiversity indicators: a participative SOLAP approach for opportunistic data) aims to produce meaningful farmland biodiversity indicators.

The conservation of biodiversity and its link with agriculture currently represents a major challenge. Observation data may be needed at large spatial or temporal scales to encompass a wide range of situations in order to achieve meaningful results. This implies that thousands of observers need to be mobilized, at a cost that would be prohibitive if they had to be paid. Therefore, in this project we will define a set of statistical tools and observer-behavior models to extract and visualize accurate and relevant information from opportunistic (VGI) data, in order to produce meaningful farmland biodiversity indicators. Moreover, since VGI systems do not provide advanced analysis tools, we will use Spatial OLAP to analyze those farmland biodiversity indicators. Since the final users are diverse and numerous, we will also define a new group decision-making SOLAP design methodology to implement Spatial OLAP models for farmland biodiversity indicators.

We used the GRUS system for several tests in this context. Two processes were used: the multicriteria process as well as the vote process (as for the RUC-APS project).

Use Cases Main Results

Based on these three different projects, and according to the feedback given by the end users, we can conclude that GRUS is a good support for a group engaged in a decision-making process. End users are confident during the sessions thanks to the anonymity functionality, and they very much appreciate being able to express their own preferences.

Nevertheless, one issue in using this system is understanding the multicriteria process, which is quite complex and includes several nonbasic steps. To address this issue, we designed a methodology to make the system easier to understand: we first give a classical presentation of the system and then show a video recording of one GRUS session. With this video, end users feel more comfortable (see Grigera et al. 2019).

Another process used was the vote process, which is very easy to understand. Through the GRUS system, we presented the results of two aggregation rules: Borda (see Zahid and De Swart 2015) and Condorcet (see Bottero et al. 2018). For some cases, the results (the final ranked lists) were not the same under the two rules, which was difficult for end users to interpret. We therefore decided to present the result of only one rule in each session.
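Such a disagreement between the two rules is easy to reproduce. The sketch below (with hypothetical ballots, not data from our sessions) implements a plain Borda count and shows a preference profile where the alternative that wins every pairwise (Condorcet) comparison is not the Borda winner:

```python
from collections import Counter

def borda(rankings):
    """Borda count: a ballot ranking n alternatives gives n-1 points
    to its top choice, n-2 to the next, ..., and 0 to the last."""
    scores = Counter()
    for ballot in rankings:
        n = len(ballot)
        for pos, alt in enumerate(ballot):
            scores[alt] += n - 1 - pos
    return scores


# Hypothetical ballots: 3 voters rank A > B > C, 2 voters rank B > C > A.
# A beats both B and C 3-2 in pairwise comparisons (Condorcet winner),
# yet B collects the highest Borda score.
ballots = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2
result = borda(ballots)  # {'B': 7, 'A': 6, 'C': 2}
```

Profiles like this one explain why showing both rankings side by side confused end users.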

Conclusion and Perspectives

In this chapter, we have presented a group multicriteria approach that has been implemented in the GRUS system. The proposed multicriteria approach is formalized by a workflow following the general GSS philosophy. This workflow can be used in different decision contexts and for different purposes: for example, it can be integrated into a process for solving a problem, choosing an option, ranking alternatives, and so on. The GRUS system offers flexible processes: the group can build its own process using the various implemented tools. The two processes that have been tested are the multicriteria process and the vote process.

The first process, the multicriteria process, allows users to give their own preferences and to jointly construct a global evaluation of the different options (alternatives) of the problem to be solved. This co-construction of the evaluation lets users feel more comfortable with, and satisfied by, the result, even if the chosen option is not their preferred one. All the tests conducted showed that this process makes consensus easy to reach. Nevertheless, we also saw that it is not easy to understand.

The second process, the vote process, is very easy to understand. Nevertheless, we can assume that it is more appropriate when the options of the problem to be solved are quite simple and easy to compare, when there is no need to pay attention to the nature of the options, or in situations where a decision must be made quickly.

We can thus conclude that when the alternatives are complex, the multicriteria process is suitable; conversely, when the alternatives are simple, the vote process is more efficient.

The MCDM approach is not widely used in group support systems. With this study, we show how useful it is for a group engaged in decision-making to follow structured processes that use an MCDM approach in at least one step. From a toolbox point of view, using a GSS improves the flexibility of MCDM methodologies, which benefit from all the collaborative tools and processes existing in the GSS literature. From a more conceptual point of view, combining MCDM methodologies and GSS allows more efficient support for a group of decision-makers.

As perspectives for this work, we still have to study other processes using the other implemented tools.


Acknowledgment

Authors of this publication acknowledge the contribution of the Project 691249, RUC-APS: Enhancing and implementing knowledge-based ICT solutions within high-risk and uncertain conditions for agriculture production systems (www.ruc-aps.eu), funded by the European Union under the funding scheme H2020-MSCA-RISE-2015.

This work is partially supported by the project ANR-17-CE04-0012 VGI4bio.

References

  1. Adla A, Zarate P, Soubie JL (2011) A proposal of toolkit for GDSS facilitators. Group Decis Negot. https://doi.org/10.1007/s10726-010-9204-8
  2. Alabdulkarim A, Macaulay LA (2005) Facilitation of e-meetings: state-of-the-art review. In: Proceedings of the 2005 IEEE international conference on e-technology, e-commerce and e-service (EEE), pp 728–735
  3. Bostrom R, Anson R, Clawson et al (1993) Group facilitation and group support systems. In: Jessup L, Valacich J (eds), pp 146–168
  4. Bottero M, Ferretti V, Figueira JR et al (2018) On the Choquet multiple criteria preference aggregation model: theoretical and practical insights from a real-world application. Eur J Oper Res. https://doi.org/10.1016/j.ejor.2018.04.022
  5. Briggs RO, De Vreede GJ (2009) Thinklets: building blocks for concerted collaboration. University of Nebraska, Center for Collaboration Science
  6. Briggs RO, De Vreede GJ, Nunamaker JF, Tobey D (2001) ThinkLets: achieving predictable, repeatable patterns of group interaction with group support systems (GSS). In: Proceedings of the annual Hawaii international conference on system sciences. https://doi.org/10.1109/HICSS.2001.926238
  7. Briggs RO, De Vreede GJ, Nunamaker JF (2003) Collaboration engineering with thinklets to pursue sustained success with group support systems. J Manag Inf Syst 19:31–64
  8. Camilleri G, Zarate P, Viguie P (2011) A timing management banner for supporting group decision making. In: Proceedings of the 2011 15th international conference on computer supported cooperative work in design, CSCWD 2011
  9. Choquet G (1987) Theory of capacities. Ann Inst Fourier. https://doi.org/10.5802/aif.53
  10. de Silva Oliveira LG, de Almeida-Filho AT (2018) A new PROMETHEE-based approach applied within a framework for conflict analysis in evidence theory integrating three conflict measures. Expert Syst Appl. https://doi.org/10.1016/j.eswa.2018.07.002
  11. de Vreede G-J (2014) Two case studies of achieving repeatable team performance through collaboration engineering. MIS Q Exec 13:115–129
  12. de Vreede G-J, Briggs R (2018) Collaboration engineering: reflections on 15 years of research & practice. In: Proceedings of the 51st annual Hawaii international conference on system sciences, HICSS 2018
  13. De Vreede GJ, Vogel D, Kolfschoten G, Wien J (2003) Fifteen years of GSS in the field: a comparison across time and national boundaries. In: Proceedings of the 36th annual Hawaii international conference on system sciences, HICSS 2003
  14. Den Hengst M, Adkins M (2007) Which collaboration patterns are most challenging: a global survey of facilitators. In: Proceedings of the annual Hawaii international conference on system sciences
  15. DeSanctis G, Gallupe RB (1987) A foundation for the study of group decision support systems. Manag Sci 33:589–609. https://doi.org/10.1287/mnsc.33.5.589
  16. Dong Y, Zha Q, Zhang H, Kou G, Fujita H, Chiclana F, Herrera-Viedma E (2018) Consensus reaching in social network group decision making: research paradigms and challenges. Knowl-Based Syst 165:3–13
  17. Grigera J, Sakka A, Bosetti G et al (2019) UX challenges in GDSS: an experience report. Submitted to the GDN conference
  18. Helquist JH, Kruse J, Adkins M (2008) Participant-driven collaborative convergence. In: Proceedings of the annual Hawaii international conference on system sciences
  19. Limayem M, DeSanctis G (2000) Providing decisional guidance for multicriteria decision making in groups. Inf Syst Res. https://doi.org/10.1287/isre.11.4.386.11874
  20. Limayem M, Khalifa M, Ma S (2006) Human versus automated facilitation in the GDSS context. In: IEEE international conference on systems, man and cybernetics
  21. Mareschal B, Brans JP, Macharis C (1998) The GDSS PROMETHEE procedure: a PROMETHEE-GAIA based procedure for group decision support. J Decis Syst 7:283–307
  22. Marttunen M, Lienert J, Belton V (2017) Structuring problems for multi-criteria decision analysis in practice: a literature review of method combinations. Eur J Oper Res 263:1–17
  23. Nunamaker JF, Briggs RO, Mittleman DD et al (1996) Lessons from a dozen years of group support systems research: a discussion of lab and field findings. J Manag Inf Syst. https://doi.org/10.1080/07421222.1996.11518138
  24. Rosenhead J, Mingers J (2001) Rational analysis for a problematic world revisited: problem structuring methods for complexity, uncertainty and conflict. Wiley, Chichester/London
  25. Schwarz R (2002) Ground rules for effective groups. In: The skilled facilitator. Jossey-Bass, Hoboken, pp 120–160
  26. Sibertin-Blanc C, Zaraté P (2014) Cooperative decision making: a methodology based on collective preferences aggregation. In: Lecture notes in business information processing
  27. Sprague RH, Carlson ED (1983) Building effective decision support systems. Inf Process Manag. https://doi.org/10.1016/0306-4573(83)90011-0
  28. Wong Z, Aiken M (2003) Automated facilitation of electronic meetings. Inf Manag. https://doi.org/10.1016/S0378-7206(03)00042-9
  29. Yager RR (1988) On ordered weighted averaging aggregation operators in multicriteria decisionmaking. IEEE Trans Syst Man Cybern. https://doi.org/10.1109/21.87068
  30. Zahid MA, De Swart H (2015) The Borda majority count. Inf Sci. https://doi.org/10.1016/j.ins.2014.10.044
  31. Zaraté P, Konate J, Camilleri G (2013) Collaborative decision making tools: a comparative study based on functionalities. In: Martinovski B (ed) Group decision and negotiation (GDN), pp 111–122
  32. Zaraté P, Kilgour DM, Hipel K (2016) Private or common criteria in a multi-criteria group decision support system: an experiment. In: Lecture notes in computer science
  33. Zaraté P, Camilleri G, Kilgour M (2017) Multi-criteria group decision making with private and shared criteria: an experiment. In: Bajwa D, Koeszegi ST, Vetschera R (eds) Group decision and negotiation (GDN), pp 31–42
  34. Zhao JL, Nunamaker JF, Briggs RO (2002) Intelligent workflow techniques for distributed group facilitation. In: Proceedings of the annual Hawaii international conference on system sciences

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. IRIT, Toulouse Université, Toulouse Cedex 9, France
