Collaboration Engineering for Group Decision and Negotiation

  • Gert-Jan de Vreede
  • Robert O. Briggs
  • Gwendolyn L. Kolfschoten
Living reference work entry


Collaborative work is essential to the success of modern organizations. Many organizations could benefit from the use of advanced collaboration technologies and collaboration professionals, such as facilitators. However, these technologies are often too complex for practitioners to use without professional support, and collaboration professionals are too expensive for many groups who could benefit from their help. Collaboration Engineering is an approach to designing collaborative work practices for high-value recurring tasks and deploying those designs for practitioners to execute for themselves without ongoing support from expert facilitators. Collaboration engineers design collaborative work practices using a facilitation pattern language consisting of “thinkLets” – facilitation techniques that create predictable patterns of collaboration. Extensive research and practice demonstrate the feasibility and effectiveness of the approach. This chapter summarizes the Collaboration Engineering approach in general and the thinkLet concept in detail using an illustrative case in a governmental organization.


Group work is challenging, especially when it involves negotiation and decision making. Group collaboration processes can benefit from both tool support and process support. Key examples of these are Group Support Systems (GSS) and facilitators. Groups can use a GSS software suite to focus and structure their deliberations in ways that reduce the cognitive costs of communication, deliberation, information access, and distraction among members as they make joint cognitive effort toward their goals (Davison and Briggs 2000; de Vreede 2014). A GSS offers an integrated set of tools to support groups moving through their work practices to achieve their goals. However, extensive field experience with GSS shows that the technology can be challenging for an organization to use in a sustained way, making it difficult to reap ongoing benefits (Agres et al. 2004; Briggs et al. 2003a). Researchers have developed the Collaboration Engineering (CE) approach to address this issue. CE is an approach to designing collaborative work practices for high-value recurring tasks, and deploying those designs for practitioners to execute for themselves without ongoing support from professional facilitators (de Vreede and Briggs 2019). CE offers a sustainable approach to the deployment of collaboration support to improve group decision-making and negotiations. This chapter will explain the CE approach and the ways in which it helps to overcome the challenges in the design and implementation of collaboration support to improve group work and group decision and negotiation.

Collaboration is a critical skill and competence in organizations (Boughzala and de Vreede 2015). Frost and Sullivan surveyed 946 decision makers using a collaboration index and found that collaboration is a key driver of performance in organizations. Its impact is twice that of strategic orientation and five times that of market and technological turbulence (Frost and Sullivan 2007). However, unsupervised groups face many challenges with collaboration, including but not limited to free riding, dominance, groupthink, and inefficiency (Nunamaker et al. 1997; Schwarz 1994). Especially when group size increases, productivity tends to decrease, while conflict tends to increase (Steiner 1972). Another factor that may increase the challenges of collaboration is the involvement of multiple actors and stakeholders, which intensifies interdependencies and the complexity of conflict resolution (Bruijn and Heuvelhof 2008). Thus, it is not surprising that there is a growing need for guidance, including social and behavioral rules (Haan and Hof 2006). To overcome the challenges of collaboration, groups can benefit from collaboration support. Collaboration support can enable groups to accomplish their goals more efficiently and effectively (e.g., Fjermestad and Hiltz 2001; de Vreede et al. 2003b; de Vreede 2014). Groups can use support from facilitators, people who are skilled in creating interventions to support effective and efficient collaboration (Kolfschoten et al. 2012b), or they can use collaboration support technology such as Group (Decision) Support Systems (Ackermann, Ackermann and Eden, Parjit and Carreras, Yearworth and White).

Yet, the business case for return on collaboration support investment remains an issue (Agres et al. 2005; Briggs et al. 1999, 2003a; Post 1993). To address this issue, two strategies are possible: eliminating the need for the distinct role of process leader or facilitator, through integration of rules in the technology (e.g., Briggs et al. 2013), and task separation for the facilitation role, separating the design task from the execution task (e.g., Kolfschoten et al. 2008). CE is an approach in line with the second strategy. In CE, a master facilitator (called collaboration engineer) designs a collaborative work practice. This work practice is documented and then transferred through training to practitioners. Practitioners are domain experts without significant facilitation experience. The cornerstone of the CE approach is the thinkLet: the smallest unit of intellectual capital to create a pattern of collaboration (Briggs et al. 2003a). A thinkLet provides a transferable, reusable, and predictable building block for the design of a collaboration process (de Vreede et al. 2006a). In short, thinkLets are facilitation best practices. The use of thinkLets helps to increase the transferability and predictability of the process design (Kolfschoten et al. 2011, 2012a).

In this chapter we will first articulate the business case for collaboration support. Next, we will describe the CE approach and the thinkLet concept in more detail, and discuss their role in the design and deployment of sustainable collaborative work practices. Finally, we will present a case study in which the CE approach was used to support the transfer of a recurring collaborative work practice in a governmental setting.

The Business Case of Collaboration Support

In the field, GSS-supported meetings have often been judged to be more efficient and effective than manual meetings, and participants are more satisfied in a GSS meeting than in a traditional meeting (Fjermestad and Hiltz 2001; de Vreede 2014). In a benchmark study where Boeing, ING-NN, IBM, and EADS-M were compared, efficiency improvements of more than 50% were reported both in terms of meeting time (person hours) and project duration. In one organization, GSS users responded to a survey where they rated “effectiveness compared to manual” and “user satisfaction” at 4.1 on a 5-point scale (de Vreede et al. 2003b). At each of these sites, the meetings were designed and guided by internal (IBM and Boeing) or external (ING-NN and EADS-M) facilitators.

A key task of facilitators lies in choosing the right tools and techniques, which requires significant skill and expertise that is not always available in the group. Such groups can therefore benefit from the support of facilitators (Ackermann 1996; Dennis and Wixom 2002; Griffith et al. 1998; Kolfschoten et al. 2012b; Wheeler and Valacich 1996). De Vreede et al. (2002) found that from a user perspective, the facilitator is the most critical success factor in a GSS meeting. As Clawson et al. (1993) point out, a facilitator has a large number of tasks that require skills and expertise.

Notwithstanding their reported benefits, case studies have indicated that implementing GSS and facilitation support in organizations is particularly difficult to sustain over the long term and may lead to abandonment (Agres et al. 2005; Briggs et al. 1999; Munkvold and Anson 2001; Vician et al. 1992). In the organizational setting, group meetings are diverse and present many difficulties to those organizing them (Clawson and Bostrom 1996). As a result, group facilitation requires complex cognitive skills (Ackermann 1996; Hengst et al. 2005). Training a GSS facilitator takes time and should involve the experience of facilitating and influencing group dynamics (Ackermann 1996; Clawson and Bostrom 1996; Kolfschoten et al. 2011, 2012b; Post 1993; Yoong 1995). This makes facilitation support difficult to implement. Moreover, even when a skilled facilitator is found, sustaining such support over time is challenging: sustained use depends heavily on a champion in the organization who advocates and stimulates use (Briggs et al. 2003a; Munkvold and Anson 2001; Pollard 2003).

Besides the deployment challenges discussed above, it is difficult to create a business case for the implementation of collaboration support in an organization (Agres et al. 2005; Briggs et al. 2003a; Post 1993). Although the added value is substantial (Fjermestad and Hiltz 2001; de Vreede et al. 2003b), it is difficult to predict and document this added value (Briggs et al. 2003a). This difficulty may be due, in part, to the fact that collaboration support (facilitator and GSS technology) poses highly visible costs whereas improvements may be less visible and are difficult to measure and assign to specific budget categories. Collaboration often contributes to important processes in the organization, but not often to the central production process. Further, collaboration support is often required for “special” events, which do not occur on a frequent basis, making the generated value unpredictable in a budget plan (Briggs 2006). This makes it easier to eliminate such facilities during a budget crunch (Agres et al. 2005; Briggs et al. 2003a). In the next section we introduce the CE approach that is aimed at addressing these challenges.

The Collaboration Engineering Approach to Designing and Deploying Collaboration Support

Collaboration Engineering (CE) is an approach to designing collaboration processes. The following definition outlines both the scope and key elements of the CE approach (Briggs et al. 2006):

Collaboration Engineering is an approach to create sustained collaboration support by designing collaborative work practices for high-value recurring tasks, and deploying those as collaboration process prescriptions for practitioners to execute for themselves without ongoing support from professionals.

In CE, we aim to offer process and/or technology support in a way that enables the organization to derive value from this collaboration support on an on-going basis without the need to rely on collaboration professionals such as facilitators (Briggs et al. 2003a; de Vreede and Briggs 2019). CE focuses on the design of collaborative work practices to accomplish a specific type of task in an organization: a recurring, high-value task. This focus has several reasons. First, the return on investment on the resources devoted to the CE effort increases each time the work practice is executed. Second, the return on investment on the effort to train employees to run the collaboration process is high, and their learning curve will be faster, as they can learn from previous mistakes instead of facing new challenges in each unique session they facilitate. Additionally, the recurring benefits of supporting a high-value task make the support important to the organization, which decreases the likelihood that it will be abandoned (Kolfschoten et al. 2008; de Vreede and Briggs 2019).

Collaboration support consists of process support and technology or tool support. For these two types of support we can distinguish a design task (to design the process and the technology), an application task (to apply the process and to use the technology), and a management task (to manage the implementation and control of the process and to manage the maintenance of the technology). Many organizations, however, distinguish only one role for collaboration support: a facilitator (Kolfschoten et al. 2012b). The facilitator often is responsible for the design and execution of the collaboration process and in many cases also takes care of the project management (e.g., acquisition of sessions, management of the facilitation team, and business administration) and technology application (operating the technology). External roles are often the design of the technology and the maintenance of the technology (hardware and software maintenance) (Kolfschoten et al. 2008, 2012b).

In CE, the abovementioned tasks are divided among several roles which enables outsourcing and dividing the workload of collaboration support (Kolfschoten et al. 2012b). The two new roles introduced in the CE approach are the Practitioner and the Collaboration Engineer. Further, the project management with respect to the collaboration support is organized differently.

Practitioners are domain experts, trained to become experts in conducting one specific collaboration process. They execute the designed collaboration process as part of their regular work (de Vreede and Briggs 2019). Practitioners are not all-round facilitators. They neither have the skills to design collaborative work practices nor the experience to be flexible and adapt collaboration processes when the needs of a group change during the process’ execution. When using collaboration support technology, the technical execution can be performed by a single practitioner, or two practitioners may work together, one moderating the process while the other runs the technology. However, since this would be a standardized, routine process, there would be no need for skilled professional technical facilitators (also called chauffeurs or technographers) who know all features and functions of the technology platform and can make informed choices about which function to use in response to unanticipated demands. Rather, practitioners need to know only the configurations and operations relevant to their specific process (Briggs et al. 2013). The skills required for the application roles in collaboration support according to the CE approach are therefore much more limited compared to those of the professional facilitator.

Since the practitioner will not have the skills to adapt the process on the fly, and the collaboration engineer will not be on hand to correct any deficiencies in the process design as it is executed by the practitioner (Kolfschoten et al. 2005), there is a need for a much more robust and predictable collaboration process design. Therefore, the process design skills required by the collaboration engineer are much more extensive than those required by either a facilitator or a practitioner. The processes collaboration engineers create must be well-tested, predictable, reusable, and easily transferable to practitioners who are not group process professionals. To create such a process design, a collaboration engineer must be able to predict the effect of the interventions that are prescribed in the process design. Therefore, collaboration engineers need to be highly experienced facilitators (de Vreede and Briggs 2019).

In CE, the overall responsibility for the recurring task and the roll-out of the designed collaboration process typically rests not with a practitioner but with a process implementation manager. A process implementation manager is responsible for the organizational deployment process and for monitoring progress and outcomes. Also, the technology is often managed by another person. Most organizations have a special department for technology support and maintenance, and such a department could also maintain the technology for collaboration support. The new role division is displayed in Fig. 1.
Fig. 1

Role division in Collaboration Engineering (Kolfschoten et al. 2008)

The CE approach consists of an iterative sequence of steps from an investment decision to collaboration process design and full deployment. The process is visualized in Fig. 2.
Fig. 2

The collaboration engineering approach

First, the collaboration engineer evaluates whether the work practice can be supported and improved by means of a repeatable collaboration process (Briggs and Murphy 2011). Next, the decision to invest in the design of the process and in the acquisition and training of collaboration support tools is made. To design the collaborative work practice, the task and stakeholders involved will be analyzed to determine relevant process requirements. Based on this, the collaboration process design will be composed as a sequence of steps. This process design is piloted and validated to ensure it fits the requirements and renders predictable, high-quality results. Once the process design is approved, it is deployed in the organization. Practitioners are selected and trained, and the first practitioners will run the collaborative work practice. Based on this experience, the process can be adapted again. Finally, the complete practitioner team is trained, and they are encouraged to form a community of practice. This community will take ownership of the collaborative work practice and continuously improve it. We will describe these steps in more detail below.

Investment Decision

CE has a rather distinct scope. This scope has three components: the economic component, the collaboration component, and the domain of application. First, to meet the economic scope, the process should be recurring and of sufficient value to justify the development and deployment of collaboration support. Second, it should be a truly collaborative task, meaning that it requires high interaction between participants. Third, it should be a knowledge-intensive and goal-oriented task. CE is not meant for general teambuilding, negotiation, or conflict resolution.

Task Analysis

In the task analysis phase, a team is formed with stakeholders from the organization, including the project manager of the CE project. The team analyzes the task and defines the goal, deliverable, and other requirements. Interviews or meetings with the relevant stakeholders will give insight into the goal and task. A goal can be to deliver a tangible result, for example, to make a decision or to solve a problem. A goal can also be a state or group experience, like increasing awareness about a problem or creating shared understanding.


Collaboration Process Design

In this phase, the collaboration process is built based on the requirements established in the task analysis phase. The approach for collaboration process design will resemble a design approach or problem-solving method, with one key difference: instead of creating solutions or alternatives from scratch, a library of known techniques is used as a source to select and combine techniques to form a collaboration process design. There are three key steps in the design phase: the decomposition of the process into small activities, the choice of facilitation techniques for each activity, and the validation of the design.

During the decomposition step, the discrete activities that a group has to complete to achieve their goal are determined. During the next step, facilitation techniques necessary to execute each of these activities collaboratively are selected. For this purpose, the CE design approach uses a repertoire of thinkLets. Experience has shown that practitioners and novice facilitators can use thinkLets and indeed create the intended patterns of collaboration (see, e.g., Giesbrecht et al. 2017; Kolfschoten and Veen 2005; Simmert et al. 2017; de Vreede 2014). In the third and final step, the design is validated based on several criteria, e.g., goal achievement and match between process complexity and practitioner competence.

The design steps have an iterative nature, similar to iterative approaches in software engineering. The validation is, however, a key step in the process; it is critical that the design has sufficient quality since flaws will result in unsuccessful transfer to, and execution by, the practitioners, which could lead to abandonment of the project.
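As a minimal illustration, the three design steps can be sketched as a selection from a thinkLet repertoire. This is purely illustrative: the function and repertoire below are our own simplification, and the real design activity is a skilled, iterative human task in which a collaboration engineer weighs many trade-offs per activity.

```python
# Illustrative sketch of the three design steps: decompose the task into
# activities, pick a matching thinkLet for each, then validate the design.
# The repertoire below names a few documented thinkLets per pattern, but the
# mapping is simplified for illustration.
REPERTOIRE = {
    "generate": ["FreeBrainstorm", "LeafHopper"],
    "reduce": ["FastFocus"],
    "evaluate": ["StrawPoll"],
    "consensus building": ["CrowBar"],
}

def design_process(activities):
    """Map each (activity, required pattern) to a candidate thinkLet."""
    design = []
    for activity, pattern in activities:
        candidates = REPERTOIRE.get(pattern, [])
        if not candidates:
            raise ValueError(f"No thinkLet in the repertoire for pattern: {pattern}")
        # A collaboration engineer would weigh alternatives here; we just
        # take the first candidate.
        design.append((activity, candidates[0]))
    return design

def validate(design, max_steps=10):
    """Toy validation criterion: match between process complexity
    (number of steps) and practitioner competence."""
    return 0 < len(design) <= max_steps

# Decomposition of a hypothetical risk-assessment task into activities:
steps = design_process([
    ("list integrity risks", "generate"),
    ("select key risks", "reduce"),
    ("rate risk severity", "evaluate"),
])
```

The iteration described above would correspond to revising the activity decomposition or swapping thinkLets until `validate` (and its real-world counterparts, such as goal achievement) is satisfied.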


Transfer

In the transfer phase, the collaboration engineer transfers the collaboration process prescription to the practitioners through training and coaching efforts (Kolfschoten et al. 2011; de Vreede 2014). To this end, the collaboration engineer documents the process prescription such that it becomes as easy as possible for the practitioners to grasp the process and internalize it (Kolfschoten et al. 2012a). The transfer phase also includes the first time that the practitioner prepares for the application of the process. The practitioner then has to execute the process prescription with a specific group in their organization and needs to prepare and instantiate different aspects of the process prescription. The last learning and transfer effort occurs during the first trials of the collaboration process execution, as the practitioners gain more and more experience with the process execution. During the transfer phase, shortcomings in the collaboration process design may be discovered and the design may consequently be updated.

Implementation and Sustained Use

When the transfer phase is complete, the process can be implemented on a full scale. This requires planning and organization. As in facilitation, the success of the practitioner is key to the successful implementation of the process (Nunamaker et al. 1997; de Vreede et al. 2003a). When practitioners are trained and have performed well at their first sessions, the process should be rolled out in the organization and the organization should gradually take ownership of the process. To establish this, management should stimulate the use of the collaboration process through controls and incentives. Furthermore, when the project involves multiple practitioners, it may be valuable to set up a community of practice to exchange experiences and lessons learned. Finally, it is important that the process and its benefits are evaluated on a regular basis.


ThinkLets

To design a predictable, transferable, reusable collaboration process, the CE approach uses design patterns called thinkLets. ThinkLets represent a pattern language for designing collaborative work practices (Kolfschoten et al. 2006; de Vreede et al. 2006a). Design patterns were first described by Alexander (1979) as reusable solutions to address frequently occurring problems. In Alexander’s words: “a [design] pattern describes a problem which occurs over and over again and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice” (1979). A thinkLet is a design pattern of a collaborative activity that moves a group toward its goals in predictable, repeatable ways (Kolfschoten et al. 2006; de Vreede et al. 2006a). ThinkLets can be combined to create a sequence of steps that can be used by a group to execute the steps of a collaborative work practice in order to achieve collaborative goals. As with other pattern languages, thinkLets are used as design patterns, as design documentation, as a language for discussing complex and subtle design choices, and as training devices for transferring designs to practitioners in organizations (Kolfschoten et al. 2011, 2012a; de Vreede et al. 2006a; de Vreede 2014).

A thinkLet provides all information required for a team to create a pattern of collaboration. Six generic patterns of collaboration have been identified, and for each, several subpatterns are recognized (Briggs et al. 2006; Kolfschoten et al. 2014):


Generate

The generate pattern is defined as moving from having fewer to having more concepts in the pool of concepts shared by a group. There are three subpatterns:
  • Creativity: Move from having fewer to having more new concepts in the pool of concepts shared by the group.

  • Gathering: Move from having fewer to having more complete and relevant information shared by the group.

  • Reflecting (see also Evaluate): Move from less to more understanding of the relative value or quality of a property or characteristic of a concept shared by the group.


Reduce

The reduce pattern of collaboration deals with moving from having many concepts to a focus on fewer concepts that a group deems worthy of further attention. There are three subpatterns:
  • Filtering: Move from having many concepts to fewer concepts that meet specific criteria according to the group members.

  • Summarizing: Move from having many concepts to having a focus on fewer concepts that represent the knowledge shared by group members.

  • Abstracting: Move from having many detailed concepts to fewer more generic concepts that reduce complexity.


Clarify

The clarify pattern of collaboration deals with moving from having less to having more shared understanding of concepts, words, and information. There are two subpatterns:
  • Sensemaking: Move from having less to having more shared meaning of context and possible actions, in order to support principled, informed action.

  • Building shared understanding: Move from having less to more shared understanding of the concepts shared by the group and the words and phrases used to express them.


Organize

The organize pattern involves moving from less to more understanding of the relationships among concepts the group is considering. There are three subpatterns:
  • Categorizing: Move from less to more understanding of the categorical relationships among concepts the group is considering.

  • Sequencing: Move from less to more understanding of the sequential relationships among concepts the group is considering.

  • Causal decomposition: Move from less to more understanding of the causal relationships among concepts the group is considering.


Evaluate

The evaluate pattern involves moving from less to more understanding of the relative value of the concepts under consideration. There are three subpatterns:
  • Choice social/rational: Move from less to more understanding of the concept(s) most preferred by the group.

  • Communication of preference: Move from less to more understanding of the perspective of participants with respect to the preference of concepts the group is considering.

  • Reflecting (see also Generate): Move from less to more understanding of the relative value or quality of a property or characteristic of a concept shared by the group.

Consensus Building

Consensus is usually defined as an agreement, acceptance, lack of disagreement, or some other indication that stakeholders commit to a proposal. There are two subpatterns:
  • Building agreement: Move from less to more shared preferences among participants with respect to concepts the group is considering.

  • Building commitment: Move from less to more willingness to commit among participants with respect to proposals the group is considering.
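The taxonomy above can be summarized in a small data model. The sketch below is purely illustrative; the identifiers are our own choices for this chapter, not a published schema or API.

```python
from enum import Enum

class Pattern(Enum):
    """The six generic patterns of collaboration (Briggs et al. 2006)."""
    GENERATE = "generate"
    REDUCE = "reduce"
    CLARIFY = "clarify"
    ORGANIZE = "organize"
    EVALUATE = "evaluate"
    CONSENSUS_BUILDING = "consensus building"

# Subpatterns recognized for each generic pattern, as listed above.
# Note that "reflecting" appears under both generate and evaluate.
SUBPATTERNS = {
    Pattern.GENERATE: ["creativity", "gathering", "reflecting"],
    Pattern.REDUCE: ["filtering", "summarizing", "abstracting"],
    Pattern.CLARIFY: ["sensemaking", "building shared understanding"],
    Pattern.ORGANIZE: ["categorizing", "sequencing", "causal decomposition"],
    Pattern.EVALUATE: ["choice social/rational", "communication of preference",
                       "reflecting"],
    Pattern.CONSENSUS_BUILDING: ["building agreement", "building commitment"],
}
```

Such a model is useful mainly as a classification aid: each thinkLet in a repertoire can be tagged with the pattern(s) and subpattern(s) it evokes, which supports thinkLet selection during process design.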

ThinkLet Structure

ThinkLets are based on a core set of elementary behavioral rules that, when combined, create predictable dynamics in the group and yield a deliverable with a predictable structure (Kolfschoten et al. 2006; Kolfschoten and Houten 2007; de Vreede et al. 2006a). To some extent, thinkLets also produce predictable states of mind among participants (e.g., greater understanding, broader perspectives, and more willingness to commit). Facilitators, collaboration engineers, and practitioners have executed thinkLets repeatedly in a variety of contexts for almost two decades and report that each execution produces a similar pattern of collaboration, and a similar result in terms of participants’ behaviors (see, e.g., Acosta and Guerrero 2006; Bragge et al. 2005; Fruhling and de Vreede 2005; Giesbrecht et al. 2017; Harder et al. 2005; Marques and Ochoa 2014; Simmert et al. 2017; de Vreede 2014). Thus, thinkLets have predictable effects on group processes and their outcomes, and these effects have been recorded in thinkLet documentation. Researchers have also verified these effects by reviewing the transcripts of hundreds of GSS sessions (Kolfschoten et al. 2004). For some thinkLets, experimental research has been performed to compare their effects (Santanen et al. 2004). To further increase predictability, for some thinkLets theoretical models have been developed to understand their effects on the patterns of collaboration and results that are created when they are used (e.g., Briggs et al. 2006; Santanen et al. 2004; Seeber et al. 2017). Through the use of parsimonious rules, misunderstanding can be reduced, which is likely to strengthen the predictability of group behavior and process outcomes (Santanen 2005; de Vreede et al. 2006a).

Many books and websites describe useful, well-tested facilitation techniques. A key distinction between such techniques and thinkLets is in the degree to which they have been formally documented according to the design pattern principles. The current documentation convention (Kolfschoten et al. 2006, 2012a; de Vreede et al. 2006a) includes the following:


Name and Overview

Each thinkLet has a unique name. These names are typically selected to be catchy and amusing so as to be memorable and easy to teach to others (Buzan 1974). The name is also selected to invoke a metaphor that reminds the user of the pattern of collaboration the thinkLet will invoke, and visualized with an icon. Further, thinkLets are summarized to give an overview of the technique. The names, combined with the metaphor and icon, constitute the basis for a shared language.

Rule-Based Script

Each thinkLet must specify a set of rules that prescribe the actions that people in different roles must take using the capabilities provided to them under some set of constraints specified in parameters. ThinkLets can include several roles. For example, during brainstorming there can be a regular participant role and a devil’s advocate role (Janis 1972). Everything a practitioner could do and say to instruct the group in performing their actions based on the rules in the thinkLet is captured in the script. The script makes the thinkLet more readily transferable, because it frames the rules as spoken instructions and guided actions for the user. With the rules as a basis for the script, practitioners can adjust the script to their style while keeping the instructions that are essential for the thinkLet to succeed.

Selection Guidance

Each thinkLet must explain the pattern of collaboration that will emerge when the thinkLet is executed, and must include guidance about the conditions under which the thinkLet would be useful, and the conditions under which it is known not to be useful. To further support thinkLet selection, combinations, alternatives, and variations to the thinkLet are documented. Further, thinkLets are classified by the pattern of collaboration they evoke and the type of result they intend to create. Last, to help the collaboration engineer understand the thinkLet, insights, tips, and lessons learned from the field are documented to further clarify how a thinkLet might be used and how it may affect a group.

What Will Happen?

For the practitioner it is important to understand what will happen when the thinkLet is executed. In this part the result and effects of the thinkLet are explained. For this purpose, known pitfalls that might interfere with its success and suggested ways to avoid them are captured. Additionally, insights are offered to the practitioner about (a) the role of the thinkLet in the process, (b) the time allocated for the thinkLet, and (c) how to deal with delays in the process. Also, each thinkLet documentation must include at least one success story of how a thinkLet was used in a real-life task. Success stories help the user understand how the thinkLet might play out in a group working on a real task. Some thinkLet documenters also include failure stories to illustrate the consequences of specific execution errors or misapplications of the thinkLet.
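The documentation convention described above can be sketched as a record type. The field names below are our own paraphrase of the convention, not a published schema, and the example instance is a simplified, hypothetical rendering loosely modeled on the LeafHopper thinkLet from the CE literature.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ThinkLet:
    """Sketch of the thinkLet documentation convention (field names are
    illustrative, not a published schema)."""
    name: str                    # catchy, memorable name invoking a metaphor
    overview: str                # short summary of the technique
    pattern: str                 # pattern of collaboration it evokes
    rules: List[str]             # actions per role, under stated constraints
    script: str                  # spoken instructions framing the rules
    roles: List[str] = field(default_factory=lambda: ["participant"])
    selection_guidance: str = ""            # when to use / when not to use
    success_stories: List[str] = field(default_factory=list)  # at least one
    pitfalls: Dict[str, str] = field(default_factory=dict)    # pitfall -> remedy

# Hypothetical, simplified instance:
leafhopper = ThinkLet(
    name="LeafHopper",
    overview="Participants contribute ideas to several topic pages at once.",
    pattern="generate",
    rules=["Each participant may add ideas to any topic at any time."],
    script="Please review the topics and add your ideas wherever you can contribute.",
    selection_guidance="Useful when a group must brainstorm on several topics at once.",
    success_stories=["Used in a workshop to gather issues per risk area."],
)
```

Structuring the documentation this way makes the distinction in the text concrete: the rules carry what is essential for the thinkLet to succeed, while the script is a transferable framing that a practitioner may adjust to their own style.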

ThinkLets, like other design patterns, can be used in a variety of circumstances. They are documented in a way that a collaboration engineer can implement them with different technologies or tools, in different domains, and with different types of groups. Most thinkLets can be performed with pen and paper (de Vreede 2014). Some require data processing capacity as offered in GSS or stand-alone tools such as spreadsheets. Each thinkLet has a number of constraints that can be instantiated at process-design time or at execution time, to customize the thinkLet for a specific task in a specific domain. ThinkLets mostly define one participant role but can be modified to accommodate different roles. Last, thinkLets can be modified or instantiated to fit different time constraints within some range. These features enable collaboration engineers to create a reusable process with thinkLets, as they support accommodating the available resources, while at the same time offering the flexibility required to accommodate changes in the available resources among different instances of the recurring task. In this way, a recurring collaborative work practice can be supported using a thinkLet-based collaboration process design.

Many hundreds of facilitators, students, and practitioners have been trained to use thinkLets to support collaborative efforts (de Vreede and Briggs 2019). ThinkLets are easy to learn because their documentation is structured to contain the essential information thus limiting their complexity to a minimum (Kolfschoten et al. 2012a). Furthermore, they have mnemonics to make it easier to memorize them and to use them as a shared language in communities of practice (de Vreede et al. 2006a). Therefore, thinkLets offer a good basis for the training of practitioners to become skilled and independent in their ability to support the collaborative work practice (Kolfschoten et al. 2011).

Case Study: Transferring a ThinkLets-Based Collaboration Process Design for Integrity Assessment

Integrity of government organizations and institutions is one of the key pillars of a successful democracy. While procedures and policy can be used to avoid integrity violations, the integrity of the organization depends on the integrity of its agents. At the same time, a government organization is obliged to eliminate or control “tempting situations” in which agents have the opportunity to violate principles of integrity. It is therefore important for government organizations to assess the integrity risks in their organization and to find solutions for the most tempting situations regarding integrity violations. The integrity assessment described in this chapter was created by the Dutch national office for promoting ethics and integrity in the public sector. It was expected that many government organizations would want to use the integrity risk assessment instrument. For this purpose, additional facilitators needed to be trained in a relatively short period to support groups in the assessments. This task was outsourced to one of five future centers in the Netherlands, named “het Buitenhuis.”

The integrity support agency and the future center embraced the CE approach for two reasons. First, the agency needed to expand its cadre of practitioners to run the assessments. Second, both parties wanted to structure and standardize the integrity workshops to ensure their quality, even when performed by different practitioners. Furthermore, the center believed that groups would feel more comfortable in an integrity assessment facilitated by a member of their own or a similar organization, i.e., an integrity assessment practitioner. The session is an integrity assessment of the organization, which is similar to a risk assessment but focuses on possible integrity violations. As this topic is potentially sensitive, the anonymity offered by GSS support was considered to be of great value. Each session would take a full day and consist mostly of evaluation steps, both qualitative and quantitative. However, group discussion would be required to build consensus and to integrate brainstorming results into a group result.

The agency’s existing integrity assessment process was used as a starting point for the design of a repeatable thinkLets-based collaboration process that was to be transferred to other integrity assessment practitioners. The integrity assessment started with a guided discussion to increase awareness of integrity violations, followed by a “risk analysis” of integrity violations and an assessment of both hard and soft integrity measures to see how the organization dealt with integrity and how well that worked. Finally, suggestions for improvement were collected.

The actual design and deployment of the new integrity assessment process following the CE approach was performed by the third author of this chapter. We modified only a few steps in the original process to simplify the process and to avoid unpredictable outcomes of some of the steps. Furthermore, some of the instructions were changed to clarify the process and the intended result. To make these modifications, two practitioners from the future center were observed while they executed the process. Proposed changes were discussed with both the integrity support agency and the future center. Next, the thinkLets needed for the process were selected using the choice criteria as discussed in Kolfschoten and Rouwette (2006) and the collaboration process was documented according to the collaboration process prescription template (Kolfschoten and Hulst 2006; Kolfschoten et al. 2012a). To validate the resulting process design, it was discussed again with the practitioners from the future center and a pilot session based on the new process prescription was facilitated by the researcher.

To evaluate the value of the CE approach in the case study, we wanted to study whether practitioners, trained with a thinkLet-based collaboration process, could support the collaboration process with results similar to those of expert facilitators. To this end, we tested the following hypothesis:

A practitioner who executes a collaboration process design created and transferred according to the CE approach is not outperformed by a professional facilitator in terms of participants’ perceptions of the quality of the collaboration process, specifically:
  (a) Satisfaction with the process.

  (b) Satisfaction with the results.

  (c) Commitment to the process.

  (d) Efficiency of the process.

  (e) Effectiveness of the process.

  (f) Productivity of the process.

As this is a null hypothesis, it cannot be confirmed directly. However, we can collect evidence from different sources to show that the participants’ perceptions of the quality of this recurring collaborative task do not differ significantly between two treatments:
  • Process guidance by a practitioner (trained novice facilitator).

  • Process guidance by a professional facilitator.

Besides collecting quality perceptions from participants, we collected data that allowed us to distinguish practitioners from professional facilitators. Furthermore, we wanted to know whether the practitioners felt supported by the training and collaboration process prescription they received, and whether the process was executed as intended and resulted in predictable patterns of collaboration and results. We thus distinguished the following roles:
  • Practitioner (trainee, novice facilitator): A person from a government organization who is involved in or an expert on integrity matters, has no significant facilitation experience, and to whom the process design will be transferred.

  • Professional facilitator: A person who facilitates group processes on a regular basis as part of his/her job.

  • Participant: A person participating in an integrity assessment workshop.

  • Chauffeur: A person operating the GSS during an integrity assessment to assist the facilitator or practitioner; the chauffeur does not address the group to give instructions.

The researcher performed the roles of observer, professional facilitator, and chauffeur. For the study, the pilot of the new integrity assessment process was used as a benchmark. The pilot was executed with the researcher and several other professional facilitators in the facilitator role. At the conclusion of the pilot, the participants’ perceptions of the quality of the collaboration process were measured.

Practitioners who were to execute future integrity assessments were trained using the CE training program described in Kolfschoten et al. (2009a) and Kolfschoten et al. (2011). In addition, the practitioners’ perceptions of the transfer and the supportiveness of the collaboration process prescription and training were evaluated. After being trained, the practitioners executed the process design while being observed by the researcher. At the end of each process execution, the participants’ perceptions of the success of the process were measured. Finally, the practitioners’ perceptions of their own performance and of the transferability of the collaboration process design were evaluated.

Research Instruments

We used the following research instruments; details on the questionnaires and interview protocols can be found in Kolfschoten (2007):
  • A questionnaire to measure the participants’ perceptions of the quality of the collaboration process.

  • A questionnaire to evaluate the initial experience of the practitioners with facilitation, GSS, and group support.

  • A questionnaire to evaluate the practitioners’ perceptions of the transfer and supportiveness of the collaboration process prescription and training.

  • An interview protocol to evaluate the practitioners’ perceptions of their performance and the transferability of the collaboration process prescription.

Participants’ Perceptions of the Quality of Collaboration

We evaluated the quality of a collaboration process from a participant perspective. Each group that performed the collaborative integrity assessment task can judge the quality of the process and the quality of the outcome. For integrity assessments, objective outside judgments of the quality of the results are very difficult to acquire, as the outcome of the process is a perception of the integrity risks in the organization and, as such, can conflict with the perception of an outsider. The collaboration quality questionnaire measured six constructs: efficiency, effectiveness, productivity, commitment of resources, satisfaction with the results, and satisfaction with the process. For each construct, five questions were used with a Likert scale from 1 (strongly disagree) to 7 (strongly agree). The questions for satisfaction were taken from Briggs et al. (2003b).
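A minimal sketch of scoring such a questionnaire, assuming the common convention of averaging the five items of each construct into one construct score; the chapter does not prescribe the scoring rule, so this aggregation is an illustrative assumption:

```python
# Six constructs, five 7-point Likert items each, as described in the text.
# Averaging items into a construct score is an assumed (common) convention.
CONSTRUCTS = ["efficiency", "effectiveness", "productivity",
              "commitment", "satisfaction_results", "satisfaction_process"]

def score_responses(answers):
    """Average the five Likert items of each construct into one 1-7 score."""
    scores = {}
    for construct in CONSTRUCTS:
        items = answers[construct]
        assert len(items) == 5 and all(1 <= a <= 7 for a in items)
        scores[construct] = sum(items) / len(items)
    return scores

# One participant's (made-up) answers: five items per construct.
one_participant = {c: [6, 5, 6, 7, 5] for c in CONSTRUCTS}
construct_scores = score_responses(one_participant)
```

Averaging per construct yields one score per participant per construct, which is the unit of analysis assumed in the group comparisons later in the chapter.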

Questionnaire for Practitioner Experience in Group Support

To evaluate the practitioners’ experience in group support, we drew on an interview protocol that distinguishes different roles in group support (Kolfschoten et al. 2008), using only the questions that addressed the respondents’ experience with group support.

Questionnaire for Training Evaluation

To evaluate the training, we collected perceptions on the usefulness of the thinkLets, the completeness of the training, the quality of the training, and the cognitive load of the training. The questions for this instrument were taken from Duivenvoorde et al. (2009).

Interview Protocol for Session Evaluation

To evaluate the practitioner performance and the support of the CE approach in transferring collaboration process designs we evaluated the following constructs:
  • Predictability of the process design.

  • Supportiveness of the process design.

  • Difficulty of execution.

  • Cognitive load of execution.


The Pilot Results

Both the researcher and the professional facilitators of the future center had facilitated many sessions with a variety of organizations. All facilitators charged a fee for the sessions they facilitated and worked in service of clients of their organization; they can thus be regarded as professional facilitators. For this study, each professional facilitator performed roughly the same process as described in the integrity assessment process design, with only marginal differences in the way thinkLets were applied and instructions were given to the group. The results concerning the quality of the collaboration are presented in Table 1. The differences between the facilitators’ performances are marginal and the standard deviations are not very high either. We used these results as a benchmark to assess the practitioners’ performance.
Table 1

Quality of collaboration as a result of facilitation by professional facilitators. Scale 1–7, 1 being very low, 7 being very high. Constructs reported: satisfaction with the process; satisfaction with the outcome. (Cell values not preserved in this source.)

The Practitioners

The practitioners in the study were all employed by large government organizations. Some had a function related to integrity and some had some affinity with (technical) facilitation. None of the practitioners had had to perform the integrity assessment process as part of their formal job description. Most of the practitioners had some experience with supporting groups, either in the role of trainer, teacher, or project leader. Some had facilitated workshops or had worked as a technical facilitator but not for many sessions. Most had received higher education. The average age was 43, four were female, and three were male. We were not involved in any way in the recruitment of practitioners for the study.

The Training

The seven practitioners participated in two separate training sessions, lasting two days each. Six handed in the evaluation of the training and the integrity assessment design. The results are listed in Table 2. The manual describing the details of the process design was considered complete and all aspects were considered useful. Each aspect of the training was rated as sufficient. The manual was considered quite extensive, and more organization of its different parts would have been useful. Most of the process steps focused on the evaluation or assessment of an organization, and since the practitioners worked at different organizations, it was difficult to exercise or simulate these steps; as a result, some steps could not be experienced during training. The practitioners recommended addressing this as an improvement to the training, although it was recognized that this would be difficult to implement. Some practitioners had the opportunity to attend a session before they first executed one. The difficulty and mental effort of the training were rated as medium. Practitioners felt equipped to execute integrity assessments but indicated that, if possible, they wanted to see a real session before executing their own. Overall, the training was evaluated as satisfactory.
Table 2

Evaluation of the training and integrity assessment process design. Question scale: 1–7. (Scores not preserved in this source.)
  • Was the manual complete?
  • What did you think of the usefulness of the thinkLets?
  • How do you estimate the mental effort of preparation and training? (low–high)
  • How difficult was the training?
  • Do you feel equipped to facilitate the session?
  • Were you satisfied with the training?

The Practitioner Performance

Four practitioners executed the process. The three practitioners who dropped out either felt uncomfortable with the GSS technology (one practitioner) or were unable to schedule a session within the timeframe of the study (two practitioners). We observed the sessions and intervened to support the practitioner in guiding the group only when this was absolutely necessary. In one session, the researcher was not able to observe and act as chauffeur; the chauffeur role was performed by someone else. The practitioners reported back on several questions through written self-reflections and interviews. The observer also made notes about deviations from the script and about interventions made to support the group that should have been made by the practitioner.

One practitioner did not prepare the execution and therefore presented the group with the instructions and background of the session by more or less “reading the slides out loud.” Although the participants noticed this, they were not disappointed in the results and were generally satisfied with the process. This indicates that the instructions had become highly transferable. The integrity assessment process leads to an outcome that is in most cases instrumental for the organization, while it is generally not very instrumental to the participants, except when it enables them to reveal significant problems in which they are a stakeholder. This poses a challenge, as commitment can be lower; at the same time, the lack of significant stakes in the outcome makes the process less likely to evoke conflict and emotions.

Over all sessions, it was observed that the practitioners’ ratings of mental effort increased if they had to deal with conflict in the group. The practitioners that had a background in integrity were sometimes tempted to make normative comments with respect to the integrity risks of the organization, which could be problematic, as some risks might be very different in different cultures and contexts. The results of the practitioners are shown in Table 3.
Table 3

Quality of the practitioner sessions from a participant perspective. Constructs reported: satisfaction with the process; satisfaction with the outcome. (Cell values not preserved in this source.)

We compared the results from the practitioners with the results of the professional facilitators using an independent-samples t-test with a significance level of 0.01.

The groups we compared are the participants in sessions performed by professional facilitators (n = 50) and the participants in sessions performed by practitioners (n = 46). The results are depicted in Table 4.
Table 4

Independent-samples t-test, practitioners vs. professional facilitators. Columns: significance (α = 0.01) and effect size. Rows: satisfaction with the process; satisfaction with the outcome. (Cell values not preserved in this source.)

We found that for all quality dimensions, there was no significant difference between practitioners and facilitators (α = 0.01). We also calculated the effect size eta squared. According to Cohen (1988), this is a very small effect: less than 3% of the variance is explained by the difference between facilitators and practitioners.
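The analysis behind Table 4 can be sketched in outline: an independent-samples t-test with pooled variance, followed by eta squared as the effect size. The session scores below are synthetic stand-ins, since the study’s actual data are not reproduced in this chapter:

```python
import math
from statistics import mean, variance

def independent_t(a, b):
    """Student's independent-samples t statistic with pooled variance."""
    n1, n2 = len(a), len(b)
    df = n1 + n2 - 2
    pooled = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / df
    t = (mean(a) - mean(b)) / math.sqrt(pooled * (1 / n1 + 1 / n2))
    return t, df

def eta_squared(t, df):
    """Proportion of variance explained by group membership (Cohen 1988)."""
    return t * t / (t * t + df)

# Synthetic construct scores (1-7 scale); the real study had n = 50 and n = 46.
facilitator = [5.8, 6.0, 5.6, 5.9, 6.1, 5.7]
practitioner = [5.7, 5.9, 5.8, 5.5, 6.0, 5.8]

t, df = independent_t(facilitator, practitioner)
effect = eta_squared(t, df)  # small when the two groups score similarly
```

With similar group means, t stays well below the critical value at α = 0.01 and eta squared stays in Cohen’s “small” range, mirroring the pattern of results reported in the study.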


A key limitation in this study is the observing role of the researcher. As the sessions were held in a commercial setting, the researcher could not allow a session to go entirely wrong; thus, when a practitioner performed poorly, the researcher had to intervene. Although interventions were limited to a few incidents, the interventions as reported may have affected the quality ratings. Another limitation is that while the task is identical, the groups are not, and due to the sensitive topic of this case, some sessions can be significantly more difficult than others. This limits the comparability across sessions. A last limitation is the relatively low number of practitioners and professional facilitators. A laboratory or noncommercial setting would not resolve these problems, as the session and thus the facilitation challenges would not be as realistic and would actually be different (Fjermestad and Hiltz 2001; Kolfschoten et al. 2009b). To increase the robustness of the results, the number of sessions should therefore be increased.

Discussion and Conclusions

During our study, no significant difference between facilitators and practitioners was found. With respect to both the training and the facilitation, practitioners did not report very high mental effort. This indicates that the facilitation task in this case had become transferable. Both practitioners and professional facilitators received positive scores on the perceived quality of the collaboration process. The area with the most room for improvement for practitioners was their support of the group with respect to the outcomes of the sessions. Supporting a group in creating high-quality results is very difficult without a frame of reference for outcome quality, so when practitioners execute the session for the first time, it is difficult to manage the quality of the outcomes.

The results of the case study lend support to the value of the CE approach. We submit that this approach offers a learning path for novice facilitators that is more effective and efficient than traditional methods. The training for an all-round facilitator typically takes weeks, if not months, instead of two days, and an apprenticeship with coaching is required, especially with respect to the preparation of the process for the first sessions. Therefore, the training investment and the quality of the first sessions are much better balanced when using the CE approach than when traditional facilitation training is used. Further, we expect that when practitioners execute the process on a recurring basis, they will be able to correct mistakes and learn from recurring challenges, while a typical apprentice facilitator will be confronted with different challenges each session, resulting in less opportunity to experiment with solutions.

Examples of other CE projects include, but are not limited to, the following (see also de Vreede and Briggs 2019):
  • A process for collaborative usability testing was successfully employed for the development of a governmental health emergency management system (Fruhling and de Vreede 2005).

  • Dozens of groups engaged in effective software requirements negotiations using the EasyWinWin process (Boehm et al. 2001; Briggs and Grünbacher 2001).

  • A collaborative software code inspection process based on Fagan’s inspection standards was successfully employed at Union Pacific (de Vreede et al. 2006b).

  • A process for continuous end-user reflection on information systems development efforts was used in a large educational institution (Bragge et al. 2005).

  • Various collaborative learning practices were successfully designed and implemented, leading to improvements in terms of learning effectiveness and student satisfaction (see e.g. Cheng et al. 2016).

  • A backlog creation process was developed and fine-tuned for Howard Hughes Medical Institute that adopted it as a key part of its new agile approach in their IT department (de Vreede 2014).

  • A process for innovation ideation was successfully designed and transferred at Verisk Analytics (de Vreede 2014).

These studies and others provide ample evidence that the CE approach helps overcome the barriers we identified with respect to the sustained deployment of GSS and collaborative work practices. CE facilitates the transfer of collaboration process designs to practitioners, who can run them by themselves with results similar to those obtained by professional facilitators. Using the CE approach, we can make collaboration support available for recurring high-value collaboration processes in organizations. In such cases, the support tools and the practitioners contribute to a recurring process, and the added value of the training and technology investment can be more easily estimated and visualized. Costs can be assigned to the collaborative work practice, and in this way the business case can be made more easily. The need for a champion will remain, but the champion’s role will be to ensure the performance and quality of a collaborative work practice, instead of maintaining and “selling” a support system. Further research is required, both in terms of field studies to understand the impact of collaboration support according to the CE approach and in terms of theoretical understanding of collaboration and the outcomes of group interaction.

From a practical perspective, to improve the transferability of thinkLets and thinkLet-based collaboration processes, it will be important to analyze the learning curve of the practitioners (how do they perform in subsequent sessions) and to apply the approach in more cases, possibly with the same practitioners to further evaluate the value of this approach compared to the master-apprentice approach. Also, longitudinal research is required to further evaluate the sustainability of new work practices that are designed and deployed using the CE approach. Next, recent advances with intelligent and configurable collaboration support tools to help practitioners in their task to instruct the group and to intervene in the collaboration process need to be expanded upon (Briggs et al. 2013). Also, it will help to use tools that are restricted to the functionalities that fulfill the capabilities required for the thinkLet (Briggs et al. 2013). This will reduce the cognitive load of using complex GSS technology for both practitioners and participants.

From a theoretical and empirical perspective, it would be interesting to further understand and predict the effects of thinkLets, and to gain empirical evidence of their effects. Previous research often evaluates the effect of “the GSS” without distinguishing specific capabilities and associated interventions to create specific effects. We think that thinkLets offer a new lens for research in collaboration support that enables more specific analysis of successful and unsuccessful interventions to support collaboration. From a theoretical perspective, additional research is also required on the patterns of collaboration. While some initial theoretical work on creativity, convergence (i.e., reduction and clarification), evaluation, and consensus building has been done (for an overview, see de Vreede and Briggs 2019), more work is needed, especially on the organizing pattern of collaboration. The patterns of collaboration describe complex cognitive processes in a group setting that are not yet fully understood.


  1. Ackermann F (1996) Participants perceptions on the role of facilitators using group decision support systems. Group Decis Negot 5:93–519CrossRefGoogle Scholar
  2. Acosta CE, Guerrero LA (2006) Supporting the collaborative collection of User’s requirements. In: Seifert S, Weinhardt C (eds) Group decision and negotiation 2006. Universitat Karlsruhe, Karlsruhe, pp 27–30Google Scholar
  3. Agres A, de Vreede GJ, Briggs RO (2004) A tale of two cities: case studies on GSS transition. Group Decis Negot 14(4):267–284Google Scholar
  4. Agres A, de Vreede GJ, Briggs RO (2005) A tale of two cities: case studies of GSS transition in two organizations. Group Decis Negot 14(4):256–266CrossRefGoogle Scholar
  5. Alexander C (1979) The timeless way of building. Oxford University Press, New YorkGoogle Scholar
  6. Boehm B, Grünbacher P, Briggs RO (2001) Developing groupware for requirements negotiation: lessons learned. IEEE Softw 18(3):46–55CrossRefGoogle Scholar
  7. Boughzala I, de Vreede GJ (2015) Evaluating team collaboration quality: the development and field application of a collaboration maturity model. J Manag Inf Syst 32(3):129–157CrossRefGoogle Scholar
  8. Bragge J, Merisalo-Rantanen H, Hallikainen P (2005) Gathering innovative end-user feedback for continuous development of information systems: a repeatable and transferable E-collaboration process. IEEE Trans Prof Commun 48(1):55–67CrossRefGoogle Scholar
  9. Briggs RO (2006) The value frequency model: towards a theoretical understanding of organizational change. In: Seifert S, Weinhardt C (eds) Group decision and negotiation 2006. Universitat Karlsruhe, Karlsruhe, pp 36–39Google Scholar
  10. Briggs RO, Grünbacher P (2001) Surfacing tacit knowledge in requirements negotiation: experiences using EasyWinWin. In: Sprague RH (ed) Proceedings of the 34th Annual Hawaii International Conference on System Sciences: abstracts and CD-ROM of full papers: January 3–6, 2001, Maui (p 35)Google Scholar
  11. Briggs RO, Murphy JD (2011) Discovering and evaluating collaboration engineering opportunities: an interview protocol based on the value frequency model. Group Decis Negot 3:315CrossRefGoogle Scholar
  12. Briggs RO, Adkins M, Mittleman DD, Kruse J, Miller S, Nunamaker JF Jr (1999) A technology transition model derived from qualitative field investigation of GSS use aboard the U.S.S. Coronado. J Manag Inf Syst 15(3):151–196CrossRefGoogle Scholar
  13. Briggs RO, de Vreede GJ, Nunamaker JF Jr (2003a) Collaboration engineering with ThinkLets to pursue sustained success with group support systems. J Manag Inf Syst 19(4):31–63CrossRefGoogle Scholar
  14. Briggs RO, de Vreede GJ, Reinig B (2003b) A theory and measurement of meeting satisfaction. In: Sprague RH (ed) HICSS-36: Hawaii International Conference On System Sciences. Big Island, HI, (pp 23–26)Google Scholar
  15. Briggs RO, Kolfschoten GL, de Vreede GJ (2006a) Instrumentality theory of consensus. Paper presented at the first HICSS symposium on case and field studies of collaboration, KauaiGoogle Scholar
  16. Briggs RO, Kolfschoten GL, de Vreede GJ, Dean DL (2006b) Defining key concepts for collaboration engineering. Proceedings of the 12th Americas Conference on Information Systems. Acapulco, MexicoGoogle Scholar
  17. Briggs RO, Kolfschoten GL, de Vreede GJ, Lukosch S, Albrecht C (2013) Facilitator-in-the-box: process support applications and computer assisted collaboration engineering to help practitioners realize the potential of collaboration technology. J Manag Inf Syst 29(4):159–194CrossRefGoogle Scholar
  18. Buzan T (1974) Use your head. British Broadcasting Organization, LondonGoogle Scholar
  19. Cheng X, Li Y, Sun J, Huang J (2016) Application of a novel collaboration engineering method for learning design: a case study. Br J Educ Technol 47(4):803–818CrossRefGoogle Scholar
  20. Clawson VK, Bostrom RP (1996) Research-driven facilitation training for computer-supported environments. Group Decis Negot 5:7–29CrossRefGoogle Scholar
  21. Clawson VK, Bostrom R, Anson R (1993) The role of the facilitator in computer-supported meetings. Small Group Res 24(4):547–565CrossRefGoogle Scholar
  22. Cohen J (1988) Statistical power analysis for the behavioural sciences. Erlbaum, HillsdaleGoogle Scholar
  23. Davison RM, Briggs RO (2000) GSS for presentation support: supercharging the audience through simultaneous discussions during presentations. Commun ACM 43(9):91–97CrossRefGoogle Scholar
  24. de Bruijn JA, ten Heuvelhof EF (2008) Management in networks: on multi-actor decision making. Routledge, LondonGoogle Scholar
  25. de Haan J, van’t Hof C (eds) (2006) Jaarboek ICT en Samenleving 2006: De Digitale Generatie. Boom, AmsterdamGoogle Scholar
  26. de Vreede GJ (2014) Achieving repeatable team performance through collaboration engineering: experiences in two case studies. Manag Inf Syst Q Exec 13(2):115–129Google Scholar
  27. de Vreede GJ, Briggs RO (2019) A program of collaboration engineering research & practice: contributions, insights, and future directions. J Manag Inf Syst 36(1):74–119CrossRefGoogle Scholar
  28. de Vreede GJ, Boonstra J, Niederman FA (2002) What is effective GSS facilitation? A qualitative inquiry into Participants’ perceptions. Proceedings of the 35th Annual Hawaii International Conference on System Sciences: 2002. Big Island, HIGoogle Scholar
  29. de Vreede GJ, Davison R, Briggs RO (2003a) How a silver bullet may lose its shine – learning from failures with group support systems. Commun ACM 46(8):96–101CrossRefGoogle Scholar
  30. de Vreede GJ, Vogel DR, Kolfschoten GL, Wien JS (2003b) Fifteen years of in-situ GSS use: a comparison across time and National Boundaries. Proceedings of the 36th Annual Hawaii International Conference on System Sciences: 2003. Big Island, HIGoogle Scholar
  31. de Vreede GJ, Briggs RO, Kolfschoten GL (2006a) ThinkLets: a pattern language for facilitated and practitioner-guided collaboration processes. Int J Comput Appl Technol 25(2/3):140–154CrossRefGoogle Scholar
  32. de Vreede GJ, Koneri PG, Dean DL, Fruhling AL, Wolcott P (2006b) Collaborative software code inspection: the design and evaluation of a repeatable collaborative process in the field. Int J Coop Inf Syst 15(2):205–228CrossRefGoogle Scholar
  33. den Hengst M, Adkins M, van Keeken SJ, Lim ASC (2005) Which facilitation functions are most challenging: a global survey of facilitators. Proceedings of the Group Decision and Negotiation conference (GDN) 2005, ViennaGoogle Scholar
  34. Dennis AR, Wixom BH (2002) Investigating the moderators of the group support systems use with meta-analysis. J Manag Inf Syst 18(3):235–257CrossRefGoogle Scholar
  35. Duivenvoorde GPJ, Kolfschoten GL, de Vreede GJ, Briggs RO (2009) Towards an instrument to measure successfulness of collaborative effort from a participant perspective. Proceedings of the Hawaii International Conference on System Science (HICSS 2009), WaikoloaGoogle Scholar
  36. Fjermestad J, Hiltz SR (2001) A descriptive evaluation of group support systems case and field studies. J Manag Inf Syst 17(3):115–159Google Scholar
  37. Frost, Sullivan (2007) Meetings around the world: the impact of collaboration on business performance. Retrieved 10/27/2009, from
  38. Fruhling A, de Vreede GJ (2005) Collaborative usability testing to facilitate stakeholder involvement. In: Biffl S, Aurum A, Boehm B, Erdogmus H, Grünbacher P (eds) Value based software engineering. Springer, Berlin, pp 201–223
  39. Giesbrecht T, Schwabe G, Schenk B (2017) Service encounter thinkLets: how to empower service agents to put value co-creation into practice. Inf Syst J 27(2):171–196
  40. Griffith TL, Fuller MA, Northcraft GB (1998) Facilitator influence in group support systems. Inf Syst Res 9(1):20–36
  41. Harder RJ, Keeter JM, Woodcock BW, Ferguson JW, Wills FW (2005) Insights in implementing collaboration engineering. In: Sprague RH (ed) Proceedings of the 38th Annual Hawaii International Conference on System Sciences: 2005. Big Island, HI, p 15
  42. Janis IL (1972) Victims of groupthink: a psychological study of foreign-policy decisions and fiascoes. Houghton Mifflin Company, Boston
  43. Kolfschoten GL (2007) Theoretical foundations for collaboration engineering. Delft University of Technology, Delft
  44. Kolfschoten GL, Rouwette E (2006) Choice criteria for facilitation techniques: a preliminary classification. In: Seifert S, Weinhardt C (eds) Group decision and negotiation 2006. Universität Karlsruhe, Karlsruhe, pp 49–52
  45. Kolfschoten GL, van der Hulst S (2006) Collaboration process design transition to practitioners: requirements from a cognitive load perspective. In: Seifert S, Weinhardt C (eds) Group decision and negotiation 2006. Universität Karlsruhe, Karlsruhe, pp 45–48
  46. Kolfschoten GL, van Houten SPA (2007) Predictable patterns in group settings through the use of rule based facilitation interventions. In: Proceedings of the Group Decision and Negotiation Conference (GDN) 2007. Concordia University, Mt Tremblant
  47. Kolfschoten GL, Veen W (2005) Tool support for GSS session design. In: Sprague RH (ed) Proceedings of the 38th Annual Hawaii International Conference on System Sciences: 2005. Big Island, HI
  48. Kolfschoten GL, Appelman JH, Briggs RO, de Vreede GJ (2004) Recurring patterns of facilitation interventions in GSS sessions. Proceedings of the 37th Annual Hawaii International Conference on System Sciences (HICSS) 2004, Big Island, HI
  49. Kolfschoten GL, den Hengst M, de Vreede GJ (2005) Issues in the design of facilitated collaboration processes. Proceedings of the Group Decision and Negotiation Conference (GDN) 2005, Vienna
  50. Kolfschoten GL, Briggs RO, de Vreede GJ, Jacobs PHM, Appelman JH (2006) Conceptual foundation of the ThinkLet concept for collaboration engineering. Int J Hum Comput Stud 64(7):611–621
  51. Kolfschoten GL, Niederman F, de Vreede GJ, Briggs RO (2008) Roles in collaboration support and the effect on sustained collaboration support. Proceedings of the 41st Annual Hawaii International Conference on System Sciences (HICSS) 2008, Waikoloa
  52. Kolfschoten GL, de Vreede GJ, Pietron L (2009a) A training approach for the transition of repeatable collaboration processes to practitioners. Group Decision and Negotiation, working paper, Toronto
  53. Kolfschoten GL, Duivenvoorde GPJ, Briggs RO, de Vreede GJ (2009b) Practitioners vs. facilitators: a comparison of participant perceptions on success. Proceedings of the 42nd Annual Hawaii International Conference on System Sciences (HICSS) 2009, Waikoloa
  54. Kolfschoten GL, de Vreede GJ, Pietron LR (2011) A training approach for the transition of repeatable collaboration processes to practitioners. Group Decis Negot 20(3):347–371
  55. Kolfschoten GL, van der Hulst S, den Hengst M, de Vreede GJ (2012a) Transferring collaboration process designs to practitioners: requirements from a cognitive load perspective. Int J e-Collaboration 8(3):36–57
  56. Kolfschoten GL, Niederman F, Briggs RO, de Vreede GJ (2012b) Facilitation roles and responsibilities for sustained collaboration support in organizations. J Manag Inf Syst 28(2):129–162
  57. Kolfschoten GL, Lowry PB, Dean DL, de Vreede GJ, Briggs RO (2014) Patterns in collaboration. In: Nunamaker JF Jr, Romano NC Jr, Briggs RO (eds) Collaboration systems: concept, value, and use. M.E. Sharpe, Armonk
  58. Marques M, Ochoa SF (2014) Improving teamwork in students’ software projects. In: 27th IEEE Conference on Software Engineering Education and Training (CSEE&T), Klagenfurt, Austria, pp 99–108
  59. Munkvold BE, Anson R (2001) Organizational adoption and diffusion of electronic meeting systems: a case study. In: Ellis C, Zigurs I (eds) Proceedings of the 2001 ACM SIGGROUP conference on supporting group work. The Association for Computing Machinery, New York, pp 279–287
  60. Nunamaker JF Jr, Briggs RO, Mittleman DD, Vogel D, Balthazard PA (1997) Lessons from a dozen years of group support systems research: a discussion of lab and field findings. J Manag Inf Syst 13(3):163–207
  61. Pollard C (2003) Exploring continued and discontinued use of IT: a case study of OptionFinder, a group support system. Group Decis Negot 12:171–193
  62. Post BQ (1993) A business case framework for group support technology. J Manag Inf Syst 9(3):7–26
  63. Santanen EL (2005) Resolving ideation paradoxes: seeing apples as oranges through the clarity of ThinkLets. In: Sprague RH (ed) Proceedings of the 38th Annual Hawaii International Conference on System Sciences: 2005. Big Island, HI, p 16c
  64. Santanen EL, de Vreede GJ, Briggs RO (2004) Causal relationships in creative problem solving: comparing facilitation interventions for ideation. J Manag Inf Syst 20(4):167–197
  65. Schwarz RM (1994) The skilled facilitator. Jossey-Bass Publishers, San Francisco
  66. Seeber I, Maier R, de Vreede GJ, Weber B (2017) Beyond brainstorming: exploring convergence in teams. J Manag Inf Syst 34(4):939–969
  67. Simmert B, Ebel P, Bittner EAC, Peters C (2017) Systematic and continuous business model development: design of a repeatable process using the collaboration engineering approach. In: 13th International Conference on Wirtschaftsinformatik (WI)
  68. Steiner ID (1972) Group process and productivity. Academic Press, New York
  69. Vician C, DeSanctis G, Poole MS, Jackson BM (eds) (1992) Using group technologies to support the design of “lights out” computing systems. Elsevier Science Publishers, North-Holland
  70. Wheeler BC, Valacich JS (1996) Facilitation, GSS, and training as sources of process restrictiveness and guidance for structured group decision making: an empirical assessment. Inf Syst Res 7(4):429–450
  71. Yoong P (1995) Assessing competency in GSS skills: a pilot study in the certification of GSS facilitators. In: Olfman L (ed) Proceedings of the 1995 ACM SIGCPR conference on supporting teams, groups, and learning inside and outside the IS function: reinventing IS. Nashville, TN, pp 1–9

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Gert-Jan de Vreede (email author), University of South Florida, Tampa, USA
  • Robert O. Briggs, San Diego State University, San Diego, USA
  • Gwendolyn L. Kolfschoten, Better Samenwerken, Delft, The Netherlands
