
A comparative study of students and professionals in syntactical model comprehension experiments

  • Regular Paper
  • Published in: Software and Systems Modeling

Abstract

Empirical evaluations can be conducted with students or professionals as subjects. Students are far more accessible and less expensive than professionals, allowing a greater number of empirical studies to be conducted. Professionals are nevertheless preferred over students because of concerns regarding the external validity of student-based experiments: professionals are believed to perform differently, and most likely better, than students. But with respect to evaluating the cognitive effectiveness of software engineering notations, are professionals really better? The literature suggests that the presentation of information is just as critical as the content it conveys, hence the need for this type of empirical study. If professionals are not much better than students, then this important finding can serve as a springboard for many much-needed empirical evaluations in this field. In this paper, we report on an experiment that compares the performance of professionals and students with respect to syntactical model comprehension, a core factor in evaluating the cognitive effectiveness of notations. The experiment involved two groups of professionals and two groups of students, totaling 74 professionals and 75 students. The results indicate that students can serve as an adequate replacement for professionals in this type of empirical study.
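
The reference list leans heavily on ordinal statistics such as Cliff's delta [9, 10, 11, 19, 29] for this kind of two-group comparison. As a point of reference only, here is a minimal Python sketch of computing Cliff's delta between two groups; the function, scores, and interpretation threshold are illustrative and are not the paper's actual analysis.

```python
# Illustrative sketch: comparing professionals' and students' comprehension
# scores with Cliff's delta [9-11]. All data here are invented examples,
# not the experiment's results.
from itertools import product

def cliffs_delta(xs, ys):
    """Cliff's delta: P(X > Y) - P(X < Y) over all cross-group pairs."""
    greater = sum(1 for x, y in product(xs, ys) if x > y)
    less = sum(1 for x, y in product(xs, ys) if x < y)
    return (greater - less) / (len(xs) * len(ys))

professionals = [8, 9, 7, 9, 8, 6, 9]  # e.g., correct answers per subject
students = [7, 9, 8, 8, 7, 9, 6]

d = cliffs_delta(professionals, students)
print(f"Cliff's delta = {d:+.3f}")  # |d| < 0.147 is commonly read as negligible
```

A delta near zero indicates heavily overlapping score distributions between the two groups.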


Notes

  1. International Software Engineering Research Network http://isern.iese.de/.

References

  1. Amoroso, E.G.: Fundamentals of Computer Security Technology. Prentice-Hall Inc, Upper Saddle River, NJ (1994)

  2. Anda, B., Dreiem, H., Sjøberg, D., Jørgensen, M.: Estimating software development effort based on use cases—experiences from industry. Submitted to UML’2001 (Fourth International Conference on the Unified Modeling Language)

  3. Berander, P.: Using students as subjects in requirements prioritization. In: ISESE, pp. 167–176 (2004)

  4. Booch, G., Rumbaugh, J., Jacobson, I.: The Unified Modeling Language User Guide, 2nd edn. Addison-Wesley, Boston (2005)

  5. OMG: Business Process Model and Notation (BPMN): Specification 2.0 V0.9.15. Object Management Group Inc., Needham (2009)

  6. Buhr, R.J.A., Casselman, R.S.: Use Case Maps for Object-Oriented Systems. Prentice Hall, Englewood Cliffs (1996)

  7. Carver, J., Shull, F., Basili, V.: Observational studies to accelerate process experience in classroom studies: an evaluation. In: Proceedings of the 2003 International Symposium on Empirical Software Engineering (ISESE '03), pp. 72–79

  8. Ciolkowski, M.: What do we know about perspective-based reading? An approach for quantitative aggregation in software engineering. In: 2009 3rd International Symposium on Empirical Software Engineering and Measurement, ESEM 2009, pp. 133–144 (2009)

  9. Cliff, N.: Dominance statistics: ordinal analyses to answer ordinal questions. Psychol. Bull. 114(3), 494–509 (1993)

  10. Cliff, N.: Answering ordinal questions with ordinal data using ordinal statistics. Multivar. Behav. Res. 31(3), 331–350 (1996)

  11. Cliff, N.: Ordinal Methods for Behavioral Data Analysis. Lawrence Erlbaum Associates, New Jersey (1996)

  12. Dubois, E., Wu, S.: A framework for dealing with and specifying security requirements in information systems. Inf. Syst. Secur. pp. 88–99 (1996)

  13. El-Attar, M.: Companion Website to the Analysis of the New Use Case and Statecharts Notations. https://faculty.alfaisal.edu/melattar/physicsofnotations.html. Last accessed December 2018

  14. Falessi, D., Juristo, N., Wohlin, C., Turhan, B., Münch, J., Jedlitschka, A., Oivo, M.: Empirical software engineering experts on the use of students and professionals in experiments. Empir. Softw. Eng. 23(1), 452–489 (2018)

  15. Genon, N., Heymans, P., Amyot, D.: Analysing the Cognitive Effectiveness of the BPMN 2.0 Visual Notation. Software Language Engineering, pp. 377–396. Springer, Berlin, Heidelberg (2011)

  16. Genon, N., Amyot, D., Heymans, P.: Analysing the Cognitive Effectiveness of the UCM Visual Notation. System Analysis and Modeling: About Models, pp. 221–240. Springer, Berlin (2011)

  17. Gopalakrishnan, S., Krogstie, J., Sindre, G.: Adapting UML activity diagrams for mobile work process modelling: experimental comparison of two notation alternatives. In: PoEM, pp. 145–161 (2010)

  18. Hassan, R., Bohner, S., El-Kassas, M., Hinchey, M.: Integrating formal analysis and design to preserve security properties. In: 42nd Hawaii International Conference on System Sciences, HICSS '09, pp. 1–10 (2009)

  19. Hess, M.R., Kromrey, J.D., Ferron, J.M., Hogarty, K.Y., Hines, C.V.: Robust inference in meta-analysis: an empirical comparison of point and interval estimates using the standardized mean difference and Cliff's delta. In: Annual Meeting of the American Educational Research Association (2005). www.coedu.usf.edu/main/departments/me/documents/RobustMeta-AnalysisAERA2005.pdf. Last accessed July 2018

  20. Höfer, A., Tichy, W.F.: Status of empirical research in software engineering. In: Empirical Software Engineering Issues: Critical Assessment and Future Directions. LNCS, vol. 4336, pp. 10–19. Springer, Berlin (2007)

  21. Höst, M., Regnell, B., Wohlin, C.: Using students as subjects–a comparative study of students and professionals in lead-time impact assessment. Empir. Softw. Eng. 5, 210–214 (2000)

  22. Humphrey, W.S.: A Discipline for Software Engineering. Addison Wesley, Boston (1995)

  23. Jacobson, I., Ericsson, M., Jacobson, A.: The Object Advantage. ACM Press, New York (1995)

  24. Jürjens, J.: Secure Systems Development with UML. Springer, Berlin (2004)

  25. Jürjens, J.: UMLsec: extending UML for secure systems development. In: 5th International Conference UML 2002—The Unified Modelling Language: Model Engineering, Languages, Concepts, and Tools, Dresden, Germany, September/October 2002. LNCS, vol. 2460, pp. 412–425. Springer, Berlin (2002)

  26. Kaner, C., Bach, J., Pettichord, B.: Lessons Learned in Software Testing. Wiley, New York (2003)

  27. Kárpáti, P., Sindre, G., Opdahl, A.L.: Visualizing cyber attacks with misuse case maps. In: REFSQ, pp. 262–275 (2010)

  28. Katta, V., Karpati, P., Opdahl, A.L., Sindre, G., Raspotnig, C.: Comparing two misuse modeling techniques for intrusion visualization. In: Lecture Notes in Business Information Processing, vol. 68, pp. 1–15 (2010)

  29. Kromrey, J., Hogarty, K., Ferron, J., Hines, C., Hess, M.: Robustness in meta-analysis: an empirical comparison of point and interval estimates of standardized mean differences and Cliff’s delta. In: American Statistical Association 2005 Joint Statistical Meetings, pp. 7 (2005). http://luna.cas.usf.edu/~mbrannic/files/meta/Robust%20Estimates.pdf. Last accessed July 2018

  30. Kromrey, J., Hogarty, K.: Analysis options for testing group differences on ordered categorical variables: an empirical investigation of Type 1 error control and statistical power. Mult. Linear Regres. Viewp. 25, 70–82 (1998)

  31. Lehmann, E.L.: Nonparametrics: Statistical Methods Based on Ranks, Revised edn. Pearson (1998)

  32. Lin, L., Nuseibeh, B., Ince, D., Jackson, M., Moffett, J.: Introducing abuse frames for analysing security requirements. In: Proceedings of the 11th IEEE International Requirements Engineering Conference, pp. 371–372 (2003)

  33. Lodderstedt, T.: SecureUML: a UML-based modelling language for model-driven security. In: 5th International Conference UML 2002—The Unified Modelling Language: Model Engineering, Languages, Concepts, and Tools, Dresden, Germany, September/October 2002. LNCS, vol. 2460, pp. 426–441. Springer, Berlin (2002)

  34. Mäder, P., Cleland-Huang, J.: A Visual Traceability Modeling Language. Model Driven Engineering Languages and Systems. Springer, Berlin (2010)

  35. McMeekin, D.A., Von Konsky, B.R., Robey, M., Cooper, D.J.A.: The significance of participant experience when evaluating software inspection techniques. In: Proceedings of the Australian Software Engineering Conference, ASWEC, pp. 200–209 (2009)

  36. Moody, D.L.: The ‘physics’ of notations: toward a scientific basis for constructing visual notations in software engineering. IEEE Trans. Softw. Eng. 35(6), 756–779 (2009)

  37. Moody, D., van Hillegersberg, J.: Evaluating the Visual Syntax of UML: An Analysis of the Cognitive Effectiveness of the UML Family of Diagrams. Software Language Engineering. Springer, Berlin (2009)

  38. Moody, D.L., Heymans, P., Matulevicius, R.: Visual syntax does matter: improving the cognitive effectiveness of the i* visual notation. Requir. Eng. 15(2), 141–175 (2010)

  39. Mouratidis, H., Giorgini, P.: Secure tropos: a security-oriented extension of the tropos methodology. Int. J. Softw. Eng. Knowledge Eng. 17(2), 285–309 (2007)

  40. OMG: Unified Modeling Language, Version 2.4.1. Object Management Group, Inc., Needham (2012). http://www.uml.org

  41. Pickard, L.M., Kitchenham, B.A., Jones, P.W.: Combining empirical results in software engineering. Inf. Softw. Technol. 40(11), 811–821 (1998)

  42. Porter, A.A., Votta, L.G., Basili, V.R.: Comparing detection methods for software requirements inspections: a replicated experiment. IEEE Trans. Softw. Eng. 21, 563–575 (1995)

  43. Porter, A., Votta, L.: Comparing detection methods for software requirements inspections: a replication using professional subjects. Empir. Softw. Eng. 3, 355–379 (1998)

  44. Purchase, H.C., Carrington, D., Allder, J.-A.: Empirical evaluation of aesthetics-based graph layout. Empir. Softw. Eng. 7(3), 233–255 (2002)

  45. Purchase, H.C., Welland, R., McGill, M., Colpoys, L.: Comprehension of diagram syntax: an empirical study of entity relationship notations. Int. J. Hum. Comput. Stud. 61, 187–203 (2004)

  46. Reijers, H.A., Mendling, J.: A study into the factors that influence the understandability of business process models. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 41(3), 449–462 (2011)

  47. Remus, W.: Using students as subjects in experiments on decision support systems. In: Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences, Vol. III: Decision Support and Knowledge Based Systems Track (1989)

  48. Ricca, F., Di Penta, M., Torchiano, M., Tonella, P., Ceccato, M.: How developers' experience and ability influence web application comprehension tasks supported by UML stereotypes: a series of four experiments. IEEE Trans. Softw. Eng. 36, 96–118 (2010)

  49. Røstad, L.: An extended misuse case notation, including vulnerabilities and the insider threat. In: Proceedings of the 12th International Working Conference on Requirements Engineering: Foundation for Software Quality, pp. 33–43 (2006)

  50. Runeson, P.: Using students as experiment subjects–an analysis on graduate and freshmen student data. In: Proceedings 7th International Conference on Empirical Assessment & Evaluation in Software Engineering, pp. 95–102 (2003)

  51. Salviulo, F., Scanniello, G.: Dealing with identifiers and comments in source code comprehension and maintenance: results from an ethnographically-informed study with students and professionals. In: Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering, pp. 48:1–48:10 (2014)

  52. Salman, I., Misirli, A.T., Juristo, N.: Are students representatives of professionals in software engineering experiments? In: International Conference on Software Engineering (ICSE), pp. 666–676 (2015)

  53. Schneier, B.: Attack trees. Dr Dobb’s J. 24(12), 21–29 (1999)

  54. Shapiro, S.S., Wilk, M.B.: An analysis of variance test for the exponential distribution. Technometrics 14, 355–370 (1972)

  55. Siegel, S., Castellan, N.J., Jr.: Non-Parametric Statistics for the Behavioral Sciences, 2nd edn. McGraw-Hill (1988)

  56. Sindre, G., Opdahl, A.: Eliciting security requirements with misuse cases. Requir. Eng. J. 10, 34–44 (2005)

  57. Sindre, G.: Mal-activity diagrams for capturing attacks on business processes. In: Proceedings of the 13th International Working Conference on Requirements Engineering: Foundation for Software Quality, pp. 355–366 (2007)

  58. Sindre, G., Opdahl, A.L., Breivik, G.F.: Generalization/specialization as a structuring mechanism for misuse cases. In: Proceedings of the 2nd Symposium on Requirements Engineering for Information Security (SREIS’02), pp. 1–16 (2002)

  59. Sjøberg, D.I.K., Anda, B., Arisholm, E., Dybå, T., Jørgensen, M., Karahasanović, A., Vokáč, M.: Conducting realistic experiments in software engineering. In: Proceedings of the 2002 International Symposium on Empirical Software Engineering (ISESE'02). IEEE Computer Society, Washington, DC (2002)

  60. Svahnberg, M., Aurum, A., Wohlin, C.: Using students as subjects–an empirical evaluation. In: ESEM, pp. 288–290 (2008)

  61. van Lamsweerde, A.: Elaborating security requirements by construction of intentional anti-models. In: Proceedings of the 26th International Conference on Software Engineering, ICSE '04, pp. 148–157. IEEE Computer Society, Washington, DC (2004)

  62. Wohlin, C., Runeson, P., Host, M., Ohlsson, M.C., Regnell, B., Wesslen, A.: Experimentation in Software Engineering–An Introduction. Kluwer, Alphen aan den Rijn (2000)

Acknowledgements

We would like to thank all the software engineering professionals and students who took part in this experiment.

Author information

Correspondence to Mohamed El-Attar.

Additional information

Communicated by Dr. Timothy Lethbridge.

Appendices

Appendix A: Experimental artifacts

See Figs. 5, 6, 7, 8.

Fig. 5: Use Case Diagram 1—using the new notation

Fig. 6: Use Case Diagram 2—using the original notation

Fig. 7: Statechart 1—using the new notation

Fig. 8: Statechart 2—using the original notation

Appendix B: Use case and statechart diagrams notations

Figure 9 provides an example use case diagram for illustrative purposes. Each notational element presented in Fig. 9 is explained in Table 18. Figure 10 provides an example statechart diagram, representing the state-dependent behavior of a light bulb, for illustrative purposes. Each notational element presented in Fig. 10 is explained in Table 19.

Fig. 9: Explanatory use case diagram

Table 18: Definition of the use case modeling notational constructs

Fig. 10: Explanatory statechart diagram

Table 19: Definition of the statechart diagram notational constructs

Appendix C: Questionnaires used

Figures l–w present the questionnaires used in the experiment.

Appendix D: Shapiro–Wilk tests results and normality plots

The data sets were tested for normality using the Shapiro–Wilk test [54]. Results of applying the Shapiro–Wilk test to all 32 data sets are presented in Table 20. The results indicate that more than half of the data sets do not conform to a normal distribution. Normal probability plots provide a more reliable interpretation of the results; they are displayed in Table 21 below.
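
For readers unfamiliar with the procedure, the following is a minimal sketch (not the authors' script) of running a batch of Shapiro–Wilk tests with SciPy; the data sets shown are hypothetical stand-ins, not the experiment's data.

```python
# Minimal sketch of batch Shapiro-Wilk testing with SciPy.
# The dict below holds hypothetical stand-in data, not the paper's 32 sets.
from scipy import stats

datasets = {
    "professionals_group1_ucd": [8, 9, 7, 9, 8, 6, 9, 7, 8, 9],
    "students_group1_ucd": [7, 9, 8, 8, 7, 9, 6, 8, 7, 8],
    # ... one entry per data set (32 in total in the paper)
}

for name, values in datasets.items():
    w, p = stats.shapiro(values)          # test statistic W and p-value
    verdict = "plausibly normal" if p >= 0.05 else "not normal"
    print(f"{name}: W = {w:.3f}, p = {p:.3f} -> {verdict} at alpha = 0.05")
```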

Table 20: Shapiro–Wilk test results

Below are normality plots, rendered as histograms, for all 32 data sets used in this experiment (see Table 21). Source files for the normality plots, as histograms and as scatter diagrams, are available for download at [13].

Table 21: Normality plots as histograms
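
For illustration, here is a short sketch, assuming matplotlib, of rendering one data set's normality plot as a histogram; the data values and output file name are invented for the example.

```python
# Illustrative only: plot one data set's score distribution as a histogram,
# the form of normality plot used in Table 21. Data and file name invented.
import matplotlib.pyplot as plt

values = [7, 9, 8, 8, 7, 9, 6, 8, 9, 7, 5, 8]
plt.hist(values, bins=5, edgecolor="black")
plt.title("Normality plot (histogram) for one data set")
plt.xlabel("Score")
plt.ylabel("Frequency")
plt.savefig("normality_histogram.png")
```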

Cite this article

El-Attar, M. A comparative study of students and professionals in syntactical model comprehension experiments. Softw Syst Model 18, 3283–3329 (2019). https://doi.org/10.1007/s10270-019-00720-5
