Interface Insured Defining User Interface Design Patterns in the Insurance Sector

A.D.G. van Helbergen, B.Sc.

Thesis to acquire the degree of Master of Information Science

A.D.G. van Helbergen, B.Sc. August 2009

Supervisors

prof. dr. G.C. van der Veer
dr. M. van Welie

Vrije Universiteit, Faculty of Sciences – Department of Computer Science, Information Science – Multimedia and Culture, Amsterdam

Quinity B.V., Utrecht

Interface Insured
Defining User Interface Design Patterns in the Insurance Sector

Thesis submitted in partial fulfillment of the requirements of the degree of Master of Information Science

Author: A.D.G. van Helbergen, B.Sc.
Student number: 1275070
E-mail: [email protected]
Course code: 400284, 30 ECTS

Vrije Universiteit
Faculty of Sciences, Department of Computer Science
Information Science – Multimedia and Culture
De Boelelaan 1081, 1081 HV Amsterdam

Internal Supervisors
First supervisor: prof. dr. G.C. van der Veer, [email protected]
Second supervisor: dr. M. van Welie, [email protected]

Quinity B.V.
Maliebaan 50, 3581 CS Utrecht

External Supervisors
Daily supervisor: ir. M.P.H. Vossen, [email protected]
General supervisor: drs. J. Snijders, [email protected]
Management supervisor: drs. R.W. Guitink, [email protected]

The complete version of this document contains confidential appendixes, which are not available in the public version. These appendixes may be requested via the Quinity office which can be found at http://www.quinity.com.

Summary

In this day and age, User-Centred Design is ever more essential for an application to be successful. Software engineers are generally not schooled in User-Centred Design or Human Computer Interaction techniques and need guidance in creating user interfaces. This guidance can be provided in the form of patterns. Patterns are a structured way to describe a proven solution to a recurring design problem. They can aid designers in their decision-making process. The uses of patterns are manifold. Primarily, they enable the reuse of solutions and prevent designers from reinventing the wheel. Secondly, because of this, patterns help cut down development costs, time and errors.

Patterns can be found everywhere, and in this research we focussed on patterns in the insurance sector. Insurance is a means for people to secure their wealth and property against unforeseen events. It is defined by a contract, or policy, in which an agreement is made that a premium is paid in exchange for the possibility that compensation is paid in return, if and when an uncertain event occurs. Insurance policies can become very complicated and their administration even more so. Insurance companies have to deliver their services to many different parties involved in the insurance process. They are therefore eager to relocate their transactions to an online system and automate as much of the process as possible.

The goals of this research were to formalise a method for extracting User Interface Design patterns and to find and evaluate these in online insurance applications. This is interesting because the field of User Interface Design patterns is still young and there are still many patterns to document. Moreover, the world of insurance is usually quite closed off, because companies are very protective of their trade secrets. Because of this, not much is known about insurance software implementations, as most of them are custom-made.
We accomplished the above by formulating a pattern extraction method and by adapting an existing software engineering method called DUTCH to our needs in order to evaluate three insurance software modules. By comparing certain task areas in the modules to other software with similar task descriptions, we extracted patterns for these task areas. We then created prototypes of these patterns and evaluated them against the original software modules to see if our patterns had increased the usability of the module. The research taught us that DUTCH is quite an elaborate method, which tends to steer the analysis in the direction of functional analysis instead of interface analysis. Furthermore, this research delivered five new patterns: Incremental Search, Unified Edit, Calculator Tool, Power Text Edit and Levelled Search. We evaluated the first three of these and received positive results.


Acknowledgements

The writing of this thesis has been a long journey. The path I have followed has taken quite a few sharp turns along the way and, at times, has led me to crossroads and steep climbs that have made me feel lost and hopeless. I could not have navigated this path alone and therefore I would like to thank the people who helped me read the map.

First and foremost, I would like to thank Mark Vossen and Jeroen Snijders, my external supervisors at Quinity, for their selfless dedication to my cause. Their input has been invaluable to my success and the quality of this research would not have been as high without them. A special acknowledgement is due to Evert-Jan Oppelaar for his interest, extra input and words of encouragement. A general thank you goes out to the whole Quinity firm for their support and warm hospitality. Sebastiaan, Eric and Arnold, thank you for the good times!

Secondly, I would like to thank Gerrit van der Veer and Martijn van Welie, my internal supervisors, for providing me with the possibility to do this research under their wing. Gerrit and Martijn are both big players in the HCI scene and working with them has been an honour and a great learning experience. I hope that the patterns resulting from this project will be a useful addition to Martijn's collection.

I would like to thank my parents for being the rock-solid foundation of my life that they are. Thank you for supporting me throughout my whole student career, even where the choices I have made would not always have been your own. Special appreciation goes to my father, who was able to give me the paternal boot once in a while.

Last but not least, I would like to thank my late girlfriend, Juliette van Baal, for joining me on my path these past three years and providing me with her guidance, her knowledge and especially her comfort. As the paths of people's life-journeys often do, they split up, leading in different directions. I hope that our paths will not stray too far and that they may cross again someday.

— Go big, or go home — Allard


Contents

Summary ...... i
Acknowledgements ...... ii
Table of contents ...... iv
List of figures ...... viii
List of tables ...... x
Acronyms ...... xii

1 Introduction ...... 1
    1.1 Context ...... 1
    1.2 Thesis Background ...... 1
        1.2.1 Objectives ...... 1
        1.2.2 Relevance to the Master Information Science Curriculum ...... 2
    1.3 Research Questions ...... 2
    1.4 Research Approach ...... 3
        1.4.1 Formalising a Method ...... 3
        1.4.2 Analysing Insurance Software ...... 3
    1.5 Thesis Structure ...... 3

2 On HCI, Software Engineering and Patterns ...... 5
    2.1 Human Computer Interaction ...... 5
    2.2 Usability ...... 5
    2.3 Software Engineering Lifecycle Models ...... 6
        2.3.1 Waterfalls, Prototyping and Incremental Development ...... 7
        2.3.2 RAD ...... 9
        2.3.3 Spiral Model ...... 9
        2.3.4 DUTCH ...... 10
    2.4 Metrics ...... 11
        2.4.1 Performance Metrics ...... 11
        2.4.2 Issues-Based Metrics ...... 11
        2.4.3 Self-Reported Metrics ...... 12
        2.4.4 Other Metrics ...... 13
    2.5 Patterns ...... 14
        2.5.1 Pattern History ...... 15
        2.5.2 Anti-Patterns ...... 15
        2.5.3 Pattern types ...... 15
        2.5.4 User Interface Design Patterns ...... 16
        2.5.5 Guidelines versus Patterns ...... 17
        2.5.6 Writing Patterns and Pattern Structure ...... 17
        2.5.7 Pattern Languages ...... 19

3 On Insurance and Insurance Software ...... 21
    3.1 What is Insurance? ...... 21
        3.1.1 The Principles of Insurance ...... 21
        3.1.2 The Premium ...... 22
        3.1.3 Distribution of the Products ...... 22
        3.1.4 The Market Targetgroup ...... 22
    3.2 Insurance Software ...... 24
        3.2.1 User groups ...... 24
        3.2.2 The Future of Insurance Software ...... 25
        3.2.3 Quinity Insurance Solution ...... 26
        3.2.4 Other Software ...... 26

4 Method ...... 33
    4.1 Our Approach ...... 33
        4.1.1 Software Engineering Versus Pattern Engineering ...... 33
        4.1.2 Where We Looked for Patterns ...... 34
        4.1.3 How to Extract and Evaluate Patterns ...... 35
    4.2 The Generic Research Method ...... 37
        4.2.1 Combining DUTCH with Pattern Engineering ...... 37
        4.2.2 A Single Cycle through Altered DUTCH ...... 37
    4.3 Application of the Research Method ...... 40
        4.3.1 A Description of our Implementation ...... 40
        4.3.2 What we did not measure ...... 41
        4.3.3 Constraints of the Test Group and Test Schedule ...... 41

5 The Results of the Case Studies ...... 43
    5.1 General elements ...... 43
        5.1.1 User profiles ...... 43
        5.1.2 Interviews ...... 43
        5.1.3 Case Software ...... 44
        5.1.4 Collecting Issues in Task Model 1 ...... 45
    5.2 Reviewed New Patterns ...... 45
        5.2.1 Incremental Search ...... 46
        5.2.2 Unified Edit ...... 47
        5.2.3 Calculator Tool ...... 48
    5.3 Non-Reviewed New Patterns ...... 49
        5.3.1 Power Text Edit ...... 49
        5.3.2 Levelled Search ...... 51
    5.4 Overview of Applicable Existing Patterns ...... 51

6 The New Patterns ...... 55
    6.1 Incremental Search ...... 55
    6.2 Unified Edit ...... 56
    6.3 Calculator Tool ...... 58
    6.4 Power Text Edit ...... 59
    6.5 Levelled Search ...... 61

7 Discussion and Conclusion ...... 63
    7.1 Evaluation ...... 63
        7.1.1 DUTCH in Combination with Pattern Engineering ...... 63
        7.1.2 Our Initial Method ...... 63
        7.1.3 The Patterns ...... 64
    7.2 Answers to the Research Questions ...... 65
    7.3 Open Issues and Future Work ...... 65

Bibliography ...... 67

A Surveys ...... 73
    A.1 User Profile Survey ...... 73
    A.2 User Profile Survey Results ...... 76
    A.3 USE Questionnaire ...... 77
    A.4 USE Questionnaire Results ...... 79
    A.5 After-Scenario Questionnaire ...... 80
    A.6 After-Scenario Questionnaire Results ...... 81

B Interviews ...... 83
    B.1 Interview Questions ...... 83
    B.2 Interview 1 ...... 84
    B.3 Interview 2 ...... 86
    B.4 Interview 3 ...... 89
    B.5 Interview 4 ...... 90
    B.6 Interview 5 ...... 93
    B.7 Interview 6 ...... 96

C Use-case Scenarios ...... 99
    C.1 Scenarios for Task Model 1 ...... 99
        C.1.1 Forms Administration ...... 99
        C.1.2 Policy Administration ...... 99
        C.1.3 Claims Administration ...... 100
    C.2 Scenarios for Pattern Evaluation ...... 100
        C.2.1 Incremental Search ...... 100
        C.2.2 Unified Edit ...... 101
        C.2.3 Calculator Tool ...... 101

D Issues ...... 103
    D.1 Scenario-based Issues ...... 103

E Performance Metric Results ...... 107
    E.1 Incremental Search ...... 107
    E.2 Unified Edit ...... 108
    E.3 Calculator Tool ...... 109

F The Initial Method ...... 111
    F.1 Why a different plan? ...... 111
    F.2 The Method ...... 111
    F.3 The Results ...... 112

List of Figures

1.1 RIA screenshots ...... 2

2.1 Waterfall Model ...... 7
2.2 Prototyping Model ...... 8
2.3 Spiral Model ...... 9
2.4 DUTCH Method ...... 10
2.5 Pattern Parts ...... 18

3.1 QIS position ...... 26
3.2 QIS ...... 27
3.3 Elvia Policy System ...... 29
3.4 Unigarant ...... 29
3.5 Coda 2go ...... 30
3.6 Siebel ...... 30
3.7 SAP ERP ...... 31
3.8 Norma EMD/EPD ...... 31
3.9 AMC Zorg Desktop ...... 32

4.1 Generic Approach ...... 36
4.2 Specific Approach ...... 38
4.3 DUTCH altered ...... 38

5.1 Prototype Incremental Search ...... 47
5.2 Prototype Unified Edit ...... 48
5.3 Prototype Calculator Tool ...... 50

A.1 USE Results ...... 79
A.2 ASQ Results ...... 81

E.1 Time-on-task Incremental Search ...... 107
E.2 Clicks Incremental Search ...... 107
E.3 Time-on-task Unified Edit ...... 108
E.4 Clicks Unified Edit ...... 108
E.5 Clicks Unified Edit ...... 108
E.6 Task Success Calculator Tool ...... 109
E.7 Time-on-task Calculator Tool ...... 109
E.8 Clicks Calculator Tool ...... 109
E.9 Clicks Calculator Tool ...... 110

List of Tables

2.1 Typical elements found in a pattern form ...... 18

3.1 The risks of a consumer and the respective policies ...... 23
3.2 The risks of a business and the respective policies ...... 23

5.1 Non-implemented Existing Patterns ...... 52
5.2 Implemented Existing Patterns ...... 53

A.1 User Profile Results ...... 76

D.1 Scenario Issues Forms ...... 103
D.2 Scenario Issues Policy ...... 104
D.3 Scenario Issues Claims ...... 105

Acronyms

ASQ     After-Scenario Questionnaire (12, 13, 36)
CSUQ    Computer System Usability Questionnaire (13)
DUTCH   Design of User Tasks from Concepts to Handles (3, 10, 11, 33–35, 57)
GUI     Graphical User Interface (1, 5)
HCI     Human Computer Interaction (1–3, 5, 10, 14, 16)
PDS     Product Definition System (26)
PT      prototype (33–37, 57, 58)
QFS     Quinity Forms System (26)
QIS     Quinity Insurance Solution (26, 28)
QUIS    Questionnaire for User Satisfaction (13)
RAD     Rapid Application Development (9)
RE      Requirements Engineering (6, 7, 9, 35)
RIA     Rich Internet Application (1, 2)
SE      Software Engineering (3, 5–7, 10, 14)
SUMI    Software Usability Measurement Inventory (13)
TM      task model (33–35, 57, 58)
UCD     user centred design (1)
UI      user interface (16, 17)
UID     user interface design (1–3, 6, 12, 15–17, 19, 33, 35)
USE     Usefulness, Satisfaction and Ease of Use (13, 57, 58)
UVM     user virtual machine (35)

1 Introduction

In this chapter we introduce our research by giving an overview of its context in Section 1.1 and of our motivations for this research in Section 1.2. We introduce our research questions in Section 1.3 and describe our research approach in Section 1.4. We finish this chapter by clarifying the overall structure of this thesis in Section 1.5.

1.1 Context

With the user tasks of software applications growing ever more complex and the emergence of Rich Internet Applications (RIAs) (Figure 1.1), which aim to cover ground online similar to that of desktop application software (Garrett, 2005), well thought-out user centred design (UCD) becomes ever more essential to the commercial success of an application. The same growing need for UCD can be observed in the insurance market. The rise of online shopping has led consumers to expect the acquisition of policies, the handling of claims and the mutation of policies to be possible via online applications. According to NIBE-SVV (2006), the Dutch insurance market expects online sales of bulk insurance policies to have a market share of eighty percent by 2010.

Software engineers are generally not schooled in UCD or Human Computer Interaction (HCI) techniques, and to satisfy software developers' need for explicit guidance and advice in designing Graphical User Interfaces (GUIs), HCI professionals have started capturing user interface design (UID) principles in patterns (Norman and Draper, 1986). Following this, collections of these patterns have arisen (Van Welie, 2009; Tidwell, 2005) and discussions about UID pattern languages have commenced (Schummer et al., 2004).

UID patterns are an evolving research area. The approach was already proposed by Norman and Draper in 1986, but there is still discussion as to the ideal form and use of UID patterns (Van Welie et al., 2000; Richter, 2003). This makes it difficult for anyone to define a complete set of patterns (not that anyone claims to have done so) and there are still patterns out there to be discovered.

1.2 Thesis Background

This thesis reports on a 6-month research project at Quinity1, an IT company that makes web-based software solutions for business administration in the insurance and banking sector.

1.2.1 Objectives

The goal of this research was twofold:

1. Formalise a method for UID pattern identification
2. Identify UID patterns for web-based applications in the insurance market

1 http://www.quinity.com



Figure 1.1: Examples of RIAs: Hotmail and Gmail, two well-known online email clients; YouTube, a video sharing community site; and Google Maps, an interactive map with satellite images

Making use of the method we proposed for the first goal, existing insurance software was analysed to generate UID patterns. Prototype (PT) applications were created that incorporated the newly found patterns. These PTs were then compared to the existing applications to establish the patterns' worth.

1.2.2 Relevance to the Master Information Science Curriculum

This thesis is the result of a Master Project, a graduation project which acts as the conclusion of the study of Information Sciences at the Vrije Universiteit Amsterdam.

“Information Sciences [. . . ] focus on theory development and best practices of effective creation, structuring, processing, communication and sharing of information and knowledge using ICT.” (Vrije Universiteit, 2008)

This thesis provides a relevant contribution to the domain of Information Sciences because it increases insight into the HCI aspects of a specific field of software applications to which people do not often have access and of which many designers have little knowledge.

1.3 Research Questions

With the goals of the research in mind, we define the following research questions:

1. Which patterns are relevant to the insurance market?
2. What forces are applicable to these patterns?


3. When is pattern A applicable and when is pattern B?
4. Is there such a thing as The Insurance Pattern?

We performed this research with these questions in mind and aimed to be able to answer them after its conclusion.

1.4 Research Approach

The following provides an overview of the research method. A more detailed explanation of the complete method is given in Chapter 4.

1.4.1 Formalising a Method

To accomplish the first research goal, formalising a method for UID pattern identification, we used the Design of User Tasks from Concepts to Handles (DUTCH) method described by Van der Veer and Van Welie as an evaluation method for our patterns. We chose this method because with DUTCH, the design of systems. . .

“. . . is driven by an extensive task analysis followed by structured design and iterative evaluation using usability criteria.” (Van der Veer and Van Welie, 2003)

Moreover, the method's iterative nature made it suitable for the implementation of the proof-of-concept PT. Yet, as DUTCH is an engineering method and not a pattern recognition method, it was not applicable in its pure form. We had to extend it to fit our needs for identifying UID patterns by adding the process of pattern identification to the method cycle.

1.4.2 Analysing Insurance Software

To accomplish the second goal, the identification of UID patterns, we evaluated and analysed three existing insurance software cases:

1. Forms Administration System
2. Policy Requisition System
3. Claims Administration System

We then sought out software cases with comparable functional task descriptions and analysed their solutions. From the combination of these analyses, we deduced generic UID patterns, which we then reimplemented into the above-mentioned cases. The evaluation of these PTs provided a measure for the quality of our deduced patterns.

1.5 Thesis Structure

This thesis has the following structure. In Chapter 2 we describe HCI and Software Engineering methods and introduce the concept of design patterns, their history and the current state of affairs in this research domain, to give a scientific background and to place this research in a scientific context. In Chapter 3 we define the different aspects of insurance and the insurance sector, describe characteristics of insurance software and compare contemporary insurance- and general administrative software packages. This serves as general knowledge, to place the


software and patterns described in this thesis in a physical context. Then, in Chapter 4, we explain the research method of this project in more detail. In Chapter 5 we examine, for each case in turn, the existing patterns, the new patterns and the results of the prototype evaluation. After this, in Chapter 6, we present a catalogue of the new patterns we discovered. We conclude this thesis in Chapter 7, where we discuss the strong and weak points of this research and finish by summarising the research and its results.


2 On HCI, Software Engineering and Patterns

To perform this research it was necessary to collect background information from existing literature about user interfaces, engineering methods and patterns. To this purpose, this chapter serves as an introduction to Human Computer Interaction and Software Engineering concepts. We start with definitions of HCI itself in Section 2.1 and follow with definitions of usability in Section 2.2, to give a general outline of the field in which this research was performed. As background for the engineering of patterns, we look at different software engineering lifecycle models in Section 2.3. As background for our evaluation techniques, we look at different types of metrics in Section 2.4, and finally we summarise the existing pattern literature in Section 2.5, completing the scientific context of this research. The sections are ordered from broad background knowledge to specific contextual detail.

2.1 Human Computer Interaction

“Human-computer interaction is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them.” (Hewett et al., 1996)

Although there is no official definition of HCI, the definition above, provided by the Curriculum Development Group of ACM SIGCHI1, covers the ground well. In general, HCI is seen as the study of computers (computer machinery), people (the users) and the interaction between the two. This interaction takes place at the interface of the machine, which can encompass both hardware and software. On the software side, a part of the field concentrates on the creation and design of GUIs, attempting to increase their usability.

2.2 Usability

Just as there are many definitions for HCI, there are many different definitions for usability. The International Standards Organisation defines it as

“the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.” (International Standards Organisation, 1998)

The Usability Professionals Association takes a more production-oriented approach and defines usability as

1 Association for Computing Machinery, Special Interest Group on Computer-Human Interaction


“an approach to product development that incorporates direct user feedback throughout the development cycle in order to reduce costs and create products and tools that meet user needs.” (Usability Professionals Association, 2009a)

Arguably one of the most wide-spread definitions is that of Krug, which states that

“usability really just means making sure that something works well: that a person of average (or even below average) ability and experience can use the thing - whether it's a Web site, a fighter jet, or a revolving door - for its intended purpose without getting hopelessly frustrated.” (Krug, 2000)

There are many more definitions around, some of which can be found at (Usability Professionals Association, 2009b). All these definitions refer to the same underlying notions (Tullis and Albert, 2008):

∙ A user is involved.
∙ That user is doing something.
∙ That user is doing something with a product, system or other thing.

In some literature, a distinction is made between usability and user experience: usability is the ability of the user to complete a task as easily as possible, while user experience also takes the whole experience (the feelings, thoughts and perceptions that result from the whole interaction) into account (Tullis and Albert, 2008). Our view is that usability is a subset of user experience: usability contributes to user experience, but not necessarily the other way around. Throughout this document we will treat usability as such.

2.3 Software Engineering Lifecycle Models

As we need a method to create patterns for UID, which is a part of Software Engineering (SE), we will take a short look at some of the software engineering methods that exist to see if there is one that is applicable to our situation. When developing software it is important to use a method with clearly defined phases. This is preferable over working in a random or ad hoc fashion, in which there is little control or monitoring of the quality and budget of the project. The main elements that every SE method consists of are (van Vliet, 2000):

1. Requirements Engineering (RE)
2. Design
3. Implementation
4. Testing
5. Maintenance

Simply executing these phases in order is too crude a way of working to produce quality software. Therefore, when actually creating software, many companies tend to use more sophisticated methods with extra cycles, feedback loops and quality control. We will now discuss some of these further evolved methods.


Figure 2.1: The waterfall model; adapted from (van Vliet, 2000, p. 50)

2.3.1 Waterfalls, Prototyping and Incremental Development

Although clearly phased approaches to SE with iteration and feedback had already been in use since the 1960s, according to van Vliet, the waterfall model is generally attributed to Royce (1970). The waterfall model is the most basic SE method and consists of the elements mentioned above linked in succession. These phases are combined with verification and validation of the quality of the products between each phase (see Figure 2.1). In spite of verification and validation methods at the end of each phase, in practice it is difficult to maintain a sufficiently high level of quality between phases. Furthermore, unclear or undefined requirements in early phases lead to an error-prone product in a later phase. For these two reasons, the principle of prototyping was introduced. In prototyping, (part of) the phases are executed multiple times (see Figure 2.2). This is done to minimise the effort needed to recover from errors found in later phases. Building prototypes with incomplete functionality enables the engineers to clarify unclear requirements of the end user and to implement this functionality correctly in a later version of the product. In this fashion, prototyping becomes a tool for Requirements Engineering. Another option to work towards a final product without having to create the whole product in one go is to work with iterations and create functionality incrementally. This model is called


Figure 2.2: The prototyping model; adapted from (van Vliet, 2000, p. 53)


Figure 2.3: The spiral model; original source (Boehm, 1988), adapted from (van Vliet, 2000, p. 63)

incremental development. For each iteration the waterfall model is employed and the software gradually grows into the final product. “Developing software in this way avoids the ‘Big Bang’ effect.” (van Vliet, 2000)

2.3.2 RAD

Rapid Application Development (RAD) combines elements from iterative models, such as user involvement and prototyping. On top of this it adds an extra element: “[. . . ] it employs the notion of a time box, a fixed time frame within which activities are done.” (van Vliet, 2000) This time frame is immovable and if making the deadline is in jeopardy, functionality is sacrificed.

2.3.3 Spiral Model

To encompass all previous models, Boehm introduced the spiral model (Boehm, 1988). Every (sub)problem that arises when engineering software can be solved with a certain number of iterations around the spiral, including the RE and maintenance cycles. Figure 2.3 shows the spiral and its consecutive parts.


Figure 2.4: The DUTCH method; adapted from (Van der Veer and Van Welie, 2003; van Vliet, 2000, p. 556)

2.3.4 DUTCH

As each software project has its own characteristics and goals, different SE methods will always be in use. To accomplish specific objectives within the HCI domain, the DUTCH method was developed by Van der Veer and Van Welie (see Figure 2.4). This iterative method has a strong focus on the user of the system under development. The method has four phases which are iterated through until the software is finished.

Evaluating the current situation: The current situation (documents, current software and task analysis) is used to make a descriptive task model.
Envisioning a future situation: The descriptive model is used to define a prescriptive task model which solves the problems of the current situation.
Specifying the system: The prescriptive model is used to define the functionality of the system.
Evaluation: Every phase is constantly evaluated together with users.

We believed this to be a good method to use as a starting point for defining patterns in our research: primarily because of its user-centred nature, and secondarily because of its iterative


nature. We will elaborate on DUTCH and how we implemented it in this research in Chapter 4.

2.4 Metrics

To determine how usable a product is, we have to measure its usability in some form. To accomplish this we can collect different metrics while the user interacts with the product. We will now discuss these different metrics and how to measure them.

2.4.1 Performance Metrics

Performance metrics measure user behaviour during scenarios and tasks. They capture specific actions of the user during interaction with the product, such as mouse clicks and time to complete a task, and are the most objective type of metric as they can be clocked, counted or calculated. The following types of performance metrics exist (Tullis and Albert, 2008).

Task Success: Success measures whether a user is able to complete a task or not. This metric has two forms, binary success and levels of success. Binary success is the simplest way to measure success; it has two possible outcomes, ‘success’ or ‘failure’. Levels of success are more intricate, allowing for partial degrees of success which can be defined as seen fit.

Time-on-Task: Also called task time, time-on-task measures how much time it took the user to complete the task. This is a very effective measure of the efficiency of a product, because the amount of time it takes a user to accomplish a task says a lot about the usability of the product.

Errors: Errors are the number of mistakes a user makes while performing the task. Basically they are incorrect actions which keep the user from completing the task. An error is often an indication of an underlying usability issue.

Efficiency: Efficiency is a measure of the amount of effort a user has to put into a task to complete it. This can be measured by the number of actions a user has to perform to complete a task. These actions can take many shapes or forms, such as the number of clicks or button presses.

Learnability: Learnability is a measure of how performance changes over time. It can be measured by looking at the time and effort needed by the user to become proficient with the product.

Performance metrics are not only useful to see how well users are using the product, but also to estimate the magnitude of usability issues.
If there are many users who encounter a certain usability problem it is more likely to be important than if only one user encounters it. We used performance metrics as part of our evaluation process. Which specific metrics we used and why is discussed in Section 4.3.
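The metrics above can be illustrated with a small sketch. All session data, field names and helper functions below are invented for illustration; this is not part of the method used in this research.

```python
# Hypothetical computation of three performance metrics from logged
# usability-test sessions: binary task success, mean time-on-task and
# mean error count. The session data is invented for illustration.

# Each tuple: (participant, completed_task, seconds_on_task, error_count)
sessions = [
    ("P1", True,  74.0, 1),
    ("P2", True,  52.5, 0),
    ("P3", False, 120.0, 4),
    ("P4", True,  61.0, 2),
]

def success_rate(sessions):
    """Binary task success: fraction of participants who completed the task."""
    return sum(1 for _, done, _, _ in sessions if done) / len(sessions)

def mean_time_on_task(sessions, successful_only=True):
    """Mean time-on-task; often reported over successful attempts only."""
    times = [t for _, done, t, _ in sessions if done or not successful_only]
    return sum(times) / len(times)

def mean_errors(sessions):
    """Mean number of errors per participant."""
    return sum(e for _, _, _, e in sessions) / len(sessions)

print(f"Success rate: {success_rate(sessions):.0%}")         # 75%
print(f"Time-on-task: {mean_time_on_task(sessions):.1f} s")  # 62.5 s
print(f"Errors/user:  {mean_errors(sessions):.2f}")          # 1.75
```

Levels of success could be sketched similarly by replacing the boolean with, for example, a 0.0-1.0 completion fraction.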

2.4.2 Issues-Based Metrics

Identifying usability issues and improving the product design accordingly provides a lot of value to a product, and this process is seen as the cornerstone of the usability profession (Tullis and Albert, 2008). Issues are generally seen as something purely qualitative, where an issue consists of a description of a problem perceived by some users during an evaluation test, possibly with recommendations on how to fix this problem. But using metrics to measure issues adds value to the product without slowing down the process. The following metrics can be added to usability issues.


Frequency: How often an issue arises, or how many users perceived it during an evaluation test, is measured with frequency.

Severity: This is a rating which can be given to an issue to determine how important the issue is, aiding in prioritising issues.

We believe usability issues are a very important part of usability evaluation and incorporated identifying them into our evaluation process.
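As an illustration of how these two issue metrics can be combined, one might prioritise issues by weighting frequency with severity. The issue list, the 1-4 severity scale and the product weighting below are our own hypothetical illustration, not a scheme proposed in the literature cited here.

```python
# Hypothetical prioritisation of usability issues by frequency x severity.
# frequency = number of participants who encountered the issue;
# severity  = rating on an invented 1 (cosmetic) to 4 (blocking) scale.
issues = [
    {"issue": "label unclear on premium field", "frequency": 6, "severity": 2},
    {"issue": "policy wizard loses entered data", "frequency": 2, "severity": 4},
    {"issue": "inconsistent button placement",   "frequency": 7, "severity": 1},
]

# Sort by the combined score, highest priority first.
ranked = sorted(issues, key=lambda i: i["frequency"] * i["severity"], reverse=True)
for issue in ranked:
    score = issue["frequency"] * issue["severity"]
    print(f'{score:3d}  {issue["issue"]}')
```

Any monotone combination would do; the point is only that frequency and severity together give a more defensible ordering than either alone.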

2.4.3 Self-Reported Metrics

When measuring satisfaction, a part of usability, one of the most obvious approaches is to use self-reported metrics. Otherwise known as subjective data or preference data, this data is elicited to inform about the users’ experiences and their perception of the system. A multitude of techniques are available to collect this data in the form of questionnaires and different rating types. These elicitation techniques often use survey-type questioning with Likert-type scales (Likert, 1932) and closed and open questions. An overview of the different techniques is given below (Tullis and Albert, 2008).

Post-task Ratings: The first type of rating one can collect is that associated with tasks. These ratings can give insight into which tasks users found difficult and can point to parts of the system which should be improved (or left alone).

Ease of Use: The user is asked to perform a variety of tasks and to rate how easy or difficult a task is after having performed each one. This provides a rough measure of perceived usability.

After-Scenario Questionnaire: Developed by Lewis (1991), the After-Scenario Questionnaire (ASQ) poses three rating scales, each touching on a different aspect of usability: effectiveness, efficiency and satisfaction.

Expectation Measure: This method proposed by Albert and Dixon (2003) rates the expected difficulty of tasks as well as the perceived difficulty. Comparing the average expectation ratings with the average experience ratings provides special insight into which tasks require priority in optimising.

Usability Magnitude Estimation: A totally different approach, without Likert scales, is proposed by McGee (2003). In this approach users assign usability values to a certain design, possibly in combination with reference ‘good’ and ‘bad’ designs, and then assign relative values to other proposed designs, building up their own scale as they go along.
As stated, post-task ratings are extremely useful for evaluating the usability of a product for a certain task. As we evaluated UID patterns for specific tasks during this research, we incorporated these ratings in our study. ASQ was our rating of choice because of its simplicity and because it provides multiple axes, which we believed to be insightful.

Post-session Ratings: Another type of rating is the overall measure of perceived usability after the user’s whole session with the product. This can give a useful overall usability rating of the product, especially if the same technique is used more than once over a period of time.


Aggregated Individual Task Ratings: This is the simplest technique and consists of taking an average of self-reported data across the different tasks.

System Usability Scale: This scale was developed by Brooke (1996) and is a rating consisting of ten statements together with a 5-point Likert scale. The statements are both positively and negatively worded. The score is calculated, via a special formula, to produce a combined usability rating of the whole system (in contrast to being divided into categories).

Computer System Usability Questionnaire: From the same author as the ASQ technique, the Computer System Usability Questionnaire (CSUQ) contains 19 statements with 7-point Likert scales and an N/A option (Lewis, 1995). All of the statements in CSUQ are worded positively and the result can be viewed in four main categories: System Usefulness, Information Quality, Interface Quality and Overall Satisfaction.

Questionnaire for User Interface Satisfaction: The Questionnaire for User Interface Satisfaction (QUIS) has 27 ratings (Chin et al., 1988) in five categories: Overall Reaction, Screen, Terminology/System Information, Learning and System Capabilities. The first six scales are opposite words without statements (e.g. Terrible/Wonderful); the rest are ratings on a 10-point scale with different anchors depending on the question.

Usefulness, Satisfaction and Ease of Use Questionnaire: The Usefulness, Satisfaction and Ease of Use (USE) questionnaire was developed by Lund (2001) and has 30 rating scales divided into four categories: Usefulness, Satisfaction, Ease of Use and Ease of Learning. Each rating is a positive statement with a 7-point Likert scale. Initially the different statements had a weight coupled to them, but later factor analysis showed that the weights were so close together that the statements could be treated as equal (Lund, 2009).

Software Usability Measurement Inventory: A well-known commercial method is the Software Usability Measurement Inventory (SUMI).
This rating was written by Porteous et al. (1993). It has 50 statements, positive and negative, with a simple Disagree/Neutral/Agree scale. What is interesting about this rating is that, because it is a de facto industry standard and has been used to evaluate many different products, comparisons to other products can easily be made.

Product Reaction Cards: A quite different system, which steps away from the survey principle, is the use of product reaction cards as developed by Benedeck and Miner (2002). This method consists of 118 cards with both positive and negative adjectives on them. The participants choose the cards they think describe the system. Although this method is best used to elicit commentary, it can also be used in a quantitative manner by counting the number of positive and negative cards chosen.

We did not incorporate post-session ratings in our research because we were not interested in the usability of the system as a whole, but only in the usability of the specific patterns. We did, however, provide the above information because we used the USE questionnaire in our initial method, which we abandoned later during the study. Details of this can be found in Appendix F.
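The “special formula” behind the System Usability Scale score mentioned above is well documented: odd-numbered (positively worded) items contribute their rating minus one, even-numbered (negatively worded) items contribute five minus their rating, and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch (the example responses are invented):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (Brooke, 1996).

    `responses` holds the ten ratings in questionnaire order, each on a
    1-5 Likert scale. Odd items are positively worded, even items
    negatively worded, hence the two different contributions.
    """
    assert len(responses) == 10
    total = 0
    for i, rating in enumerate(responses, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5  # scales the 0-40 raw sum to a 0-100 score

# Invented example responses for one participant:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```

Note that the resulting number is a single combined rating; SUS deliberately does not break down into sub-categories.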

2.4.4 Other Metrics

There are even more metrics which can be used to measure the usability of a product. They are listed below.


Behavioural Metrics: During a usability test users perform many actions which are not directly related to the task at hand. This metric measures these actions. Behavioural metrics consist of overt behaviours such as laughing, groaning, tapping fingers or looking bored. One of the best-known behavioural metrics is eye-tracking, well known from its use in advertising.

Physiological Metrics: Physiological behaviours consist of covert events a user displays during usability testing. These can be emotional states such as frustration and stress, or physical measurements such as heart rate and pupillary response.

Combined Metrics: The following two metrics are not pure measurements by themselves. Combined metrics are created by combining multiple base metrics (any of the above) to derive a new score to compare to other measurements.

Comparative Metrics: This metric is found by comparing usability results to expert or ideal result data and calculating the relative position of the product.

We did not incorporate these methods into our study for various reasons. Behavioural and physiological metrics require special equipment to record and are very time-consuming to measure. Taking the resources for this research into account, these metrics were not an option. We did not have any expert or ideal data to create comparative data with, and we did not see any use for combined metrics in this specific study, which is why we left these out of the study as well.

2.5 Patterns

“A Design pattern is a structured textual and graphical description of a proven solution to a recurring design problem.” (Borchers, 2001, p. 7) Patterns are a structural way to aid designers and engineers in their decision making process. Patterns describe generic solutions to generic problems. The user of a pattern is expected to have (some) knowledge of the domain; he still has to decide when and where to use a certain pattern. Therefore the word “proven” in the above definition implies that the solution works for the problem, but is not necessarily the best solution given the engineer’s context. Likewise, a single problem can have multiple solutions depending on this given context (Tidwell, 2006; Borchers, 2001).

The uses of patterns are manifold: not only do they enable the reuse of solutions and prevent designers from reinventing the wheel, but because of this they also help cut down development costs, time and errors. Moreover, patterns improve communication about the development project. First of all, they create an abstraction level between the problem and the solution, making the solution more comprehensible because lower layers are concealed. Second, they enable simpler communication between developers themselves, because developers can talk about generic solutions which no longer need to be explained to each other (Snijders, 2004).

We believe patterns to be very useful tools in the design and SE process. Patterns have proven their use in technical terrains and are now being explored for other terrains, such as HCI. We believe this exploration to be a positive venture and wish to add to the pattern knowledge with this research. In the following sections we will look at patterns a little more closely. First we will give a short history of patterns as background information. Then we will describe the anti-pattern, another


form of pattern. Next, we describe different types of patterns to create a context in which to place the UID pattern, which we will then discuss in more detail. As the use of UID patterns is a relatively recent development, we will also describe their predecessor: guidelines. After this we go into some depth on how to write patterns. We finish with a view on the current state of pattern languages and existing pattern collections.

2.5.1 Pattern History

The oldest form of patterns dates back to 1977, when Alexander, Ishikawa, Silverstein, Jacobson, Fiksdahl-King, and Angel defined many patterns for construction, architecture and community-based design (Alexander et al., 1977). Alexander found that it was possible to create “quality without a name” using available architectural and constructional knowledge, and bound this knowledge into a total of 253 patterns. These patterns were all reviewed by other architects and tested in the real world as part of their validation. Patterns were introduced into software engineering by the Gang of Four (GoF for short) in 1995 with their book Design Patterns: Elements of Reusable Object-Oriented Software. The design patterns movement is “probably the most important step forward in object-oriented design” (Eckel, 2009). Gamma et al. described 23 technical design patterns in their book (Gamma et al., 1995) and over the years more patterns have been added to this collection.

2.5.2 Anti-Patterns

Adjacent to patterns, which provide generic solutions to generic problems, there are anti-patterns, which describe generic non-solutions or failures (Brown, 1999). Anti-patterns describe common pitfalls and misconceptions, explaining why a solution to a generic problem which seems correct actually is not. On the subject of failed software Brown claims the following. “These repeated failures, or “negative solutions”, are highly valuable, however in that they provide us with useful knowledge of what does not work, and through study: why. Such study, in the vernacular of Design Patterns can be classified as the study of Anti-Patterns.” (Brown et al., 1998)

Recent research has discouraged the use of anti-patterns. It is claimed that they cause pitfalls in the cognitive process because they focus on what goes wrong instead of what goes right. Van Biljon, Kotzé, Renaud, McGee, and Seffah (2004) state that the anti-pattern should not be used when the positive pattern has not been documented thoroughly, and that it is therefore generally advised not to use them. We believe that anti-patterns can be useful in certain situations where patterns have many variations. In the process of defining patterns, however, we do not believe anti-patterns to be constructive because, as stated above, they focus on what goes wrong instead of right and therefore cloud judgement. We think that we should focus on grasping the ideal form of a pattern and then perhaps, once this is defined, come back to describe its pitfalls.

2.5.3 Pattern Types

There are many different abstraction layers in which to view software development and many different aspects of software itself that can be taken into consideration. Virtually every layer and aspect has its own methods and best practices recorded in pattern form. Following is a list of the most prominent pattern types (Snijders, 2004).


Process Patterns: These are patterns which define procedures during the lifecycle of applications. They streamline projects and aid standardisation and continuity in the development process. The lifecycle models in Section 2.3 are examples of process patterns.

Analysis Patterns: Developed by Fowler (1996), these patterns define virtual representations of business concepts for use in information systems. These can be used to map products to a virtual environment. Although many companies make use of this concept, these companies are rarely aware of it and there is little literature about the pattern type and its use.

Technical Design Patterns: Introduced by the GoF (see Section 2.5.1), these well-known patterns describe solutions to technical problems which arise during the development of software.

User Interface Design Patterns: These patterns describe standards and best practices for UID. They record user task flow, dialogue, functionality, organisation of screens, data visualisation and the organisation of graphical elements on the screen.

Functional Patterns: These are patterns which describe recurring functional aspects of (domain-specific) applications.

There are also less well-known patterns, such as performance patterns, which merit mentioning. But to discuss each pattern type in detail would be outside the scope of this document, as each pattern type merits its own research. The only type of pattern which is of interest to us is the UID pattern. We will discuss this pattern type in detail in the next sections.

2.5.4 User Interface Design Patterns

As mentioned above, UID patterns are used to describe everything that is applicable to user interfaces (UIs). To comprehend exactly what a UID pattern is, it helps to know that they can also be called Interaction Design Patterns. These patterns are meant to describe all interaction that takes place at the UI of a machine: task flow, user-machine dialogue and functionality. Van Welie explains it best when he states, “These patterns concern the static structure, the appearance and dynamic behaviour of the user interface, but not the implementation in terms of coding. They include the ‘look and feel’ of the interface as far as it goes beyond mere style.” (Van Welie, 2001) The use of UID patterns in HCI was first proposed by Norman and Draper (1986) but did not really take off until the first pattern collection arose with Tidwell’s Common Ground collection (now superseded by her Designing Interfaces collection). UID patterns are different from standard patterns, as UID patterns are created from the user’s perspective instead of the designer’s, as technical design patterns are. The problems described in UID patterns are only indirectly the designers’ problems (Van Duyne et al., 2003; Van Welie and Traetteberg, 2000; Segerståhl and Jokela, 2006). These problems, which the user has, can be categorised according to the following principles (Norman, 1998; Van Welie, 2001).

Visibility: The ability to understand something just by looking at it.

Affordance: The perceived and actual properties of an object which indicate how it is to be used.

Natural Mapping: The creation of a relationship between the task the user wishes to complete and the mechanism to accomplish this.


Constraints: The reduction of the number of ways to perform a task, or of the knowledge needed to perform it, thus making the mechanism simpler.

Conceptual Models: The correspondence of the user’s understanding of the system to how it actually works.

Feedback: The indication of whether tasks are completed and completed correctly.

Safety: The protection of the user against unintended mistakes or irreversible actions.

Flexibility: The ability to adapt to a user’s way of working and to change things later on in a task process.

These categories give an indication of where to look for issues within software. Although they are very general, we believe it useful to take these different views into consideration as they may enable us to discover solutions which are not obvious at first.

2.5.5 Guidelines versus Patterns

In the past, designers have used guidelines to aid them in the design of UIs. Guidelines have multiple usability issues. First of all, they describe both dos and don’ts. Secondly, it is not defined when a guideline is applicable or when to deviate from it. This choice is left up to the experience and knowledge of the designer, who may not be up to the task. Guidelines do not capture the context, forces or rationale applicable to a certain solution. Patterns are preferred over guidelines because they formulate examples solely of good design. Moreover, they are more effective than guidelines because of the structured and generic way in which they discuss problems and their solutions (Van Welie et al., 2000). Although UID patterns in turn have their own usability issues with designers (Segerståhl and Jokela, 2006; Seffah and Javahery, 2002), their use is still seen as “highly beneficial” (Seffah and Javahery, 2002). The usefulness and effectiveness of patterns have been discussed through the years and extensions have also been made to improve them (Van Welie and Traetteberg, 2000; Ahmed and Ashraf, 2007).

2.5.6 Writing Patterns and Pattern Structure

There are many different ideas on what written patterns should look like. The basic idea set forth by Alexander is that “Each pattern is a three-part rule, which expresses a relation between a certain context, a problem, and a solution” (Alexander, 1979, p. 247). The way different parts of a pattern relate to each other is shown in Figure 2.5. Common aspects of a pattern form are shown in Table 2.1. There are many different forms in which patterns can be written, the form of choice depending on the goal at hand. Besides being narrative or tabular, different pattern forms can contain other elements, such as forces and force resolutions. An overview of different pattern forms is given by Fincher (2008). The actual writing of useful and reusable patterns is seen as a very difficult task by the pattern community. Beck et al. (1996) state that only a certain group of professionals is able to see patterns as well as describe them correctly. Writing patterns is a group process which requires a lot of interactive and iterative reviewing to get right. Some of the best practices have been recorded by Meszaros and Doble (1998) in the form of interconnected patterns, a pattern language in itself. As daunting as writing patterns may seem, this research was done to accomplish exactly that. We believed that with an iterative process and user feedback it would be possible to define


Figure 2.5: The relationships between different pattern elements; adapted from (Snijders, 2004)

Name: A name serves to uniquely identify a pattern, but is also often used to browse through a pattern collection.

Problem: The problem is a very short description of the typical problem that the pattern solves. It is often a single line, with just the bare essentials of the problem, since the structure of the pattern has possibilities to elaborate on the problem in other sections.

Context: The context says something about when the given solution applies, and sometimes (when considered necessary) it states when the solution does not apply. This is the part where a ‘situation’ is sketched that is exemplary for this pattern to work.

Solution: The solution is a short description of how to solve the design problem at hand in the given context.

Rationale: The rationale is often a more in-depth answer to the question of ‘why’ the solution works in that context. This is the part of the pattern from which someone can actually gain design knowledge by understanding the reasons behind the solution. Applying this knowledge in another context or to another problem can lead to new design ideas.

Example: Patterns are ‘proven solutions’ to a ‘recurring design problem’; therefore examples of the solution should be easy to find. The example section often contains images to clarify what is meant.

Table 2.1: Typical elements found in a pattern form


candidate patterns which have a basis of reliability and reusability due to their review by a test group. We will expand more on this in Chapter 4.
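To make the elements of Table 2.1 concrete, they can be captured in a simple data structure. The `Pattern` class and the abbreviated ‘Wizard’ entry below are our own hypothetical illustration, not a form prescribed by the pattern literature.

```python
from dataclasses import dataclass

@dataclass
class Pattern:
    """The typical pattern-form elements of Table 2.1 as fields."""
    name: str       # unique identifier, also used for browsing a collection
    problem: str    # one-line statement of the recurring design problem
    context: str    # when the solution does (and does not) apply
    solution: str   # short description of how to solve the problem
    rationale: str  # why the solution works in this context
    example: str    # reference to a real occurrence of the solution

# A hypothetical, heavily abbreviated entry:
wizard = Pattern(
    name="Wizard",
    problem="The user wants to reach a goal that requires many decisions.",
    context="A complex, infrequently performed task with a strict ordering.",
    solution="Split the task into a fixed sequence of small steps.",
    rationale="Fewer choices per screen guide novice users through the task.",
    example="Online insurance applications that gather policy data step by step.",
)
print(wizard.name)  # Wizard
```

Real pattern forms add further elements (forces, related patterns), but the six fields above cover the common core described in this section.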

2.5.7 Pattern Languages

The patterns provided by Alexander et al. (1977) were structured to create a language that related all the patterns together. A pattern language is an interconnected set of patterns, organised and structured in a meaningful way from the point of view of the user of the set (Van Welie and Van der Veer, 2003). Languages have the structure of a network: higher-level patterns are supported by lower-level patterns, and neighbouring or connected patterns round off the problem at hand, encompassing the total project (Jackson, 2008). At the moment it is difficult to write a UID pattern language, as there is still too much discussion about the ideal form of the patterns themselves. There are currently calls from the designer community for standardisation of pattern forms (Stapleton, 2009).

On a less structured level we have pattern collections or catalogues. These are basically any set of patterns, often with some form of categorisation. In practice, collections contain patterns corresponding to a certain domain. Since the Common Ground pattern collection (Tidwell, 1999), many collections have emerged: some in books such as (Borchers, 2001) and (Kunert, 2009), some in books with corresponding support websites (Tidwell, 2006, 2005) and (Van Duyne et al., 2003, 2006), but most of them online (Yahoo!, 2009; Van Welie, 2009; Toxboe, 2009). There are many specific collections which cover small domains or specific categories of interaction, such as information display patterns (Behrens, 2008) and game interaction patterns (Folmer, 2008). An overview of well-known collections can be found at (Erickson, 2009) and (Borchers, 2006). The relationships between patterns can create different views dynamically for task domains or problem hierarchies when implemented in interactive tools such as Quince (Infragistics, 2009) or the Visual Design Pattern Wizard (Hennipman, 2008). The richer the relationships are, the more one can speak of a language.
With enough interconnections a single pattern may even exist in multiple languages. A test was designed by Todd et al. (2004) and administered to existing collections to determine if these collections could be called a language. The results of this test showed that there is still much work to be done in this department. We did not aim to create a pattern language of ‘insurance patterns’ with this research. That would have been too large a task for the resources at hand. However, hopefully the patterns that we found can someday be incorporated in an existing language or catalogue, or possibly they can be expanded to create a language.


3 On Insurance and Insurance Software
When studying software for the insurance market it is necessary to get a general impression of the product insurance: what it is, how it works and how the companies that provide it work. In Section 3.1 we investigate what insurance is, to establish what insurance software has to be capable of. We then outline how the insurance market works, as background information for the context of the software. After this, in Section 3.2, we define the software requirements that can be derived from this background information, as a basis for later analysis. Finally, we take a brief high-level look at the software package we will be testing, as well as comparable insurance packages and administration software.

3.1 What is Insurance?
During their lives, people build up wealth and property. Concurrently, a need arises to secure this wealth against unforeseen events. The principles of insurance are based on this need. A general description of insurance, based on NIBE-SVV (2006)1, is given below.

3.1.1 The Principles of Insurance
Insurance caters to the need for financial security in two ways.
1. By counteracting financial losses inflicted by damages (e.g. indemnity insurance).
2. By remitting a payment at a certain time in life (e.g. life insurance).
An indemnity insurance compensates the types of loss that are consequences of an insured event. This could be in the form of financial compensation for material losses that occur due to burglary or an accident. Life insurance provides for the need of a person to receive a single sum of money or a series of remissions in the event of reaching a certain age or death. Examples of this type of insurance are pension plans, where the insured person creates a retirement fund to provide for themselves during old age. By Dutch law2, an insurance is defined by the following elements.
∙ There has to be a notion of a contract, otherwise known as a policy.
∙ The contract has to include the payment of a premium.
∙ The contract’s goal is to provide one or multiple remissions.
∙ The size of the remission, the number of remissions or the length of the premium payments has to be uncertain.

1 Nederlands Instituut voor het Bank- en Verzekeringsbedrijf: the Dutch Institute for Bank and Insurance Companies
2 Burgerlijk Wetboek art. 7:925 (Dutch Common Law)

In The Netherlands the provision of insurance services is supervised by the Authority for the Financial Markets and De Nederlandsche Bank. The Authority for the Financial Markets monitors the conduct of insurance companies so as to protect the consumer, and De Nederlandsche Bank checks the solvency and organisational structure of insurance companies to ensure that they do not go bankrupt.

3.1.2 The Premium
To determine the amount of the premium for insurance policies, the law of large numbers is used. The total cost of a certain type of loss is estimated for a very large group using statistical analysis. This amount is then divided over the group, resulting in the risk premium. The actual premium consists of the following elements.
∙ Risk premium
∙ Portion for business costs and provision
∙ Portion for profit
∙ Insurance tax
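The build-up described above can be sketched in a few lines of code. All figures below (group size, expected loss, loadings and the tax rate) are purely illustrative assumptions, not values taken from this thesis or from Dutch law:

```python
def risk_premium(expected_total_loss, group_size):
    # Law of large numbers: spread the statistically expected loss over the group.
    return expected_total_loss / group_size

def gross_premium(risk, cost_loading, profit_loading, tax_rate):
    # Actual premium: risk premium plus loadings, with insurance tax on top.
    net = risk + cost_loading + profit_loading
    return net * (1 + tax_rate)

# Illustrative figures: 10,000 policies, EUR 1,000,000 expected total loss,
# EUR 20 cost loading, EUR 10 profit loading, 7.5% insurance tax (assumed rate).
risk = risk_premium(1_000_000, 10_000)        # 100.0 per policy
premium = gross_premium(risk, 20, 10, 0.075)  # 139.75 per policy
```

The split into a pure risk component and separate loadings mirrors the element list above; an actuary would of course use far more refined models for each component.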

3.1.3 Distribution of the Products
Insurers use multiple channels for distributing their products to maximise their sales. The distribution of insurance products is done by the following agents.
∙ The insurance company itself, also known as direct writing.
∙ A broker-type intermediary.
∙ An authorised representative or empowered intermediary.
There are two important things to note here. First, with banks offering insurance products as authorised representatives, the line between insurance companies and banks is getting thinner. Second, brokers are slowly disappearing, mostly because consumers are able to acquire their products themselves via direct writing, with the help of the internet.

3.1.4 The Market Target Groups
The insurance market is basically divided into two target groups: businesses and consumers. Consumers are people that act from within a personal situation, whereas businesses act as a collection of people or carry much larger risks. The risks for consumers and businesses are described in Table 3.1 and Table 3.2, respectively. Within a company, the employees can be grouped together and seen as a collective. Insurers make special products for collectives, which usually entail a discount or special packages for a certain company. Many companies also arrange their employee benefits in this manner.

Risks                                       Policy types
Loss of, or damage to personal belongings   Fire and theft, Auto, Valuables
Life, death, sickness                       Life, Health, Disability
Other capital damages                       Liability, Legal aid

Table 3.1: The risks of a consumer and the respective policies

Risks                                         Policy types
Loss of, or damage to commercial properties   Fire and theft, Valuables
Capital damage                                Industrial damages, Liability, Legal aid
Employee risks                                Old age, Death, Disability, Health

Table 3.2: The risks of a business and the respective policies

3.2 Insurance Software
The functional requirements for insurance software are not unique (Quinity, 2003a). Ease of use, speed, low costs and transparency of the provision of services are essential for a competitive market position. The challenges for insurance companies are as follows.
∙ To lower the time-to-market for products.
∙ To improve the efficiency of operational processes.
∙ To deliver real-time reports.
∙ To maximise chain integration.
On top of these requirements, which arise from competition between insurers, consumers impose requirements on the insurance software as well. Consumers have grown accustomed to handling a lot of their business online. This calls for the existence of a front-end where users can view their insurance file, file claims or handle other insurance-related issues (Quinity, 2005, p.3). Insurance software packages therefore have a certain set-up which recurs across companies. The main elements are as follows.
∙ Policy administration
∙ Connections to external (legacy) applications
∙ Debtor administration
∙ Claim administration
On top of these elements, many products have different output channels to cater to their varying user groups.

3.2.1 User groups
In view of the different distribution channels, the structure of the insurance companies and the recurring software goals, the following (large) user groups can be defined.
Consumers As the largest group of all, consumers will use the system for buying their insurance products online and for managing their policies and claims. These users have little or no domain knowledge and cannot be expected to be computer literate. Consumers need a system that is easy, fast and fool-proof.
Intermediaries As the main task of this group is selling products using the system, this group has some domain knowledge and is especially familiar with the different products. Users in this group can be expected to be computer literate, but they cannot be expected to have a lot of time to learn how to use the system.
Insurance Staff This group can be seen as experts on the insurance domain. The group consists of several subgroups: Policy Acceptants, Fraud Experts, Claim Experts and other specific professions within the insurance administration, who all practise their specific tasks on a daily basis. These users have very specific knowledge that they have to process with the system, which has to cater to their specific needs. The speed at which they perform a task is their main concern, because their main goal is to process as much information as possible. Insurance staff can be expected to be more computer literate than average, and they can also be expected to take a reasonable amount of time to learn how the system works, as it will be very specific to their job description.

Administrators This group has the highest clearance of all the groups. Just like the insurance staff, the administrator group can be split up into multiple functional subgroups, such as Insurance Staff administrators and Product Developers. The administrator group works ‘behind the scenes’ as they manage the staff and the products. This group is highly computer literate and has moderate knowledge of the insurance domain. The work they do is extremely complex, so the system has to be very stable and clear. Speed is less of an issue to this group, whereas consistency and efficiency are.
By analysing the different subgroups, we see that insurance software has to deliver custom interfaces for virtually the whole spectrum of user types. Luckily, the situations in which the different users interact with the system are distinct, so the interfaces do not have to be intertwined and can be physically separated. During this research our focus lay on a mixture of the Intermediaries, Insurance Staff and Administrators, because their tasks are the most complex in the system and we had the best chance of finding new patterns there.

3.2.2 The Future of Insurance Software
Certain developments are active in the Dutch insurance market today. If we extrapolate from these, we can get an indication of where the market and the software supporting it are headed (NIBE-SVV, 2006).
The most prominent change is chain integration. Adjusting different automated systems to work together creates major cost reductions for companies, because it enables them to minimise overhead. A large force in chain integration is the use of internet technology to connect the different systems or locations.
The connecting of different systems and locations is becoming more important as the number of regulating organisations grows. Although the Dutch government is privatising different markets to create a more level playing field, the administrative overhead for insurers is growing because all the regulating organisations need to have access to data at any given moment. Each organisation has its own flavour of reports and digital forms, and it is up to insurers to adhere to these.
The internet is also a new distribution channel for insurance products. Insurance companies can now easily get into direct contact with their consumers. Selling insurance online is especially easy for bulk products such as indemnity insurance. NIBE-SVV expects the sales share of bulk products via the internet to be 80% by the year 2010.
In relation to the selling of products via the internet, the market for intermediaries is shrinking. Smaller intermediaries are being taken over by larger ones, or by insurance companies themselves, in an attempt to get a grip on the market. There seems to be a tendency towards fewer but larger companies with more specialised customer advisers.
An issue which has always been present, but which has only recently come into play, is fraud. In the past not much attention was paid to fraud because the investment was not worth the return. That tipping point has long been passed, however, and many insurers are becoming very keen on catching fraud. There is a big role to play for software systems in helping insurers with this task.
In the coming years, the ageing of the population will pose a problem for most insurance companies. They will have to pay out larger remissions, and on top of this health costs are increasing. To compensate for this, many insurers are expanding their services to hospitals and healthcare centres so as to pay their remissions in kind.
In conclusion, the cooperation between the different parties is increasing. This calls for further streamlining of data processing and work flow. On top of this, the services of insurance companies are being expanded to social services as well, which increases the need for cooperation even more.

Figure 3.1: QIS positions itself in between the different elements of an insurance company, from (Quinity, 2005)

3.2.3 Quinity Insurance Solution
The system we evaluated during this research was the Quinity Insurance Solution (QIS), a multi-tiered, modular software package based on web technology that aims to encompass the whole insurance process. It covers all the different assets of an insurance company, as shown in Figure 3.1, and thereby attempts to offer a total solution for insurance companies. QIS consists of many modules, but the two modules that provide its base are the Product Definition System (PDS) and the Quinity Forms System (QFS). The PDS enables product developers to define actual insurance products on a logical level as objects and add them to the system without programming knowledge (Quinity, 2005). QFS allows an employee to create forms for any general purpose, but most importantly to create forms which have a specific mapping onto the insurance product objects. Together, PDS and QFS become a powerful combination, as insurance products can be implemented simultaneously with their request and mutation forms. This cuts down the need for highly trained programmers and shortens the time-to-market for new products (Quinity, 2003b). Screenshots of QIS are shown in Figure 3.2. Together with its other modules, a system is realised which can be accessed through three different channels (Quinity, 2003a), which service the different user groups mentioned in Section 3.2.1 nicely.
∙ Intranet for administrators and insurance staff
∙ Extranet for intermediaries
∙ Internet for consumers
The main goal accomplished by using these different channels and web technology is that a single (though modular) application can be built which suits the needs of its different users. This is possible because each channel has a different goal and therefore a different face.

3.2.4 Other Software
To place the software we were evaluating in perspective, we also did a short analysis of comparable software. This was rather difficult for a number of reasons. First, the software and data that insurance companies own are very privacy-sensitive. Insurers are not keen on prying

(a) Form system

(b) Form System

(c) Policy System

(d) Product Definition System

Figure 3.2: QIS is built up of many different modules

eyes and we were therefore not granted access to active software environments. Secondly, most of the packages used are custom-made due to their specific nature and the companies’ requirements. This makes it hard to find general information about that software. The information we were able to find, for the most part, only supplied us with a general description of the systems and not with any visuals. The following is the information we were able to find.
Mona Lisa A custom-made policy registration solution used by Monuta and Univé is Mona Lisa (Univé has named the system Impulsa and has a personalised implementation), which was built for them by Solidium and Atis (Hilarius Media, 2003). This system was a replacement for an older mainframe-based application. Mona Lisa is built in an Advantage Plex environment (Computer Associates Plex Wiki, 2008), a product of Computer Associates. The Mona Lisa program is data-oriented and consists mainly of overview screens with tables and detail screens with editable properties.
Elvia Policy System A solution from the insurance company Elvia3, a daughter company of Mondial Assistance4, is the Elvia Policy System (EPS). The EPS is a perfect example of how an information system enables intermediaries to sell products to consumers via their website. The system looks like a simple web form, but when we study the code we can see that the form is loaded into the website in an iframe and is not part of the intermediary’s site at all. On the back-end there is a centralised online system to manage everything from a single application (Figure 3.3).
Salesgarant Salesgarant5 is the extranet channel of Unigarant6. Though we were not able to see this environment in action, Unigarant’s own site allows consumers to request policy quotations via forms (Figure 3.4). This points towards similar forms interaction.
AllianzNet Insurer Allianz7 has multiple intermediary extranet systems: AllianzService, AllianzNet and Allianz Allegro. AllianzNet communicates with a legacy CICS mainframe in the back-end. The communication between the front- and back-end is mainly done with WebSphere technology (The Future Group). Sadly, we were not able to obtain any screenshots of this application.
Certigo Certigo8 by Netaspect9 is an all-round insurance solution with many characteristics similar to QIS. The company, however, does not divulge any information about its interface anywhere.
Because it was so difficult to get access to actual competitor software, we turned to other candidates. We found that certain off-the-shelf packages for (financial) administration are used by various insurance companies as parts of their total system.
Coda The finance system specialist Coda10 has developed many products for financial administration. Recently they have launched their programs as an online service called Coda

3 http://www.elvia.ch/
4 http://www.mondial-assistance.com/
5 https://www.salesgarant.nl/
6 http://www.unigarant.nl/
7 http://www.allianz.nl
8 http://www.certigo.nl/
9 http://www.netaspect.nl/
10 http://www.coda.com/

(a) New policy

(b) Policy mutation/cancellation

Figure 3.3: Screenshots of the Elvia Policy System; source (Elvia)

Figure 3.4: Screenshot of the Unigarant policy quotation request form; source (Unigarant N.V., 2008)

(a) Homepage

(b) Opportunities

(c) Cash invoice matching

Figure 3.5: Screenshots from Coda 2go; source (CODA Ltd., 2009)

Figure 3.6: Screenshot of Siebel; source (IBM, 2009)

2go11 in cooperation with Salesforce12. The interfaces shown in the tour of the different software solutions look well-ordered and intricate (Figure 3.5). There are a lot of tables, data and forms, but they are shown in a well-structured manner. Although the navigation is data-oriented, the terminology seems applicable and there are also a few task-oriented shortcuts.
Siebel Customer Relationship Management The Siebel CRM solution13, created by Oracle, comes in two flavours: the stand-alone version and the online On Demand version. This system too is very data-oriented. Although the interface has similarities to Coda’s, Siebel does not do a very good job: the interface seems crowded and unstructured (Figure 3.6).
Enterprise Resource Planning SAP14 is a company that builds many enterprise solutions. The interface to its enterprise resource planning software, though bland, seems to do the trick nicely. It is very clean and clear-cut, with excellent visualisations used for wizards and task progress (Figure 3.7).
To increase the amount of comparable software as much as possible, we also looked at other types of administrative software. We found two very good examples in the medical sector.
Norma EMD/EPD In some hospitals in The Netherlands a program is used called Norma EMD/EPD15, made by MI Consultancy16. This program administers the medical status of patients. The screenshots of this stand-alone program in Figure 3.8 show

11 http://www.coda2go.com/
12 http://www.salesforce.com/
13 http://www.oracle.com/us/products/applications/siebel/index.htm
14 http://www.sap.com/
15 Elektronisch Medisch Dossier/Elektronisch Patiënten Dossier: Electronic Medical File/Electronic Patient File
16 http://www.miconsultancy.com/nl/

(a) Sales processing

(b) Travel management

(c) Relationship management

Figure 3.7: Screenshots from SAP ERP; source SAP A.G. (2009)

(a)

(b)

(c)

Figure 3.8: Screenshots of Norma EMD/EPD; source (MI Consultancy, 2009)

that it is very data-oriented, with long lists, tables and property sheets. Without domain knowledge this program is definitely very confusing.
Zorgdesktop Formerly known as Poliplus, the AMC Zorgdesktop (AZD) is used by the Amsterdam Medical Centre (AMC) to view patient data, such as lab results and medical history. AZD was developed by the AMC themselves and uses modules provided by ChipSoft17, a medical specialist software company. AZD has an extremely tabular interface, much like Norma. The structure in the forms, however, is lacking, and there seems to be little attention paid to human factors: the screen colours are not easy on the eyes and the layout is inconsistent across screens (Figure 3.9).
The different software packages we saw proved not to be very spectacular. For the most part, the systems consist of overview, detail and input screens in which the administrated objects can be viewed or edited. There is a general lack of intuitive design, as most of the interfaces are data- and not task-oriented, because this is simplest to implement. We think that transforming this orientation will be the greatest challenge in our quest for defining patterns.

17 http://www.chipsoft.nl/

(a)

(b)

Figure 3.9: Screenshots of AMC Zorg Desktop; source (ChipSoft, 2009)


4 Method
In this chapter we discuss the method we used to find the answers to the research questions proposed in Section 1.3. First, we explain our views and ideas on pattern engineering in Section 4.1. Following this, we describe the adaptations we made to DUTCH in Section 4.2, presenting our method, with the adapted DUTCH, in its ideal form. Finally, we detail the specific considerations of our implementation of this method for this research in Section 4.3.

4.1 Our Approach
This research had two objectives, as mentioned in Section 1.2.1.
1. Formalise a method for UID pattern identification.
2. Identify UID patterns for web-based applications in the insurance market.
To accomplish both of these goals we devised an approach to extract patterns. When devising this approach we took into consideration the differences between software and pattern engineering, the characteristics of where to find patterns, and our own ideas on how to extract patterns.

4.1.1 Software Engineering Versus Pattern Engineering
There are differences between pattern engineering and normal software engineering. The difference which was of most interest to us was one of tactics. With software engineering, the goal is to improve the software under scrutiny to its maximum potential within the boundaries and restrictions, such as money and time, imposed on the project by the context in which it is performed. When evaluating this software, we wish to test that single piece of software as many times as possible, in as many different ways as possible, to minimise the possibility of errors being present in the system.
When engineering UID patterns, this is a different story. To find patterns, we want to compare different software packages which have similar functional elements, to find overlap between them and extract generic elements. Using different software packages which exhibit similar functionality as input for a certain pattern will give us a broad overview of which things work in an implementation and which do not. Testing the same software over and over again would not increase our knowledge of a pattern; we do not wish to test the software, we wish to test the pattern. An approach to test the pattern would be to implement it in a package which does not contain this pattern (or at least not in its entirety) and to see if the usability of the system improves with this implementation. Another possibility would be to create a fictional context with different prototype systems, which all have the extracted pattern implemented to a certain degree, and find which of the systems has the highest usability. Whichever method is used, hopefully the prototype with the extracted pattern fully implemented has the highest usability.

If this is not the case, there are two possible things that could be incorrect in our pattern specification.
1. We have not captured the user’s problem correctly. The problems in the compared cases differ on a functional level because we have missed something in the problem description, i.e. the solution is correct but not applicable to this specific problem.
2. We have not captured the solution correctly. The combined solution that we have developed does not cover all the problems the user is facing, i.e. the problems match up but the combined solution we have defined is lacking in some way.
For instance, if we were to look at a pattern for searching, we would find that there are different types of searching a user can perform, which require different ways in which the results of a search should be displayed. Searching for an email contact is functionally different from searching for lemmas in an encyclopaedia, because the former is about specifying and refining our search to find one specific contact, whereas the latter, although containing a refining element, is primarily about aggregating results to collect as much relevant data as possible.
Drawing from the above, the goal of evaluating a software system is different in pattern engineering than in software engineering. Software systems in pattern engineering function as test beds in which a pattern can be substantiated. Whether we improve the software or not is not really the issue; it is whether we extract the pattern correctly that counts. Of course, applying a correct pattern, and applying this pattern correctly, should mean that the task to be performed by the user is made easier and, in turn, that the usability of the software is improved. However, it is the difference in scope and focus that we wish to distinguish here.

4.1.2 Where We Looked for Patterns
Taking into account that this research project had a time limit, we had to make a selection of the areas where we looked for patterns. We formulated the following groups of user tasks as target areas in which to search for patterns in the software under examination.
Tasks performed often Characteristic of a pattern is that it is a reusable generic solution to a generic problem. We thought it most logical to find a reusable solution in a task that occurred frequently throughout the system. An example of this is searching: whether editing customer details, a policy or an email, the object that we wish to edit has to be found first.
Tasks that often fail A task that fails is most probably special in some way; it is possibly difficult to perform because of its cognitive load, or because the context is misleading in some way. Whatever the case, if we are able to find or define a similar case in which the user can perform the task successfully, we are able to compare the two scenarios and distil the differences between them, pointing us in the direction of a pattern. An example of this is when the user is asked to input financial values that are dependent on each other. The amount of calculation that the user has to perform often confuses him, resulting in wrong values being entered.
Tasks that often succeed In a situation where a task is often performed successfully, there could well be something special about the implementation as well. Perhaps a task is so simple that it is virtually impossible not to complete, but if this is not the case, it pays to investigate different solutions. In contrast to tasks that fail, where we look at the differences between failing and succeeding solutions, here we look at the similarities. Generalising

these qualities can lead us to patterns as well. An example here is navigating a menu structure: when a user is able to find the information he is looking for quickly and easily, the information has been structured well.
We do not dispute that patterns exist in other areas. For instance, the changing of a user password is a task that the user will most probably not perform often and therefore does not fall into one of the categories above. It is important for this function to be implemented correctly if the user is to be able to interact with the system, and a pattern can probably be found for it. However, we believed patterns to be most useful in the areas mentioned above because of the impact these patterns could have on the development of software. A task which is performed sporadically is less of an issue than one performed on a regular basis.

4.1.3 How to Extract and Evaluate Patterns
In relation to where to look for patterns, we also formulated an approach to extract them. The approach is based on our ideas about pattern engineering, as described in the previous section, in combination with an evaluation step taken from software engineering. The pattern engineering part of the approach consists of three abstraction levels, as shown in Figure 4.1.
Implementation Level We call the lowest level the Implementation Level. This is where we gather cases as input or prototype our own implementations to consolidate a pattern.
Functional Level We call the second, middle level the Functional Level. On this level we abstract the different tasks and functionalities present in the system to create a description of the task problem and its solution.
Pattern Level We combine the descriptions of each case together on the highest level, which we call the Pattern Level. Here we take the relevant parts of each problem description and the most useful parts of each solution description, crafting an optimal solution to a generic problem description. In essence this level is not really different from the level below it, except in terms of its generality. The combination of a generic problem together with a generic solution delivers us a pattern.
Software engineering evaluation methods use iterative testing to improve their product. We can use this technique to evaluate our pattern, which is our product. Having extracted a pattern, it can be translated back to a certain case, either an existing one or a completely new case. This implementation can then be compared against other implementations without the pattern to see if there is an improvement in usability. In this manner, pattern and software engineering methods are executed symbiotically. To illustrate this process, we will now work through it with an abstract example corresponding to Figure 4.1.
Let us assume we have a set of four cases: 𝑉, 𝑋, 𝑌 and 𝑍, which all have a search functionality. We abstract all the cases to a general description and start to compare them. Case 𝑉 turns out to be quite different from the other cases and we therefore discard it altogether. Cases 𝑋, 𝑌 and 𝑍 have similar problem descriptions and we therefore combine these into a single generic problem description. The solutions which cases 𝑋 and 𝑌 have to their problem seem useful, thus we combine these into a generic solution description. Case 𝑍, however, does not seem to have a very useful solution and we discard that solution. The generic problem and solution can now be defined as a pattern. To test our pattern, case 𝑍 makes an excellent candidate, as it does not have our extracted pattern implemented. We now implement our pattern in the context of 𝑍, creating 𝑍∗. The comparison of 𝑍 and 𝑍∗ will indicate if our pattern is correct or not.
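The selection and combination steps of this example can be sketched as a toy program. Everything here is hypothetical: the keyword sets, the `useful` flag and the majority-vote similarity rule are our own illustrative assumptions, since judging similarity and usefulness is in reality the analyst's task, not something that can be automated:

```python
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    problem: frozenset    # keywords abstracted from the task problem
    solution: frozenset   # keywords abstracted from the implemented solution
    useful: bool          # the analyst's judgement of the solution

def extract_pattern(cases, min_shared=1):
    """Combine similar cases into a generic problem/solution pair."""
    # Count how often each problem keyword occurs across the cases.
    counts = {}
    for case in cases:
        for kw in case.problem:
            counts[kw] = counts.get(kw, 0) + 1
    # Generic problem: keywords shared by a majority of the cases.
    core = frozenset(kw for kw, n in counts.items() if n > len(cases) / 2)
    # Keep cases that match the generic problem; the rest (like case V) are discarded.
    kept = [c for c in cases if len(c.problem & core) >= min_shared]
    # Generic solution: the union of the solutions judged useful (drops case Z's).
    solution = frozenset().union(*(c.solution for c in kept if c.useful))
    return core, solution, [c.name for c in kept]

cases = [
    Case("V", frozenset({"browse", "filter"}), frozenset({"tree"}), False),
    Case("X", frozenset({"search", "refine", "results"}),
         frozenset({"query box", "result list"}), True),
    Case("Y", frozenset({"search", "refine", "paging"}),
         frozenset({"query box", "paging"}), True),
    Case("Z", frozenset({"search", "results"}), frozenset({"raw dump"}), False),
]
core, solution, kept = extract_pattern(cases)
```

Run on these invented cases, 𝑉 is discarded for having a dissimilar problem, 𝑍 is kept but contributes nothing to the generic solution, and the pattern is built from the useful solutions of 𝑋 and 𝑌, mirroring the narrative above.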

Figure 4.1: The Generic Approach — Multiple cases are abstracted to describe generic functionality, the useful cases are then combined to describe a generic problem and solution, which in turn are our basis for a pattern

4.2 The Generic Research Method In accordance with our ideas above, we defined our method as follows.

4.2.1 Combining DUTCH with Pattern Engineering
To reach our first goal for this research, formalising a method for identifying UID patterns, we chose to combine an existing method for software and user interface engineering with our approach for pattern engineering. Our method of choice was DUTCH, for the reasons already stated in Section 1.4. The method is driven by task analysis and usability criteria, which are important elements when evaluating a system’s usability. The method is also iterative, which enables us to do multiple user tests to check if our changes to the system are actually improvements. As DUTCH is an engineering method and not a pattern engineering method, it was not applicable in its pure form. Therefore we applied it as an evaluation step in our pattern extraction process. We translated our approach shown in Figure 4.1 to a DUTCH-specific approach, as shown in Figure 4.2. Specifically, this means that we not only analysed the current system, but also used other systems that had similar functionality to create a descriptive task model (TM) of the generic problem and solution pair, TM1, from which we extracted our pattern. We then devised a prescriptive task model, TM2, for an input case by applying our pattern to it. To record the patterns we found, we used the template proposed by Van Welie, Van der Veer, and Eliëns (2000) and extended by Hennipman, Oppelaar, and Van der Veer (2008). Another thing we altered in the DUTCH method was the prototype deliverable and its corresponding part in the validation process. Normally, TM2 describes a new work process for the software at hand, and the prototypes and evaluations pertain to this new TM. However, for this research we were not interested in creating a whole new TM or a prototype (PT) of a completely new interface; we wanted to evaluate the correctness of the discovered patterns. Therefore we made small PTs which described only a specific pattern and performed small use case tests around these.
We did not develop a PT for the system as a whole.

4.2.2 A Single Cycle through Altered DUTCH

Because we deviated from the normal steps of the DUTCH method quite a bit, it is useful to give a more detailed explanation of all the actions that were taken during this research. The following is a breakdown of the steps of our ideal research setup, adapted from the original DUTCH method (Van der Veer and Van Welie, 2003), which can be performed iteratively until the result is satisfactory. Our adaptation of the DUTCH method for pattern engineering upholds this iterative process, as shown in Figure 4.3. We will give details on our own implementation and considerations in the next section.

Evaluation of the Current Situation

In this first phase the initial user requirements are gathered by studying existing documentation. Where the documentation is lacking, interviews and other requirements engineering techniques are used. On top of this, existing applications are studied by conducting structured interviews and user walkthroughs. The knowledge of the current situation is recorded in a descriptive task model, TM1. This step is the elevation from the implementation level to the functional level for problem descriptions, as shown in Figure 4.1. Any apparent UID patterns that emerge from the software and of which design knowledge already exists are documented here.

Envisioning a Future Situation

Using the combined knowledge of TM1 and solutions implemented in other cases, a prescriptive task model describing an improved design of the task


Figure 4.2: Several different cases are used as input to extract a generic problem and solution pair, a pattern, which is then applied in the transition from TM1 to TM2 to create a prototype of that pattern

Figure 4.3: In our adaptation of DUTCH we added the extraction of patterns in the middle of TM1, TM2 and the PT


model domain is created, TM2. This is done for a specific case with which we wish to evaluate the pattern. This model implements the solution part of our pattern. Other usability issues which we encountered in the system during the composition of TM1, but which are not relevant to the pattern, are not addressed. This step is analogous to the previous one, elevating the solution descriptions from the implementation level to the functional level.

Specification, Designing and Prototyping for Patterns

With the combination of TM2 and the total of the users' knowledge of the system (its technology, its semantics and syntax), termed the user virtual machine (UVM), a pattern is defined. This step is the elevation from the functional level to the pattern level in Figure 4.1. The defined pattern is applied to the case chosen in TM2 and a prototype of it is implemented as shown in Figure 4.2. The pattern of this new solution is designed by performing three sub-activities: (i) specifying the functionality, (ii) structuring the dialogue between the users and the system, and (iii) specifying the presentation of the system to the user. This phase therefore results in multiple small prototypes, each presenting a specific pattern.

Evaluation

Evaluation of each phase can take place concurrently to validate its products. Which type of evaluation is used depends on the phase being evaluated and can consist of, for example, walkthrough sessions or prototype testing. Our new version of DUTCH, however, calls for a special treatment when it comes to the PTs. Just as TM2 is not a description of a whole new system, the PTs are not prototypes of a whole new system and therefore should not be evaluated as such. Using the different metrics found in Section 2.4.1 and Section 2.4.3 we can compare tasks performed with the current system and with the PT portraying the pattern.
In our adapted DUTCH, the group of users that performs the evaluations has to meet some criteria for the evaluation to be a success. Even more than with normal DUTCH, it is necessary for the users to be end-users of the system being examined. This is especially important here because we are evaluating domain-specific software that users with little or no domain knowledge cannot work with. The time such users would need to understand the scenario in the user test would make the tests unproductive. Furthermore, it is useful to select a wide range of users, from novice (with domain knowledge, however) to expert. The usual discussions about the number of users needed for a test apply here (Nielsen, 1993, p. 117–121); we are not opinionated on this point and leave it to the practitioner's discretion. Lastly, we believe it best to test both systems with the same user base. This way the same users evaluate both the current system, without the pattern, and the PT, with the pattern, and a relative score can be calculated. The PT is developed to be an implementation of a single pattern. Therefore, during the evaluation the scenario is separated from the context of the system to a certain degree. We believe that splitting the pattern from its context does not affect the validity of the experiment. First of all, we do not completely split up the pattern and the context, because during the evaluation the user is given a scenario which describes a context within which to perform the task. Without this context there would be no goal in the scenario, no drive for the user to perform the tasks. Secondly, the separation of the task and its context is not really an issue if it does occur, because we wish to test the pattern. If the pattern is correct, then it will show that the usability of the system improves no matter what the context. This is inherent to the fact that a pattern should be a generic solution.
We made these adaptations of the method to accommodate our research objectives. We believe that the above changes in the method increased the chance for us to achieve these goals, because


they enabled us to extract patterns from the respective TMs and evaluate them specifically.

4.3 Application of the Research Method

To complete our research in a timely fashion and not exceed the resources available to us, we took certain things into consideration and had to set limits on our actions.

4.3.1 A Description of our Implementation

Following is a description of our exact implementation of the generic method explained in the section above. We applied the whole method to the following software systems; we will describe the patterns that were found in these systems in Chapter 5.

1. Forms administration system
2. Policy requisition system
3. Claims administration system

Evaluation of the Current Situation

First we administered a general user profile survey to define our user group and gain background information about our test users. Our survey was based on the sample template designed by Mayhew (1999, p. 49–55) and can be seen in Appendix A.1. The survey looks at elements such as age, computer literacy and experience with certain programs. We used this survey because it gave a good overall view of the characteristics and capabilities of the users and did not need much adaptation to be applicable for this research. Due to time constraints we assigned each user to two of the systems that we were evaluating for the remainder of this phase. They were assigned according to the results of this user profile survey, in such a manner that there was at least one experienced and one novice user assigned to each system. After this we performed semi-structured interviews with our user group (see Appendix B.1), asking specific questions about the assigned case. As these interviews did not turn out to be very revealing, we also interviewed two designers of the software modules to acquire more detailed information. These interviews gave us enough information to develop use cases with general tasks. We then performed user tests by making use of the think-aloud protocol during a scenario-based walkthrough session, as described by Tullis and Albert (2008, p. 103) and Wharton et al. (1994), where we measured the issue-based metrics described in Section 2.4.2.
These sessions gave us insight into how the users interacted with the applications, which issues they ran into, which improvements they wanted and, most importantly, what their work routines and cognitive processes were. Lastly, we performed an expert review of the system ourselves, as described by Nielsen (1993, p. 155–163). We did this to maximise our chances of catching usability issues.

Envisioning a Future Situation

Using the knowledge gained from the current situation, we performed this phase as described in the previous section.

Specification, Designing and Prototyping for Patterns

This phase was performed as described in the previous section.


Evaluation

Due to time constraints we performed one iteration of designing, prototyping and testing of the PT. To evaluate our proposed patterns we performed tests using the current version of the system and our PT. We measured the following performance metrics, described in Section 2.4.1: (i) task success, (ii) time-on-task, (iii) errors and (iv) efficiency, using a user-testing program from TechSmith called Morae1. This recorded all our sessions so that we could analyse them later. To prevent the users' perception from being biased towards any particular prototype, we shuffled the order in which the tests were administered, so that each user performed the tasks in a different order. In combination with the performance metrics, which measure quantitative data, we also measured qualitative data. To accomplish this we used the ASQ technique, as described in Section 2.4.3 and Appendix A.5. This questionnaire gave us insight into how the users perceived the different tasks. We used this questionnaire because it was task-based and gave us values along multiple axes, which we thought was useful, as it provided more information. These two metrics provided enough data to compare the current system and the PTs.
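To make the comparison concrete, the relative score between the current system and a PT can be computed as a simple percentage change per metric. The sketch below is illustrative only; the measurements are hypothetical and this is not the analysis code used in the research.

```python
# Illustrative sketch: comparing one performance metric between the current
# system and the pattern prototype (PT). All numbers are hypothetical.

def relative_change(current, prototype):
    """Percentage change of the PT relative to the current system
    (negative means the PT scored lower, e.g. less time needed)."""
    return (prototype - current) / current * 100.0

# Hypothetical per-user time-on-task measurements in seconds.
time_current = [120, 95, 140, 110, 130]
time_pt = [80, 70, 85, 75, 82]

mean_current = sum(time_current) / len(time_current)  # 119.0
mean_pt = sum(time_pt) / len(time_pt)                 # 78.4

print(f"time-on-task: {relative_change(mean_current, mean_pt):.0f}%")
# prints "time-on-task: -34%"
```

Because the same five users performed the tasks on both systems, the means are directly comparable and a single relative score per metric suffices.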

4.3.2 What we did not measure

We decided not to look at learnability, behavioural or physiological metrics when analysing the cases. Measuring this information was not feasible for this study and beyond its scope, as mentioned in Section 2.4.4. In relation to the learnability metric there was another consideration. This metric was not applicable in our research because it refers to the system as a whole, combining multiple tasks and workflows, whereas we were interested in the usability of our specific patterns, each of which provided for one single task.

4.3.3 Constraints of the Test Group and Test Schedule

The test group consisted of five people who had varying amounts of practical experience with the different software modules. This is a small group to perform user evaluation with, but our resources did not permit us to acquire more users. Four of the users were consultants who worked with the product and one was a project leader of a development team for the product. The reason we chose not to use completely inexperienced users in the study is that it did not seem useful. The background knowledge and implicit domain knowledge needed to work with the system are far too great in the chosen cases. Inexperienced users simply are not the user group of the system, whereas the staff members are. The tests for the different cases were all administered in parallel. This was necessary to complete the research within the set time span. By doing this, we gave up the possibility of using our experiences with one test to adapt the test of the following case. Moreover, it was not necessary to perform the tests successively, because all testing was in-house and the test users would still have been available if something had gone drastically wrong.

1 http://www.techsmith.com/morae.asp


5 The Results of the Case Studies

In this chapter we present a summary and the highlights of the results of this research. In Section 5.1 we describe the results of the actions that were common across all input cases: the user profile, the interviews, the input software cases, the user tests and the heuristic evaluation for our TM1. After this, in Section 5.2, we describe the patterns we found and reviewed, how we developed them and the results of our user tests with these patterns. In the following section, Section 5.3, we describe new patterns we found but were not able to review, and we conclude in Section 5.4 by listing pre-documented patterns which are already implemented or could be. In relation to this chapter, the catalogue of the new patterns can be found in Chapter 6 and the research result data can be found in Appendixes A through E.

5.1 General elements

The following section describes the common research elements applied to all the input software.

5.1.1 User profiles

The profile of the users with whom we performed the evaluations in this research can be summarised as follows; the specific results of the user profile questionnaire can be found in Appendix A.2. The test group consisted of four consultants and one project leader: four men and one woman of Dutch origin, aged between 25 and 30, all in possession of a university degree. The group contains both novice and experienced employees; none of them view themselves as experts on the insurance domain. All the members are experienced with computers and also enjoy working with them. The majority find computers interesting for qualities besides their use as a work tool. They enjoy learning new applications, although they do not always find it worth the time this takes. In general they believe the computer has made their life easier. They all have a reasonable to large amount of experience with the Forms Administration System, less experience with the Policy Administration System and little with the Claims Administration System. None of the users has worked with similar software, nor do they know of any. All of the users are right-handed, more than half are short-sighted, the majority of them moderately so. One person in the group is colour-blind. Apart from this, there are no handicaps which have to be taken into consideration.

5.1.2 Interviews

We performed four semi-structured interviews with the user group members (one user was absent at the time) to gather general information about their work procedures and their experiences with the software systems. A guide for the questions we asked each user can be found in Appendix B.1 and the transcriptions of the interviews can be found in Appendix B.2 through Appendix B.5. Because the users have less experience with the Claims Administration System, we were not able to get a clear view of how the system worked from their interviews. Therefore we called on


two designers of the systems and interviewed them as well. These interviews can be found in Appendix B.6 and Appendix B.7.

5.1.3 Case Software

As already mentioned in Section 3.2.3, the software we evaluated to generate patterns for this research was the Quinity Insurance Solution. QIS is a modular product which attempts to supply a complete solution for the whole insurance process. The system has many sub-modules, and we decided to analyse three of these.

Forms Administration System

To enter any data into the system so that it can be managed, a form has to be filled out. These forms can be built and managed with the Intranet application QFS. Together with the Product Definition System, the Forms Administration System forms the basis of QIS. With the PDS an insurance product can be defined as an object in the system, with all of its attributes recorded. QFS enables us to create complete form dialogues which are mapped onto a product object, therefore allowing us to create and modify insurance products in the system. The forms can be summoned by other modules or even completely separate software systems, enabling third parties to incorporate functionality of QIS into their internet website. This module is of interest to us because its workflow is counter-intuitive: users tend to think in a top-down manner, from form to question, but for the system to work it has to be used in a bottom-up fashion, defining questions first and grouping them into forms later.

Policy Requisition System

This system is part of the Policy Administration System, which is used to manage the policies that clients of an insurance company have. The requisition system is used via an Intranet by intermediaries to apply for an insurance product on behalf of customers. The application for insurance is passed along to the insurance company, which can then process it further. This is an interesting module for us because it is the main entry point for intermediaries into the system. Furthermore, the use of policy packages is interesting because it creates a loop in which the intermediary can input multiple policies.
It seems that intermediaries tend to get confused about their status as they progress through the process of applying for an insurance policy.

Claims Administration System

This Intranet system is used by back-office employees, such as call centre staff members, to register insurance claims that clients have. The system covers the complete process of administrating the claim, controlling it from when it first comes in, through its examination by different claims experts and its checking for fraud, up to the point where it is either denied or accepted and possible payments are made. This module is interesting for us because it has to cater to many users with different expertises (call centre staff, fraud experts, damage experts). The module therefore has to include a lot of functionality which is not applicable to all users, and users tend to get lost in all the possibilities.

An interesting point to note here is that all of these modules are a type of administration system. The data is key in all of them. This means that the functions in the system are arranged not around the tasks of the user but around the data itself. Manipulation of the data is accomplished through low-level tasks such as creating, editing, saving and deleting a single data object. The interactions are not visual but textual, and higher-level abstract notions such as linking data objects are only supported through manual textual input.


5.1.4 Collecting Issues in Task Model 1

Using the information gathered in the interviews, we analysed the software modules ourselves with a walkthrough and a heuristic evaluation.

Walkthrough Sessions

From the interviews we devised a scenario for a common task for each system, which the user should be able to perform when working with the system in a normal fashion. These scenarios can be found in Appendix C. We performed the walkthrough sessions using the think-aloud protocol as explained in Section 4.3.1. Due to time constraints we were not able to do a walkthrough with all the users on every system. We assigned three users to each system and allocated the users so that every system was reviewed by at least one experienced and one novice user. These sessions led us to a list of issues which can be seen in Appendix D.1.

The biggest issue in the Forms Administration System was that the information structure of the system was the inverse of the users' cognitive process. The system forced the user, though without actually stating this anywhere, to create questions first, group these, add the groups to one or more forms and then build a dialogue on top of the form. The user tends to think the other way around, and does not make the distinction between dialogues and forms which the system makes. Other issues that we came across were related to speed. The system often did not provide functionality which the user needed, such as editing multiple objects simultaneously, copying objects and a visual interface to link these objects.

The biggest issue present in the Policy Requisition System was related to guidance. The task that the user performs consists of a multi-tiered wizard which loops. Depending on the options the user chooses, or rather the characteristics of the policy package being requisitioned, the wizard loops through multiple policy requisition protocols. It appeared easy for the user to lose track of his progress during this process.
Other issues which we found related to a steep learning curve and the burying of certain options. The issues that we found in the Claims Administration System were mostly of a visual nature. Users were unclear about how to go about their tasks, as the entry points were not clear. Furthermore, a lack of visual structure hindered the guidance the system offered. Users were unclear on where they could perform certain tasks and in which order to perform them.

Heuristic Evaluation

After the walkthrough sessions we performed a heuristic evaluation of the system ourselves. Our findings did not differ radically from those of the walkthrough. However, we were able to zoom in on a few issues which the users had already grown accustomed to. These issues included the input of financial data in an illogical order when requesting a policy and various visual aspects which users complained about.

5.2 Reviewed New Patterns

There were many usability issues which we could choose to look into. We chose to inspect three patterns, one for each system. In this section we give a description of these new, undocumented patterns and the results of their evaluation with a prototype. The exact patterns will be given in the next chapter.


5.2.1 Incremental Search

This pattern can be found in Section 6.1.

Description

An insurance company does not wish to keep double records or have duplicate information in its system. This would not only waste system resources but also be a magnet for administration errors. It is quite possible that a claim is entered as a duplicate when the incident for which the claim is being made is already registered in the system. For instance, if two people that have insurance at the same company are involved in a car accident together, they would both register their claim, but it would concern the same incident. To ensure that no duplicate claim is entered into the Claims Administration System, when entering a claim the user first has to search for the claim to discover whether it already exists. When the user has determined this is not the case, he can enter a new claim into the system. If the claim already exists, the user is obviously expected to continue with the existing claim.

This makes the user's search task slightly different from a normal search. In this case the user is interested in finding exactly that claim object in the system under which the incident is registered. Finding objects like it would only be confusing to the user. The user has to be able to enter specific search parameters by which to find the claim object. This is comparable to searching in a telephone book: the user is looking for an exact record relating to the parameters he has. If there is no record that matches, he is not interested in another or similar record. This differs from a usual search method as implemented by an internet search engine or a dating site. In the former the goal is to aggregate as many (useful) results as possible; if there is no exact match the user will wish to see results that partially match his parameters.
In the latter, partial matches are already a de facto standard, and there has to be a form of fuzzy logic deciding what defines a match and what does not. Moreover, the needs of the user in the Claims Administration System take it a step further. If the object can be found, this is fine and the user can continue working with that object. But what the user is really after is whether the object cannot be found, so that he can decide to create a new claim object. This makes for a form of reverse searching, where we are not looking to aggregate multiple results, but to narrow our result space down to exactly one or zero.

Prototype and Evaluation

We developed a prototype for this task which displayed a result counter, as shown in the screen shots in Figure 5.1. The counter updated in real time as the user entered his parameters, Figure 5.1a and Figure 5.1b. The counter was situated inside a progress bar which grew as the result count shrank and turned from grey to green when the result set had shrunk to a reasonable size which would be meaningful for the user to browse, Figure 5.1c. If the result set for the entered parameters had shrunk to zero, the bar turned red, indicating that no results would be returned, Figure 5.1d. When the user effectuated a search or the result set size had become zero, a button would appear enabling the user to create a new object, Figure 5.1e.

In the scenario we used to evaluate this PT, Appendix C.2.1, we asked the user to determine if three different claims existed. The first two did and the third did not. All of the users succeeded in this task without any errors. The performance metrics show that when the users used the PT there was a 36% drop in the time-on-task, Figure E.1, and a 56% drop in mouse clicks, Figure E.2. The ASQ results show an increase of more than one point on all axes, Figure A.2a. The satisfaction axis jumps out here with a 2-point increase.
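The core behaviour of the prototype, recomputing the result count on every keystroke and colouring the bar accordingly, can be sketched as follows. This is a simplified illustration with hypothetical field names and an in-memory record list, not the prototype's actual code, which searched claim records in the system's database.

```python
# Sketch of the Incremental Search counter logic (hypothetical data).

def count_matches(records, params):
    """Count records whose fields start with every entered parameter;
    empty parameters are ignored."""
    def matches(rec):
        return all(str(rec.get(field, "")).lower().startswith(value.lower())
                   for field, value in params.items() if value)
    return sum(1 for rec in records if matches(rec))

def bar_state(count, browsable=10):
    """Colour of the result bar: red for zero hits, green for a result
    set small enough to browse, grey otherwise."""
    if count == 0:
        return "red"
    return "green" if count <= browsable else "grey"

claims = [
    {"name": "Jansen", "licence": "12-AB-34"},
    {"name": "Janssen", "licence": "56-CD-78"},
    {"name": "de Vries", "licence": "90-EF-12"},
]

n = count_matches(claims, {"name": "Jans"})
print(n, bar_state(n))  # prints "2 green"
```

The prefix match and the browsable-size threshold are arbitrary choices here; the essential point is that the count is recomputed as the parameters change and that a count of zero is itself a meaningful outcome, signalling that a new claim object may be created.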


(a) The result bar starts off grey...

(b) ...and turns green when a reasonable result set is collected

(c) The result bar turns from green...

(d) ...to red when the result count hits zero

(e) The create new object button appears when a search is effectuated or when the result count is zero

Figure 5.1: Screen shots of the prototype for the Incremental Search pattern

5.2.2 Unified Edit

This pattern can be found in Section 6.2.

Description

When working with the Forms Administration System the user often has to perform very repetitive work. This is the case when the user has to input many questions into the system which all have the same attributes. For creating multiple question objects at once, an option would be the ability to duplicate objects. However, when the user has to manipulate multiple objects to adjust a certain attribute, he has to do this manually, object by object, in the current system.

A similar case is the properties screen in the Windows file manager. By selecting multiple files and then selecting properties, we can change attributes for all of these files in one single screen as if we were editing a single file. Options which are too complicated to edit in this form are disabled. Attributes that differ across files are not disabled but greyed to indicate this; these attributes can still be edited to the value we wish. An example in which the task description is similar but the solution is different is the edit function in phpMyAdmin. Here the user is able to select multiple objects to edit but cannot edit them all simultaneously: the screen the user is redirected to contains separate edit forms for each selected object.

Prototype and Evaluation

The PT we devised contained a function with which the user could select all the objects he wished to edit, Figure 5.2. First the user selected the objects by checking the checkboxes, Figure 5.2a, and then clicked on a separate edit button, after which he was redirected to an edit screen


(a) The user selects the objects he wishes to edit

(b) A list of objects being edited is given at the top of the screen

Figure 5.2: Screen shots of the prototype for the Unified Edit pattern

which applied to all the selected objects. The fact that multiple objects were being edited was made clear at the top of the edit screen, where a list of these objects was given, Figure 5.2b. In the scenario we used to evaluate this PT, Appendix C.2.2, we asked the users to change the visibility conditions of two questions in a form so that the questions would only become visible when certain values were entered into a third question. All of the users accomplished this task. The performance metrics show that there were a lot more errors when using the PT than with the original interface, Figure E.5. The other performance metrics showed improvement in the PT: the time-on-task decreased by 39%, Figure E.3, and the number of mouse clicks decreased by 42%, Figure E.4. The ASQ results show an increase on all axes, Figure A.2b. The effectiveness and efficiency axes are noteworthy here, with a 1.5-point and 2-point increase respectively.
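The central decision in such an edit screen, which attribute fields show a shared value and which are marked because the selected objects disagree (so the interface can grey them out, as in the file manager example), can be sketched as a simple merge. The attribute names are hypothetical and this is not the prototype's implementation.

```python
# Sketch of the Unified Edit merge logic (hypothetical attribute names).

MIXED = object()  # sentinel for "values differ across the selection"

def merge_attributes(objects):
    """Merge the attributes of the selected objects into one edit form:
    a common value is shown as-is, differing values are marked MIXED."""
    merged = {}
    for key in objects[0]:
        values = {obj[key] for obj in objects}
        merged[key] = values.pop() if len(values) == 1 else MIXED
    return merged

questions = [
    {"type": "text", "mandatory": True, "visible": True},
    {"type": "text", "mandatory": False, "visible": True},
]

form = merge_attributes(questions)
# form["type"] == "text", form["visible"] is True, form["mandatory"] is MIXED
```

On save, only the fields the user actually changed would be written back to every selected object, leaving MIXED fields untouched unless the user overrides them with a single value.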

5.2.3 Calculator Tool

This pattern can be found in Section 6.3.


Description

During the requisition of an insurance policy the user is often asked to enter financial data which is composed of multiple sub-amounts, for the insured amount of a valuables insurance, for instance. It is often the case that the system asks for a total amount first and sub-amounts later. This requires the user to do some calculating in his head before entering the data.

A similar situation arises with the tax return program of the Belastingdienst, the Dutch tax authority. When a user has to enter an amount which is dependent on other sub-amounts, the program provides an extra calculator screen. This screen is not a straightforward calculator: data can be entered according to its semantics and the program calculates the total according to the context. Sub-amounts are always retained and can be used recurrently elsewhere in the system if this is necessary.

The main issue that arises here is that we do not think it useful to let a user do any calculation of his own. If a user only possesses the sub-amounts, he should enter these and the system should calculate the total itself. This not only simplifies the task for the user, who now has to enter fewer data elements and does not have to calculate anything, but arguably also lowers errors because the calculation is no longer done manually.

Prototype and Evaluation

The pattern we defined here, simply put, is an adding tool, Figure 5.3. In our prototype, we created a set of fields in which sub-amounts were to be entered, Figure 5.3a. When the user entered data into these fields, the total at the bottom of the field set was updated, Figure 5.3b. In the scenario we used to evaluate this PT, Appendix C.2.3, we asked the users to enter financial data as if they were placing a requisition for an insurance policy. In the original interface none of the users completed this task correctly, against 80% of the users succeeding with the PT, Figure E.6.
The other performance metrics show that there was a decrease of 27% in the time-on-task, Figure E.7, a negligible decrease in the number of clicks, Figure E.8, and a strong decrease in the number of errors made, Figure E.9. The ASQ results show a 1-point increase in efficiency and effectiveness, whereas the satisfaction rating stayed the same.
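The behaviour of the adding tool boils down to recomputing the total whenever a sub-amount field changes, treating empty fields as zero. The field names below are hypothetical; decimal arithmetic is used because binary floating point is unsuitable for financial amounts.

```python
# Sketch of the Calculator Tool: the total field is derived from the
# sub-amount fields, so the user never adds amounts manually.
from decimal import Decimal

def total(sub_amounts):
    """Recompute the total from the sub-amount fields;
    empty fields count as zero."""
    return sum((Decimal(v) for v in sub_amounts.values() if v), Decimal("0"))

fields = {"jewellery": "1200.50", "audio equipment": "800", "art": ""}
print(total(fields))  # prints "2000.50"
```

Because the total is derived rather than entered, the user's task shrinks to supplying the sub-amounts, which is exactly the simplification the pattern argues for.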

5.3 Non-Reviewed New Patterns

The following two patterns were found during this research, but we did not have time to prototype and evaluate them. We therefore merely describe them and do not give any test results. The exact patterns can be found in the next chapter.

5.3.1 Power Text Edit

This pattern can be found in Section 6.4. As mentioned with the Unified Edit pattern in Section 5.2.2, the user often has to perform repetitive tasks when working with the Forms Administration System. An upcoming tendency is to use an external program which can record macros. This enables the user to record a single iteration and then let the program repeat the same action for a selection of objects, but this is very tedious and time consuming. This function differs slightly from the Unified Edit context in that, with that pattern, users want to edit multiple objects to change an attribute to a single value. Here, the users need a way to edit multiple similar objects at once even when the attributes they wish to change vary slightly.

The online survey tool SurveyGizmo displays such a function. SurveyGizmo allows the user to create multi-page surveys and distribute them. When creating a survey the user adds questions


5 The Results of the Case Studies

(a) The fields start empty. . .

(b) . . . and the total is updated as the user inputs data

Figure 5.3: Screen shots of the prototype for the Calculator Tool pattern


to a page in that survey. If the question is multiple choice, the user can edit all the answer options in one swoop with a special text area where the answer options are displayed one per line. The option attributes are displayed on the same line, divided by pipe symbols (|). We envision an interface similar to that of SurveyGizmo. Perhaps combined with the selection method proposed in Unified Edit, or with an information-hierarchical grouping such as ‘all the questions on a page’, these questions can be displayed in the new text area. Every question object is displayed on its own line in the text area as flat text. The attributes of that object are divided by pipe symbols or another delimiting symbol of choice. In this way the user can make small adjustments over multiple objects and save them all in a single action.

5.3.2 Levelled Search

This pattern can be found in Section 6.5. When a policy is requisitioned in the Policy Requisition System it is not always accepted directly. Sometimes data that needs to be entered is not available yet (the value of the house may need appraising), or the applicant needs to be screened in some form or other, for fraud for instance. In this case the incomplete insurance application can be saved for editing at a later date. When this happens, users need to find this application and search for the policy. However, a screen full of results is not useful for the user to browse through.

A fairly similar search task can be observed in the web shop Bol.com. Here users enter a title of something they wish to acquire. However, the system has no way of knowing whether the user is looking for books, DVDs or board games. This has to be deduced from the available stock or the context. The system solves this by returning the results in categories. When entering “Lord of the Rings” into the search bar, the system returns top hits in the categories: Dutch books, English books, music, DVDs and games. The user can then choose to continue in one of these categories. The meta search engine Cuil (http://www.cuil.com/) also uses this category distinction. Searching for “jaguar” returns results in many different categories: the car, the operating system and the football team (though, for some reason, not the animal).

What is interesting about this form of displaying results is that the user is searching for a certain item and has to find it via objects that are of a different abstraction level. In the case of the Policy Requisition System, the user is searching for a policy belonging to a specific person, so to find the policy it is easiest to find the person first. We propose an interface in which the results of the search are grouped by an appropriate higher category. The result screen of policies should be preceded by a screen with the people that hold them, thus minimising the need for browsing through results.

5.4 Overview of Applicable Existing Patterns

This section lists all pre-documented patterns which were applicable in the insurance software. The names correspond to the names used in Quince (Infragistics, 2009), http://quince.infragistics.com/, so that they can be found easily. We will not describe these patterns themselves. In Table 5.1 we list the patterns which are applicable but not yet implemented in the software, giving the name of the pattern and a description of where it could be implemented. In Table 5.2 we list all the patterns that are pre-documented and presently implemented, giving the name of the pattern and where it is implemented.


Table 5.1: These are the pre-documented patterns which we thought would be useful to implement in the software. They can be found in the Quince tool.

Alternating Row Colors: This would be a great addition to the very tabular views of the system.
Cascading Lists: This would be useful in the edit screens where options are sometimes multi-layered.
Clear Entry Points: This is applicable to all of the entry screens for every application. These do not provide any indication of what a user can do with the system.
Closable Panels: This could be very useful to display extra information or provide links to tasks that are often used.
Dashboard: This could be used in the Policy Requisition System in combination with Clear Entry Points, displaying the current status of policy requisitions and actions related to them.
Inline Validation: With so much data being entered into forms constantly, Inline Validation would be a real time saver for the user.
Input Prompt: Possibly, as an alternative to Input Hints.
Journal Navigation: When filling in the various wizards.
Large Set Single Selector: When selecting questions or variables to enter into a form.
Liquid Layout: There is often so much data that a liquid, full-width screen could provide more space to put it in and give the data some room to breathe.
Local Zooming: Could be very useful when displaying ‘extra’ information of objects.
Multiple Selection from a Large List: Useful to select questions when defining groups or forms.
New-Item Row: This could be very useful when adding conditions to questions.
Preview: We believe that this could be used to bridge the gap between the data objects and the actual end result.
Primary Action: This could be implemented in the Policy Requisition System to improve the guidance of the system.
Same Page Error Messages: Errors are usually displayed in a central place, whereas it would be more useful (especially in the large forms) to display errors directly next to the originating field.
Status Area: This could be extremely useful in the more complex wizards to display progress or other relevant data.
Task Pane: This could be used on various start screens to speed up user activity.
Text Field Autocompletion: This could be used often in the various forms, for suggesting other objects which need to be referenced.
Titled Sections: This could be used to group various tasks that are now scattered in a long list under ‘Various’.
Transition: Because screens look similar, it is not always clear that a screen has refreshed. Adding a transition may alleviate this.
Tree Table: This could be used in the same way as Local Zooming.

Table 5.2: These are the pre-documented patterns which we found were currently implemented in the software. They can be found in the Quince tool.

Breadcrumbs: Almost everywhere.
Button Groups: Object edit screens.
Data Tips: In the form of tool tips.
Date Picker: Where a date needs to be entered in a form.
Forgiving Format: With date and postcode fields.
Form: The whole system.
Global Navigation: Everywhere.
Grid Layout: Everywhere.
Illustrated Choices: In combination with action links (they could be used more frequently, though).
Input Hints: Near fields that have to adhere to a specific format.
Left Aligned Labels: Almost everywhere.
Navigation Tabs: All the menus are structured in this fashion.
Paging: The result sets are so large that this is implemented on every result screen.
Property Sheet: All object screens are property sheets.
Responsive Disclosure: This is used so that irrelevant form parts are not displayed.
Search: Is the entry screen to the applications.
Search Results: Is the way all objects are displayed.
Sortable Table: Is used when displaying certain result sets.
Visual Framework: Is implemented everywhere, but could be used a bit more consistently.
Wizard: This pattern is used for policy requisition, although it is not implemented to its fullest potential.
Work With: This is the standard pattern used for editing data objects.


6 The New Patterns

This chapter is a catalogue of all the new patterns which we found during this research.

6.1 Incremental Search

Name: Incremental Search
Alias: Finding a specific single item
Problem: The user has to search a data set for a specific object which is needed in a larger task.
Usability Principle: Minimising actions, Immediate feedback

Context

The user has to either find this exact object or determine that it does not exist so that he can create it.

Forces

– The user is not only searching for an object but needs to know whether the object exists or not.
– The search space is large enough that the user cannot comprehend the total dataset.

Solution

Near the search form, show a counter that displays the number of hits according to the current criteria. Update the counter in real time while the user fills out the search criteria. Combine the counter with a progress bar and enable the bar to change colour as the solution space shrinks. The bar can turn green when the solution space is acceptably small and turn red when the solution space has a size of zero.

Rationale

The user can keep on filling in criteria until he reaches a solution space that is small enough to comprehend. This pattern gives a better answer to whether an object exists than a standard search, because it can give insight into exactly which criteria make the search fail and when. Also, if the user fills in certain criteria which return no hits, then the user can be sure that the desired object does not exist and take the appropriate actions. This improves satisfaction. This pattern speeds up the user’s search behaviour because they can keep on filling out the search form until they are satisfied with the result, which is usually faster than browsing results and then refining search criteria. This improves performance speed.
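By way of illustration, the counter logic described in the solution can be sketched as follows. This is a minimal sketch in Python with a hypothetical record model and a hypothetical prefix-matching rule; a real system would query its own data store on each keystroke.

```python
# Minimal sketch of the Incremental Search counter (hypothetical data model):
# on each keystroke the system re-counts the objects matching the criteria
# entered so far, so the user watches the solution space shrink in real time.

def count_matches(records, criteria):
    """Count records whose fields start with every filled-in criterion."""
    def matches(record):
        return all(
            str(record.get(field, "")).lower().startswith(value.lower())
            for field, value in criteria.items()
            if value  # empty criteria do not restrict the search
        )
    return sum(1 for r in records if matches(r))

def counter_colour(hits, comprehensible=10):
    """Colour for the progress bar: red when empty, green when small enough."""
    if hits == 0:
        return "red"
    return "green" if hits <= comprehensible else "neutral"

policies = [
    {"holder": "Jansen", "city": "Utrecht"},
    {"holder": "Janssen", "city": "Amsterdam"},
    {"holder": "de Vries", "city": "Utrecht"},
]
print(count_matches(policies, {"holder": "Jan"}))               # 2
print(count_matches(policies, {"holder": "Jan", "city": "U"}))  # 1
print(counter_colour(0))                                        # red
```

The colour thresholds here are placeholders; in the pattern they would be tuned to what counts as a "comprehensible" result set for the task at hand.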


Examples

A Swiss online telephone book, http://tel.search.ch/, shows how the counter decreases while typing. Pressing enter shows the current results, yet the user can continue typing.

6.2 Unified Edit

Name: Unified Edit
Alias: Editing objects simultaneously
Problem: The user wishes to edit multiple data objects in a single pass.
Usability Principle: Natural Mapping

Context

The attributes of the objects the user wishes to edit all need to be set to the same value.

Forces

– The objects can be collected in some manner (by use of a search).
– The choice has to be made whether differing attribute values across objects should be editable or not.

Solution

Allow the user to select the elements he wishes to edit from a search and add a multiple edit button. Visualise the collection which is to be edited as if it were a single object, showing the editing screen in the normal fashion. Attributes of which the values differ across objects should be editable but greyed. Attributes that are too complicated to edit in this manner should be disabled. Add a list to the screen that displays all the objects which are being edited. After the attributes are edited they should be set identically across all objects.
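The merge step of this solution can be sketched as below. This is a minimal Python sketch with a hypothetical object model: attributes that share one value across the selection are shown normally, while attributes whose values differ are marked "mixed" (greyed in the interface) until the user overrides them.

```python
# Sketch of the Unified Edit merge and apply steps (hypothetical object model).

def merge_for_edit(objects):
    """Build the unified edit view: attribute -> (value, is_mixed)."""
    view = {}
    keys = set().union(*(o.keys() for o in objects))
    for key in sorted(keys):
        values = {o.get(key) for o in objects}
        if len(values) == 1:
            view[key] = (values.pop(), False)  # identical: edit as one object
        else:
            view[key] = (None, True)           # differing: grey out in the UI
    return view

def apply_edits(objects, edits):
    """Set every edited attribute identically across all selected objects."""
    for o in objects:
        o.update(edits)
    return objects

policies = [{"status": "open", "agent": "A"}, {"status": "open", "agent": "B"}]
view = merge_for_edit(policies)
print(view["status"])  # ('open', False) -- shared value, normal field
print(view["agent"])   # (None, True)   -- differing values, greyed field
apply_edits(policies, {"status": "closed"})
```

Attributes the designer deems too complicated for this treatment would simply be omitted from the merged view (disabled), as the solution prescribes.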


Rationale

Performance speed is drastically increased because the user now only has to perform the desired action a single time instead of once for each object. Furthermore, the chance of errors is lowered because of this as well.

Examples

The tool phpMyAdmin, http://www.phpmyadmin.net/, is a free database administration program that enables users to select multiple objects which can then be edited simultaneously.

In Windows File Manager, http://www.microsoft.com/windows/WinHistoryDesktop.mspx, when you edit the properties of multiple files simultaneously, properties of which the values differ across files are greyed out.


In Microsoft PowerPoint, http://www.microsoft.com/powerpoint, the same thing happens when editing the properties of multiple objects simultaneously. It is very subtle, but here the value box of the width field is greyed out.

6.3 Calculator Tool

Name: Calculator Tool
Alias: Creating a total sum
Problem: The user needs to enter numerical data of which the value has to be calculated from certain sub-values.
Usability Principle: Natural Mapping, Conceptual Model, Error prevention

Context

The user does not have the required value directly available to him and has to perform a manual calculation before he can enter the requested data.

Forces

– The sub-values are simple enough to be entered into the system in a practical manner.
– The calculation is always the same or it can be computed by the system according to the user’s input.

Solution

Provide a helping calculator tool in which the user can enter the sub-values that are available to him. Let the system compute the requested value from the sub-values. Also, save these sub-values so that the user can edit them later.
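The essence of this solution can be sketched in a few lines. This is a minimal Python sketch with hypothetical field names: the user enters only the sub-amounts they actually have, the system derives the total, and the sub-amounts are retained so they can be corrected or reused later.

```python
# Sketch of the Calculator Tool (hypothetical field names): the total is always
# derived from the retained sub-amounts, never entered or computed by the user.

class CalculatorField:
    def __init__(self):
        self.sub_amounts = {}  # retained, not thrown away after summing

    def set_sub_amount(self, label, amount):
        """Add or correct one labelled sub-amount."""
        self.sub_amounts[label] = amount

    @property
    def total(self):
        """The requested value, computed by the system from the sub-values."""
        return sum(self.sub_amounts.values())

valuables = CalculatorField()
valuables.set_sub_amount("jewellery", 2500)
valuables.set_sub_amount("audio equipment", 1200)
print(valuables.total)                       # 3700
valuables.set_sub_amount("jewellery", 2000)  # later correction of a sub-value
print(valuables.total)                       # 3200
```

Because the sub-amounts persist, the user can also ‘play around’ with them to reach a desirable total, as the rationale below describes.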


Rationale

With this solution the user does not have to perform the calculation himself. Not only does this increase satisfaction because it is easier for the user, it also lowers the error rate because the calculation does not have to be performed by the user. With complex calculations this can be a big issue. Moreover, the task completion rate is increased because users can now enter values which they do have available to them and which require minimal cognitive processing. This solution also provides the user with a new feature that allows him to ‘play around’ with the sub-values to reach a desirable total value.

Examples

The tool that the Dutch revenue service, http://www.belastingdienst.nl/, provides to enter tax returns often asks for layered financial data. When the user is asked for the general expenses of the household, he is provided with a calculator tool which enables him to enter the sub-values which are available to him, so that the total can be calculated by the system.

6.4 Power Text Edit

Name: Power Text Edit
Alias: Quick text edit
Problem: The user wishes to edit multiple similar data objects in a single pass.
Usability Principle: Affordance, Natural Mapping
Context: The variables or values the user wishes to edit differ per object and a point-and-click interface is too slow for the user.


Forces

– The objects have to be gathered in some form (by using a search). If the gathering is insufficient to isolate the desired objects, the objects also have to be selectable in some form (as with Unified Edit).
– The user is an experienced user with a good understanding of the underlying data model of the system.
– The number of objects that are to be edited should not be so large that the user cannot oversee them in a single glance.
– A strong error check is needed on the back-end of this interface because the user may inadvertently break objects, or relationships between other objects which are dependent on these objects.

Solution

Provide a text area field in which all the objects are shown one per line, with their properties divided by a delimiting symbol, such as a pipe symbol. The user can then edit all attributes of the objects at will and save them all in one single pass. Provide the syntax for the object above the text field so that the user has a reference as to how to build the object correctly.
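The serialise–edit–parse round trip of this solution can be sketched as follows. This is a minimal Python sketch with a hypothetical attribute order and a basic back-end syntax check of the kind the forces above call for; a real implementation would also need escaping for values containing the delimiter.

```python
# Sketch of the Power Text Edit round trip (hypothetical attribute order):
# objects are flattened to one pipe-delimited line each, the user edits the
# text freely, and the text is parsed back with a basic syntax check.

FIELDS = ["label", "value", "points"]  # the syntax shown above the text area

def to_text(objects):
    """Flatten objects into the editable text area content."""
    return "\n".join("|".join(str(o[f]) for f in FIELDS) for o in objects)

def from_text(text):
    """Parse the edited text back, rejecting malformed lines early."""
    objects = []
    for lineno, line in enumerate(text.strip().splitlines(), start=1):
        parts = [p.strip() for p in line.split("|")]
        if len(parts) != len(FIELDS):
            raise ValueError(f"line {lineno}: expected {len(FIELDS)} fields")
        objects.append(dict(zip(FIELDS, parts)))
    return objects

answers = [{"label": "Yes", "value": "y", "points": 1},
           {"label": "No", "value": "n", "points": 0}]
text = to_text(answers)                       # "Yes|y|1\nNo|n|0"
edited = text.replace("No|n|0", "Maybe|m|0")  # the user's free-form edit
print(from_text(edited)[1]["label"])          # Maybe
```

The early `ValueError` stands in for the strong back-end error check the forces demand: a malformed line is reported with its line number before anything is saved.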

Rationale

This increases both performance speed and satisfaction because it enables the user to edit many attributes and objects at once in a very unrestricted manner. Caution is advised, however. This pattern should only be applied for experienced users as it introduces a margin for error.

Examples

The survey tool SurveyGizmo, http://www.surveygizmo.com/, allows users to edit the answer objects of multiple answer-type questions using a text area.

. . . becomes . . .


6.5 Levelled Search Results

Name: Levelled Search Results
Alias: Unclear object search parameters
Problem: The user has to search a data set for a specific object which is needed in a larger task.
Usability Principle: Affordance, Minimising actions

Context

The search parameters the user has available are not sufficient to define this exact object.

Forces

– The objects need to be in some form of information hierarchy so that they can be categorised.
– The result set has to be large enough for categorisation to be useful.

Solution

Provide the user with the categories that contain matching objects. The user then selects one of these categories to browse through the results in that category.
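The grouping step of this solution can be sketched as below. This is a minimal Python sketch with a hypothetical result model, using the thesis's own example of grouping matching policies by the person who holds them, so that the person list is shown before the flat policy list.

```python
# Sketch of the Levelled Search grouping (hypothetical result model): flat
# search hits are grouped under a higher-level category, and the category
# screen is shown first so the user drills down instead of scanning results.

from collections import defaultdict

def group_results(hits, category_key):
    """Group flat search hits under a higher-level category attribute."""
    groups = defaultdict(list)
    for hit in hits:
        groups[hit[category_key]].append(hit)
    return dict(groups)

hits = [
    {"policy": "P-001", "holder": "Jansen"},
    {"policy": "P-007", "holder": "Jansen"},
    {"policy": "P-042", "holder": "de Vries"},
]
groups = group_results(hits, "holder")
print(len(groups["Jansen"]))  # 2 policies behind the "Jansen" category entry
```

Selecting a category entry would then open the ordinary result screen restricted to that group, which is the "extra click" the rationale below weighs against the browsing it saves.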

Rationale

This increases satisfaction because the user is not forced to browse through a batch of results which are not relevant. Possibly it also increases performance speed because there are fewer results to browse through. However, the user is required to perform an extra click, which can possibly counteract this speed gain.


Examples


The webshop Bol.com, http://www.bol.com/, asks its users to refine their search by category. They provide main categories centre screen and sub-categories on the left-hand side. They even expand on this pattern by providing relevant results in each category below.

7 Discussion and Conclusion

In this chapter we conclude this thesis. First we draw conclusions from the research results in Chapter 5 and evaluate the research process in Section 7.1. Then we attempt to answer our research questions in Section 7.2. Finally, we discuss which research agendas still remain and what future research should focus on in Section 7.3.

7.1 Evaluation

In Section 1.2.1 we stated that we had the following research goals.

1. Formalise a method for UID pattern identification
2. Identify UID patterns for web based applications in the insurance market

We believe that we have accomplished both these goals to a certain degree and will discuss how successful we were in this section.

7.1.1 DUTCH in Combination with Pattern Engineering

To formalise a method for UID pattern identification we combined an existing software engineering method called DUTCH with our own ideas about pattern engineering. We found DUTCH to be useful as an evaluation method in the adapted form in which we used it. DUTCH is a software engineering method that is oriented on task analysis. This is very effective when improving software in general, but less so when attempting to extract UID patterns. UID patterns, by definition, do not question user tasks; this is left to functional design patterns. UID patterns take the task at hand for granted and question the way the user executes it with the system. The DUTCH method steers toward task analysis, whereas we want to busy ourselves with interface analysis. It costs quite some effort to constantly remind oneself of this.

The DUTCH method drives on progress: each iteration is meant to deliver product versions that are an improvement on the version before it. In pattern engineering we wish for our pattern to improve over time, yet this does not mean that we wish for our prototypes to be strict improvements. We might wish to deliberately sabotage one of the prototypes in a certain way to see if this affects its usability in any way. This is something that is not accounted for in the DUTCH method. In conclusion, we believe DUTCH to be a very strong and useful method for engineering software. As a pattern evaluation method it is rather elaborate, and in practice a minimalistic approach might be more pragmatic.

7.1.2 Our Initial Method

Something we wish to note here is that DUTCH is an extremely good tool for evaluating the functions of a system in relation to its user. It requires us to make task descriptions, recording goals and a lot of background information. However, this led us to having tunnel vision at the beginning of this research. We had defined an initial research method with DUTCH, which we


discovered in mid-process would not be suitable to answer our research questions. We changed the set-up of our method during the course of this research to what is described in Chapter 4. Our initial research method and its preliminary results can be found in Appendix F. In the initial method we wished to create a new interface for all three of the insurance systems we analysed, applying newly found patterns in this new interface. This would be a classic task for DUTCH to be applied to. However, this would not have given us the detailed results of which pattern worked and to what extent it worked. This is the reason we decided to make small prototypes implementing a single pattern each and compare these to the original system.

7.1.3 The Patterns

We discovered five new patterns during this research. We were not able to implement two of these in a prototype and test them, so we will focus our attention on the three that we did review.

Incremental Search We think this pattern is the most solid of the lot. The ASQ results increased on all axes and the performance showed drastic improvements. We do not believe there is anything to discuss here.

Unified Edit With this pattern the performance metrics showed a strong increase in errors, which is troubling. We believe this to have two causes. The first is that the users attempted to perform the required task in the prototype in exactly the same fashion they would in the original system. However, this path was not implemented in the prototype, which caused some confusion and accounts for virtually all the errors. The second cause, which is related to the first, is that the pattern was probably not clear enough. Once users were aware they had to go about the task in a different manner, a few of them still had to explore a bit before they understood how to accomplish the task in the new interface. This accounts for the remaining few error counts. We still believe the pattern to be a success, however, because of the strong increase on the efficiency and effectiveness axes in the ASQ results. This increase is to be expected, as the pattern transforms a repetitive task into a single task and eliminates extra actions.

Calculator Tool This pattern’s ASQ results show that the satisfaction remained the same. We believe that this is not a representative result of the pattern. Similarly to the Unified Edit prototype, we had only implemented the pattern and disregarded other elements of the system. However, during the evaluation the users wanted to use a help function to explain specific terminology to them.
Some of the users were dissatisfied by the fact that this help function was disabled and took that into consideration when grading the prototype. The low success rating in the original system requires some explanation here. The reason that all the users failed in the original system is not that they had to perform the calculation. The wording of the question for the input field was misleading, which caused them to miss the fact that they had to perform a calculation at all. Our pattern circumvents this problem: because the calculation is done for the user and no longer asked of them, the wording of the question is no longer an issue. The single task failure that still occurred with the prototype was due to the terminology problem mentioned above.

What the evaluations have taught us is that to evaluate a pattern successfully we have to keep every other element of the system the same, isolating the pattern variable as best as possible. We had not thought that unrelated elements would reflect in the evaluation scores the way they did with the Calculator Tool pattern. Summing up, we believe that these patterns are useful and can be a valuable addition to existing pattern collections. They do require more evaluation than they have received, and the Unified Edit and Calculator Tool patterns most probably require some refinement.


7.2 Answers to the Research Questions

At the beginning of this thesis, in Section 1.3, we posed four research questions. In this section we will answer them. Our questions were the following.

1. Which patterns are relevant to the insurance market?
2. What forces are applicable to these patterns?
3. When is pattern A applicable and when pattern B?
4. Is there such a thing as The Insurance Pattern?

The first three questions are answered in Chapter 5. The existing implemented patterns and relevant non-implemented patterns that we found are listed there. We do not discuss these patterns any further because they are already well documented elsewhere. The new patterns that we discovered can be found in Chapter 6. The forces and context are described there as well and speak for themselves.

As for the last question, “Is there such a thing as The Insurance Pattern?”, we answer this with ‘no’. This is because we found that the insurance software turned out to be a specific form of administration system. This made comparison to other administration software an easy jump to make. When defining abstract insurance objects, more often than not, no better representation is applicable than tables. This was prevalent in all other administration software we saw. It also explains why the visuals are often so bland. However, we believe that usability can be drastically improved by incorporating the patterns mentioned in Table 5.1 and transforming the system from being data oriented to being more task oriented.

With so many similarities between administration software in general, we believe that the patterns we have found during this research are applicable to all administration software and not just insurance software. This can also be deduced from the tasks to which the patterns pertain: the searching and manipulation of data objects. Whether the data object is a client in a CRM system or a receipt in a financial administration is not of importance.
Of course, this is by no means a definite ‘no’. We have only evaluated part of a single insurance system and more systems should be analysed in future research to answer this question with more certainty.

7.3 Open Issues and Future Work

In this research we evaluated DUTCH as an evaluation method for patterns during pattern engineering. As stated, there were issues with the use of this method, and there are many other methods which can also be evaluated to see how they fit with pattern engineering. In this research we have deviated so far from the standard DUTCH method and goals that it is questionable whether we can still speak of the DUTCH method. Possibly a new method needs to be defined in which pattern engineering incorporates select elements of software engineering into its evaluation cycle. Another option entirely would be to investigate whether patterns can be extracted without a prototype evaluation cycle at all: by simply describing what you see in detail and using some form of logical evaluation, or by using existing cases and comparing those instead of creating prototypes.

In the area of the patterns themselves we see possibilities for a lot of research. First of all, we only scratched the surface of insurance software with this research, analysing only three modules of a single software package. First, the whole system could be analysed thoroughly and second, it should be compared to other insurance systems in more detail to complete the picture. This will hopefully enlarge our set of new patterns, and all of these can be analysed against other administration software packages to see if they apply there as well. On top of all this, our


set of new patterns now requires its own iterations of evaluation and improvement so that the specifications of our new patterns are perfected. In short, there is more than enough work to be done.


Bibliography

Ahmed Seffah and Ashraf Gaffar. Model-based user interface engineering with design patterns. The Journal of Systems and Software, 80:1408–1422, October 2007.

W. Albert and E. Dixon. Is this what you expected? The use of expectation measures in usability testing. In Proceedings of the Usability Professionals Association 2003 Conference, Scottsdale, AZ, USA, 2003.

C. Alexander. The Timeless Way of Building. Oxford University Press, New York, 1979.

C. Alexander, S. Ishikawa, M. Silverstein, M. Jacobson, I. Fiksdahl-King, and S. Angel. A Pattern Language: Towns, Buildings, Construction. Oxford University Press, New York, 1977.

K. Beck, R. Crocker, G. Meszaros, J.O. Coplien, L. Dominick, F. Paulisch, and J. Vlissides. Industrial experience with design patterns. In Proceedings 18th International Conference on Software Engineering, pages 103–114. IEEE Computer Society Press, 1996.

Christian Behrens. Information Design Patterns. Website, 2008. URL http://interface.fh-potsdam.de/infodesignpatterns/. [Last accessed June 2009].

Joey Benedeck and Trish Miner. Measuring Desirability: New Methods for Evaluating Desirability in a Usability Lab Setting. In Usability Professionals Association Conference, Orlando, FL, USA, July 8–12 2002.

B.W. Boehm. A Spiral Model of Software Development and Enhancement. IEEE Computer, 21(5):61–72, 1988.

J. Borchers. A Pattern Approach to Interaction Design. John Wiley & Sons, Chichester, UK, 2001.

Jan Oliver Borchers. Patterns. Webpage, May 2006. URL http://www.hcipatterns.org/patterns. [Last accessed June 2009].

John Brooke. SUS: A quick and dirty usability scale. In P. W. Jordan, B. Thomas, B. A. Weerdmeester, and I. L. McClelland, editors, Usability Evaluation in Industry. Taylor & Francis, London, UK, 1996.

William Brown, Raphael Malveau, Hays McCormick, and Thomas Mowbray. The Software Patterns Criteria. Website, 1998. URL http://www.antipatterns.com/. [Last accessed June 2009].

William J. Brown. Anti-Patterns: Refactoring Software, Architectures and Projects in Crisis. John Wiley & Sons, New York, 1999.

J. P. Chin, V. A. Diehl, and K. L. Norman. Development of an Instrument Measuring User Satisfaction of the Human-Computer Interface. In ACM CHI ’88 Proceedings, pages 213–218, 1988.


ChipSoft. Integratie AZD en CS-EZIS. Webpage, 2009. URL http://www.chipsoft.nl/Mediair/Archief/032006/AZD.htm. [Last accessed June 2009].

CODA Ltd. CODA 2go screenshots. Webpage, 2009. URL http://www.coda2go.com/applications/resources/screenshots. [Last accessed June 2009].

Computer Associates Plex Wiki. What is Plex? Wiki website, November 2008. URL http://wiki.plexinfo.net/index.php?title=What_is_Plex%3F. [Last accessed June 2009].

Bruce Eckel. Thinking in Patterns with Java: Problem-Solving Techniques Using Java. Electronic book, 2009. URL http://mindview.net/Books/TIPatterns/. [Revision 0.9].

Elvia. Anleitung Tour Online. Mondial Assistance (Schweiz) AG, Wallisellen, Switzerland. URL http://www.elvia-eps.ch/pdf/EPS_Anleitung_d.pdf.

Tom Erickson. The Interaction Design Patterns Page. Webpage, March 2009. URL http://www.visi.com/~snowfall/InteractionPatterns.html. [Last accessed June 2009].

Sally A. Fincher. HCI Pattern-Form Gallery. Webpage, August 2008. URL http://www.cs.kent.ac.uk/people/staff/saf/patterns/gallery.html. [Last accessed June 2009].

Eelke Folmer. Interaction design pattern library for games. Website, 2008. URL http://www.helpyouplay.com/. [Last accessed June 2009].

Martin Fowler. Analysis Patterns: Reusable Object Models. Addison-Wesley Professional, 1996.

Erich Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, 1995.

J.J. Garrett. AJAX: A New Approach to Web Applications. Essay, February 2005. URL http://www.adaptivepath.com/ideas/essays/archives/000385.php. [Last accessed March 2009].

Elbert-Jan Hennipman. The Pattern Design Wizard. Website, 2008. URL http://www.elbert-jan.nl/testCG/. [Last accessed June 2009].

Elbert-Jan Hennipman, Evert-Jan Oppelaar, and Gerrit C. Van der Veer. Pattern Languages as Tool for Discount Usability Engineering. In Interactive Systems. Design, Specification, and Verification, number 5136 in Lecture Notes in Computer Science, pages 108–120. Springer Berlin / Heidelberg, July 2008.

Thomas T. Hewett, Ronald Baecker, Stuart Card, Tom Carey, Jean Gasen, Marilyn Mantei, Gary Perlman, Gary Strong, and William Verplank. ACM SIGCHI Curricula for Human-Computer Interaction. Website, 1996. URL http://sigchi.org/cdg/.

Hilarius Media. Univé maakt levensverzekeringen rendabel met panklare IT-oplossing van Monuta. Webpage, June 2003. URL http://www.systemimagazine.nl/html/archief/2003/jun/1320.html. [Last accessed June 2009].

IBM. Solution Starter: Project Server to Siebel. Webpage, 2009. URL http://msdn.microsoft.com/en-us/library/aa168486%28office.11%29.aspx. [Last accessed June 2009].

Infragistics. Quince. Website, 2009. URL http://quince.infragistics.com/. [Last accessed June 2009].


International Organization for Standardization. Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs); Part 11 — Guidance on Usability, 1998.

Godfrey Jackson. A Pattern Language. Webpage, 2008. URL http://www.jacana.plus.com/pattern/index.htm. [Last accessed June 2009].

Steve Krug. Don't Make Me Think! A Common Sense Approach to Web Usability. New Riders Press, Indianapolis, 2000.

Tibor Kunert. User-Centered Interaction Design Patterns for Interactive Digital Television Applications. Human-Computer Interaction Series. Springer-Verlag London, 2009.

J. R. Lewis. IBM Computer Usability Satisfaction Questionnaires: Psychometric Evaluation and Instructions for Use. International Journal of Human-Computer Interaction, 7(1):57–78, 1995.

J. R. Lewis. Psychometric Evaluation of an After-Scenario Questionnaire for Computer Usability Studies: The ASQ. SIGCHI Bulletin, 23(1):78–81, 1991. Also see http://oldwww.acm.org/perlman/question.cgi?form=ASQ.

Rensis Likert. A Technique for the Measurement of Attitudes. Archives of Psychology, 140(55):1–55, 1932.

Arnold Lund. Measuring Usability with the USE Questionnaire. Usability and User Experience Newsletter, 2001. URL http://www.stcsig.org/usability/newsletter/0110_measuring_with_use.html. [Last accessed June 2009].

Arnold Lund. Personal communication, May 2009.

Deborah J. Mayhew. The Usability Engineering Lifecycle: A Practitioner's Handbook for User Interface Design. Morgan Kaufmann Publishers, Inc., San Francisco, CA, 1999.

M. McGee. Usability Magnitude Estimation. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Denver, CO, USA, 2003.

Gerard Meszaros and Jim Doble. Pattern Languages of Program Design 3, chapter A Pattern Language for Pattern Writing, pages 527–574. Addison-Wesley, Reading, MA, USA, 1998. URL http://www.hillside.net/patterns/writing/patternwritingpaper.htm.

MI Consultancy. Demonstratie filmpjes van NORMA. Demo clips, 2009. URL http://www.miconsultancy.com/nl/index.php?option=com_content&task=view&id=31&Itemid=50. [Last accessed June 2009].

NIBE-SVV. Thuis in de verzekeringsbranche. Krips B.V., Meppel, Netherlands, 5th edition, 2006.

J. Nielsen. Usability Engineering. Academic Press, Boston, San Diego, CA, USA, 1993.

D. Norman and S. Draper. User Centered System Design: New Perspectives on Human-Computer Interaction. Lawrence Erlbaum Associates, Inc., Mahwah, NJ, USA, 1986.

Donald Norman. The Design of Everyday Things. Basic Books, 1998.

M. Porteous, J. Kirakowski, and M. Corbett. SUMI User Handbook. University College, Cork, Ireland, 1993. See http://sumi.ucc.ie/.

Quinity. Oplossingen voor verzekeraars. Leaflet, Utrecht, The Netherlands, 2003a.


Quinity. De Quinity Formulierencomponent. Leaflet, Utrecht, The Netherlands, 2003b.

Quinity. Quinity Insurance Solution. Booklet, Utrecht, The Netherlands, 2005.

A. Richter. Generating User Interface Design Patterns for Web-based E-business Applications. In INTERACT 2003 - 2nd Workshop on Software and Usability Cross-Pollination: The Role of Usability Patterns, Zürich, Switzerland, September 2003. Siemens AG, Competence Center User Interface Design. URL http://www.swt.informatik.uni-rostock.de/deutsch/Interact/09%20Richter.pdf.

W. W. Royce. Managing the Development of Large Software Systems: Concepts and Techniques. In Proceedings IEEE WESCON, pages 1–9. IEEE, 1970.

SAP A.G. SAP ERP Demos. Demo clips, 2009. URL http://www.sap.com/solutions/business-suite/erp/demos/index.epx. [Last accessed June 2009].

T. Schümmer, J. Borchers, J. Thomas, and U. Zdun. Human-Computer-Interaction Patterns: Workshop on the Human Role in HCI Patterns. In Conference on Human Factors in Computing Systems, pages 1721–1722, 2004.

A. Seffah and H. Javahery. On the Usability of Usability Patterns - What Can Make Patterns Usable and Accessible for Common Developers. In Workshop entitled Patterns in Practice, ACM CHI Conference, Minneapolis, MN, USA, April 2002.

K. Segerståhl and T. Jokela. Usability of Interaction Patterns. In Conference on Human Factors in Computing Systems, pages 1301–1306, 2006.

Jeroen Snijders. Functional Design Patterns. Master's thesis, University of Utrecht, Utrecht, The Netherlands, August 2004.

Patrick Stapleton. UI Pattern Documentation Review. Article, June 29, 2009. URL http://www.boxesandarrows.com/view/ui-pattern. [Last accessed August 2009].

The Future Group. Allianz - Java Case Study. Webpage. URL http://www.the-future-group.com/index.php/allianz. [Last accessed June 2009].

Jenifer Tidwell. Common Ground Collection: A Pattern Language for Human-Computer Interface Design. Website, 1999. URL http://www.mit.edu/~jtidwell/common_ground.html. [Last accessed March 2009].

Jenifer Tidwell. Designing Interfaces: Patterns for Effective Interaction Design. Website, 2005. URL http://www.designinginterfaces.com/. [Last accessed June 2009].

Jenifer Tidwell. Designing Interfaces. O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472, USA, 2nd edition, 2006.

E. Todd, E. Kemp, and C. Philips. What Makes a Good User Interface Pattern Language? In Proceedings of the Fifth Conference on Australasian User Interface, volume 28, pages 91–100, 2004.

Anders Toxboe. UI-patterns.com. Website, 2009. URL http://ui-patterns.com/. [Last accessed 2009].

Tom Tullis and Bill Albert. Measuring the User Experience: Collecting, Analyzing and Presenting Usability Metrics. Morgan Kaufmann Publishers, Burlington, MA, USA, 2008.


Unigarant N.V. Fiets premie berekening. Webpage, 2008. URL https://www.unigarant.nl/UnigarantWebsite/Verzekeringen/Verzekeringen/Fiets/PremieBerekenen.htm. [Last accessed June 2009].

Usability Professionals Association. What is Usability? Webpage, 2009a. URL http://www.upassoc.org/usability_resources/about_usability/definitions_of_usability.html. [Last accessed June 2009].

Usability Professionals Association. More Definitions of Usability. Webpage, 2009b. URL http://www.upassoc.org/usability_resources/about_usability/definitions.html. [Last accessed June 2009].

J. Van Biljon, P. Kotzé, K. Renaud, M. McGee, and A. Seffah. The Use of Anti-Patterns in Human Computer Interaction: Wise or Ill-Advised? In Proceedings of the 2004 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries, pages 176–185, 2004.

Gerrit C. Van der Veer and Martijn Van Welie. DUTCH - Designing for Users and Tasks from Concepts to Handles. In Dan Diaper and Neville Stanton, editors, The Handbook of Task Analysis for Human-Computer Interaction, chapter 7, pages 155–173. Lawrence Erlbaum, Inc., 2003. URL http://www.cs.vu.nl/~martijn/gta/docs/chapterDUTCHv2.3.pdf. [Last accessed March 2009].

Douglas K. Van Duyne, James A. Landay, and Jason I. Hong. The Design of Sites: Patterns, Principles, and Processes for Crafting a Customer-Centered Web Experience. Addison-Wesley Professional, Reading, 2003.

Douglas K. Van Duyne, James A. Landay, and Jason I. Hong. The Design of Sites. Website, 2006. URL http://www.designofsites.com. [Last accessed June 2009].

Hans van Vliet. Software Engineering: Principles and Practice. John Wiley & Sons, Ltd, Chichester, 2000.

M. Van Welie, G. C. Van der Veer, and A. Eliëns. Patterns as Tools for User Interface Design. In Jean Vanderdonckt and Christelle Farenc, editors, International Workshop on Tools for Working with Guidelines, pages 313–324. Springer, October 7–8, 2000. URL http://www.cs.vu.nl/~martijn/gta/docs/TWG2000.pdf. [Last accessed March 2009].

Martijn Van Welie. Task-Based User Interface Design. PhD thesis, SIKS, 2001.

Martijn Van Welie. The Amsterdam Pattern Collection. Website, 2009. URL http://www.welie.com/patterns/. [Last accessed June 2009].

Martijn Van Welie and H. Traetteberg. Interaction Patterns in User Interfaces. In PLoP 2000 Conference, 2000.

Martijn Van Welie and Gerrit C. Van der Veer. Pattern Languages in Interaction Design: Structure and Organization. In Proceedings of Interact, number 3, pages 1–5, 2003.

Vrije Universiteit. The Domain of the Master Information Sciences (IS). Webpage, 2008. URL http://www.few.vu.nl/onderwijs/masters/is/. [Last accessed April 2009].

Cathleen Wharton, John Rieman, Clayton Lewis, and Peter Polson. Usability Inspection Methods, chapter The Cognitive Walkthrough Method: A Practitioner's Guide, pages 105–140. John Wiley & Sons, Inc., NY, USA, 1994.


Yahoo! Design Pattern Library. Website, 2009. URL http://developer.yahoo.com/ypattern. [Last accessed June 2009].
