issue 11 - Agile Record

The Magazine for Agile Developers and Agile Testers

Requirements in Agile Projects www.agilerecord.com free digital version

August 2012

made in Germany

ISSN 2191-1320

issue 11

Pragmatic, Soft Skills Focused, Industry Supported

CAT is no ordinary certification, but a professional journey into the world of Agile. As with any voyage, you have to take the first step. You may have some experience with Agile from your current or previous employment, or you may be venturing out into the unknown. Either way, CAT has been specifically designed to partner with you and guide you through all aspects of your tour. The focus of the course is on how you, the tester, can make a valuable contribution to an Agile team's activities, even if these are not currently your core abilities. The course assumes that you already know how to be a tester and understand the fundamental testing techniques and practices; it then leads you through the transition into an Agile team.

The certification does not simply promote absorption of the theory through academic media, but encourages you to experiment, in the safe environment of the classroom, through extensive discussion forums and daily practicals. Over 50% of the initial course is based around practical application of the techniques and methods that you learn, focused on building on the skills you already have as a tester. This prepares you, on returning to your employer, to be Agile. The transition into a Professional Agile Tester team member culminates in on-the-job assessments, interviews, and demonstrated Agile expertise in forums such as conference presentations or Special Interest Groups. Did this CATch your eye? If so, please contact us for more details!

Book your training with Díaz & Hilterscheid! Open seminars:


27.–31.08.12 in Frankfurt, Germany
10.–14.09.12 in Berlin, Germany
24.–28.09.12 in Stuttgart, Germany
29.10.–02.11.12 in Berlin, Germany
03.–07.12.12 in Cologne/Düsseldorf, Germany
(German tutor and German exam)

03.–07.09.12 in Helsinki, Finland
15.–19.10.12 in Oslo, Norway
26.–30.11.12 in Amsterdam, The Netherlands

Díaz & Hilterscheid GmbH / Kurfürstendamm 179 / 10707 Berlin / Germany Tel: +49 30 747628-0 / Fax: +49 30 747628-99 www.diazhilterscheid.de [email protected]


Editorial

Dear reader,

Requirements are the basis for everything we need to create. It does not matter whether it is software or a house, a bridge, etc. In some countries, such as South Korea, requirements are quite extensive and very well defined. Requirements and requirements management are part of the culture. In other countries they also have requirements, but their definition and management are quite poor. You can usually see the consequences very soon…

How do requirements and requirements management work in an agile environment? Have a look at what our authors wrote about it. There are some good hints and experiences. I thank all the authors and advertisers for their great support of the magazine.

The Agile Testing Days are almost upon us. It seems that we will rock the market again: the number of attendees will be double that of last year! Don't miss being part of it. One highlight of the ATD is the MIATPP Award (Most Influential Agile Testing Professional Person). Please send us your vote. Last year Gojko Adzic won it. Have a look at www.agiletestingdays.com

After the great success of the Agile Testing Days, we decided to set up a new conference with an extended scope: Agile Development Practices was born! We have started the call for papers. The response from agile professionals has been very positive, and we are happy to be able to offer a trendy and cool conference for the agile community. Please inform your interested contacts. There is still time to submit a paper. Please have a look at www.agiledevpractices.com

Last but not least, I want to thank Katrin Schülke for her amazing job in creating our magazines from the very beginning. It has been a pleasure working with her on such interesting projects over the past years. We wish her all the best for the new step in her career and private life. We will keep in touch!

All the best.

Enjoy reading

José Díaz


Contents

Editorial  1
Editorial Board  3
When an Automated Regression Test is Missing  10  by Lisa Crispin
User Requirements in the 21st Century  13  by William Hudson
The Top 10 Critical Requirements are the Most Agile Way to Run Agile Projects  17  by Tom and Kai Gilb
Effects of Social Defenses on Agile Requirements  22  by Ilan Kirschenbaum
The Agile Project Manager – Please Sir, May I have some help?  25  by Bob Galen
Project Disasters: Are Requirements Really to Blame?  30  by Leanne Howard
Agile – Where's the Evidence?  35  by Allan Kelly
Rope in Value with User Stories  38  by Allison Pollard
"The Only Constant Is Change"  40  by David Kirwan
Requirements Elicitation using Testing Techniques  44  by Nishant Pandey
Good Practices in Agile Requirements that Build Great Products  47  by Rathinakumar Balasubramanian
Agile Requirements: Lessons from Five Unusual Sources  50  by Raja Bavani
Outlining Agile  54  by David Gelperin
Performance Requirements in Agile Projects  57  by Alexander Podelko
Industrial-Strength Agile  68  by Jeff Ball
Masthead  70
Picture Credits  70
Index Of Advertisers  70


Editorial Board

David Alfaro

Josh Anderson

Plamen Balkanski

Matt Block

Jose Manuel Beas

Jennifer Bleen

Andreas Ebbert-Karroum

Pat Guariglia

Ciaran Kennedy

Roy Maines

Steve Rogalsky

Dave Rooney

Steve Smith

Zuzana Sochova

Declan Whelan

Go to www.agilerecord.com/editorial_board.php to read their biographies


November 19–22, 2012 in Potsdam, Germany Dorint Hotel Sanssouci Berlin/Potsdam www.agiletestingdays.com


Become Agile Testing Days Sponsor 2012

The Agile Testing Days 2012, from November 19–22, is an annual European conference for and by international professionals involved in the agile world. Since 2009 our agile family has grown considerably, and we are happy to present the 4th edition of Europe's greatest agile event of the year. What you can expect:

• 4 inspiring conference days
• 9 stunning keynotes
• 10 instructive tutorials
• 12 informative sponsor sessions
• 40 amazing talks
• More than 70 speakers
• Over 400 testers from all over the world
• An exhibition area of 500 sqm

Tutorials

08:00–09:00  Registration
09:00–17:30  "Management 3.0 Workshop" – Jurgen Appelo
09:00–17:30  "Making Test Automation Work in Agile Projects" – Lisa Crispin
09:00–17:30  "Transitioning to Agile Testing" – Janet Gregory
09:00–17:30  "Introduction to Disciplined Agile Delivery" – Scott W. Ambler
09:00–17:30  "Beheading the legacy beast" – Ola Ellnestam
09:00–17:30  "Fully integrating performance testing into agile development" – Scott Barber
09:00–17:30  "Mindful Team Member: Working Like You Knew You Should" – Lasse Koskela
09:00–17:30  "Mind Maps: an agile way of working" – Huib Schoots & Jean-Paul Varwijk
09:00–17:30  "Winning big with Specification by Example: Lessons learned from 50 successful projects" – Gojko Adzic
09:00–17:30  "Software Testing Reloaded – So you wanna actually DO something? We've got just the workshop for you. Now with even less powerpoint!" – Matt Heusser & Pete Walen

All tutorials include two coffee breaks (11:00 and 15:30) and lunch (13:00–14:00).


Conference Day 1

08:00–09:20  Registration
09:20–09:25  Opening
09:25–10:25  Keynote: "Disciplined Agile Delivery: The Foundation for Scaling Agile" – Scott W. Ambler
10:25–10:30  Break
10:30–11:15  Parallel tracks: "5A – assess and adapt agile activities" (Werner Lieblang & Arjan Brands) · "Moneyball and the Science of Building Great Agile Team" (Peter Varhol) · "Get them in(volved)" (Arie van Bennekum) · "Testing distributed projects" (Hartwig Schwier)
11:15–11:40  Break
11:40–12:25  Parallel tracks: "The Agile Manifesto Dungeons: Let's go really deep this time!" (Cecile Davis) · "Balancing and growing agile testing with high productive distributed teams" (Mads Troels Hansen & Oleksiy Shepetko) · "We Line Managers Are Crappy Testers – Can We Do Something About It" (Ilari Henrik Aegerter) · "The many flavors and toppings of exploratory testing" (Gitte Ottosen)
12:25–13:45  Lunch
13:45–14:45  Keynote: "Myths About Agile Testing, De-Bunked" – Janet Gregory & Lisa Crispin
14:45–16:15  Parallel sessions: Consensus Talks (10 min. each) · Open Space (Cirilo Wortel) · TestLab (Bart Knaack & James Lyndsay) · Testing Dojos · Coding Dojos (Meike Mertsch & Michael Minigshofer) · Product Demo (Vendor Track)
16:15–16:40  Break
16:40–17:25  Parallel tracks: "The Beating Heart of Agile" (Andrea Provaglio) · "Why World of Warcraft is like being on an agile team, when it isn't and what we can learn from online role playing games" (Alexandra Schladebeck) · "Agile communication: Back and forth between managers and teams" (Zuzana Sochova & Eduard Kunce) · "Developers Exploratory Testing – Raising the bar" (Sigge Birgisson)
17:25–17:30  Break
17:30–18:30  Keynote: "Self Coaching" – Lasse Koskela
19:00  Social Event


Conference Day 2

07:30–09:20  Registration
08:10–08:55  Early Keynote: TBD
09:20–09:25  Opening
09:25–10:25  Keynote: "How to change the world" – Jurgen Appelo
10:25–10:30  Break
10:30–11:15  Parallel tracks: "Continuous Delivery of Long-Term Requirements" (Paul Gerrard) · "How releasing faster changes testing" (Alexander Schwartz) · "Testers are bearers of good news" (Niels Malotaux) · TBD
11:15–11:40  Break
11:40–12:25  Parallel tracks: "Experiences with introducing a Continuous Integration Strategy in a Large Scale Development Organization" (Simon Morley) · "Skills & techniques in the modern testing age" (Rik Teuben) · "Continuous Delivery: from dream to reality" (Clement Escoffier) · "Ten qualities of an agile test-oriented developer" (Alexander Tarnowski)
12:25–13:45  Lunch
13:45–14:45  Keynote: "Adaptation and Improvisation – but your weakness is not your technique" – Markus Gärtner
14:45–16:15  Parallel sessions: Consensus Talks (10 min. each) · Open Space (Cirilo Wortel) · TestLab (Bart Knaack & James Lyndsay) · Testing Dojos · Coding Dojos (Meike Mertsch & Michael Minigshofer) · Product Demo (Vendor Track)
16:15–16:40  Break
16:40–17:25  Parallel tracks: "From CI 2.0+ to Agile ALM" (Michael Hüttermann) · "Testers Agile Pocketbook" (Stevan Zivanovic) · "Extending Continuous Integration and TDD with Continuous Testing" (Jason Ayers) · "Excelling as an Agile Tester" (Henrik Andersson)
17:25–17:30  Break
17:30–18:30  Keynote: "Reinventing software quality" – Gojko Adzic


Conference Day 3

07:30–09:10  Registration
08:10–08:55  Early Keynote: TBD
09:10–09:15  Opening
09:15–10:15  Keynote: "Fast Feedback Teams" – Ola Ellnestam
10:15–10:20  Break
10:20–11:05  Parallel tracks: "Exceptions, Assumptions and Ambiguity: Finding the truth behind the Story" (David Evans) · "BDD with Javascript for Rich Internet Applications" (Carlos Blé & Ivan Stepániuk) · "Automation of Test Oracle – unachievable dream or tomorrow's reality" (Dani Almog) · "You Can't Sprint All the time – the importance of slack" (Lloyd Roden)
11:05–11:30  Break
11:30–12:15  Parallel tracks: "Combining requirements engineering and testing in agile" (Jan Jaap Cannegieter) · "TDD-ing Javascript Front Ends" (Patrick Kua) · "Archetypes and Templates: Building a lean, mean BDD automation machine for multiple investment platforms" (Mike Scott & Tom Roden) · "Taking over a bank with open source test tooling" (Cirilo Wortel)
12:15–13:00  Parallel tracks: "Agile Solutions – Leading with Test Data Management" (Ray Scott) · "Changing Change" (Tony Bruce) · "Technical Debt" (Thomas Sundberg) · "Changing the context: How a bank changes their software development methodology" (Huib Schoots)
13:00–14:10  Lunch
14:10–15:10  Keynote: "The ongoing evolution of testing in agile development" – Scott Barber
15:10–15:15  Break
15:15–16:00  Parallel tracks: "Thinking and Working Agile in an Unbending World" (Peter Walen) · "Sprint Backlog in ATDD" (Ralph Jocham) · "Mobile test automation at mobile scale" (Dominik Dary & Michael Palotas) · "Quality On Submit, Continuous Integration in Practice" (Asaf Saar)
16:00–16:05  Break
16:05–17:05  Keynote: "The Great Game of Testing" – Matt Heusser
17:05  Closing

Please note that the program is subject to change.


Become Agile Testing Days Sponsor 2012 For this phenomenal event, we are looking for supporters. Please have a look at our portfolio and create your own sponsorship package:

Exhibitor packages:
Diamond Exhibitor: 16,200.00 €*
Platinum Exhibitor: 10,800.00 €*
Gold Exhibitor: 5,400.00 €*
Silver Exhibitor: 3,600.00 €*

Conference Sponsor packages:
MIATPP Award Trophy Sponsor: 2,970.00 €*
Conference Bag Insert: 495.00 €*
Coffee Break Cup Sponsor: 1,530.00 €*
Social Event Sponsor: 8,990.00 €*

Media Sponsor packages:
Program First Page Advert: 990.00 €*
Program Full Page Advert: 495.00 €*
Program Half Page Advert: 249.00 €*

Session Sponsor packages:
90 Minutes Product Demo: 2,250.00 €*
45 Minutes Early Keynote: 2,250.00 €*

* Early Bird price – valid until July 31, 2012. Please find details on packages online at www.agiletestingdays.com

Exhibitors & Supporters 2011:


A Díaz & Hilterscheid Conference
Díaz & Hilterscheid Unternehmensberatung GmbH
Kurfürstendamm 179
10707 Berlin
Germany
Phone: +49 (0)30 74 76 28-0
Fax: +49 (0)30 74 76 28-99

[email protected] www.agiletestingdays.com www.xing.com/net/agiletesting www.twitter.com/AgileTD

Column When an Automated Regression Test is Missing by Lisa Crispin From time to time I see someone blog or tweet a rant to the effect that automated regression tests are useless, don’t find bugs, and don’t have any return on investment. This puzzles me, because our automated regression tests at the unit, API and GUI levels find regression bugs on a regular basis. This is not to say that automated regression tests somehow take the place of other critical types of testing, such as exploratory testing. But, they do provide a safety net that allows us to change and add code with confidence. Back in 2006, I discovered that some of our automated tests had false failures. I logged a bug in our defect tracking system. I also wrote a “refactoring” card to be done in one of our semiannual “Engineering sprints”, where we do activities to learn new techniques and reduce our technical debt. Our automated GUI test tool, Canoo WebTest, could not work with certain web pages that allowed users to upload files. If the application was running on Linux, the resulting upload would contain several blank lines, which caused the code to fail upon validating the upload file. As a result, we were not able to have automated regression tests for several key file uploads in our legacy code. When I first discovered this issue in 2006, Tony, one of my teammates, wrote a debugging page that allowed me to identify the specific issue. Unfortunately, in a subsequent “engineering sprint”, an overly enthusiastic developer deleted my debugging page, thinking it wasn’t used anywhere in the app. At the start of each bi-annual “engineering sprint”, we put all the “refactoring cards” on our task board, including my plea to refactor these upload pages so that we could automate regression tests. And each “engineering sprint”, the card did not get picked up. And here’s the news – we do not have time to do manual regression tests. Fast forward to 2012. Uh oh! There is a high production bug! One of the critical user interfaces to upload a data file is bombing out! What a surprise. Since there are no automated regression tests for this interface, the regression bug was released to production, at the worst possible time. All of a sudden, my “refactoring card” was a high priority. It referred to the bug I had entered back in 2006, which referred to the information my teammate Tony had provided. Not surprisingly, Tony could no longer remember what he had learned about the code back then.


Here's the greatest irony. When one of our developers started working on the problem, he was able to refactor the code in ONE HOUR so that our Canoo WebTest scripts for all these upload interfaces can now work. We put this card off for six years, resulting in a high-severity production bug, and it was an hour's worth of work to fix the problem. What did we learn? One: if we don't have automated regression tests, we aren't going to do the regression tests manually. Two: if the code has an issue that's preventing automated regression tests, invest an hour or so looking into the problem. It might be a really quick fix! Three: when automated regression tests prevent a high-severity bug that affects many of your customers, they probably have a good return on investment. Don't be swayed by the pundits who advise against automating your regression tests. Just take time to learn how to write automated tests that are efficient and easy to maintain.
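As a purely hypothetical sketch (the column does not show the actual code, and every name below is invented), the kind of one-hour fix described might amount to little more than normalizing the uploaded payload before validating it, so that the blank lines introduced on Linux no longer cause false failures:

    def normalize_upload(raw: bytes) -> bytes:
        """Drop the empty lines that the upload path may insert, leaving real records intact."""
        return b"\n".join(line for line in raw.splitlines() if line.strip())

    def validate_upload(raw: bytes) -> bool:
        """Hypothetical validator: every remaining line must look like a comma-separated record."""
        return all(b"," in line for line in normalize_upload(raw).splitlines())

    # An upload that arrives with stray blank lines still validates after normalization.
    assert validate_upload(b"id,name\n\n1,Alice\n\n\n2,Bob\n")

The point is not this specific code, but that a small, testability-oriented change can unblock years of missing regression coverage.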

> About the author Lisa Crispin is the co-author, with Janet Gregory, of Agile Testing: A Practical Guide for Testers and Agile Teams (AddisonWesley, 2009), co-author with Tip House of Extreme Testing (Addison-Wesley, 2002), and a contributor to Experiences of Test Automation by Dorothy Graham and Mark Fewster (Addison-Wesley, 2011) and Beautiful Testing (O’Reilly, 2009). She has worked as a tester on agile teams for the past ten years, and enjoys sharing her experiences via writing, presenting, teaching and participating in agile testing communities around the world. Lisa was named one of the 13 Women of Influence in testing by Software Test & Performance magazine. For more about Lisa’s work, visit www.lisacrispin.com. @lisacrispin on Twitter, entaggle.com/lisacrispin


Can agile be certified? Find out what Aitor, Erik or Nitin think about the certification at www.agile-tester.org

Training Concept
All Days: Daily Scrum and Soft Skills Assessment
Day 1: History and Terminology: Agile Manifesto, Principles and Methods
Day 2: Planning and Requirements
Day 3: Testing and Retrospectives
Day 4: Test Driven Development, Test Automation and Non-Functional
Day 5: Practical Assessment and Written Exam

27 training providers worldwide!

Supported by

We are well aware that agile team members shy away from standardized trainings and exams, as these seem to oppose the agile philosophy. However, agile projects are not a free-for-all; they need structure and discipline as well as a common language and methods. Since the individuals in a team are the key element of agile projects, they rely heavily on a consensus about their daily work methods to be successful. All of this was considered during the long and careful process of developing a certification framework that is agile rather than static. The exam to certify the tester also had to capture the essential skills for agile cooperation. Hence a whole new approach was developed with the experienced input of a number of renowned industry partners.

Barclays · DORMA · Hewlett Packard · IBM · IVV · Logic Studio · Microfocus · Microsoft · Mobile.de · Nokia · NTS · Océ · SAP · Sogeti · SWIFT · T-Systems Multimedia Solutions · XING · Zurich

User Requirements in the 21st Century by William Hudson

The Role of IT Has Changed – Requirements Gathering Hasn’t In the early 1990’s, few could have predicted the explosive growth of the World Wide Web and the invasion of information technology into almost every area of daily life. The pace of change has accelerated even further in the past few years with the increasing popularity of mobile devices, such as tablets and smartphones, swelling the ranks of internet users to a projected 2.7 billion by the year 2015, about one third of the world population. This means that information technology is being used by a vastly more diverse audience than it was 20 years ago, yet the methods we use to specify and build interactive systems are firmly rooted in the 20th century. For example, although the Agile Manifesto – used as the underlying philosophy for many current development methods – was ratified in 2001, it was based on the industrial design and development methods of the Lockheed Skunk WorksTM from the 1940s [1]. In fact there are few, if any, mainstream software development methods for gathering requirements or building solutions designed around users, usability or user experience. The absence of focus on users becomes particularly embarrassing when considering the place of technology in the new millennium. Twenty years ago, interactive systems were used primarily by people who were employed for the purpose. While personal computers were making important inroads into the commercial computing world, they were still relatively expensive and the province of a privileged few. If interactive systems were hard to use, staff would be trained or struggle on as best they could. The role of IT has changed substantially in that time, with the vast majority of users now expecting (and demanding) self-explanatory systems that are both effective and enjoyable to use. To bridge the gap between traditional software development (including Agile methods) and the needs of present-day users, we have introduced a corrective role that we call a usability or userexperience specialist. Regrettably, this role is usually peripheral to the software development process itself and is often limited to


usability evaluation. The net result is that we have added another quality hurdle, but have made no real changes to the development process itself. Both of these approaches were expressly warned against by the legendary quality consultant W. Edwards Deming (the man credited with teaching the Japanese how to build quality cars – quoted here from his 14 Points for Management [2]):

3. Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.

9. Break down barriers between departments. People in research, design, sales, and production must work as a team, to foresee problems of production and in use that may be encountered with the product or service.

Although Deming was addressing quality in an industrial setting, his Points for Management apply equally well to software development. This is particularly true of Agile with its narrow focus on the creation of working code rather than the overall effectiveness or suitability of the result.

Industrial Focus

As mentioned above, Agile itself has an industrial heritage, as do most of the other tools and techniques in use in mainstream software development. Ivar Jacobson's development of use cases, for example, stems from his experience of switching systems in telecommunications. One of his earliest papers on the subject refers directly to this focus – Object Oriented Development in an Industrial Environment [3]. Throughout the 20th century, most authors in the field of software engineering wrote as though interactive systems did not exist. The design and development of effective interactive systems was left as an exercise for the reader or dismissed as a separate area of study – poorly served until the phenomenal interest in the World Wide Web in the mid-1990s. Even relatively recent texts on software development make only passing reference to usability.


The industrial heritage of software development is reflected in our approach to requirements. Jacobson, in his discussion of use cases, is not alone in treating the system under consideration as a “black box” – something whose inner workings are invisible to an external observer. This is why, given the large number of questions that we could reasonably ask about a new development project, requirements are traditionally seen as specifying what is required; how is to be avoided at all costs. There are many good reasons for this – even when developing interactive systems, we do not want to indulge in premature design. But this approach only works if we do not much care about what actually happens inside the black box, as long as it meets all of the requirements. This is not practical with interactive systems: we cannot specify the interaction requirements in enough detail to be certain that all conforming implementations will be equally effective.

(Diagram: Traditional Requirements – What, How, Who)

To all intents and purposes, our black box is translucent. Users cannot see the implementation itself, but timing, terminology, organization and demands on users' skills and judgement are fully exposed – resulting in something more akin to a "grey box". And the solution to building more effective interactive systems does not lie with superficial improvements to the user interface. It demands significant (but not earth-shattering) changes to the way we collect and evolve user requirements. The overall approach can be thought of as user-centered Agile development, although the terms Agile UCD and Agile UX are equally applicable.

User-Centered Agile Development

For interactive systems in the 21st century, we must do more to understand how our users behave and what they need from our interactive solutions. Furthermore, these behaviors and needs must inform the design process without violating agile constraints (for example, no big design up front). To do this requires a few adjustments to a typical development process:

1. Integration of usability and user experience expertise with the development team
2. An appropriate amount of user research up front
3. The distillation of the user research to a suitable form for design (personas)
4. Parallel streams for interaction design and development

The following sections address each of these adjustments in turn.

1. Integration of usability and user experience expertise

In his book Agile Software Requirements [4] Dean Leffingwell shows an "ideal" agile team with a UI (user interface) resource as external, and includes user experience designers in his discussion of "other supporting roles". This is not an unusual arrangement for many organizations, but it does explain why many interactive systems suffer from poor usability and user experience. The key expertise for researching, designing, advising and assessing the user experience is removed from the process of building the solution. In many ways this is simply a reflection of the industrial focus that dominated software development in the 20th century – the user interface and user experience in general are separate topics outside a team's core activities. Yet Leffingwell discusses at length the pitfalls of "functional silos" (the separation of resources by function); he doesn't appear to consider the user interface and user experience important enough to include them in the "ideal" agile team. This again reinforces the industrial background of current methods and Agile's focus on working code over and above solutions that are effective for our users. Let's be clear about the Skunk Works' (and Agile's) working philosophy: close cooperation should take the place of extensive documentation. If we are building systems that are predominantly interactive, we need to build the required expertise into our teams – not call on it occasionally as an external resource. The usability and user experience expertise:

■ drives the user research process,
■ develops personas and other user-requirement-focussed artifacts,
■ evaluates the usability of product increments (or prototypes), and
■ advises and supports developers in usability and user experience issues.

This approach gives the team a real chance of understanding their users and addressing usability issues, while there is time and resource available to make adjustments.

(Diagram: UCD contexts of use – What, How, Who, When, Where and Why – in contrast to the What, How and Who of traditional requirements)


2. User research

Detailed user requirements cannot be elicited in the same way as business requirements, rules and constraints. Instead we must investigate and understand what the international standard on human-centered design [5] refers to as the contexts of use. This

need not be a long or onerous process, particularly in a wellestablished problem domain, such as e-commerce. However, it does require knowledge of research methods and some training in human-computer interaction. One of the main reasons for this is that just talking to users about their requirements is known to be an ineffective way of establishing how an interactive solution should be designed. This is particularly true in novel problem domains – users are not familiar with the possibilities and are often not aware that their own needs may differ significantly from other users’ or what the design team has in store for them. Instead, the preferred methods of user research usually involve a mixture of observation (to understand working practices in some detail) and interviews – see, for example, Karen Holtzblatt and colleagues’ work on contextual design [6, 7]. The user research process can also be used to familiarize the development team with real users and their needs. This is an important part of collaborative development since otherwise team members can have real difficulty in understanding that users are often not as comfortable with technology as they are and may not appreciate the need for a user-centered approach [8]. In addition, affinity diagramming workshops (discussed in [6, 7]) can be used to immerse the team and other stakeholders in the domain of real users. 3. Distillation of user research A large body of data often results from even a small amount of user research. Since the functional requirements of the solution are still specified in use cases or user stories, user research needs to be reduced to a more compact and immediate form. For these purposes, personas are ideal although quite widely misunderstood. The focus of personas should be on user behaviors and needs, not demographics. Most interactive solutions will have a relatively small number of personas with some defined as primary and others as secondary. Each primary persona requires a separate user interface because of their particular contexts of use. The persona itself is a description of a “typical” user with their needs, behaviors and motivations as well as problem areas and challenges. We present a persona as if they were a real person – this often makes the development team uncomfortable at first, but there are good psychological reasons for doing so, since this gives us a better chance of understanding and treating users as real people [9, 10]. 4. Parallel streams for interaction design and development Since the agile philosophy has us avoiding “big design up front” (for good reason), we need to weave detailed interaction design into the mainstream development process. We do this by involving usability and user experience expertise in the creation of (lightweight) use cases or user stories and then expanding them in the cycle before the development team is due to implement them. The process is summarized in Figure 1. The UX stream is involved in both specifying the interaction design, but also in evaluating the product increments delivered in the previous cycle.


(Figure 1, Parallel Streams Approach: user research and personas feed a UX stream of design and evaluation that runs alongside the development stream's implementation cycles – UX design is always one cycle ahead of development, and UX evaluation one cycle behind.)

Summary

Our development processes need to be aligned with the needs of the 21st century in light of the experience of the past decade. Interactive solutions will be used by an increasingly large and diverse population; usability and user experience cannot remain as optional or occasionally-invoked resources on the periphery of development teams. In this article, we have looked at some of the vestiges of the "industrial" approach to software development and the remedial steps needed to make agile processes more user-centered.

References
1. Hafen, G. The Skunk Works: Agile Development for 60 Years. SSTC 2005 Proceedings, 2005. Available from: http://www.sstc-online.org/proceedings/2005/PDFFiles/GLH930.pdf
2. Deming, W.E., Out of the Crisis. The MIT Press, 2000.
3. Jacobson, I. Object Oriented Development in an Industrial Environment. 1987. New York, NY: ACM.
4. Leffingwell, D., Agile Software Requirements. Addison-Wesley, 2011.
5. ISO DIS 9241-210:2008. Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems (formerly known as 13407). International Organization for Standardization (ISO), Switzerland, 2010.
6. Beyer, H. and K. Holtzblatt, Contextual Design: Defining Customer-Centered Systems. Morgan Kaufmann, San Francisco, CA, 1998.
7. Holtzblatt, K., J. Wendell, and S. Wood, Rapid Contextual Design: A How-to Guide to Key Techniques for User-Centered Design. Morgan Kaufmann, 2005.
8. Hudson, W. Reduced empathizing skills increase challenges for user-centered design. In CHI 2009 and BCS HCI 2009 Conferences. 2009. Boston, MA and Cambridge, England: ACM.
9. Nordgren, L.F. and M.H.M. McDonnell, The Scope-Severity Paradox. Social Psychological and Personality Science, 2011. 2(1): p. 97–102.
10. Sears, D., The Person-Positivity Bias. Journal of Personality and Social Psychology, 1983. 44(2): p. 233–250.


> About the author William Hudson1 is a User Experience Strategist who consults, writes and teaches in the fields of user-centered design, user experience and usability. He has over 40 years’ experience in the development of interactive systems, initially with a background in software engineering. William was the product and user interface designer for the Emmy-award-winning “boujou”; now an indispensable tool in major film studios. He has specialized in interaction design and human-computer interaction since the late 1980’s. William has written and taught courses that have been presented to hundreds of software and web developers, designers and managers in the UK, North America and Europe. He is the founder and principal consultant of Syntagm, a consultancy specializing in the design of interactive systems established in 1985. William has presented papers, talks and tutorials at international conferences including CHI, UPA and Nielsen Norman Group Usability Week. He is the author of over 30 articles and papers2, including the Interaction Design Encyclopedia entry on card sorting3 and the Guerrilla UCD4 series of webinars introducing project teams to usability, user experience and user-centered design. William is also a contributor to the Rational Unified Process and to Addison-Wesley’s Object Modelling and User Interface Design. He is an Adjunct Professor at Hult International Business School, Courses Co-Chair for the CHI 2013 conference5 in Paris and an organizer of the UCD 2012 conference6 in London.

1 http://www.syntagm.co.uk/design/whudson.htm 2 http://www.syntagm.co.uk/design/articles 3 http://www.interaction-design.org/encyclopedia/card_sorting.html 4 http://guerrillaucd.com/ 5 http://ch2013.acm.org/ 6 http://ucd2012.org/

Follow us @ar_mag


Gilb‘s Mythodology Column

The Top 10 Critical Requirements are the Most Agile Way to Run Agile Projects by Tom and Kai Gilb There is a dangerous assumption amongst agile professionals. It is that ‘requirements’ are mainly in ‘use cases’ (1), and that progress is measured by burn-down charts. We would like to strongly dispute this dangerous idea, and offer a more powerful, and ‘more agile’ requirements concept. Our idea, and it is one we have practiced for decades (2), is simple, like agile should be. It reduces bureaucratic and up-front documentation – like agile should do, and it is focussed on delivery of value to our stakeholders – like agile says it wants to. Our idea is (3,4) that in 1 day, on 1 page, 1 team will draft 1 set of maximum 10 critical project requirements. These will be agreed as the most critical reasons for funding the project. If these requirements are met, then the project will be a complete success. Everything else, including use cases, functions, user stories, designs, architecture, is regarded as the necessary ‘means’ (design, not requirement) for meeting the top 10 requirements. It is our experience that when you put the right questions (“what are the primary expectations for funding this project?”), to the right people (the project funders and sponsors, not the users), you will always get, and get agreed, a limited set of answers, your ‘top 10’. In fact, the top 10 are usually already hiding in the project documentation and the management project slides. And our current projects totally ignore them! We are so busy laying stones and walls that we forget the cathedral we are supposed to be building (5).

Our experience, when asking responsible managers to tell us what their top 10 requirements are, is this:

1. They can identify the top few immediately.
2. They have already documented them somewhere (but the project has ignored them).
3. They can quantify the degree of improvement they expect the project to produce (if guided, they can develop a defined scale of measure, and a numeric goal on that scale).
4. They can do this in a morning session and edit it to be 'good enough' for project use in one day of work (3).
5. It is not perfect. It can and will be continuously improved. But it is also remarkably stable.

Here is what our student, Richard Smith at Citigroup, reports (6): "You may be interested to know that I wrote a detailed business requirements spec (no design, just requirements) adopting many of your ideas shortly after the course, including key quantified requirements. This spec ended up staying largely stable for a year as we did an Evo-like (Ref. 4) development process, at the end of which we successfully went live with a brand-new FX order management system in a global big-bang release to 800 Citi users in 20 locations." (2009 e-mail to us). This is evidence that there is less 'requirements churn' if there are fewer but critical requirements. However, something has to be learned as we release increments of the system! Richard Smith follows up by saying (6):

"but the detailed designs (of the GUI, business logic, performance characteristics) changed many, many times, guided by lessons learned and feedback gained by delivering a succession of early deliveries to real users."


This sounds like the essence of real agile to me! Agile should be about learning which designs satisfy the critical requirements; not about implementing so-called functions,


March 4–6, 2013 in Düsseldorf, Germany The Conference for Agile Developers

Call for Papers The call for papers ends by August 19, 2012. The theme of this year’s conference is “Retrospectives on Agile – Forward Reasoning”. Go to www.agiledevpractices.com and submit your paper!

Topics of interest: The values and principles behind the agile practices · Clean code · Refactoring · Dealing with legacy code · Version Control Systems · User stories and acceptance criteria · BDD/ATDD/TDD · Meetings · Retrospectives · Product backlog management and estimation · Practices for adoption · Agile adoption stories · What worked and what didn‘t · DevOps · Distributed teams management · Roles in the team

www.agiledevpractices.com

features and use cases, which are usually sub-optimal amateur ‘design’ with another name. We have also discovered that much of what people call ‘requirements’ are really design. The test is simple. Ask why! If the answer is clearly a real requirement, then what you were calling a requirement is probably really a design. Example: Why ‘password’? Answer: “Security!”. Ah, so security is your requirement, password is a design. And possibly not the smartest design. It depends on the unstated security ‘requirement’. (7) Here is our definition of a ‘requirement’ (8):

The top 10 most critical requirements are mostly ‘quality’ (defined as how well the system functions) requirements, together with some work capacity and cost reduction requirements. Everybody can quantify, and thus clarify, work capacity and cost reduction. But almost everybody has a big problem with the ‘-ilities’. How do you quantify things like security, usability, maintainability, and adaptability? There are 3 methods we teach (9, 10), and we have found no ‘unquantifiable’ qualities. 1. Look it up in a book (9).

“A Stakeholder-valued future state, under defined conditions.”

2. Use common sense and domain knowledge to work out scales (9, process).

And here is our definition of a ‘design’:

3. Google it: for example ‘usability metrics’. Lots of good answers on 1 page.

“A design is a concept that is intended to satisfy some requirements, to some degree.”

An excellent example of doing this with the Evo-agile method is our client Confirmit. (11)

Fig.1: The 25 quantified quality and work capacity requirements


The 25 quantified quality and work capacity requirements (Fig.1) were developed in a week by Confirmit. These were then used in twelve one-week cycles of quality delivery, with release to world market after every 12 weeks. The illustration above shows the feedback at cycle 9 of 12, and the improvements % shows the % of the way to target levels accomplished by the 4 parallel teams.
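To make the "improvements %" figure concrete (a back-of-the-envelope illustration only – the Confirmit numbers themselves are in Ref. 11, and the requirement below is invented), "percent of the way to target" is simply the movement from the baseline towards the goal level on the defined scale of measure:

    def percent_of_way_to_target(baseline: float, current: float, goal: float) -> float:
        """How far a measurement has moved from its baseline towards the goal level, in percent."""
        if goal == baseline:
            return 100.0  # the goal level was already met at the start
        return (current - baseline) / (goal - baseline) * 100.0

    # Invented example of a quantified quality requirement:
    # Scale: minutes needed to set up a typical survey.
    # Baseline 65 minutes, Goal 25 minutes, measured 35 minutes at cycle 9.
    print(round(percent_of_way_to_target(65, 35, 25)))  # -> 75, i.e. 75% of the way to target

Reporting this percentage per requirement and per team is what makes an "is the project delivering value?" conversation possible at every weekly cycle, rather than only at release time.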

One of the authors (Kai) analyzed a project (Bring, Oslo, 12) which had used Scrum in the conventional way, correctly. However, on delivery to the market, the sales dropped dramatically. Kai's analysis was that there was no quality requirement for the speed with which customers could find the correct service. The result was intolerable. Customers gave up and went to competitors. However, when the system was redesigned to meet such a quality requirement, all was well. Scrum alone is not enough! Quality requirements are necessary to manage the Scrum process!

One of our Bank clients in London has instituted our Evo process as a framework to manage agile processes like Scrum (4), so that they are more specifically responsive to stakeholder needs at the top quality levels. Scrum alone is not enough.

The 'Evo' (Evolutionary) Method for Project Management – Process Description

1. Gather from all the key stakeholders the top few (5 to 20) most critical goals that the project needs to deliver. Give each goal a reference name (a tag).
2. For each goal, define a scale of measure and a 'final' goal level. For example: Reliable: Scale: Mean Time Before Failure, Goal: 1 month.
3. Define approximately 4 budgets for your most limited resources (e.g., time, people, money, and equipment).
4. Write up these plans for the goals and budgets (try to ensure this is kept to only one page).
5. Negotiate with the key stakeholders to formally agree the goals and budgets.
6. Plan to deliver some benefit (that is, progress towards the goals) in weekly (or shorter) increments (Evo steps).
7. Implement the project in Evo steps. Report to project sponsors after each Evo step (weekly, or shorter) with your best available estimates or measures, for each performance goal and each resource budget. On a single page, summarize the progress to date towards achieving the goals and the costs incurred.
8. When all goals are reached: 'Claim success and move on'.
   a. Free remaining resources for more profitable ventures.

References
Ref. 1. Gilb's Mythodology Column, 1, Agile Record 6, User Stories: A Skeptical View.
Ref. 2. Quantified top level project objectives, as documented in 1988, Principles of Software Engineering Management (20th printing). The book that many agilistas (like Kent Beck) credit with their early inspiration, especially about incremental and evolutionary delivery, the core of agile today.
Ref. 3. The outline of our agile Evo startup week process: http://www.gilb.com/dl521
Day 1: Project Objectives: The top few critical objectives quantified.
Objective: Determine, clarify and agree the critical few project objectives – results – end states.
Process: Analyze current documentation and slides for expressed or implied objectives (often implied by designs or lower-level objectives). Develop a list of stakeholders and their needs and values. Brainstorm a 'top ten' critical objectives names list; agree they are the top critical few. Detail the definition in Planguage – meaning quantify and define clearly, unambiguously and in detail (one page). Quality-control objectives for clarity: major defect measurement; exit if less than 1.0 majors per page. Quality-control objectives for relevance: review against higher-level objectives than the project, for alignment. Define constraints: resources, traditions, policies, corporate IT architecture, hidden assumptions. Define issues – as yet unresolved. Note: we might well choose to do several things in parallel.
Output: A solid set of the top few critical objectives in quantified and measurable language. Stakeholder data specified.
Participants: anybody who is concerned with the business results; the higher the management level, the better.
Ref. 4. The Evo standard: http://www.gilb.com/dl487
Ref. 5. "Three stonemasons were building a cathedral": http://www.thehighcalling.org/audio/work/stonemasons
Three stonemasons were building a cathedral when a stranger wandered by. The first stonemason was toting rocks to a pile, near a wall. "What are you doing?" said the stranger. "Can't you see that I'm carrying rocks?" The stranger asked the second laborer, "What are you doing?" "I'm building a wall," he replied. A few steps away, the stranger came upon a third mason. "What are you doing?" he asked. This worker smiled. "I'm building a cathedral to the glory of God!" Same jobs, different missions.
Ref. 6. Richard Smith: Citigroup experience with Evo: http://rsbatechnology.co.uk/blog:8


Ref. 7. Gilb: Quantifying Security: How to specify security requirements in a quantified way. http://www.gilb.com/dl40
Ref. 8. Full Planguage Glossary of Concepts (655+), updated June 13, 2012: http://www.gilb.com/dl46; see particularly Requirement and Design idea.
Ref. 9. CE book chapter 5, Scales of Measure, free download: http://www.gilb.com/tiki-download_file.php?fileId=26
Ref. 10. How to Tackle Quantification of the Critical Quality Aspects for Projects for Both Requirements and Designs. Slides: http://www.gilb.com/tiki-download_file.php?fileId=486; Paper: http://www.gilb.com/tiki-download_file.php?fileId=124
Ref. 11. Confirmit Case (of Evo and quantified qualities as main drivers): http://www.gilb.com/dl152 (slides including the Confirmit case); 'What's Wrong with Agile Methods? Some Principles and Values to Encourage Quantification', with Confirmit case: http://www.gilb.com/dl50
Ref. 12. Bring Case. "The Inmates are Running the Asylum", Construx Summit talk, Oct 25, 2011, Seattle; contains considerable Bring case slides: www.gilb.com/tiki-download_file.php?fileId=488

> About the authors Tom Gilb and Kai Gilb have, together with many professional friends and clients, personally developed the methods they teach. The methods have been developed over decades of practice all over the world in both small companies and projects, as well as in the largest companies and projects. Tom Gilb Tom is the author of nine books, and hundreds of papers on these and related subjects. His latest book ‘Competitive Engineering’ is a substantial definition of requirements ideas. His ideas on requirements are the acknowledged basis for CMMI level 4 (quantification, as initially developed at IBM from 1980). Tom has guest lectured at universities all over UK, Europe, China, India, USA, Korea – and has been a keynote speaker at dozens of technical conferences internationally. Kai Gilb has partnered with Tom in developing these ideas, holding courses and practicing them with clients since 1992. He coaches managers and product owners, writes papers, develops the courses, and is writing his own book, ‘Evo – Evolutionary Project Management & Product Development.’ Tom & Kai work well as a team, they approach the art of teaching the common methods somewhat differently. Consequently the students benefit from two different styles. There are very many organizations and individuals who use some or all of their methods. IBM and HP were two early corporate adopters. Recently over 6,000 (and growing) engineers at Intel have adopted the Planguage requirements methods. Ericsson, Nokia and lately Symbian and A Major Mulitnational Finance Group use parts of their methods extensively. Many smaller companies also use the methods.


Effects of Social Defenses on Agile Requirements by Ilan Kirschenbaum

Agile requirements are very effective. That is, when all parties play the same game. Similar to any other change, organizational defense mechanisms can prevent agile requirements from being effectively used. This article describes such mechanisms through two case studies, and the ensuing interventions that assisted in resuming effective work. Agile practitioners have a selection of techniques to choose from for generating agile requirements. Most commonly used is “User Stories à la Ron Jeffries”, but other techniques, notably Effect Mapping1 as suggested by Gojko Adzic, can provide an excellent tool to focus on what’s important together with the best ROI to satisfy the customer. So why is it that product managers and product owners sometimes struggle to provide useful requirements to their team/teams?

User Story Battles in the Service of Turf Wars David is a product owner working with three development teams. He has been an agile practitioner for over two years. Initially David would write traditional requirements (“The system will allow

subscribers to enter their details…”). Having attended a Scrum team training, David adopted the user story format. However, it was being followed semantically only (“As a Product Owner, I want a subscriber entry utility, so users can update the system”). The Definition of Done was also superficial only. In parallel, David prepared a high-level design document with detailed screen layouts and flow charts describing the desired behavior. After ten sprints using various tactics to get more valuable input from the Product Owner, the team started writing alternative Definitions of Done to satisfy their own sprint review expectations. At the end of each project, a recurring “ritual” happened in which the Product Owner displayed his displeasure with the delivered scope. This was followed by harsh words by managers, followed by lengthy talks and then followed by an action plan. However, after the next project was completed the same “ritual” would occur. From David’s point of view, he was going out of his way to satisfy the team’s incongruent demands for requirements. He did not understand the logic behind the need to explain for each story who the user was, and why they need the new requirement. He fiercely resisted writing a demo-able “Definition of Done”, which he perceived as clearly the tester’s role as part of creating test cases for the requirements. To understand David’s resistance, it was important to look at the organizational structure. David’s manager, the Director of Product Management reports to the Product Marketing VP. The development team members report to a Development Manager, who indirectly reports to the organization’s R&D VP. Therefore, whenever the recurring “ritual” of project-end disappointment occurred, there was an automatic escalation to the VP level, and the two VPs suggested that their subordinates meet to resolve the issues on their own. Within the hierarchical structure, there was little scope for collaboration, so in order get appraisal by his manager, David does not need to satisfy the development team; in order to get appraisal by her manager, the Development Manager does not need to satisfy the Product Owners.

1 http://gojko.net/papers/effect_maps.pdf



At one point, Tammy, the Development Manager decided to dedicate a person, Tessa, to writing user stories for the teams. Tessa consulted with the Agile Coach, and started to attend the ceremonies regularly and intently. After the second sprint the team invited the coach to their retrospective meeting to consult on what they defined as the “Inability to have effective sprint reviews”. At one point during the retrospective meeting, a team member threw a comment “But Tessa is not the real Product Owner; she is just a proxy”. To this the coach responded: “What makes a real Product Owner?” For a while no one responded, giving the coach the opportunity to deliver his ‘who is the PO?’ speech – The answer being: Responsible for ROI, single voice for the team, attends the planning and review. There was a short silence, after which two team members said in unison: “So Tessa is exactly what the Product Owner should be!” Two sprints later the team started to present test automations in the sprint review according to acceptance tests defined by Tessa. Having a person whose role it was to serve as the bridge between Product Management and Engineering ended the turf-war between the two VPs that manifested itself in David’s resistance to work effectively with the team. In effect, David was a tool in this war reflecting the valence2 of one of the VPs towards aggressiveness. Tessa’s objectives, set by her manager, Tammy, were to bridge the gap between the teams and the “real” Product Owner. The team, having realized that Tessa was the acting Product Owner prevented the “ritual” from taking place. As Barry Oshry describes it in his book Seeing Systems, preventing the sound of the old dance shaking3. Usually it makes sense to choose a Product Owner with relatedness to product management. In this case it was more important to bridge the gap than to bring rich business knowledge. It should be noted, that a little over a year later, Tessa started looking for a position as a Product Owner – officially this time round.

Familiarity of Fire Drills over Comfort of Predictability
Jack is a Product Manager with about 20 years' experience in the software industry in various roles. Jack is responsible for a product that executes business processes (back end), providing the logic behind a family of customer-facing products. When a new project starts, Jack calls a meeting with the entire team, telling the story of how the customer's users will benefit from the new features. He creates a backlog of high-level features reflecting the business flows, and from them prepares a backlog of stories for the three sprints ahead. He plans to maintain a sliding window of roughly a three-sprint backlog, according to the takeaways from demos to the customer team. The work is laid out on a Kanban board visible to all in the main corridor.

Typically, one or two sprints into the project, an emergency defect or customer request would emerge on one of the customer-facing products. When this happened, people literally ran out into the corridor, clutching printed e-mails, preparing emergency plans for fixing the defects or developing the urgent change requests. Denver, the R&D VP in charge of the customer-facing products, would frantically ask for resources to help his development team handle the emergency. Denver himself calls this event a "Fire Drill", due to its resemblance to staff running in a drill. Within 2–3 days the Kanban board would become irrelevant and need to be reset.

Jack's desire to run an Agile project, with predictable progress based on empirical measurement of consuming the backlog, provoked anxiety around the other products. Whenever a new project started or a running project stabilized, Denver unconsciously summoned a new "Fire Drill". An e-mail from a customer became an excuse to avoid the presence of the "good" project, and to sabotage its efforts by requesting additional resources to take focus away from it.

Having faced the "Fire Drill" ritual one time too many, when a fairly small project started Jack presented the new project to his team, as well as to the customer-facing teams and managers. He did this after creating a combined backlog for both product families by involving users of both products from the customer team. Jack realized that this might put additional pressure on Denver, and made sure Denver was informed regularly by the PMO on the project's progress and on the customer team's reaction to changes in the backlog. Jack had realized that the fire drills provoked by the transition to Agile stemmed from artificial conditions, as a defense mechanism against change. By involving the other products in the project, Jack placed Denver in a position where a fire drill would yield a lose-lose scenario, rather than the familiar lose-win. By providing continuous updates, Jack helped reduce the anxiety levels experienced by Denver.

Conclusion
Agile requirements in their own right contribute to a good agile backlog. However, they do not necessarily reduce the anxiety levels that trigger defense mechanisms in the organization. An experienced Agile Coach or consultant can observe the "rituals" and "listen" to the subtext that plays out between individuals and teams. By articulating the symbols (turf wars, fire drills) in terms that relate to the daily life of the organization (proxy vs. real, PMO status report), the coach or consultant can reduce the anxiety level, thus enabling effective work.

2 http://en.wikipedia.org/wiki/Valence_(psychology)
3 http://books.google.com/books/about/Seeing_Systems.html?id=wtK9aFmM9zgC&redir_esc=y


For further reading on social defenses, see Isabel Menzies Lyth's article Social Systems as a Defense Against Anxiety4. Article reviewed by Dekel Levinson5.

4 http://www.moderntimesworkplace.com/archives/ericsess/sessvol1/Lythp439.opd.pdf
5 http://www.linkedin.com/pub/dekel-levinson/4a/836/586

> About the author Ilan Kirschenbaum is a seasoned software professional with more than 20 years’ experience in various roles. When Ilan first encountered agile software development, his career took a new direction, and indeed Ilan quickly evolved into Agile Coaching, which he practices passionately. Today Ilan is a co-founder and coach at Practical Agile. He also participates in a two-year program studying Organizational Development in Psychoanalytical-Systemic Approach, sponsored by the Tavistock Institute. You may follow Ilan in his blogs where he writes regularly on various agile issues at http://fostnope.com and on general subjects from children’s books to cooking and more at http://kirschilan.wordpress.com (in Hebrew).

The Agile Project Manager – Please Sir, May I have some help? by Bob Galen

A Sad Story
A seasoned Director of Software Development was championing agile adoption at their company. It was a moderately scaled initiative, including perhaps 100 developers, testers, project managers, BAs and the functional management surrounding them. They received some initial agile training, seemed to be energized and aligned with the methods, and were "good to go" as they started sprinting. Six months later things were a shambles. Managers were micromanaging the sprints and adjusting team estimates and plans. The teams were distrustful, opaque and misleading their management. There was virtually no honest and open collaboration – nor trust. They'd (re)established a very dysfunctional dance.

Funny thing is… their agile coach had asked many times if they needed help, and the answer was always: "No – things are going fine". Only when they had failed 10 sprints in a row and team members were mutinying did the Director reach out to their coach for help. Their coach came back and in relatively short order brought the teams back to 'basics' and helped them restore balance, trust, collaboration, and commitment to agile delivery. Afterwards, everyone was asking the question: "Why did it take so long – why didn't we ask for help sooner?"

Another Sad Story
A set of teams in a mature internet startup had been leveraging Scrum for 4–5 years. They were incredibly mature and were delivering well on the promise that agile has in terms of value delivery, quality, and team morale. Things were going quite well… or so it seemed. But under the covers, the teams were losing their 'edge'. Defects were on the rise. The teams weren't having impactful retrospectives or really tackling self/continuous improvement.


Morale was sort of slipping, and the teams were losing their accountability towards providing great results and real value. In a word, complacency was seeping into the teams.

Funny thing is… the organization's agile coach would have a weekly meeting with the Scrum Masters across all of the teams. She would always ask if they needed any help. By attending a planning or grooming session? By co-facilitating a retrospective? By partnering with any of the Scrum Masters in coaching their teams? But that honest offer of help was never met with a pull request… in over a year. Not one of the experienced Scrum Masters directly asked for help. Why not? Instead, they mostly struggled to inspire their teams towards improvement and became comfortable with, and defensive of, the trend towards complacency.

And a Final Sad Story
I was coaching several Scrum teams as part of a new adoption. I would count this as a true enterprise-level adoption, in that there were many teams starting all at once across several projects. In order to provide some coaching guidance as they began, I was rotating amongst the various team stand-ups as a 'chicken'. There was one team where I noticed one of the software engineers was struggling with her sprint work. In sprint planning, Sue had estimated the work at several days to complete (really, the entire team had agreed). But as the sprint unfolded, Sue seemed to be struggling with the complexity of the work. On day 2 of the sprint she identified that in the stand-up, but was still hopeful. On day 3 she was still working hard, but again, hopeful. On day 4, again… This continued until the seventh day of the sprint, when it was obvious that Sue was struggling and the entire team tried to come to her aid. Regardless of everyone's efforts, the task was attacked too late and the team failed to deliver on their sprint commitment. Funny thing is…


This was the team’s number one priority user story for the sprint. They’d all committed to getting it done as part of the sprint’s body of work. Yet, no one seemed interested in the fact that it was running late and jeopardizing the other work they’d committed to and the overall sprint goal. That is, not until the “last minute”. Beyond that, not a single person on the team asked if they could help her early on OR challenged why she was struggling so as to encourage her to ask for help. It just dragged out until it was literally…too late.

Help me…where is this going?
In all three of my stories there was a fundamental reluctance for folks to ask for help. Not only that, when they did ask for help, it was often very late in the game and the challenge, issue or problem was greatly exacerbated and much more difficult to tame. The intent of this article is to explore the dynamics of this common software team anti-pattern. While it's not directly related to agile, I think it surfaces more frequently in agile teams given the self-directed, collaborative and transparent principles those teams aspire to. What I've noticed in the professional landscape is that folks are truly reluctant to ask for assistance. Is it ego? Is it embarrassment? Is it trust? Is it perception? I think it's all of these and more. Why I'm surfacing it now is that I've been observing it for years as part of my Agile & Scrum coaching. I see it at all levels of organizations – which my examples try to illustrate. It happens at the senior leadership level, the management level, and at the team level. It's often independent of a person's experience. Indeed, there seems to be a relationship between the more experience you have and your reluctance to admit that you don't know something, or need help in formulating a next step.

Some Anti-Patterns
Below is a list of some of the thought patterns I've seen exhibited within teams by folks who don't want to ask for help. I know there are probably many more, but I do think the list will help to (1) clarify the challenge or problem at hand, and (2) focus us towards improvement in our abilities in asking for help.

■■ 90% done syndrome – when you get 90% of a project done in the first 10% of time, but the next 10% takes 90% of the remaining time. It implies that we underestimate and should assume that "finishing" a task usually takes longer than we imagine. Delivering software in fully releasable chunks helps manage this.

■■ I've got the best skills for this specific task – a big part of this is ego and the belief that you are the strongest link. Surely this isn't reality, and it certainly doesn't help to develop the team's overall skills either. Perhaps you could pair with someone?

■■ If I want it done right, I'll do it myself; I don't trust others to do this work – I really want to work alongside of you as part of your team…not! And do you "always" do it right and get it done, regardless of the complexity? Get real.

■■ Everyone else is busy too – seems to be an empathetic and honorable approach…as long as you're making progress. However, the real question is – is everyone working on the highest priority items to meet the sprint's goals? If not, or if something is delayed, then realign.

■■ I get paid to solve problems – no, you get paid to be a solid team member and to deliver value for your customers. No individual has all of the answers; instead there is great power in collaboration and the wisdom of the crowds.

■■ I don't want to disappoint my team – it's not about you! Believe it or not, your team understands your strengths and weaknesses. They'll admire your effort and your honesty when you ask for help when you're struggling.

■■ I'm the only one who knows that code or understands the domain and design – I've been here the longest and I'm the only one left with a clue about this code. Well…that will remain the situation unless you start letting others in to help you. How about mentoring your "replacement" so you can move onto other things?

■■ Don't bring me problems…bring me solutions – this age-old management speak was a façade to allow managers to disengage from their teams. It no longer applies. Anyone, and I mean anyone, that can help a team advance…should be engaged to help.

■■ It's embarrassing, I don't want to be the "weakest link" on our team – I actually believe that self-aware and team-centered individuals can find a place where there are no "weak links" on a team. A place where the team covers each other's weaknesses and simply delivers on their combined strengths.

■■ I'm trying to have a "can do" or positive attitude – I know, many engineers are infernally optimistic, but let's also bring a healthy dose of realism and experience into play. Look at your history and be self-aware. Asking for help IS a positive response.

■■ Everyone thinks I'm perfect – I hate to break this news to you, but no, they don't. If anyone has worked with you for any length of time, they understand your strengths AND your weaknesses. Including your inability to ask for help… so ask.

■■ I've already started, it will take longer to hand it off to someone else – this aligns nicely with 90% done syndrome. It's counter-intuitive, but teams swarming around work get it done the quickest. So the push here should be to ask for help and engage others as soon as you can.


However, in the end, all of these are simply excuses for not asking for help. In a way, I think they are mostly ‘selfish’ in that you make them “about you”.


At least from an agile team and project perspective, it’s not about the individual. It’s about the team. Asking for help is an acknowledgement that your team is greater than the sum of its parts, and that you have a responsibility to identify challenges and face them as a team. When you’re unwilling to raise them early and often, you’re not seeing the big picture of collaborative team work towards a common goal.

The "Simplicity" of Agile & Coaching
One of the biggest challenges I find in my coaching is having agile teams ask for help. I can't tell you how often I've found that teams get a brief sense of the agile methods and dive in before they truly know what they're doing. Part of the problem is the inherent simplicity of the methods themselves. On the surface, everything sounds so easy. All you need is:

■■ A self-directed team

■■ A customer

■■ A project

■■ A backlog (list)

■■ A daily stand-up

■■ A demo

And life is good…right? Now you're agile and everything will sort itself out. You simply need to keep 'sprinting' and good things will result. What these teams fail to grasp is that there is a big, no huge, difference between "Doing Agile" and "Being Agile". They're often focusing on the individual ceremonies or tactics and not truly grasping what it takes to evolve into a well-formed, mature agile team that aligns with the core principles of agility. Incredibly often, these "doing agile" teams don't even realize that they're off track or need help. That is, until it's quite late – when they've got a great deal of dysfunction in place. Or when they realize that they've failed to deliver on the results that "being agile" teams can produce. Then they reach out for a helping hand, but usually only after a whole lot of waste. As an Agile Project Manager, don't let your teams fall into this trap. Remind them that agility done well is a complex and continuous journey, and that asking for help or getting a coach or guide is an incredibly mature and healthy step.

How to ask for help?
I thought I'd just share a few words of advice on how to think about asking for help. In many ways, it's a mindset that you have to reframe from your existing perspectives to a new view – you and your team:

■■ Just do it! Don't think too much about it

■■ Keep your release, sprint and team goals in mind; it's not about you

■■ It allows the team to solve their problems…not individuals

■■ If you "feel" like you need help…you do

■■ It's a sign of strength, not weakness

■■ Also, offer to help…whenever possible…and don't always take "No" for an answer

■■ Solve your problems together

■■ Ask before it's "too late"; time is the enemy

■■ Craftsmen learn from each other; look for alternative approaches

■■ Pairing truly helps teams in asking for help; pair often


One wonderful place to explore your personal and your team's growth when it comes to asking for help and working together is the retrospective. It should be a 'safe' environment for any good team to reflect on their challenges and how they could have improved. One important area to continuously explore in your retrospectives is the team's behaviors around collaborative trust and asking for and providing help. Try talking about how "help friendly" your team is in your retrospectives.

Both Directions
Help is a multi-directional element, meaning you'll often find yourself asking for help and providing help…often at the same time. I think the degree to which you offer to help and collaborate will improve your own abilities to ask for and receive help from team members. An easy way to "get better" is helping your own team members – asking probing questions surrounding team challenges and being real in exchanges around getting things done. This is particularly important at a leadership level in setting an example where asking for help is construed as a positive and normal activity within the team. Where saying "I don't know", "Can you help me with this?" and "What do you think I should do?" are all perceived as mature, healthy, and constructive events within your organization.

I remember reading a leadership book that talked about senior managers asking to be mentored by members of staff. The idea was that they would ask for help from the folks who'd been there the longest. That they would show humility and teach-ability by asking for, listening to, and digesting the wisdom these folks could share.


Knowledge Transfer – The Trainer Excellence Guild

TDD .NET by Lior Friedman
This three-day program provides a highly interactive exploration of unit test automation, principles and practices. Companies such as Microsoft, Google, and IBM have already realized the potential that lies in unit test automation. Following these practices reduces the number of defects in your software, decreasing the effort involved in product development, and results in more satisfied customers. Learn how to use frameworks for test writing and how to isolate your code in an efficient and easy manner.
Sept 10–12, 2012, Berlin, € 1,450 + VAT – more at www.testingexperience.com

Specification by Example: from user stories to acceptance tests by Gojko Adzic
The winner of the MIATPP award at the Agile Testing Days 2011 presents: this two-day workshop teaches you how to apply emerging practices for managing requirements, specifications and tests in agile and lean processes to bridge the communication gap between stakeholders and implementation teams, build quality into software from the start, and design, develop and deliver systems fit for purpose.
Sept 17–18, 2012, Berlin, € 1,200 + VAT – more at www.testingexperience.com

And in doing so, they created a more collaborative and humble environment where showing vulnerability and asking for assistance was not only OK, but was the norm.

Wrapping Up and a Survey
Quite a while ago I wrote a blog post about teams handling failure and failing. As part of the post, I created a short survey to poll readers and get a sense for the state of "failure acceptance" out in the real world. I guess the premise was that my lens might be a bit skewed and I wanted to get other perspectives. In that case it turned out that the environment for failure acceptance was even worse than I had imagined. The results were interesting, but sad as well. I'm inspired to try the same approach with this topic. To that end, I've created a relatively short survey surrounding organizational health (team, management, senior leadership) when it comes to asking for help. Here's a link – http://goo.gl/CFVY4 – and I need your help in filling it out.

Wrapping this article up, I think Agile Project Managers should foster an environment where asking for help is considered a strength and is always well received. Where team members embrace and welcome the opportunity to help each other out. Where they look at providing two-way help as one of the strengths of their team and their organization.

The best way to start this is to lead by example. To show vulnerability yourself and ask for help when it's appropriate. To occasionally say "I don't know" when you're dealing with daily challenges. To ask questions of the team when folks appear to be struggling… teasing out who needs help as soon as possible. So go ask your team to help you, help them, in asking for help…

Here are a few follow-up references that I thought I'd share with you. You'll notice that agile teams aren't the only ones who struggle asking for help.

■■ http://www.businessweek.com/managing/content/jun2010/ca2010063_197398.htm

■■ http://www.workhappynow.com/2012/04/how-to-ask-help-at-work/

■■ http://www.nytimes.com/2011/04/10/jobs/10career.html

> About the author
Bob Galen is President & Certified Scrum Coach (CSC) at RGCG, LLC, a technical consultancy focused towards increasing agility and pragmatism within software projects and teams. He's also Director, Agile Solutions at Zenergy Technologies. He has over 30 years of experience as a software developer, tester, project manager and leader. Bob regularly consults, writes, and is a popular speaker on a wide variety of software topics. He is also the author of the book Scrum Product Ownership – Balancing Value from the Inside Out (http://www.amazon.com/SCRUM-Product-Ownership-Balancing-Inside/dp/0578019124/ref=sr_1_3?s=books&ie=UTF8&qid=1325898572&sr=1-3). He can be reached at [email protected] or [email protected]


Project Disasters: Are Requirements Really to Blame? by Leanne Howard

We continually hear about project disasters, cancellations, overruns or overspends. Are requirements really to blame? If they are, can we fix it? Does Agile have a part to play in this?

From the Planit Index for 2011, the following information, gathered from some of the major organizations within Australia and New Zealand, seems to suggest that requirements are perceived to make a major contribution to project failure. While only a small number of projects were actually cancelled (just above 4%), some interesting findings can be derived from investigation of the reasons why these projects failed. The number one reason for failure was that business requirements or priorities had changed. This aligns with the findings that almost 30% of projects suffered changes to 25% of their scope or more. While organizations have to be flexible to contend with changing conditions, these results point to a weakness in the overall definition of requirements in projects. It is also likely that this issue with changing business requirements may also be responsible for many of the projects that fail to achieve satisfactory outcomes due to time and cost overruns (see Figure 1).

When asked the question "How would you generally rate the conditions for your software development projects?", the following responses were received. If the high rate of project failure reported (only 49% of projects were reported to be delivered on time, within budget and to scope) is directly related to the implementation of quality assurance practices, then the examination of project conditions reveals that there are numerous reasons why quality assurance is not conducted in an optimal manner. Further causes of turmoil in projects included unrealistic expectations, which were cited by 31% of respondents. Once again, these results point to the problems that many organizations are experiencing when scoping requirements at the early stage of project development (see Figure 2).

Figure 1. Causes of project failure


Figure 2. Rating of project conditions

The Agile Manifesto refers to requirements in at least three of its four statements. "Working software over comprehensive documentation" – does that mean the working software in effect becomes our requirements? Does it also imply that hundred-page requirement documents are not the best way to capture requirements for the whole team to understand? "Customer collaboration over contract negotiation" – does this reflect that the best way to elicit requirements is through continuously talking with our clients as things change, not just when the contract says we can, or mandates that a particular item must be delivered at a certain time even if the priority or use changes? Linked with this is "Responding to change over following a plan" – who is to say that our initial thoughts were 100% correct, or that we had considered everything? We cannot possibly know everything up front. What happens if our competitors release a product we don't have and we start losing market share?

Even some of the 12 Principles of Agile Methods refer to requirements. For example:
2. Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
11. The best architectures, requirements, and designs emerge from self-organizing teams.

As we have already discussed, change is embraced within Agile. Shouldn't we in IT align to the business strategy and be there to partner with and support the business initiatives, not to hinder them? IT needs to increase its capability and have the same goals as the business.


Secondly, IT, based on sound technical knowledge, should strive to create the best solution for its customers. If requirements are heavily prescriptive and "cast in stone", it stifles creativity. Working proactively with the business throughout the delivery allows fluidity and adaptation to moving priorities. It also allows the business to focus on requirements in small groups close to the release, which allows the flexibility to evolve in a different direction as more becomes known about a subject. It can also restrict the "nice to have" syndrome, where the business sees some product and then wants to move on to the next without necessarily putting on all the "bells and whistles". They know they are going to get what they want within agreed iterations, and therefore move away from the mentality of "I have to get everything in, as I do not know when I will next get an opportunity".

Requirements Specification
One of the most significant changes that occurred with the introduction of agile software development methods was how these methods captured requirements. The foundations on which traditional requirements management processes are based still apply with an agile approach. The techniques for gathering requirements remain unchanged, regardless of whether you are in a waterfall project or an agile one. The key difference is at what point you apply the techniques in the software development process. Agile methods recognize the major failing of the traditional requirements processes: trying to define all the requirements up front. Agile methods therefore focus on simply capturing – at a high level – what is needed, i.e. high-level statements and descriptions. Requirements in an agile process are now "place holders for conversations that need to happen at a later date". The "later date" is when the requirement is scheduled in the iteration. The requirements process is carried out using a "just-in-time" approach. This avoids wasting time writing documentation which may change later.


The Planit Index asked the question – “How would you rate your requirements definition?”

We have already seen the impact that poor requirements scoping can have on project outcomes, but how did respondents rate their organization's requirements definition? In Figure 3, a third of respondents indicated that their organization's requirements definition was either poor or very poor, and another third rated them as only OK. Worsening results were experienced at both extremities of the rating scale when compared to the 2010 Index, with a 2% decrease in the number of respondents who rated their organization's requirements definition as excellent and a 4% increase in negative responses.

This unfortunately points towards a trend that we within IT are getting worse at writing requirements which are of value to the business, or which are even testable.

Figure 3. Rating of requirements definition

Within Agile, details about the requirements are fleshed out using face-to-face based communication techniques, primarily workshops, with the project team and the customer. The teams have a better understanding of requirements on agile projects due to this process. How can we think that handing over a 100-page document of requirements to the team with little further communication is likely to produce success? The outcome is “lightly” documented, i.e. sufficient information is captured to enable: ■■

Estimates to be made on how long it will take to implement, which include all activities which the team can then commit to

■■

A common understanding of what the user actually wants

■■

Clarification of what the acceptance criteria will be for the requirement ensuring that everyone is focussed on testing and building quality in

User stories have become the de facto approach for capturing the user's requirements in agile-based methodologies. Mike Cohn, in "User Stories Applied", provides more detail on what type of information a user story actually contains, which is described below. A user story is composed of the following three aspects:


■■

A written description of the story used for planning and as a reminder

■■

Conversations about the story that serve to flesh out the details of the story

■■

Tests that convey and document details and that can be used to determine when a story is complete
(User Stories Applied: For Agile Software Development – Mike Cohn, 2004)
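To make those three aspects a little more concrete, a story "card" can be thought of as a small data structure. The sketch below is purely illustrative – the class and field names are invented, not something Cohn's book or this article prescribes:

```python
# Illustrative only: a user story "card" carrying the three aspects described above.
from dataclasses import dataclass, field


@dataclass
class UserStory:
    card: str                                                      # short written description / planning reminder
    conversation_notes: list[str] = field(default_factory=list)   # details fleshed out in discussion
    acceptance_tests: list[str] = field(default_factory=list)     # confirmations that define "done"

    def ready_for_sprint(self) -> bool:
        # A pragmatic check: a story needs at least one agreed confirmation before a sprint.
        return bool(self.acceptance_tests)


story = UserStory(
    card="As a customer, I want to apply for a personal loan so that I can spread a large purchase",
    conversation_notes=["Only existing customers in the first release"],
    acceptance_tests=["Given a valid application, when it is submitted, then a reference number is returned"],
)
assert story.ready_for_sprint()
```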

Writing User Stories
Grasping the concept of what a user story is may be straightforward, but writing them does have its challenges. The common challenges faced are: ■■

How much detail should the story contain?

■■

The story is too large (generally called an epic) and needs to be broken down into smaller pieces (sounds easy, but isn’t)

■■

How to capture the business rules applicable to a user story, i.e. should the business rules be captured as additional detail on the user story or captured as acceptance criteria/conditions

■■

How to make user stories independent

I have found it helpful to break stories down by the different personas to which the story applies, and also to split stories by artefact. For example, if you are looking at loans, each different type could be a story. This will also help with prioritization and putting stories into different iterations, i.e. if 70% of your clients use a particular loan type, do that one first. It will also unlock maximum business value and return on investment. When writing a user story, focus on the following aspects: ■■

Who is the story about

■■

What is it they want to do with the system

■■

Why do they want to do this action

Whilst the amount of detail in a user story may be less, associated with each story are the acceptance criteria. This means that from the conception of the requirement the team is thinking about how to test it, i.e. building testability in. Extending this further, the team should also be thinking about how they can use automation to test, and therefore build the hooks in. This, coupled with the greater collaboration and discussion about the story, should ensure the requirement is of high quality and delivers maximum business value close to its elicitation.
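As one way of "building the hooks in", the acceptance criteria agreed for a story can be turned directly into an automated check. The following is a minimal pytest-style sketch; the LoanCalculator class, the repayment formula and the figures are invented for illustration only:

```python
# Acceptance criteria for an invented story "Apply for a fixed-rate loan",
# expressed as automated tests (pytest style). Everything here is illustrative.
import pytest


class LoanCalculator:
    """Stand-in for the real system under test."""

    def monthly_repayment(self, principal: float, annual_rate: float, months: int) -> float:
        if principal <= 0 or months <= 0:
            raise ValueError("principal and term must be positive")
        monthly_rate = annual_rate / 12
        return principal * monthly_rate / (1 - (1 + monthly_rate) ** -months)


def test_standard_loan_repayment():
    # Given a 10,000 loan at 6% over 24 months
    calc = LoanCalculator()
    # When the monthly repayment is calculated
    repayment = calc.monthly_repayment(10_000, 0.06, 24)
    # Then it matches the figure agreed in the acceptance criteria
    assert round(repayment, 2) == 443.21


def test_invalid_term_is_rejected():
    # Given an invalid loan term, then the request is rejected
    with pytest.raises(ValueError):
        LoanCalculator().monthly_repayment(10_000, 0.06, 0)
```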

Changing requirements, which are also cited as a major contributor to project failure, can also be handled better within Agile. Agile embraces change: if you decide halfway through a project that a lower value item is no longer required, why build it just because it is in the backlog? If a story increases in value, or a new requirement will provide the business with greater value than what is still left in the backlog, then let's do it. Due to the close working relationships, discussions can be had about the impact of this from both the product risk perspective and the technical implications. Both parties are in fact partners in delivering the common goal of business value as quickly as possible.

There is also further evidence from the Planit Index that when Agile is adopted, projects are rated at least generally successful 56% of the time. As requirements appear to be such an issue in traditional projects, this evidence seems to suggest that some improvements are made in the requirements process, as well as possibly in other areas.

Agile methodologies should therefore bring significant improvements for projects in the area of overall requirements management and delivery. That is not to say that they can't also bring improvements in other areas too, such as estimation, which is easier on smaller chunks of functionality. When we put these agile practices for requirements management into place, we can really see the benefits.

Planit then asked the question – "To what extent would your business benefit from improving how requirements are defined?" The problems experienced with requirements definition were further reinforced by the 98% of respondents who agreed that improving requirements definition would deliver a benefit to the business (Figure 4). It can be reasonably assumed that investing greater time and resources into this activity will lead to better outcomes by correcting a factor which is seen as a key contributor to project failure. This, however, needs to be focussed at the right time within the project, i.e. as the requirements are elaborated, with all the team involved to ensure they all have the same viewpoint, and not midway through, when we all know that adding more resources does not necessarily help. Greater business collaboration and 'just in time' writing of requirements can only help in this area.

Figure 4. Benefit from improving requirements definition

It can be seen from the Planit Index that more companies are adopting Agile in software development, as the percentage increases year on year. This is not to say that Agile is the "silver bullet"; however, there is a lot to say about how using these practices can improve requirements and the strength of the IT/business collaboration.

In conclusion, it seems that many projects do suffer from poor requirements elicitation, documentation and management. Traditional projects are not set up to deal well with change, particularly nearing the end of testing. Many have high amounts of process around change request management, which actively discourages any change. The business have had intensive involvement during

the initial phases of the project, but they do not again become engaged until near the end for UAT. If this is a large programme of work, it may be a year later, by which time their business and processes may have moved forward to satisfy the needs of their Agile Record – www.agilerecord.com

customers. Often it is then too late to make changes before the next release to production. Agile, on the other hand, continuously engages the business, collaborates with them on the next set of functionality, and encourages review, update and reprioritization of requirements. The business is involved within the iteration, providing ongoing feedback to ensure that the product meets their needs, elaborating on the requirements just before they are worked on and then released. They showcase the product as soon as it is considered done, or even earlier if prototypes are used. Product grooming allows the business the opportunity to reprioritize and even introduce changes as they happen. Finally, with frequent releases to production the end users are enabled to provide quick feedback, which can be incorporated into the backlog. Agile allows requirements to be worked on as they are about to be delivered, which saves waste, allows for change and reprioritization as the project proceeds, and provides frequent feedback loops, leading to business satisfaction and unlocking value more quickly.

> About the author Leanne Howard is an Account Director with Planit. Over the past 20 years, Leanne has worked in the IT Software Testing industry across multiple programs and projects including those using Agile. She is a specialist in the implementation of practical testing methods with a strong focus on client satisfaction. She enjoys sharing her knowledge through mentoring and coaching others to promote continuous learning, and is active in the advancement of the testing profession through her role at Planit and involvement with relevant testing and agile association bodies.

For more details on the Planit Index, please see http://www.planit.net.au/index-report-2011/

Testing for Developers (Testen für Entwickler)
Description: While tester training has made great progress in recent years – there are more than 13,000 certified testers in Germany alone – the developer's role in software testing is usually underestimated, even though the developer is often the driving force in component testing. For this reason it is important that developers also gain basic knowledge of the core areas of software testing. http://schulung.diazhilterscheid.de
€ 800.00 plus VAT – 2 days, Berlin, 11.10.12–12.10.12 (dates subject to change)


Agile – Where’s the Evidence? by Allan Kelly

From time to time someone asks "Where is the evidence that Agile works?" My initial reaction, usually when it is posed inside a company, is: "This is a diversion". Although it sounds like a reasonable question, the person asking wishes to stall moves towards Agile. The question is aimed at derailing or side-tracking conversations. In truth my reaction is overly defensive. This is a rational question, aimed at making a rational decision, and Agile advocates should be able to put up evidence to support their position. However, doing so isn't very clear-cut. If we are to really reach a rational decision, surely we should examine the alternative. If the team does not do Agile, what are they doing? For the sake of brevity, let us assume it is "Waterfall." Now we need to ask "Where is the evidence for Waterfall?" As far as I know, there is none, although there are plenty of cases of failed projects.

Not that Agile is immune from failed projects of course. And a rigorous study would also need to define success and failure, but let's keep moving forward and see if we can find any evidence.

Agile?
Let's start with "Agile." How do we define Agile? Scrum? XP? Where does Agile start and Lean end, or vice versa? Or is, as I believe, Agile a form of Lean? We have a definition problem. Agile is poorly defined. I e-mailed a friend of mine who is undertaking a PhD in Architecture and Agile, and asked him: "Can you please point me at a literature review?" (Of course, if I was a serious researcher, I would closet myself in the library for days or weeks and do the literature review myself. I'm not, and I need to earn money, so I didn't.)

His answer: "That's an interesting question, because a lot of research simply assumes that Agile (as a whole) is better, without any evidence for it." He also pointed out there is a context issue here. Is it better for embedded systems? For web development? In finance? Health care? Modern software development is a varied field; it is not homogenous. And how do you define better? Better architecture? Better requirements? Better code quality? And better compared to what? Chaos? Good Waterfall? Bad Waterfall? CMMI level 1? 2? 3? 4? 5? Any serious evidence for Agile would need to address all of these questions. It is unlikely to come to a conclusion that covers all cases.

And Scrum?
Despite this, one study claimed Scrum resulted in productivity improvements of as much as 600% (Benefield, 2008). I've even heard Jeff Sutherland verbally claim in 2008 that Scrum can deliver 1000% improvement. To be honest, I don't believe these figures. After posting this in a blog, one commenter said he had contacted Jeff Sutherland for the corroborating evidence; Jeff pointed him to someone else, who pointed him onward. The trail eventually led back to the beginning.

If these figures are possible, then I think it says something about the chaotic state the organizations started in, more than about the power of Agile. In these cases my guess is that following any process or practice would be an improvement. Standing on one leg while coding would probably have generated a 50% improvement.


Better? There is a trap for Agile here. Much traditional work has defined “better” as: on schedule/time, on budget/cost, with the desired


features/functionality. But Agile doesn't accept that as better; Agile negotiates over features/functionality and aims for business value. A report from Cranfield University (Ward, 2006) suggested that much traditional IT work failed to truly capture business value because people focused on time, budget, features. The same report suggested that the use of formal methodologies gave managers "a false sense of security, and perhaps an excuse for not being sufficiently involved." Then there is the question of bugs. Traditional development has been very accepting of bugs; Agile isn't. Traditional managers accept bugs and negotiate over them. Agile managers (should) have a very low tolerance of quality issues, because they know that rework and technical debt slow teams down. Maybe asking for evidence about Agile is aiming for too much. Maybe we should look at the practices instead. Here there is some evidence.

The techniques
Over the years, there have been various studies on pair programming, which have been contradictory. Since most of these studies have been conducted on students, you might well question their reliability. Test Driven Development is clearer. A study from Microsoft Research and North Carolina State University, which is pretty conclusive on this, suggests that TDD leads to vastly fewer bugs (Nagappan et al., 2008). In the case of Visual Studio, the number of bugs fell by 91%. Keith Braithwaite has also done some great work looking at the cyclomatic complexity of code, and there seems to be a correlation between test coverage and better (i.e. lower) cyclomatic complexity. Keith is very careful to point out that correlation does not imply cause, although one might hypothesize that test driven development leads to "better" design. (Keith's work can be found in his blog http://cumulative-hypotheses.org.) TDD and source code are relatively easy to measure. I'm not really sure how you would determine whether planning meetings or user stories worked. Maybe you might get somewhere with retrospectives: measure how long a team spends in retrospectives, and see if the following iteration delivers more or less as a result. For their book "Organizational Patterns of Agile Software Development", Coplien and Harrison spent over 10 years assessing teams (Coplien and Harrison, 2004). This led to a set of patterns which describe much of agile software development. This is qualitative, or grounded, research rather than quantitative, but is just as valid. These patterns themselves are useful; whether they are "better" depends on what you are comparing them with.

Anecdotal evidence
If we switch from hard-core research to anecdotal evidence and case studies, things become easier. As many readers know, I've been working with teams in Cornwall for over 18 months. During my March visit we held a workshop with the leaders of the companies

and software teams. Without mentioning names, some comments stood out here: ■■

“The main benefit [of Agile] was time to market… I don’t know how we would have done it without Agile.”

■■

“Agile has changed the way we run the company.”

■■

“It is hard to imagine a world without Agile.”

The last company quoted in this list is now finding their source code base is shrinking. As they have added more and more automated tests, the design has changed and they don't see the need for vast swaths of code. Is that success? Each line of code is now far more expensive and they have less of it. (Heaven only knows what it does to function point analysis!)

Commercial evidence
If you want something a little more grounded, there was a 2012 Forrester report which said: "Agile enables early detection of issues and mid-course corrections because it delivers artefacts much sooner and more often. Finally, Agile improves IT alignment with business goals and customer satisfaction." (Lo Giudice, 2012). Similarly, back in 2006 a Gartner report said: "It's a fact that agile efforts differ substantially from Waterfall projects. It's also a fact that agile methods provide substantial benefits for appropriate business processes." (Agile Development: Fact or Fiction, 2006). But I'm back to that nebulous thing "Agile."

Back to the Waterfall
What I have not done yet is look for, let alone present, any evidence that Waterfall works. Frankly, I don't believe Waterfall ever worked. Full stop. Winston Royce, who defined "the Waterfall", didn't, so why should I? Royce's original paper argued that the Waterfall model described how we conceived of software development, but didn't actually show what happened in practice (Royce, 1970). If you read to the end of the paper, Royce presents a different model with significant feedback loops. OK, sometimes, to save myself from a boring conversation I'm prepared to concede that Waterfall worked in 1972 for developers using Cobol on OS/360 with a hierarchical IMS database. But this isn't 1972, you aren't working on OS/360, and very few of you are using Cobol with IMS.

What is the question?
Since we cannot define what Agile is, what domains we are interested in, what constitutes "better", or what alternative we are comparing it with, asking "Where is the evidence for Agile?" is the wrong question. Even if I could present you with some research that showed Agile, or Waterfall, did work, then it is unlikely that the context for that research would match your context.


(If you do want to stall the agile people in your company, first ask "Where is the evidence Agile works?". When they produce it, ask "How does this relate to our company? This is not our technology/market/country/etc. etc.")

I think it might come down to one question: Is software development a defined process activity or an empirical process activity? If you believe you can follow a defined process and write down a process to follow and the requirements of the thing you wish to build, then Waterfall is probably for you. On the other hand, if you believe the requirements, process, technology and a lot else defy definition, then Agile is for you.

Do it yourself
Most of the evidence for Agile tends to be anecdotal, or at best qualitative. Very little is available as quantifiable numbers, and that which is available is very narrowly focused. All in all, I can't present you any clear-cut evidence that "Agile works". I think there is enough evidence to believe that Agile might be a good thing and deserves a look.

Ultimately the evidence must be in the results. Your organization, your challenges, your context are unique to you. So my suggestion is: make your own evidence. Set up two similar teams. Tell one to use Waterfall and one Agile and let them get on with it. Come back every six months and see how they are doing. Anyone want to give it a try? And if anyone knows of any research, please please please let me know!

References
BENEFIELD, G. 2008. Rolling Out Agile at a Large Enterprise. In: Hawaii International Conference on Software Systems, 2008, Big Island, Hawaii.
COPLIEN, J. O. & HARRISON, N. B. 2004. Organizational Patterns of Agile Software Development. Upper Saddle River, NJ: Pearson Prentice Hall.
LO GIUDICE, D. 2012. Justifying Agile with Shorter, Faster Development. Forrester.
NAGAPPAN, N., MAXIMILIEN, E. M., BHAT, T. & WILLIAMS, L. 2008. Realizing quality improvement through test driven development: results and experiences of four industrial teams. Empirical Software Engineering, 13, 289–302.
ROYCE, W. W. 1970. Managing the development of large software systems: concepts and techniques.
WARD, J. 2006. Delivering Value from Information Systems and Technology Investments: Learning from success. Cranfield, Bedford: Cranfield School of Management.

> About the author Allan Kelly has held just about every job in the software world: system administrator, tester, developer, architect, product manager and development manager. Today he is based in London and helps companies adopt agile practices, aligning development with business objectives and delivery systems. He is also the originator of Retrospective Dialogue Sheets, and author of "Changing Software Development: Learning to become Agile" (2008) and "Business Patterns for Software Developers" (2012). More about Allan at www.allankelly.net; on Twitter he is @allankellynet.


Rope in Value with User Stories by Allison Pollard

Lasso Value in the Product Backlog
In traditional projects, a large amount of effort was focused on documenting and understanding up front the comprehensive requirements of a project in order to deliver the product. Experience tells us that this approach does not work – it is impossible to predict and document the requirements of complex software development projects. For this reason, the Agile Manifesto includes the following principle: "Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage."

Requirements can be like animals – difficult to manage in numbers. After transitioning to Agile, is your team focused on adding value to the product, or are you trying to deliver the project requirements (changes and all) faster? When driving cattle, cowboys had to strike a balance between speed and the weight of the cattle, which equated to their value. Similarly, those writing agile requirements need to balance anticipation and adaptation1 in order for teams to deliver a valuable product.

Managing the Herd
In Scrum, the Product Owner role is the single voice that drives the delivery team, and it is his responsibility to communicate requirements to the team throughout the project. The Product Owner populates the Product Backlog with the requirements and prioritizes them. He has the hardest role because he is responsible for "maximizing the value of the product and the work of the Development Team."2 Because the Product Backlog reflects the work of the team, developers and testers should be seriously interested in it.

1 Jim Highsmith, Agile Software Development Ecosystems
2 Jeff Sutherland and Ken Schwaber, The Scrum Guide


In my experience, the backlog can easily become quite large – if each item is written on an index card, the backlog can easily cover the walls of a conference room. Suddenly it can seem like the product backlog is managing the product owner instead of the other way around because of its overwhelming size. In fact, the bottom of the backlog may never change because low priority items stay low priority, and large items may not get the attention needed to become right-sized for sprints. Sprint-sized items are delivered by the team, but do they reflect the most valuable work they could've done?

If this product backlog is a herd of animals, it is too large for one cowboy to manage, some of the animals are weak, and the strongest animals might be in danger of wandering off from the herd. Teams and Scrum Masters may need to help the Product Owner recognize the state of the Product Backlog and improve it. Odds are that such a large product backlog is not really focused on value – it is full of ideas and requests that are anticipating users' needs instead of remaining adaptable. We know from numerous studies that the biggest source of waste in software delivery is rarely or never used functionality; a backlog containing every idea and request can lead to making sure everything is implemented rather than asking "can we release what is implemented now?" The Product Owner must know what creates value – what functionality leads to a product that can be released now – and be aggressive in grooming the backlog to keep it focused on value.

Evaluate the herd. Does the team have a clear picture of the animals closest to them? The items at the top of the backlog should be sufficiently well-understood and estimated so they can be brought into sprints and the Product Owner can effectively plan releases. If team members are unclear on the backlog items, they should collaborate with the Product Owner to make them better. The backlog items that never move from the bottom are weak animals; can they be made stronger and more valuable? If not, remove them. Are the large animals in danger of getting lost or wandering off? Large backlog items lose value if they are not broken down in time for the team to pull into a sprint and deliver.


Look at the large backlog items for any that might be time-sensitive, and work with the Product Owner to ensure the order of the backlog is such that it can be delivered when it is needed, by forecasting based on item estimates and the team's velocity. The result of evaluating the herd should be a product backlog that is DEEP3: detailed appropriately, estimated, emergent, and prioritized.

Caring for the Animals
Often the Product Owner populates the product backlog with user stories. According to Ron Jeffries, user stories have three critical aspects: card, conversation, and confirmation. The card contains just enough text to identify the requirement and remind everyone what the story is, the requirement itself is communicated through conversation, and the confirmation tells the team how the Product Owner will confirm that they've done what is needed. By creating the card, the Product Owner is able to catch the requirement animal in the lasso's noose, but it is the conversation with the team that tightens the rope. The conversation drives the value of the increment that will be delivered, so it is important to have a well-written card to start the conversation that will ensure the team is delivering the right requirement.

User stories are often written using the format made popular by Mike Cohn: As a <type of user>, I want <some goal> so that <some reason>. These stories can make for fine cattle in the Product Backlog herd, but in the quest to deliver value, it can be beneficial to use a different story format to emphasize it: In order to <achieve some value>, as a <type of user>, I want <some goal>.4 By moving value to the beginning of the user story, Product Owners and teams are more likely to answer the question, "How can I make this story more valuable?" Well-understood user stories lead to better decisions by team members because they can balance what is best for the user and what is easiest to implement.

A good user story meets the INVEST criteria5: it is Independent, Negotiable, Valuable, Estimable, Sized appropriately, and Testable. Just as the herd is composed of individual cows or steers, the backlog should be composed of stories that are self-contained so they are not dependent on other user stories. Stories should leave some flexibility for team members and Product Owners to later flesh out details, and should reflect value to users or customers. In order to plan effectively, stories need to be estimable; stories that are sized appropriately small enough can be worked by the team within a sprint, which helps teams validate that work is getting done and receive feedback on the product early to confirm value is being delivered. Appropriately sized stories are also easier to integrate, test, and deploy.
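Estimable, appropriately sized stories are also what make the simple velocity-based forecasting mentioned earlier possible. The sketch below is only an illustration – the backlog items, point values and velocity are invented, and a real team would normally let its tooling or a spreadsheet do this:

```python
# Minimal sketch: forecast which sprint each backlog item (in priority order) lands in,
# given story-point estimates and a stable velocity. All numbers are invented.

def forecast_sprints(backlog, velocity):
    """backlog: list of (title, points) in priority order; velocity: points per sprint."""
    plan, sprint, remaining = [], 1, velocity
    for title, points in backlog:
        if points > remaining:      # item does not fit -> plan it for the next sprint
            sprint += 1
            remaining = velocity
        remaining -= points         # items larger than the velocity are a sign they need splitting
        plan.append((title, sprint))
    return plan


backlog = [
    ("Apply for a standard loan", 8),
    ("Credit check integration", 5),
    ("Approval workflow", 13),
    ("E-mail notifications", 3),
]
for title, sprint in forecast_sprints(backlog, velocity=20):
    print(f"{title}: expected in sprint {sprint}")
```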

Team members should review the story to ensure it is testable, and review the confirmations to understand how the Product Owner will determine the story is done. The confirmations (also known as acceptance criteria or conditions of satisfaction) can be written in a number of ways, but one common way is: Given [initial context], when [event occurs], then [ensure some outcomes]6. Developers and testers can maximize value now and in the future by automating tests that show the confirmations have been met. By knowing how the animal will be evaluated at the end of the drive, the cowboys can better ensure its delivery will be as valuable as possible.

The End of the Drive
Agile projects emphasize working software over comprehensive documentation, but can fall victim to delivering less valuable product increments due to large, hard-to-manage Product Backlogs and poorly written user stories. Product Owners and teams need to collaborate and use the Product Backlog as a powerful tool to drive the team's work and forecast releases. Just as cattle are evaluated based on the quality and quantity of beef they provide, agile requirements should be evaluated based on the quality of software they produce.

> About the author Allison Pollard is a Senior Consultant for Improving Enterprises in the Dallas, Texas area. She has worked with Agile teams for over four years in project management, Scrum Master, and coaching roles. Allison also volunteers locally as one of the organizers of the DFW Scrum user group.

3 Roman Pichler, Agile Product Management with Scrum: Creating Products that Customers Love
4 Elizabeth Keogh, with credit to Chris Matts, http://www.infoq.com/news/2008/06/new-user-story-format
5 Bill Wake, http://xp123.com/articles/invest-in-good-stories-and-smart-tasks/


6  Dan North, http://dannorth.net/introducing-bdd/


“The Only Constant Is Change” by David Kirwan

What's the problem?
Gathering and analyzing requirements correctly is not trivial. If you don't get the requirements correct, you won't delight the customer. If you get a requirement wrong and it gets into production, it requires money to put right – as much as one hundred times more than fixing it early (source: IBM). Getting requirements wrong can result in rework, which means you will get fewer stories done, and this will slow you down whether you use Waterfall or Scrum. At the end of the day, the requirements must lead to a potentially shippable increment in functionality. However, requirements change…

How do you solve the problem?
You can solve the problem of changing requirements by ensuring your organization's quality philosophy takes the customer's quality requirements into account. Customer involvement is essential; if they aren't involved, you must have someone who can speak with authority as to what is actually required. There are a variety of requirements strategies: ■■

Just Barely Good Enough (JBGE) is an approach where the customer or product manager provides requirements that are just barely good enough to start working on

■■

Document Just in Time is an approach where the customer/product manager document the requirements just before the coding starts, maybe a day or a few hours beforehand

■■

Executable (Tests) Requirement Specifications are supported by tools such as FitNesse where you can define system requirements in a syntax that is tied back to fixtures which implement the required functionality

The use of an Agile Requirements Management Tool is recommended, but you must remember the manifesto: "Individuals and interactions over processes and tools". While the use of a tool can unburden your memory, the real value, when it comes to gathering and managing requirements, is to talk to people. A variety of tools are available; the main ones are:

■■ Cucumber: This tool uses "Given, When, Then" acceptance criteria. Cucumber is a tool for running automated acceptance tests written in a behavior-driven development (BDD) style.

■■ FitNesse: FitNesse is a web server, a wiki, and an automated testing tool for software. It is based on Ward Cunningham's Framework for Integrated Test (FIT). FitNesse supports acceptance testing rather than unit testing, as it facilitates detailed, readable descriptions of system functions.

■■ GreenHopper: This tool adds agile project management to an existing JIRA project. It is good for building a backlog, planning each iteration's work, and visualizing team activity and progress.

Ways/Tools to Elicit Examples and Requirements
The best way to elicit requirements is to communicate them verbally at first and store them in the "Requirements Backlog". They can then be discussed just before the coding starts. We can have tests (which reflect the requirements) to guide development; for example, two tests: a high-level happy-path test and a high-level unhappy-path test. We can also use tools that help to describe the desired behavior, e.g.:

■■ FitNesse
■■ Mock-ups, e.g. Balsamiq
■■ Mind Maps, e.g. FreeMind
■■ Checklists
■■ Spreadsheets
■■ Wikis
■■ Flow charts


Eliciting Requirements
When it comes to eliciting the requirements, to avoid any contradictions there must be only one person who speaks on behalf of the customer. This is usually the Product Manager, although in principle everyone can help to get the requirements right.

■■ Use cases can supplement the examples or tests

Workshops, interviews and expressing examples using Fit or FitNesse are great ways to elicit requirements. Collaboration and communication are key to getting the examples, and therefore the tests, right. I consider software defects to be requirements; after all, a defect is a non-conformance to a requirement, right?

Good user stories call out desirable behavior and, conversely, they also call out undesired behavior. Simplicity is essential; you must strive to ensure your requirements are simple.

During each sprint the team works with the customer to deliver the increment in functionality. The solution is incrementally improved upon over the course of each iteration.

What is the make-up of a requirement?
Story + Example + Conversation = Requirement

A shared language is required; this is where the syntax/language created using FitNesse comes into its own. Cucumber is another tool that supports a syntax which you create. This syntax then connects to the fixtures.

There are two standard forms for writing stories:

■■ Mike Cohn [2004] "User Stories Applied"
□□ Form: Role, function, business value
□□ As a (role), I want (function) so that (business value).

■■ Mark Balbes [2012] "Defining Your Product with Agile Stories"
□□ Acceptance criteria for a story take the form of "Given… When… Then…".

The second form is used to describe acceptance criteria for a story. The acceptance criteria serve multiple purposes. They are used by the developers and testers to know when a story is done. They also act as documentation for the behavior of the system, e.g. for technical writers and customer service. The acceptance criteria for a story take the form of "Given… When… Then…", and a single story may have multiple acceptance criteria. When thinking of the acceptance criteria, it is often useful to apply the "zero, one, many" rule: think about what happens in the system if nothing meets your criteria, if one thing meets your criteria, and if multiple things meet your criteria (a minimal sketch follows the list below). Mock-ups can convey requirements for a GUI or report more quickly and clearly than a thousand words.

What do good requirements look like?
Stories define the product; it really is that simple. The requirements must have clear boundaries, otherwise the testers won't know when to stop or whether they have encountered a defect.

User stories with acceptance tests: what are they, and how do we use them?

■■ They are a brief conversation point (basically a dialog)
■■ Agile teams expand on stories until they have enough information to write code
■■ Testers elicit examples from the customer
■■ Testers help customers write tests
■■ The tests guide the developers

The scope
The requirements must be written in such a way that testers know exactly when they have reached the boundary of the requirement. If the boundary isn't well defined, testers may end up thinking that they have found a defect. The scope needs to clearly specify what's in and what's out. Focus on the core functionality, and get the customers to focus on it too.

Prioritized requirements
Are all of the requirements really needed? Those that are need to be prioritized. Personally, I like to use MoSCoW, i.e. Must, Should, Could or Won't. Others use High, Medium, or Low. By doing this you create a prioritized backlog (a trivial sketch follows the list below). This simple activity eliminates so much confusion. When it comes to digging into the requirements and trying to figure out what is going on, try to balance discovery with delivery:

■■ Discovery: understanding the right product to build
■■ Delivery: building the product right

A requirement is a combination of the story + conversation + a user scenario or supporting picture if needed + a coaching test or example. The stories must be testable. How do you test them? An example goes a long way toward "testing the testability". A useful phrase for any agile team member is "How can we test this?". The customer has to decide the specific requirements and capture them in the form of examples, conditions of satisfaction, and test cases. One thing that is lost on most people is that user stories are not about requirements; they are about work flow, i.e. value to the user. The idea is to break down the project into small stories, each of which is self-contained, provides a small amount of value to the end user, and represents a thin vertical slice through the layered application design, which can be shipped at the end of each sprint. This is the potentially shippable increment in functionality.

Story sizing
This is never an easy thing to do, as people are bad at giving exact values. A simple way to estimate the size of a story is to use T-shirt sizing.



Values such as XS, S, M, L, XL, XXL work for most; they could be any values you like. If T-shirt sizing isn't your thing (and it isn't mine), you could use "Planning Poker" instead, also known as "Scrum Poker". The concept is simple: the estimators select their effort estimate from the deck of cards and everyone turns their chosen card over at the same time. The Fibonacci sequence is used, as it reflects the uncertainty of the larger estimates. This then leads to a team discussion about why one person thought it was two days' work and another thought it was twenty-two days' work. This actually happens.

Preferred methods of communication
Face-to-face is the best method of communication, so long as the person you are speaking to is professional and prepared well in advance of the meeting. If face-to-face isn't feasible due to the locations and distances involved, or a travel budget that couldn't send a ferry across the Mersey, then the Trifecta would be a quiet office or meeting room using a shared desktop, web cam and instant messaging. The audio quality of webcams can be poor, so why not also "fire up" a high-fidelity conference call? Or maybe just a plain old phone call. E-mail is okay, but delivery isn't guaranteed, your message could get caught in a spam filter, and it can take an age to get a response. All too often, much of the question/message is lost because communication is 90% non-verbal and only 10% of your message is getting through to the recipient; this is why they may misinterpret your question and send back something you were not expecting.


End of sprint demonstrations are an essential part of the feedback loop. The product owner, or customer proxy, needs to see what the increment in functionality looks like and get hands-on experience. If the customer is happy, that's great; if not, the next sprint can be used to reprioritize the tasks that development needs to work on.

Roles within the agile team
The different roles in the team work together to define the requirement using tests and examples.

■■ Customer team
□□ These teams write the stories.
□□ The teams can be made up of QA, BAs, SMEs, and product managers.

■■ Developer team
□□ These teams deliver the stories.
□□ The teams can be made up of programmers, architects, system administrators, and, of course, the testers, as they are core.

Who solves the problem?
Writing the tests is a collaborative effort in which all roles (product owner, testers, developers, SMEs, BAs) are involved. The detailed functional tests are written to flesh out the requirements. The customers clarify and prioritize the requirements by providing:

■■ Concrete examples of the desired behavior required
■■ Executable tests, which are a great way to communicate
■■ User stories
■■ A few examples combined with high-level tests, which are what programmers need to get coding
■■ Communication between the customer, QA and the developers is essential
■■ The tests must cover the basic happy path, but don't forget to cover the unhappy path
■■ Use any collaboration tool that is available

Don't forget that use cases are an excellent way to describe the "big picture".

Testers work closely with customers, or their proxies, to define stories and acceptance tests.

When
With Waterfall, someone wrote the requirements before the coding began… With Agile, on the other hand, the requirements are defined and illustrated using test cases days before the coding begins.

Feedback
Feedback can take different forms:

■■ You must take the time to solicit feedback from your customers, and also that of other testers. Ask the customers if they think their requirements are being met.

■■ A continuous integration framework must be set up to provide continual feedback on the state of the tests, the code, and therefore the requirements.

■■ Embed QA with development to ensure the tester can write tests which guide the programmer and which can be confirmed working with the new code each day.

■■ Did you build the right thing? Was it built right? These questions relate to the feedback from verification and validation.

When is a story complete? When tests demonstrate that minimal functionality has been met. Delivery of the steel thread/happy path/minimal increment in functionality (thin vertical slice) is essential.



Sources
iSixSigma Magazine, Mukesh Soni [2008]: the IBM Systems Sciences Institute reports that the cost to fix an error found after product release was four to five times as much as one uncovered during design, and up to 100 times more than one identified in the maintenance phase.

References
Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley Signature)

> About the author David Kirwan started working on Agile teams in 2008. He works as a Quality Assurance Manager and has transitioned QA teams from Waterfall to Agile (Scrum). He graduated in 1995 with an Honors degree in Computer Science and Software Engineering from Trinity College Dublin. David, who is ISEB, PRINCE2 and CISSP certified, has worked in the software sector since 1995, applying his expertise to the disciplines of software development.


Requirements Elicitation using Testing Techniques by Nishant Pandey

Importance of Sound Elicitation Practices
The importance of the requirements elicitation discipline has become better understood by business and IT professionals in the recent past. The Business Analysis community has been able to organize, articulate and communicate the need for good elicitation practices, and various practitioners have recommended elicitation techniques to support this important aspect of the software construction process. Sound elicitation methods enable the organization to understand precisely the nature, intent and specific dimensions of the business requirements. Elicitation activities bring forth the true nature of the business requirement so that a clear distinction between the stated needs and the actual requirements of the enterprise can be understood. Once the true requirements are recognized and understood, they can be effectively articulated, analyzed, packaged and communicated as per the needs of the enterprise.

Testing and Requirements Elicitation
It would be fair to say that the testing community has invested considerable effort in communicating the need, importance and return on investment associated with software testing. This investment has enabled greater recognition of the strengths of testing techniques and principles, motivating professionals to find new and innovative ways of using testing techniques to reduce risks and maximize business value. Over the past few years, testing has transformed itself from a 'later in the lifecycle' reactive activity into a more proactive risk management tool. With the advent and popularization of test-driven development and agile methods, testing is now used to drive development and to partner and collaborate in meeting business needs. Agile methods have used testing effectively to reduce time to market.

This article explores the possibilities of using testing as a technique for requirements elicitation. The advantages that could accompany such an approach are elaborated by the author, along with considerations that might impact the success of this particular elicitation method.

Testing for Requirements Elicitation
It is not uncommon to encounter software development projects where there is little or no documentation with regard to the current (as-is) functionality. Many times, contractual problems or intellectual property rights limit the nature of information that is available about such systems and processes. Whatever information is available (if any) is sometimes only known to specific individuals who have been associated with these systems for a long time. The reliability, accuracy and availability of such information are often questionable, and in such situations (where the current state itself is not fully understood) the business analysis community relies on information gathered using techniques like interviews, surveys and questionnaires to understand the current (as-is) functionality. Understanding the current state is an important step towards transforming to the future (to-be) state of the enterprise. This aspect is of increased significance when the project involves system transformation and enhancements to an existing information system.

Testing can be very effectively used to understand the current functionality and study the capabilities and features of existing systems. This understanding enables the elicitation of future needs and requirements of the system(s) undergoing transformation. Results of testing performed on the system can be used to verify the information gathered by other techniques like interviews. One important advantage of such an approach could be the removal of cognitive biases that might be impacting the information being provided by the interviewed stakeholders. The experience of 'doing' and 'seeing' something happen in real time can be used to explore realistic and cost-effective options for the desired future state.


Figure 1: Using testing for requirement elicitation. (Starting from the as-is state, the cycle runs through (1) testing for elicitation, (2) preparation for elicitation, (3) the elicitation activity, and (4) answers and more questions, repeated based on results and leading to an increased understanding of requirements.)

Possible Advantages of Using Testing for Elicitation
The advantages of using testing to support elicitation are not limited to those depicted in the above example. Testing can be performed to support existing elicitation methods, and this approach could help provide the following benefits:

Estimate & plan elicitation effort
It is possible to use the information gathered from initial tests to make inferences about the nature of the elicitation task at hand. Testing can thus help in planning elicitation activities and estimating elicitation effort. This knowledge, coupled with analysis of stakeholders, can help the Business Analyst formulate elicitation questions, choose appropriate elicitation techniques, and package these in the form of an elicitation reference deck that can be referred to by various projects in the organization.

Evaluate assumptions
Assumptions made by stakeholders can be verified by testing them at the beginning of the software construction process. Testing of assumptions may not necessarily be limited to software systems; this can be extended to testing assumptions about business processes and other related testable assumptions with regard to service levels and performance. Evaluating assumptions through testing can ensure that the elicitation activity results in increased understanding, clarity of thought, and a more realistic and accurate representation of the business requirements. Testing can provide concrete evidence related to the current state and has the potential to serve as an agent for cognitive bias mitigation. Recognizing and correcting cognitive biases during the elicitation process can lead to an increased degree of 'built-in quality' in the product.

Improve ability for performing elicitation
Gaining insight into existing functionality and features is an important aspect of preparing for the elicitation activity. Testing can not only help confirm the understanding of the business analysts and correct assumptions; it can also help the business team channel and focus the elicitation activity. This ability can prove to be a differentiator when there are particular areas which are complex, demanding more attention during the elicitation process due to the inherently complex nature of the task.

Decide appropriate techniques for further elicitation
After the elicitation activity has progressed to some extent, testing techniques can be used to verify the accuracy of the elicited information. This can enable the selection of appropriate techniques for further elicitation. The approach of sandwiching testing activities into elicitation phases can help improve technique selection by verifying the elicited information early on during elicitation.

Effectiveness of elicitation techniques
Testing assumptions and the results of various elicitation techniques, over a period of time, can provide an insight into the relative effectiveness of various elicitation techniques in particular organizations, departments and situations.

Discovering unused features
In complex environments that have limited availability of quality documentation on system architecture and behavior, it is not uncommon to discover features and capabilities that have existed in the system. Organizations unaware of these features can initiate projects to build solutions, portions of which might already exist. Testing performed during elicitation can be used effectively to eliminate such occurrences.

Figure 2: Advantages of using testing for requirement elicitation (estimate & plan elicitation effort, evaluate assumptions, improve ability for performing elicitation, decide techniques for further elicitation, effectiveness of elicitation techniques, and discovering unused features).

Approach Considerations
It is important to take various factors into account while deciding to use testing techniques for requirements elicitation. A few of these considerations are elaborated in this section.

Skill & expertise requirements
In order to use testing for requirements elicitation and analysis, it is important to ensure that the team performing testing for elicitation is well versed in business analysis concepts and elicitation techniques. The elicitation activity should be performed with a judicious mix of testing and business analysis skill sets. Knowledge aspects related to the particular industry and solution domain should also be considered.

Stakeholder support and business case
Testing assumptions and intermediate findings of elicitation could add to the initial investment planned for the particular initiative. Since this approach could differ from existing organizational norms, it is important that stakeholder support and management approval be sought in advance. The business case for using testing as part of the elicitation techniques could contain the risk reduction and quality improvement aspects along with a cost-benefit analysis.

Interpreting results
It is important that the results of testing performed for elicitation are interpreted accurately. Inaccurate or partial understanding of results can lead to rework, impact the quality of elicitation and might prove counter-productive.

Judicious effort and time expenditure
The time, effort, energy and approach towards testing for elicitation should be focused on the task at hand, which is ensuring that elicitation leads to the right requirements. This testing effort will employ a different set of techniques, tools and approaches compared to established testing methods, where testing is performed for verification and validation of created software. The resources, time and effort spent on this activity should be decided based on the nature of the elicitation needs.

> About the author Nishant Pandey works with Capgemini and is based in Chicago, USA. He manages Business Analysis and Testing engagements for clients in the Financial Services domain. Nishant is a certified PMP® and holds the Advanced Level Certificate in Test Management from ISTQB®. Nishant’s articles on software engineering and management have been published in Agile Record, Testing Experience and Quality Matters magazines. Recently, Nishant has authored a chapter on Benchmarking in the book ‘The IFPUG Guide to IT and Software Measurement’. In his spare time, Nishant likes to make short films and write songs.

Established Elicitation Techniques
■■ Analysis of Documents
■■ Stakeholder Interviews
■■ Analysis of System Interfaces
■■ Prototyping
■■ Observation
■■ Brainstorming
■■ Opinion Polls
■■ Surveys and Questionnaires
■■ Testing?

Figure 3: Elicitation techniques

Conclusion
It is not (yet) a very common practice to use testing techniques for requirements elicitation. With the popularization of agile methods, as industry practitioners start looking at testing in a new light, it is expected that the use of testing techniques for requirements elicitation will gain popularity. The Business Analysis and Testing communities could gain substantially as testing matures as an elicitation technique and presents itself in a new and refined avatar.


Good Practices in Agile Requirements that Build Great Products by Rathinakumar Balasubramanian

It goes without saying that great products evolve from effective requirements. In traditional approaches, requirements gathering is a one-time, upfront exercise and the requirements are then frozen. This results in products that are often rejected or of little use to the end-users. The agile approach to requirements, on the other hand, ensures that the product the customers get is very close to what they wanted, by accommodating changes in requirements. In this article, I will show some good agile requirements practices that help build great products.

Practice #1: Allow requirements to evolve throughout the product development
"The hardest single part of building a software system is deciding precisely what to build," said Fred Brooks in his 1987 essay "No Silver Bullet." [1] No Product Owner can predict or prophesy all the requirements at the beginning. It is important to understand that the requirements will evolve throughout the product development. In his book "Serious Play", Michael Schrage [2] suggests that we can only really determine what we want by interacting with a prototype. This is one of the reasons why a Product Owner keeps improving (or changing) requirements as the development team starts building and showcasing the product increments.

Lesson #1: Keep a product backlog that evolves throughout the product development. Ensure the development team understands why it does so.

Practice #2: Let the requirements come from a wide range of stakeholders
Many times the development teams build great products with a lot of hard work, only to realize that they are shelved after a few months of use by the end-users. The reasons can be manifold, from productivity loss to user-unfriendliness. The message is that we need to accept the fact that there are many stakeholders, not one, who are the sources of the requirements. The stakeholders include end-users, customers, the research community, sponsor(s), top management, security analysts, business analysts, and the development team. In some cases, government, legal and compliance teams, and even the competition could lead to unexpected but important requirements.

Lesson #2: Let the requirements flow from every stakeholder. Ensure your Product Owner keeps all the relevant stakeholders in the loop while building requirements.

Practice #3: Understand that requirements are not static documentation
Here is the most stunning truth: never assume that having a signed-off requirements document means you have the requirements. Static documentation is not the same as product requirements. On the other hand, it does serve as a starting point for understanding what the Product Owner wants to build, but that is not exactly what the Product Owner will end up asking for. Requirements are interactions between Product Owners and the development team. These conversations serve as a successful tool that transfers the mental map of the requirements from the Product Owner to the development team. In Agile, requirements are not just conversations, but also executable test packages. Customer tests (as practiced by an XP team) are the other side of the same requirements coin. Every acceptance test package (including scenarios, scripts, test cases and models) is part of the requirements.


Lesson #3: Agile requirements are essentially conversations between the people who will use the product and the people who build the product.


Practice #4: Acknowledge that requirements do not just come as "features"
Many traditional projects have failed in spite of building all the features (functional requirements) as required. This is primarily because the other requirements, commonly known as non-functional requirements or technical or operational requirements, are not supported by the product. I have been part of a couple of product deliveries where we shipped the products with hundreds of bugs, commonly referred to as 'known issues'.


In Agile, requirements do not just come in the form of functional requirements. There are two other types of requirements, namely technical requirements (including security, availability, usability, concurrency, response time, etc.) and defects. Technical requirements evolve with more conversations among the various stakeholders who will use the product. Here the development team needs to take the lead and drive the discussions to understand technical requirements. Defects (especially those inherited from legacy systems) are also to be seen as potential requirements that can improve the product's quality and behavior.

Lesson #4: Agile requirements come in three different types: business requirements (or features), technical requirements (or non-functional requirements), and defects.

Practice #5: Build a robust stack of requirements that works
An agile project will have a product backlog that contains the requirements. The Product Owner keeps them in prioritized order. Building a robust product backlog is like having a prioritized stack of requirements. Grooming the product backlog and keeping it up to date should be easy and not too time-consuming for a Product Owner. Inserting a new requirement, removing any existing requirement, or re-ordering the requirements should be possible without much effort. An illustrative version of a product backlog in the form of a prioritized stack is shown in the first figure. [First figure: a prioritized stack with the highest priority, most detailed items at the top and the lowest priority, least detailed items at the bottom; insertions, removals and reordering happen throughout; legend: business, technical and defect stories.]

Lesson #5: Build a robust product backlog mechanism. Ensure it is easy to maintain and keep it up to date.

Practice #6: Transition to a pool-based product backlog from the stack model
Stack-type product backlogs are easy to use and good for agile teams that are inexperienced. More advanced and experienced agile teams will soon realize that there are some challenges with the stack model. Firstly, there is no way to ensure your work-in-progress (WIP) items are within limits to ensure high throughput. Secondly, there is a possibility of rework due to re-prioritization in the product backlog. A pool-based model ensures that you have a pool of requirements that can have WIP controls; this reduces rework by using Just-in-Time (JIT) prioritization through a pull mechanism (as practiced by Kanban teams). [3]

The second figure shows a pool-based product backlog mechanism. All the product backlog items are kept in a pool of requirements (as opposed to a stack of requirements). The highest priority requirements are moved to a zone from which the team can pull items to work on, based on their available capacity. Depending on the emptiness caused by pulls, the Product Owner can do a JIT prioritization from the pool. Work-in-progress limits are applied to the highest priority zone so that WIP stays under the limits, ensuring that there are no bottlenecks.

Lesson #6: Leverage the power of WIP limits and the pull mechanism by switching to pool-based product backlog maintenance.

References
1. http://inst.eecs.berkeley.edu/~maratb/readings/NoSilverBullet.html
2. "Serious Play: How the World's Best Companies Simulate to Innovate" by Michael Schrage, Harvard Business School Press (1999), ISBN 0875848141
3. http://www.agilemodeling.com/essays/prioritizedRequirements.htm#LeanStrategy

> About the author Rathinakumar is an expert in agile project management methodologies. With more than 15 years of rich experience in traditional and agile project management practices at organizations like Infosys Ltd., Accelrys Software Solutions and Valtech India, he has coached and trained hundreds of project practitioners in agile methodologies. His expertise includes transforming project teams to agile delivery, building high-performance agile teams, establishing agile processes, managing agile product development, saving troubled projects, and setting up PMO. He has authored several white papers and presented insights on agile project management to participants at various international project management forums. A certified PMP, PMI-ACP (PMI-Agile Certified Practitioner) and CSM (Certified Scrum Master), Rathinakumar currently heads the Agile Practice of SABCONS, India’s First REP.



Agile Requirements: Lessons from Five Unusual Sources by Raja Bavani

Requirements engineering is composed of four key activities – requirements elicitation, requirements analysis and negotiation, requirements specification or documentation, and requirements validation. Requirements elicitation is performed to discover system requirements through consultation with stakeholders. Some of the sources of this discovery could be system documents, domain knowledge and market studies. Requirements analysis and negotiation is there to analyze requirements in detail and negotiate with stakeholders which of the requirements are to be considered. Requirements specification or documentation is there to document the agreed requirements at a certain level of detail. Requirements validation is performed to review or validate requirements for clarity, consistency and completeness. These four key activities are critical to the success of all software projects, irrespective of the methodology followed. In projects that follow agile methodologies, these four activities of requirements engineering happen in almost all iterations on an ongoing basis. Hence, all team members have a role to play in refining and validating requirements, without which it is extremely challenging to minimize waste and rework in such projects. Here are five unusual sources and the lessons from them that can help agile teams improve the way they perform requirements engineering.

Restaurants & Waiters: In early 2000, a very simple but profound incident happened when I was in an iteration planning meeting with two geographically distributed teams. We were developing a product for a small product engineering vendor, and the product manager was articulating product requirements for the newly started iteration to all team members. I was with my team in a conference room in India, listening to him over the phone. Over the first 30 minutes, I sensed a pattern in his approach. He paused after every requirement or set of related requirements and asked pointed questions to validate the understanding of team members. It helped him not only validate our understanding, but also provide additional examples to strengthen our understanding.


From the next iteration planning meeting, our team members started articulating their understanding without waiting for a question from him. That was a simple but profound incident that triggered thoughts about restaurants and waiters. I started observing how waiters in restaurants understand and manage requirements. In this process of learning, I identified the following takeaways:

1. Welcome customers and stakeholders with a smile whenever you interact – even when you are on the phone
2. Listen well to understand the requirements
3. Rephrase or summarize your understanding to get confirmation before you end a conference call or a meeting
4. Believe in your expertise and stay committed
5. Be flexible enough to reprioritize and accommodate changes as long as it is not too late
6. When it is too late, be polite and communicate the impact
7. Be open and ask for intermediate feedback during your interactions
8. Feel free to talk about 'what else'
9. Value time and money
10. Apologize when things go wrong

Airports and Flights: Airports are among the most dynamic places that we experience. In spite of all the dynamism of day-to-day life at an airport, one can see how flights are prepared before take-off. Before every take-off, the crew reports on time, passengers are seated, checked-in baggage is loaded, the quantity of food and beverages including special requests is checked, and several other preparatory activities and verification steps are done in order to ensure a safe and comfortable journey. If for any reason a flight has to make an unplanned landing, e.g. to disembark a traveler or to bring someone on board, all stakeholders understand the consequences and there are role holders who are authorized to approve such needs.


Besides, every commercial airport has a control tower with a team of experts who coordinate the landing and take-off operations. This involves a lot of coordination in terms of negotiation and facilitation. From this unusual source, I have learned some practical lessons, e.g.:

1. Preparation is critical to success. This is applicable to the entire agile team, including product managers or product owners, agile project managers or scrum masters, and agile teams.
2. If you arrive late, you miss the flight. In the same way, you can't accommodate the late arrival of requirements during iterations.
3. Any delays or changes can only lead to further delays.
4. Change management can never be effective when there is no support from all stakeholders. You cannot blame an individual or a small group of individuals when things go wrong because of ineffective change management.
5. Dependency management in large projects is necessary to avoid wait time and delays. In the case of large projects with many related projects, we need a function or a team that plays a role similar to that of the control tower in airports. In most cases, we call it the governance team.

Families and Children: Successful families focus on nurturing their children and acting responsibly. By nature, children are curious, genuine and forgiving. They explore and ask questions with no inhibition. Some of the simple things I learned from this unusual source have helped me in doing requirements engineering effectively in agile projects. For example:

1. Children are curious. They ask questions in different ways to understand the things around them. Likewise, agile team members have to be curious. Collaboration without curiosity can become passive over time.
2. Children explore without any inhibition. Agile teams need to develop the habit of self-led exploration.

Qualities such as action orientation, staying curious, and exploration are essential when agile teams collaborate to understand and refine requirements. Without these qualities they give up by making assumptions or waiting until someone delivers clear requirements. In the real world, clarity of requirements increases through team collaboration, and validation of assumptions is mandatory to improve the clarity of requirements.



Schools and Teachers: I have observed schools and teachers for many years. Schools make annual plans and daily timetables. However, teachers make minor adjustments as they step forward from week to week. Great teachers are excellent communicators. They also focus on seeing the big picture and believe in continuous improvement. I learned multiple ways and styles of communication from teachers. I also learned following up and following through from them. From schools and teachers, here are some key takeaways for agile teams in order to ensure effective requirements engineering:

1. Master your communication skills. Explore different ways and styles of communication to understand requirements.
2. The most important thing for teachers is the success of their students. In order to make this happen, they spend adequate time with students and parents, understand their needs, and pay attention to coaching their students. Just as great teachers believe in their students and care about their success, agile teams need to focus on the success of the project. With this focus, agile teams will do the right things to ensure that requirements engineering activities are done right, on time.

Ant Colonies: Ant colonies are home to some of the most amazing creatures on earth. Ants are proactive: they gather food during summer for the winter days. They are focused and efficient. They achieve this by performing one task at a time and reducing task switching. Each individual ant appears to be a specialist in some task; however, ants are capable of performing every task. Qualities such as these inspired me long ago. What I learned from ants includes:

> About the author Raja Bavani is Chief Architect of MindTree’s Product Engineering Services (PES) and IT Services (ITS) groups and plays the role of Agile Evangelist. He has more than 20 years of experience in the IT industry and has presented at international conferences on topics related to code quality, distributed Agile, customer value management and software estimation. He is a member of IEEE and IEEE Computer Society. He regularly interfaces with educational institutions to offer guest lectures and writes for technical conferences. He writes for magazines such as Agile Record, Cutter IT Journal and SD Times. His distributed agile blog posts, articles and white papers are available at http://www.mindtree.com/blogs/category/softwareproduct-engineering and http://mindtree.com/category/tags/agile. He can be reached at [email protected].

1. When agile teams move from iteration to iteration, they need to anticipate winter days and prepare themselves in such a way that they can minimize rework or mitigate the impact of conflicts in requirements.
2. In addition to being specialists in one area, team members have to be capable of contributing to one or more additional areas. Also, every team member has to be actively involved in requirements engineering by asking questions, thinking about complex test scenarios, actively participating in refining requirements, and so on.

Conclusion: In the 8th issue of Agile Record, I wrote an article titled "Distributed Agile: Steps to Improve Quality before Design" in order to emphasize the fact that quality is a journey that starts in the early stages of projects. When we open our eyes and ears to the world around us and learn from unusual sources, we get an opportunity to apply such lessons and understand how simple things make big differences.



Outlining Agile by David Gelperin

“I need some advice,” Bob said to Sue as they sat down for lunch in the company cafeteria. “I’m told that you know a lot about Agile requirements. I’m starting an Agile project and looking for better ways to do requirements.”

“What are you doing now?” asked Sue.

“The usual stuff. We record user stories on index cards and early test designs in our test tool.”

“What do you do with definitions?”

“We put everything like that into our user stories, but things get messy as the card pile grows. For example, when we’re in our fourth iteration and we need to define a term, we check the cards from earlier iterations to make sure we haven’t already defined it or something close to it. The same thing goes for other types of supplementary information.”

“Do you have a list of information that you record in addition to basic user stories and test designs?”

“Yes, I’ve made notes about things we’ve found useful.”

“Great. You can compare them to the list I use. [Appendix A] I think of this as a checklist of “homeless” types of requirements information because Agile doesn’t provide guidance on where to put it when you need it. I haven’t used all of these types, but I have used different types on different projects.”

“Your list is much longer than mine. Are you doing “Big Requirements Up Front”?”

“No. My checklist identifies information that may be useful. It is not a list of specific requirements, nor do you need to capture and record any of the information types on the list unless they have value in some iteration. I’m doing “Comprehensive Information Types Up Front.” This checklist helps me avoid overlooking types of requirements information that may turn out to be critical.”

“So far, you’ve made my problem worse. I’m looking for ways to improve the management of requirements information that I collect now, and you suggest there may be additional types of information I should be collecting.”

“Sorry. Maybe I can redeem myself. Have you thought about recording your information using automation?”

“Yes, but neither word processors nor spreadsheets are very appealing.”

“Have you ever used an outliner?”

“Do you mean the outlining feature in Word?”

“No, I mean a tool built specifically to make outlining easy. There are about three dozen such tools.”

“Never have.”


“You should try one. In addition to helping create and edit a standard form outline that expands and contracts (like Favorites in Explorer), outliners have a free-form notes page associated with each outline entry. You can put all the information about a user story on a notes page. In addition, by using a few conventions (that you define) you can put type tags (e.g., [definition]), states (e.g. priority = high), and explicit links (e.g. &accessibility) on the notes pages [Example in Appendix B]. There are both PC-based and web-based outliners. At least two of the PC-based outliners (WhizFolders and TreePad) have full Boolean search capabilities.


If you record your information using an outliner with full Boolean search and define the conventions I mentioned before, your term searches will be much easier. It will also be easy to search for information of a particular type in a particular state (e.g. high priority user stories in a specific iteration) and all links to a specific information item (e.g. all test designs linked to a specific use case). In addition to requirements management, outliners are useful for many other things [Appendix C]. They are also easy to learn and use. I think you’ll find them much more helpful than index cards, word processors, or spreadsheets.”

“Great, I’ll give one a try.”

“As far as where to put the homeless information, you have at least three choices. You can embed it in user stories as you do now. You can record each type of information in a group of its own, e.g. definitions together, and then put pointers in the user stories linking to a specific definition. You can do both by embedding “unique” items in a story and reusable items in a group with links. Using groups and pointers makes user stories a bit harder to understand than embedding, but groups make it easier to detect missing information and inconsistencies between items. If you need help setting up your outliner, let me know.”

“Thanks, I will. You’ve given me a lot to think about. I really appreciate it. Thanks again.”

Appendix A – Homeless Types of Requirements Information

Definitions
  Rich definitions: Action contract, Derived condition, Derived task, Derived value, Quality profile
  Plain definitions: Acronyms and abbreviations

Facts
  Constant attribute values, relationships, and conditions; Conditional values and relationships; Condition dependencies

Qualities
  Accessibility, Appeal, Availability, Reliability, Stability and Robustness, Backup and Recovery, Testability, Compliance, Ease (of) understanding / installation / tailoring / use / operation / change, Flexibility, Interoperability, Portability, Scalability, Sustainability, Performance (Capacity, Efficiency, Response Time, Throughput), Accuracy, Precision, Completeness, Consistency, Privacy, Safety, Security, Transparency, Verifiability, Auditability

Environmental Requirements
  Platform and Portability, Interoperability, Internationalization, Localization, Compatibility

Operational Requirements
  Supplier, Installation/Deinstallation, Documentation and Help, Training and Support

External Interface
  Modification, User, Hardware, Software, Communication

Developmental Constraints
  Design constraints, Implementation constraints, Verification constraints, Project constraints

External Data Acquisition
  Data quality metrics and required values, Data extraction criteria, Data validation tactics, Data formatting

Essential Use Cases


References and Links
Adzic, Gojko. Bridging the Communication Gap: Specification by Example and Agile Acceptance Testing. Neuri Limited, 2009.
Cohn, Mike. User Stories Applied: For Agile Software Development. Addison-Wesley Professional, 2004.
Gelperin, David. "Bridging the Understanding Gap". Downloadable from http://www.literm.com/outlining-papers
Gelperin, David. "What's It Mean?". Downloadable from http://www.literm.com/outlining-papers

Appendix B – User Story Template Example

[User Story]
As a ___, I can ___, so that ___

Acceptance Criteria & Tests

Limits & Constraints (on both functionality and usage)

Assumptions/Rationale

Implications/Expectations

Other Attributes
a. Value (to org) = [essential, necessary, desirable]
b. Cost (to impl) = [high, medium, low]
c. Priority (to impl) = [high, medium, low]
d. Iteration:

STATES
a. Story state is
b. Developer Understanding state is

ASSOCIATIONS
a. Definitions: &
b. Facts: &F-
c. User Roles: &UR-
d. Use Cases: &UC-
e. Acceptance Tests: &AT-

Notes

> About the author

David Gelperin is Chief Technology Officer of ClearSpecs Enterprises. He has more than 40 years experience in software engineering with an emphasis on requirements risk management as well as software quality, verification, and test. David cofounded Software Quality Engineering and catalyzed the launch of Better Software magazine. More information is available at www.clearspecs.com (under the About tab).

Appendix C – Other Uses for an Outliner
take notes, organize thoughts or ideas, analyze situations, manage lists, plan events, create agendas, create outlines, create talks or presentations, write reports or papers, write documentation, write training materials, write books or plays, record contacts, record tasks, record changes, track issues or bugs, design processes, design websites, perform root cause analysis, perform hazard analysis


Performance Requirements in Agile Projects by Alexander Podelko

What is Special?
It looks like agile methodologies are somewhat struggling with performance requirements (and non-functional requirements in general). There are probably several reasons for that. One is that even traditional software development methodologies and processes never came with a good approach to handling performance requirements. They are, of course, considered in both the literature and practical projects – but they are usually handled in a rather ad hoc manner. Actually, the process of gathering and elaborating performance requirements is rather agile in itself, and attempts to make it rigorous and formal look unnatural and have never fully succeeded – so it should be easier and more natural to do it as part of agile methods. Still, the challenge of handling multidimensional and difficult-to-formalize performance requirements remains intact, and the difference is rather in minor adjustments1 to agile processes than in the essence of performance requirements.

Another reason is that practical agile development is struggling with performance in general. Theoretically it should be a piece of cake: every iteration you have a working system and know exactly where you stand with the system's performance. You shouldn't wait until the end of the waterfall process to figure out where you are – on every iteration you can track your performance against requirements and see the progress (making adjustments for what is already implemented and what is not yet). Clearly this is supposed to make the whole performance engineering process much more straightforward. Unfortunately, it doesn't always work this way in practice, so notions such as "hardening iterations" and "technical debt" get introduced. Although it is probably the same old problem: functionality gets priority over performance (which is somewhat explainable: you first need some functionality before you can talk about its performance). So performance-related activities slip toward the end of the project, and the chance is missed to implement a proper performance engineering process built around performance requirements.


Another issue here is that agile methods are oriented toward breaking projects into small tasks, which is quite difficult to do with performance (and many other non-functional requirements)6 – performance-related activities usually span the whole project. Let's now consider performance requirements in detail, keeping these issues in mind, to see if the statements above have some ground.

Performance metrics
Before diving into the details of the performance requirements process, let's discuss the most important performance metrics (sometimes referred to as Key Performance Indicators, KPIs). It is a challenge to get all stakeholders to agree on specific metrics and to ensure that they can be measured in a compatible way at every stage of the lifecycle (which may require specific monitoring tools and application instrumentation). Let's take a high-level view of a system (Fig. 1). On one side we have users who use the system to satisfy their needs. On the other side we have the system, a combination of hardware and software, created (or to be created) to satisfy users' needs.

Fig.1. A high-level view of a system: users on one side, the system (software running on hardware) on the other.

Business performance requirements
Users are not interested in what is inside the system and how it functions as long as their requests get processed in a timely manner (leaving aside personal curiosity and subjective opinions).


So business requirements should state how many requests of each kind must go through the system (throughput) and how quickly they need to be processed (response times). Both parts are vital: good throughput with long response times is usually as unacceptable as good response times with low throughput.

Throughput is the rate at which incoming requests are completed. It defines the load on the system and is measured in operations per time period; it may be the number of transactions per second or the number of processed orders per hour. In most cases we are interested in a steady mode, when the number of incoming requests equals the number of processed requests. Defining throughput may be pretty straightforward for a system doing the same type of business operation all the time, like processing orders or printing reports when they are homogeneous. Clustering requests into a few groups, such as small, medium, and large reports, may be needed if requests differ significantly. It may be more difficult for systems with complex workloads, because the ratio of different types of requests can change with time and season. Homogeneous throughput with randomly arriving requests (sometimes assumed in modeling and requirements analysis) is a simplification in most cases.

Throughput usually varies with time. For example, throughput can be defined for a typical hour, peak hour, and non-peak hour for each particular kind of load. In environments with a fixed hardware configuration the system should be able to handle the peak load, but in virtualized or cloud environments it may be helpful to further detail what the load is hour by hour to ensure better hardware utilization.

Quite often, however, the load on the system is characterized by the number of users. Partially this comes from the business (in many cases the number of users is easier to find out), and partially it comes from performance testing: unfortunately, performance requirements quite often get defined during performance testing, and the number of users is the main lever for managing load in load generation tools. However, the number of users doesn't, by itself, define throughput. Without defining what each user is doing and how intensely (i.e. the throughput for one user), the number of users doesn't make much sense as a measure of load. For example, if 500 users are each running one short query each minute, we have a throughput of 30,000 queries per hour. If the same 500 users are running the same queries, but only one query per hour, the throughput is 500 queries per hour. So there may be the same 500 users, but a 60X difference between the loads (and at least the same difference in hardware requirements for the application – probably more, considering that not all systems achieve linear scalability).

In addition to different kinds of requests, most systems use sessions: some system resources are associated with the user (the source of requests). So the number of parallel users (sessions) would be an important requirement further qualifying throughput.

Response times (in the case of interactive work) or processing times (in the case of batch jobs or scheduled activities) define how fast requests should be processed. Acceptable response times must be defined in each particular case. A time of 30 minutes could be excellent for a big batch job, but absolutely unacceptable for accessing a web page in a customer portal. Response times depend on workload, so it is necessary to define the conditions under which specific response times should be achieved; for example, a single user, average load, or peak load.

Response time is the time in the system (the sum of queuing and processing time). There is usually some queuing time, because a server is a complex object with sophisticated collaboration of multiple components, including processor, memory, disk system, and other connecting parts. That means that response time is larger than service time (as used in modeling) in most cases.

Significant research has been done to define what the response time should be for interactive systems, mainly from two points of view: what response time is necessary to achieve optimal user performance (for tasks like entering orders), and what response time is necessary to avoid web site abandonment (for the Internet). Most researchers agree that for most interactive applications there is no point in making the response time faster than one to two seconds, and that it is helpful to provide an indicator (like a progress bar) if it takes more than eight to ten seconds.

Response times for individual transactions vary, so we need to use aggregate values when specifying performance requirements, such as averages or percentiles (for example, 90 percent of response times are less than X). The Apdex standard18 uses a single number to measure user satisfaction.

It is very difficult to consider performance (and, therefore, performance requirements) without full context. It depends, for example, on the volume of data involved, the hardware resources provided, and the functionality included in the system. So if any of that information is known, it should be specified in the requirements. Not everything may be specified at the same point: while the volume of data is usually determined by the business and should be documented at the beginning, the hardware configuration is usually determined during the design stage.

Technological performance requirements

The performance metrics of the system (the right side of Fig. 1) are not important from the business (or user) point of view, but are very important for IT (the people who create and operate the system). These internal (technological) requirements are derived from business and usability requirements during design and development and are very important for the later stages of the system lifecycle. Traditionally such metrics were mainly used for monitoring and capacity management because they are easier to measure; only recently have tools measuring end-user performance gained some traction.

The most widespread metric, especially in capacity management and production monitoring, is resource utilization. The main groups of resources are CPU, I/O, memory, and network. However, the available hardware resources are usually a variable in the beginning. It is one of the goals of the design process to specify the hardware needed for the system from the business requirements and other inputs, such as company policies, available expertise, and required interfaces.

When resource requirements are measured as resource utilization, they are related to a particular hardware configuration. They are meaningful metrics when the hardware configuration is known, but they don't make sense as requirements until the hardware configuration has been decided upon; how can we talk, for example, about processor utilization if we don't yet know how many processors we will have? Such requirements are also not useful for software that gets deployed to different hardware configurations, and especially for Commercial Off-the-Shelf (COTS) software. The only way we can speak about resource utilization in early phases of the system lifecycle is as a generic policy; for example, a corporate policy may be that CPU utilization should be below 70 percent. When required resources are specified in absolute values, like the number of instructions to execute or the number of I/O operations per transaction (as sometimes used, for example, in modeling), they may be considered performance metrics of the software itself, without binding them to a particular hardware configuration. In the mainframe world, MIPS was often used as such a metric for CPU consumption, but there is no such widely used metric in the distributed systems world.

The importance of resource-related requirements is increasing again with the trends of virtualization, cloud computing, and service-oriented architectures. When we depart from the "server(s) per application" model, it becomes difficult to specify requirements as resource utilization, as each application adds only incrementally to resource utilization. There are attempts to introduce such metrics; for example, the 'CPU usage in MHz' or 'usagemhz' metric used in the VMware world, or the 'Megacycles' metric sometimes used by Microsoft14. Another related metric sometimes (but rarely) used is efficiency, defined as throughput divided by resources (although the term is often used differently).

In the ideal case (for example, when the system is CPU bound and we can scale the system linearly by just adding processors), we can easily find the needed hardware configuration if we have an absolute metric of the resources required. For example, if software needs X units of hardware power per request and a processor has Y units of hardware power, we can calculate the number of such processors N needed for processing Z requests as N = Z*X/Y. The reality, of course, is more sophisticated. First of all, we have different kinds of hardware resources: processors, memory, I/O, and network. Usually we concentrate on the most critical one, keeping the others in mind as restrictions.
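The N = Z*X/Y sizing arithmetic above can be captured in a few lines. This is a minimal Python sketch under the idealized linear-scaling assumption just stated; the function name and all numbers are illustrative, not from the article.

import math

def processors_needed(requests_per_sec: float,
                      units_per_request: float,
                      units_per_processor: float) -> int:
    """Idealized sizing: N = Z * X / Y, rounded up to whole processors."""
    return math.ceil(requests_per_sec * units_per_request / units_per_processor)

# Purely illustrative numbers: 200 req/s, 0.5 units of work per request,
# and a processor delivering 20 units of capacity.
print(processors_needed(200, 0.5, 20))  # 5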

Scalability is a system's ability to meet the performance requirements as demand increases (usually by adding hardware). Scalability requirements may include demand projections such as increases in the number of users, transaction volumes, or data sizes, or the addition of new workloads. How response times increase with increasing load or data is important too (load or data sensitivity).

From a performance requirements perspective, scalability means that you should specify performance requirements not only for one configuration point, but as a function of load or data. For example, the requirement may be to support a throughput increase from five to ten transactions per second over the next two years with a response time degradation of not more than 10 percent. Scalability is also a technological (internal IT) requirement, or perhaps even a "best practice" of systems design. From the business point of view, it is not important how the system is maintained to support growing demand. If we have growth projections, we probably need to keep the future load in mind during system design and have a plan for adding hardware as needed.

Requirements process

The IEEE Software Engineering Body of Knowledge11 defines four stages of the requirements process:

■■ Elicitation: Identifying sources and collecting requirements.
■■ Analysis: Classifying, elaborating, and negotiating requirements.
■■ Specification: Producing a document. While documenting requirements is important, the way to do this depends on the software development methodology used, corporate standards, and other factors.
■■ Validation: Making sure that requirements are correct.

Seeing the words 'elaborating' and 'negotiating' in the stage descriptions, we may assume that the process should fit agile methods well. Elicitation matches initial requirements gathering (such as the creation of user stories), and Analysis – Specification – Validation fit well into the iterative process, where in each iteration we elaborate these requirements further in close cooperation with all stakeholders. Let's consider each stage and its connection with other software lifecycle processes.

Elicitation

We may classify performance requirements into business, usability, and technological requirements.

Business requirements come directly from the business and may be captured very early in the project lifecycle, before design starts.


For example, a customer representative should enter 20 requests per hour, and the system should support up to 1,000 customer representatives. Translated into more technical terms, the requests should be processed in five minutes on average, throughput would be up to 20,000 requests per hour, and there could be up to 1,000 parallel user sessions.

The main trap here is to immediately link business requirements to a specific design, technology, or usability requirements, thus limiting the number of available design choices. If we consider a web system, for example, it is probably possible to squeeze all the information into a single page or to have a sequence of two dozen screens. All information can be saved at once at the end, or each of these two dozen pages can be saved separately. We have the same business requirements, but the response times per page and the number of pages per hour would be different.

While the final requirements should be quantitative and measurable, that is not an absolute requirement for initial requirements. Scott Barber, for example, advocates gathering qualitative requirements first3. While business people know what the system should do and may provide some numeric information, they are usually not trained in requirements elicitation and system design. If asked to provide quantitative and measurable requirements, they may eventually provide them based on whatever assumptions they have about the system's design and human-computer interaction, but quite often the result is wrong assumptions documented as business requirements. We should document real business needs in the form in which they are available (perhaps as user stories from the business point of view), and only then elaborate them into quantitative and measurable requirements (during the project's iterations).

One often missed issue, as Scott Barber notes, is goals versus requirements3. Most response time "requirements" (and sometimes other kinds of performance requirements) are goals, not requirements: they are something we want to achieve, but missing them won't necessarily prevent deploying the system. In many cases, especially for response times, there is a big difference between goals and requirements (the point at which stakeholders agree that the system can't go into production with such performance). For many corporate web applications, response time goals are two to five seconds, and requirements may be somewhere between eight seconds and a minute.

Determining what the specific performance requirements are is another large topic that is difficult to formalize. Consider the approach suggested by Peter Sevcik for finding T, the threshold between satisfied and tolerating users. T is the main parameter of the Apdex (Application Performance Index) methodology, providing a single metric of user satisfaction with the performance of enterprise applications. Peter Sevcik defined ten different methods18:

■■ Default value (the Apdex methodology suggests 4 seconds)
■■ Empirical data
■■ User behavior model (number of elements viewed / task repetitiveness)
■■ Outside references
■■ Observing the user
■■ Controlled performance experiment
■■ Best time multiple
■■ Find the frustration threshold F first, and calculate T from F (the Apdex methodology assumes that F = 4T)
■■ Interview stakeholders
■■ Mathematical inflection point

The idea is to use several of these methods for the same system. If all of them come to approximately the same number, that number gives us T. While this approach was developed for production monitoring, there is definitely a strong correlation between T and the response time goal (having all users satisfied sounds like a pretty good goal), and between F and the response time requirement. So the approach can probably be used for getting response time requirements with minimal modifications. While some specific assumptions, like four seconds as the default or the F = 4T relationship, may be up for argument, the approach itself conveys the important message that there are many ways to determine a specific performance requirement, and that for validation purposes it is better to get it from several sources. Depending on your system, you can determine which methods from the above list are applicable (or what other methods may make sense in your particular case), get the metrics, and determine your requirements.

Usability requirements, mainly related to response times, are based on the basic principles of human-computer interaction. Many researchers agree that users lose focus if response times are more than eight to ten seconds and that making the response time faster than one to two seconds doesn't help productivity much. These usability considerations may influence design choices (such as using several web pages instead of one). In some cases, usability requirements are linked closely to business requirements; for example, making sure that your system's response times are not worse than the response times of similar or competitors' systems.

As long ago as 1968, Robert Miller's paper 'Response Time in Man-Computer Conversational Transactions' described three threshold levels of human attention15. Jakob Nielsen believes that Miller's guidelines are fundamental for human-computer interaction, so they are still valid and not likely to change with whatever technology comes next16. These three thresholds are:

■■ Users view response time as instantaneous (0.1-0.2 second)
■■ Users feel they are interacting freely with the information (1-5 seconds)
■■ Users are focused on the dialog (5-10 seconds)

Users view response time as instantaneous (0.1-0.2 second): Users feel that they directly manipulate objects in the user interface; for example, the time from the moment the user selects a column in a table until that column highlights, or the time between typing a symbol and its appearance on the screen. Robert Miller reported that threshold to be 0.1 seconds15. According to Peter Bickford, 0.2 second forms the mental boundary between events that seem to happen together and those that appear as echoes of each other5. Although it is quite an important threshold, it is often beyond the reach of application developers. That kind of interaction is provided by the operating system, browser, or interface libraries, and usually happens on the client side, without interaction with servers (except for dumb terminals, which are rather an exception for business systems today).

Users feel they are interacting freely with the information (1-5 seconds): They notice the delay, but feel the computer is "working" on the command. The user's flow of thought stays uninterrupted. Robert Miller reported this threshold as one to two seconds15.

Peter Sevcik identified two key factors impacting this threshold17: the number of elements viewed and the repetitiveness of the task. The number of elements viewed is, for example, the number of items, fields, or paragraphs the user looks at. The amount of time the user is willing to wait appears to be a function of the perceived complexity of the request. The complexity of the user interface and the number of elements on the screen both impact the threshold. Back in the 1960s through the 1980s, the terminal interface was rather simple and a typical task was data entry, often one element at a time. So earlier researchers reported that one to two seconds was the threshold to keep maximal productivity. Modern complex user interfaces with many elements may have higher response times without adversely impacting user productivity. Users also interact with applications at a certain pace depending on how repetitive each task is. Some tasks are highly repetitive; others require the user to think and make choices before proceeding to the next screen. The more repetitive the task, the better the expected response time.

That is the threshold that gives us response time usability goals for most user-interactive applications. Response times above this threshold degrade productivity. Exact numbers depend on many difficult-to-formalize factors, such as the number and types of elements viewed or the repetitiveness of the task, but a goal of two to five seconds is reasonable for most typical business applications.

There are researchers who suggest that response time expectations increase with time. Forrester research8 suggests a two-second response time; in 2006 similar research suggested four seconds (both research efforts were sponsored by Akamai, a provider of web accelerating solutions). While the trend probably exists, the approach of this research has often been questioned, because the researchers simply asked users, and it is known that user perception of time may be misleading. Also, as mentioned earlier, response time expectations depend on the number of elements viewed, the repetitiveness of the task, user assumptions about what the system is doing, and whether the UI shows status. Stating a standard without specifying what kind of page we are talking about may be an overgeneralization.

Users are focused on the dialog (5-10 seconds): They keep their attention on the task. Robert Miller reported that threshold as 10 seconds15. Users will probably need to reorient themselves when they return to the task after a delay above this threshold, so productivity suffers.

Peter Bickford investigated user reactions when, after 27 almost instantaneous responses, there was a two-minute wait loop for the 28th repetition of the same operation. It took only 8.5 seconds for half the subjects to either walk out or hit the reboot button5. Switching to a watch cursor during the wait delayed the subjects' departure by about 20 seconds. An animated watch cursor was good for more than a minute, and a progress bar kept users waiting until the end. Bickford's results were widely used for setting response time requirements for web applications.

That is the threshold that gives us response time usability requirements for most user-interactive applications. Response times above this threshold cause users to lose focus and lead to frustration. Exact numbers vary significantly depending on the interface used, but it looks like response times should not be more than eight to ten seconds in most cases. Still, the threshold shouldn't be applied blindly; in many cases, significantly higher response times may be acceptable when an appropriate user interface is implemented to alleviate the problem.

Analysis and specification

The third category, technological requirements, comes from the chosen design and the technology used. Some technological requirements may be known from the beginning if some design elements are given, but others are derived from business and usability requirements throughout the design process and depend on the chosen design. For example, if we need to call ten web services sequentially to show a web page with a three-second response time, the sum of the response times of each web service, plus the time to create the web page, transfer it through the network, and render it in a browser, should be below three seconds. That may be translated into response time requirements of 200-250 milliseconds for each web service. The more we know, the more accurately we can apportion the overall response time to the web services.
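A minimal Python sketch of that budgeting arithmetic, assuming the purely illustrative overhead figure given in the comments (none of these numbers come from the article):

def per_service_budget_ms(total_budget_ms: float,
                          overhead_ms: float,
                          n_services: int) -> float:
    """Split what is left of the page budget evenly across sequential service calls."""
    return (total_budget_ms - overhead_ms) / n_services

# 3-second page budget; assuming roughly 750 ms for page assembly, network
# transfer, and browser rendering combined leaves about 225 ms per web service.
print(per_service_budget_ms(3000, 750, 10))  # 225.0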


Another example of technological requirements is resource consumption requirements; for example, CPU and memory utilization should be below 70 percent for the chosen hardware configuration.

Business requirements should be elaborated during iterations and merged with usability and technological requirements into the final performance requirements, which can be verified during testing and monitored in production. The main reason we separate these categories is to understand where a requirement comes from: is it a fundamental business requirement, so that the system fails if we miss it, or is it the result of a design decision that may be changed if necessary?

A significant difference between traditional and agile methods lies in specification. Traditional requirements engineering / architecture vocabulary is very different from the terminology used in development, performance testing, or capacity planning. Performance and scalability are often referred to as examples of Quality Attributes (QA), a part of Non-functional Requirements (NFR). In addition to specifying requirements in plain text, there are multiple approaches to formalize the documenting of requirements; for example, Quality Attribute Scenarios from the Carnegie Mellon Software Engineering Institute (SEI), or Planguage (Planning Language) introduced by Tom Gilb.

The QA scenario defines source, stimulus, environment, artifact, response, and response measure4. For example, the scenario may be that users initiate 1,000 transactions per minute randomly under normal operations, and these transactions are processed with an average latency of two seconds. For this example:

■■ Source is a collection of users.
■■ Stimulus is the random initiation of 1,000 transactions per minute.
■■ Artifact is always the system's services.
■■ Environment is the system state, normal mode in our example.
■■ Response is processing the transactions.
■■ Response measure is the time it takes to process the arriving events (an average latency of two seconds in our example).

Planguage was suggested by Tom Gilb and may work better for quantifying quality requirements19. Planguage keywords include:

■■ Tag: a unique identifier
■■ Gist: a short description
■■ Stakeholder: a party materially affected by the requirement
■■ Scale: the scale of measure used to quantify the statement
■■ Meter: the process or device used to establish location on a Scale
■■ Must: the minimum level required to avoid failure
■■ Plan: the level at which good success can be claimed
■■ Stretch: a stretch goal if everything goes perfectly
■■ Wish: a desirable level of achievement that may not be attainable through available means
■■ Past: an expression of previous results for comparison
■■ Trend: an historical range or extrapolation of data
■■ Record: the best known achievement
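To make the keywords concrete, a response time requirement might be captured roughly as follows. This is only an illustrative Python sketch: the tag, scale, meter, and numbers are invented, and real Planguage specifications are normally written as structured text rather than code.

# Hypothetical Planguage-style record for a search page response time requirement.
search_response_time = {
    "Tag": "SearchResponseTime",
    "Gist": "Response time of the product search page",
    "Stakeholder": "Online customers",
    "Scale": "Seconds from submitting the search to full page render",
    "Meter": "90th percentile reported by the production monitoring tool",
    "Must": 10,      # minimum level required to avoid failure
    "Plan": 5,       # level at which good success can be claimed
    "Stretch": 3,    # if everything goes perfectly
    "Wish": 1,       # desirable but perhaps not attainable
    "Past": 7,       # previous release, for comparison
}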

It is very interesting that Planguage defines four levels for each requirement: minimum, plan, stretch, and wish.

There is no standard approach to specifying performance requirements in agile methods. Mostly it is suggested to present them as user stories7,10 or as constraints13. The difference is not so much in the way the requirements are presented; both ways use plain text, although user stories assume a user voice form. Cohn, for example, suggests using the "As a <role>, I want <goal>, so that <benefit>" template7 for user stories (although he cautions that the user story template should only be used as a thinking tool, not as a fixed template).


For constraints, both traditional expressions and user voice forms may be used13. The difference between the user story and constraint approaches is not in the performance requirements per se, but in how they are addressed during the development process. The point of the constraint approach is that user stories should represent finite, manageable tasks, while performance-related activities can't be handled that way because they usually span multiple components and iterations. Those who suggest using user stories address that concern in another way; for example, by separating the cost of initial compliance from the cost of ongoing compliance9.

Another question is how to specify response time requirements or goals. Individual transaction response times vary, so aggregate values should be used, such as averages, maximums, different kinds of percentiles, or medians. The problem is that whatever aggregate value you use, you lose some information. Percentiles are more typical in SLAs (Service Level Agreements); for example, 99.5 percent of all transactions should have a response time of less than five seconds. While that may be sufficient for most systems, it doesn't answer all questions. What happens with the remaining 0.5 percent? Do these 0.5 percent of transactions finish in six to seven seconds, or do all of them time out? You may need to specify a combination of requirements: for example, an average of four seconds and a maximum of 12 seconds, or an average of four seconds and 99 percent below 10 seconds.

Moreover, there are different viewpoints on performance data that need to be provided for different audiences. You need different metrics for management, engineering, operations, and quality assurance. For operations and management, percentiles may work best. If you do performance tuning and want to compare two different runs, the average may be a better metric to see the trend. For design and development, you may need to provide more detailed metrics; for example, if the order processing time depends on the number of items in the order, there may be separate response time metrics for one to two, three to 10, 10 to 50, and more than 50 items.

Often different tools are used to provide performance information to different audiences; they present information in different ways and may measure different metrics. For example, load testing tools and active monitoring tools provide metrics for the synthetic workload used, which may differ significantly from the actual production load. This becomes a real issue if you want to implement some kind of process, such as ITIL Continual Service Improvement or Six Sigma, to keep performance under control throughout the whole system lifecycle.

Things get more complicated when there are many different types of transactions, but a combination of percentile-based performance and availability metrics usually works in production for most interactive systems. While more sophisticated metrics may be necessary for some systems, in most cases they make the process overcomplicated and the results difficult to analyze.
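A combined requirement of the kind mentioned above ("an average of four seconds and 99 percent below 10 seconds") is straightforward to check against measured data. Here is a minimal Python sketch; the thresholds and sample values are purely illustrative.

import statistics

def percentile(values, p):
    """Nearest-rank percentile of a list of response times (seconds)."""
    ordered = sorted(values)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

def meets_requirement(times, avg_limit=4.0, p99_limit=10.0):
    """Check a combined requirement: average and 99th percentile together."""
    return statistics.mean(times) <= avg_limit and percentile(times, 99) <= p99_limit

# Illustrative measurements from a test run (seconds):
sample = [2.1, 3.4, 2.8, 9.5, 3.0, 2.2, 4.1, 2.9, 3.3, 2.6]
print(meets_requirement(sample))  # True for this sample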



There are efforts to create an objective user satisfaction metric. For example, Apdex (Application Performance Index)18 is a single metric of user satisfaction with the performance of enterprise applications. The Apdex metric is a number between 0 and 1, where 0 means that no users were satisfied and 1 means that all users were satisfied. The approach introduces three groups of users: satisfied, tolerating, and frustrated. Two major parameters are introduced: the threshold response time between satisfied and tolerating users, T, and the threshold between tolerating and frustrated users, F. There probably is a relationship between T and the response time goal, and between F and the response time requirement. However, while Apdex may be a good metric for management and operations, it is probably too high-level for engineering.
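As a quick illustration of how the index is computed from the two thresholds: the published Apdex formula counts tolerating samples at half weight, and the article notes that the methodology assumes F = 4T. The sample response times and the value of T below are invented.

def apdex(response_times, t):
    """Apdex = (satisfied + tolerating / 2) / total, with F assumed to be 4T."""
    f = 4 * t
    satisfied = sum(1 for rt in response_times if rt <= t)
    tolerating = sum(1 for rt in response_times if t < rt <= f)
    return (satisfied + tolerating / 2) / len(response_times)

# Illustrative: T = 4 seconds, so F = 16 seconds.
print(apdex([1.2, 3.8, 5.0, 7.5, 20.0], t=4.0))  # (2 + 2/2) / 5 = 0.6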

Validation and verification

Requirements validation is making sure that the requirements are valid (although the term 'validation' is quite often used to mean checking against test results, which is really verification). A good way to validate a requirement is to get it from different independent sources: if all the numbers are about the same, it is a good indication that the requirement is probably valid. Validation may include, for example, reviews, modeling, and prototyping. The requirements process is iterative by nature and requirements may change over time, so to be able to validate them it is important to trace requirements back to their source.

Requirements verification is checking whether the system performs according to the requirements. To make meaningful comparisons, both the requirements and the results should use the same metrics. One consideration here is that many load testing and monitoring tools measure only server and network time, while end-user response times, which the business is interested in and which are usually assumed in performance requirements, may differ significantly, especially for rich web clients or thick clients, due to client-side processing and browser rendering. Verification should be done using load testing results as well as ongoing production monitoring. Checking production monitoring results against requirements and load testing results is also a way to validate that load testing was done properly.

Requirements verification presents another subtle issue: how to differentiate performance issues from functional bugs exposed under load. Often, additional investigation is required before you can determine the cause of the observed results. Small anomalies from expected behavior are often signs of bigger problems, and you should at least figure out why you get them. When 99 percent of your response times are three to five seconds (with a requirement of five seconds) and 1 percent of your response times are five to eight seconds, it usually is not a problem. However, it probably should be investigated if that 1 percent fail or have strangely high response times (for example, more than 30 seconds) in an unrestricted, isolated test environment. This is not due to some kind of artificial requirement, but is an indication of an anomaly in system behavior or test configuration. This situation is often analyzed from a requirements point of view, but it shouldn't be, at least not until the reasons for the behavior become clear.

These two situations look similar, but are completely different in nature:

1. The system is missing a requirement, but the results are consistent. This is a business decision, such as a cost vs. response time trade-off.
2. The results are not consistent (even if the requirements are met). That may indicate a problem, but its scale isn't clear until it is investigated.

Unfortunately, this view is rarely shared by development teams too eager to finish the project, move it into production, and move on to the next project. Most developers are not very excited by the prospect of debugging code for small memory leaks or hunting for a rare error that is difficult to reproduce. So the development team becomes very creative in finding "explanations". For example, growing memory and periodic long-running transactions in Java are often explained away as a garbage collection issue. That is false in most cases, and even in the few cases when it is true, it makes sense to tune garbage collection and prove that the problem went away.

Another typical situation is getting some failed transactions during performance testing. The test may still satisfy the performance requirements, which, for example, state that 99 percent of transactions should be below X seconds – and the share of failed transactions is less than 1 percent. While this requirement definitely makes sense in production, where we may have network and hardware failures, it is not clear why we get failed transactions during a performance test that was run in a controlled environment where no system failures were observed. It may be a bug exposed under load or a functional problem for some combination of data. When some transactions fail under load or have very long response times in the controlled environment and we don't know why, we have one or more problems. And when we have unknown problems, why not trace them down and fix them in the controlled environment? It would be much more difficult in production. What if these few failed transactions are the view page for your largest customer, and you won't be able to create any order for that customer until the problem is fixed?

In functional testing, as soon as you find a problem, you can usually figure out how serious it is. This is not the case for performance testing: usually you have no idea what caused the observed symptoms or how serious it is, and quite often the original explanations turn out to be wrong. As Richard Feynman said in his appendix to the Rogers Commission Report on the Challenger space shuttle accident12, "The equipment is not operating as expected, and therefore there is a danger that it can operate with even wider deviations in this unexpected and not thoroughly understood way. The fact that this danger did not lead to a catastrophe before is no guarantee that it will not the next time, unless it is completely understood."

Summary

We need to specify performance requirements at the beginning of any project for design and development (and, of course, reuse them during performance testing and production monitoring).

While performance requirements are often not perfect, forcing stakeholders just to think about performance increases the chances of project success. Agile methods provide a unique opportunity to verify performance requirements early and track performance through all iterations.

What exactly should be specified – goals vs. requirements (or both), average vs. percentile vs. Apdex, etc. – depends on the system and environment. Whatever it is, it should be elaborated into something quantitative and measurable in the end. Making requirements too complicated may hurt: we need to find meaningful goals and requirements, not invent something just to satisfy a bureaucratic process. If we define performance requirements at the beginning of the project, they become the backbone of the performance engineering process, and we can use and elaborate them throughout all iterations and track our progress from the performance engineering point of view. Continuing to trace them in production creates a performance feedback loop, providing us with input for system maintenance and future development.

References

1. Agile Non-Functional Requirements. 2009. http://tynerblain.com/blog/2009/02/10/agile-non-functional-reqs/
2. Ambler, S.W. Beyond Functional Requirements On Agile Projects. Dr. Dobb's, 2008. http://www.drdobbs.com/architecture-and-design/210601918
3. Barber, S. Get performance requirements right – think like a user. Compuware, 2007. http://www.perftestplus.com/resources/requirements_with_compuware.pdf
4. Bass L., Clements P., Kazman R. Software Architecture in Practice. Addison-Wesley, 2003. http://etutorials.org/Programming/Software+architecture+in+practice,+second+edition
5. Bickford P. Worth the Wait? Human Interface Online, View Source, 10/1997. http://web.archive.org/web/20040913083444/http://developer.netscape.com/viewsource/bickford_wait.htm
6. Cohn, M. Estimating Non-Functional Requirements. 2011. http://www.mountaingoatsoftware.com/blog/estimatingnon-functional-requirements
7. Cohn, M. Non-functional Requirements as User Stories. 2008. http://www.mountaingoatsoftware.com/blog/non-functional-requirements-as-user-stories/
8. eCommerce Web Site Performance Today. Forrester Consulting on behalf of Akamai Technologies, 2009. http://www.akamai.com/html/about/press/releases/2009/press_091409.html
9. Hazrati, V. Nailing Down Non-Functional Requirements. InfoQ, 2011. http://www.infoq.com/news/2011/06/nailing-qualityrequirements
10. Howard, K. Handling Non-Functional Requirements On an Agile Project. Agile 2009. http://www.slideshare.net/kenhoward01/handling-nonfunctional-requirements-on-an-agile-project
11. Guide to the Software Engineering Body of Knowledge (SWEBOK). IEEE, 2004. http://www.computer.org/portal/web/swebok
12. Feynman R.P. Appendix F – Personal observations on the reliability of the Shuttle. http://science.ksc.nasa.gov/shuttle/missions/51-l/docs/rogers-commission/Appendix-F.txt
13. Leffingwell D., Shriver R. Nonfunctional Requirements (System Qualities), Agile Style. Agile 2010. http://www.theagileengineer.com/public/Home/Entries/2010/8/12_Agile_2010_Presentation__Non_Functional_Requirements_(Qualities),_Agile_Style.html
14. Mailbox Server Processor Capacity Planning. http://technet.microsoft.com/en-us/library/ee712771.aspx
15. Miller, R. B. Response time in man-computer conversational transactions. In Proceedings of the AFIPS Fall Joint Computer Conference, 33, 1968, 267-277.
16. Nielsen J. Response Times: The Three Important Limits. Excerpt from Chapter 5 of Usability Engineering, 1994. http://www.useit.com/papers/responsetime.html
17. Sevcik, P. How Fast Is Fast Enough. Business Communications Review, March 2003. http://www.bcr.com/architecture/network_forecasts%10sevcik/how_fast_is_fast_enough?_20030315225.htm
18. Sevcik, P. Using Apdex to Manage Performance. CMG, 2008. http://www.apdex.org/documents/Session318.0Sevcik.pdf
19. Simmons E. Quantifying Quality Requirements Using Planguage. Quality Week, 2001. http://www.clearspecs.com/downloads/ClearSpecs20V01_Quantifying%20Quality%20Requirements.pdf

> About the author Alex Podelko For the last fifteen years Alex Podelko has worked as a performance engineer and architect for several companies. Currently he is Consulting Member of Technical Staff at Oracle, responsible for performance testing and optimization of Hyperion products. Alex serves as a director for the Computer Measurement Group (CMG) http://cmg.org, a volunteer organization of performance and capacity planning professionals. He blogs at http://alexanderpodelko.com/blog and can be found on Twitter as @apodelko.


Industrial-Strength Agile by Jeff Ball

Agile was born in 2001 as a reaction to Waterfall methods of project management. The old "plan driven" or "specification driven" methods were failing; the new Agile approaches were the way forward. Today, Agile has become popular, and many organizations are looking to use Agile as their enterprise approach to project management. They need an industrial-strength Agile solution. That's a challenge, as Agile was a rejection of many established business methods. The 2001 Agile manifesto expressed preferences for:

■■ People and Interactions over Processes and Tools
■■ Working Software over Comprehensive Documentation
■■ Customer Collaboration over Contract Negotiation
■■ Responding to Change over Following a Plan

Let’s call these preferences the LHS (left hand side) and RHS (right hand side). Agile prefers the LHS to the RHS.

In the last 10 years, these Agile manifesto (LHS) preferences have served to guide Agile in its various forms. Many business teams have enthusiastically moved to lightweight versions of Agile (such as SCRUM or XP) based on the LHS preferences. However, many business teams still have a strong need or strong affinity for the RHS:

■■ Processes and tools – companies need defined ways of working, supported by tools
■■ Comprehensive documentation – operational teams need to understand project deliverables
■■ Contract negotiation – the "fixed price, fixed time" approach remains a core business concept
■■ Following a plan – companies use plans to manage resource and delivery dependencies

How can the LHS and the RHS be reconciled? One variant of Agile, the Atern approach from the DSDM consortium, bridges the gap. How can Atern bridge the gap and remain Agile? Firstly, Atern remains true to the manifesto and implements the LHS, the core preferences from the manifesto:

■■ People and Interactions – Self-managing team
■■ Working Software – Incremental delivery
■■ Customer Collaboration – Strong customer voice in teams
■■ Responding to Change – Iterative approach with late decision making

Atern has 9 fundamental principles. These also serve to bridge the gap between the LHS and the RHS. Four of Atern's principles relate to the manifesto (to the LHS):


■■ Collaborate
■■ Build incrementally
■■ Develop iteratively
■■ Communicate continuously and clearly

while the other five Atern principles relate to the needs of the enterprise (to the RHS):

Focus on the business need

■■

Deliver on time

■■

Never compromise quality

■■

Build from firm foundations

■■

Demonstrate control


To create an industrial-strength variant of Agile, Atern then focuses on business imperatives:

On time, on budget delivery

■■

An architected solution (Atern calls this “EDUF” = Enough Design Up Front)

■■

Value for money

■■

Enough planning to keep control

■■

A planned handover to operations

All of these are built on the nine principles, so they attend to the double needs of the LHS and the RHS. Finally, the package is wrapped up for general business use:

1. Atern is a general-purpose approach (not specific to IT or software projects).
2. Atern documentation is published as a book (available from Amazon, etc.).
3. Atern training is available internationally, from established training providers such as QRP International.
4. Atern certification is performed by established certification agencies such as APMG.

So ten years after the Agile manifesto, Atern is open for business. It doesn't abandon Agile; it still recognizes the truth of the manifesto. It has adjusted to business needs without returning to Waterfall methods. Atern is truly industrial-strength Agile.

> About the author Jeff Ball is an Agile Project Management trainer for QRP International, which provides Agile training and certification across Europe (both in-house and public courses). Jeff has 25 years of Project and Programme Management experience; he has set up P3O structures at NEC Computers and Fortis Bank. Moreover, he managed Programme Offices for major enterprise transformation programmes at NEC Computers and BNP Investment Partners. Jeff is the P3O lead trainer, responsible for the development and update of the P3O material. He is also a PRINCE2, MSP and MoP multilingual trainer, able to deliver courses both in English and in French.



Masthead

EDITOR: Díaz & Hilterscheid Unternehmensberatung GmbH, Kurfürstendamm 179, 10707 Berlin, Germany
Phone: +49 (0)30 74 76 28-0, Fax: +49 (0)30 74 76 28-99, E-Mail: [email protected]
EDITORIAL: José Díaz
LAYOUT & DESIGN: Díaz & Hilterscheid
ARTICLES & AUTHORS: [email protected]
ADVERTISEMENTS: [email protected]
WEBSITE: www.agilerecord.com
PRICE: online version free of charge
ISSN 2191-1320
Díaz & Hilterscheid is a member of "Verband der Zeitschriftenverleger Berlin-Brandenburg e.V."

In all publications Díaz & Hilterscheid Unternehmensberatung GmbH makes every effort to respect the copyright of graphics and texts used, to make use of its own graphics and texts and to utilise public domain graphics and texts. All brands and trademarks mentioned, where applicable, registered by third-parties are subject without restriction to the provisions of ruling labelling legislation and the rights of ownership of the registered owners. The mere mention of a trademark in no way allows the conclusion to be drawn that it is not protected by the rights of third parties. The copyright for published material created by Díaz & Hilterscheid Unternehmensberatung GmbH remains the author’s property. The duplication or use of such graphics or texts in other electronic or printed media is not permitted without the express consent of Díaz & Hilterscheid Unternehmensberatung GmbH. The opinions expressed within the articles and contents herein do not necessarily express those of the publisher. Only the authors are responsible for the content of their articles. No material in this publication may be reproduced in any form without permission. Reprints of individual articles available.

