issue 6 - Agile Record

The Magazine for Agile Developers and Agile Testers

April 2011

www.agilerecord.com

free digital version

made in Germany

ISSN 2191-1320

issue 6

Pragmatic, Soft Skills Focused, Industry Supported

CAT is no ordinary certification, but a professional journey into the world of Agile. As with any voyage, you have to take the first step. You may have some experience with Agile from your current or previous employment, or you may be venturing out into the unknown. Either way, CAT has been specifically designed to partner with and guide you through all aspects of your tour. The focus of the course is to look at how you, the tester, can make a valuable contribution to these activities, even if they are not currently your core abilities. The course assumes that you already know how to be a tester and understand the fundamental testing techniques and practices, and it leads you through the transition into an Agile team.

The certification does not simply promote absorption of theory through academic media, but encourages you to experiment, in the safe environment of the classroom, through extensive discussion forums and daily practicals. Over 50% of the initial course is based around practical application of the techniques and methods that you learn, building on the skills you already have as a tester. This prepares you to be Agile when you return to your employer. The transition into a Professional Agile Tester team member culminates with on-the-job assessments, interviews, and Agile expertise demonstrated in forums such as conference presentations or Special Interest Groups. Did this CATch your eye? If so, please contact us for more details!


Book your training with Díaz & Hilterscheid! Open seminars:
• May 16–20, 2011 in Berlin
• June 6–10, 2011 in Düsseldorf/Köln
• August 15–19, 2011 in Berlin
• October 10–14, 2011 in Berlin
• December 5–9, 2011 in Berlin

Díaz & Hilterscheid GmbH / Kurfürstendamm 179 / 10707 Berlin / Germany Tel: +49 30 747628-0 / Fax: +49 30 747628-99 www.diazhilterscheid.de [email protected]

Editorial

Dear readers,

Agile is like an ice-breaker: it gradually makes its way through all companies, domains, technologies and "people". There is no way back. It breaks any resistance and helps to achieve the desired targets. As a company, we see that more and more finance companies move into Agile. They take up the challenge and face the problems associated with the transition. We have also seen many cases where companies say Agile, but still do something else. In my opinion it is important to take the people with you, to help them move into Agile and to give them the required knowledge.

We recently ran a CAT (Certified Agile Tester) course for the first time. It was quite an intensive training experience: for 50 percent of the time the attendees do exercises, and they have to do homework, too. Quite challenging for trainer and attendees! The exam, with its personal assessment, essay and practical part, demands not only theoretical but also practical skills from the attendees. The first experiences are quite positive. Training courses in the UK, Belgium, Germany and the USA will follow.

We have started the Call for Papers for a new conference called "AgileReq", a conference for product and requirements management in Agile. It will be held in London on September 9–11, 2011. The conference has a new concept: Friday will be the day for tutorials, and Saturday and Sunday will be conference days. We think this is a good way for companies and employees to invest time and money in a smart way: the company doesn't really lose working days, and the employee invests in his or her career during "free time". Please have a look at www.agilereq.com and spread the word.

As you have probably seen, the program of the Agile Testing Days has been published. Have a look at www.agiletestingdays.com. The conference has the leitmotiv "collaboration". Don't miss it. We have also planned an award for the most influential person in Agile in 2010, to be voted for by the community. The Mayor of the city of Potsdam will hand over the award to the honored person. Please follow the website and the newsletter to promote your favorite professional.

I want to thank all our authors, sponsors and professionals who helped us set up this magazine. Without your help, the magazine wouldn't be what it is today!

I want to remind you that democracy and freedom of opinion are the basis of our world. Business interests are of course important, but they are not decisive in life. We strongly believe that the world is big enough for all of us, advocates and skeptics alike. Let's discuss, but let's behave correctly. For me it is important to act in a way that would make my mother, my family and my children proud of me. What I say and how I act must be within the bounds of acceptance. Don't you think so too?

Enjoy reading









José Díaz


Contents

Editorial – 3
Agile product management using Effect Maps by Gojko Adzic – 6
Agile Testing in Real Life: March 2011 – Practice, Practice, Practice! by Lisa Crispin – 17
Agile Management Matters by Jurgen Appelo – 20
Agile (Unified Process) by Hannes Van Baelen – 22
Test Planning and Execution in a Mobile Game Development Project using SCRUM by José Carréra Alvares Neto – 26
Agility in an outsourced context by Jerry E. Durant – 34
Agile Requirements: Not an Oxymoron by Ellen Gottesdiener – 38
Process performance indicators in a lean software enterprise by Kristian Hamström – 40
Smoothing Out Lumpy Sprints by Catherine Powell – 50
User Stories: A Skeptical View by Tom and Kai Gilb – 52
How 100% Utilization Got Started by Johanna Rothman – 56
Consensus decision-making: Better decisions in less time by Linda Rising – 58
Becoming test-disinfected by Alexander Tarnowski – 61
Top 10 indications that you moved up from offshore staff augmentation into Agile software development by Raja Bavani – 63
Introducing Agile – how to deal with middle management by Armin Grau – 66
How to Succeed with Scrum by Martin Bauer – 69
What Does Agile Mean To Us? A Survey Report by Prasad Prabhakaran – 72
SCRUM in the FDA context by Antonio Robres – 75
Masthead – 78
Index Of Advertisers – 78



Agile product management using Effect Maps by Gojko Adzic

Effect Mapping is a game-changing technique for high-level project visualization. It provides stakeholders and sponsors with an excellent level of visibility and helps to drive software projects towards delivering the right product with a high level of quality. Effect Mapping facilitates the implementation of several techniques of agile planning, product design, prioritization and scoping. In practice, I've found the combination of these techniques to be by far the most powerful way of doing iterative product management.

Introducing Effect Maps

Effect Maps are charts of project scope which help teams ensure that software delivery is focused on business goals, stakeholders and their needs. Mijo Balic and Ingrid Ottersten present the technique in Effect Managing IT [Balic07], where they show that creating such a map helps to ensure that a project is focused on achieving the desired business effect (hence the name Effect Map). To create an Effect Map, draw a mind-map by answering these questions:

• Why are we doing this? What is the desired business change? This is the business goal. Put that goal in the centre of the map so that you can always keep it in mind.

• Who are the people that can create the desired effect? Who can contribute to the goal or affect it? These are the project stakeholders. Put the stakeholders on the second level of the mind-map.

• For each element on the second level: How can the target group contribute to or obstruct the desired effect? In real life, not in software. These are stakeholder needs. Put them on the third level of the map.

• For selected elements on the third level: What are the business activities or software capabilities that would support the needs of the stakeholders? These are features. Features go on the fourth level of the map.


Figure 1: Effect Map Structure

In practice, I've found that asking sponsors for examples of how someone could help them achieve the goal is a great way to identify stakeholders and their needs, in effect to drive the second and the third level of the map. Answers such as "For example, existing customers might help us by buying from us again" directly map to the Who and How levels. Note that Balic and Ottersten ask What the target group wants on the third level and How a product should be designed on the fourth level. I found it more useful to ask the same questions differently. Asking How is a good way to tease out an example, so it is more useful to ask that when looking for stakeholders and their needs. Likewise, What is better when describing software features as it helps us focus on business functionality rather than implementation detail (I write more about this in [Adzic11]). As a consequence, the summary diagram in Figure 1 shows How and What on different levels than the summary diagram in Effect Managing IT, but the levels in the diagrams are essentially the same from a semantic perspective. I also allow non-software items to be on the fourth level, because software is not always the answer. Paying advertisers has nothing to do with software but might be a very effective way to contribute to a business goal and allow a delivery team to focus on building the parts of software that have to be built.

The maps described in Effect Managing IT have only four levels. The teams I work with came up with ideas for the fourth level that were too big to fit into a single iteration, even too big to be user stories. I found it more useful to break down large feature ideas into several items that could contribute to that feature. In typical agile jargon, items on the fourth level could be epics. We can split them further into items on the fifth or even sixth level, which represent user stories that can fit into an iteration. Another way to look at further levels could be feature themes at level 4, minimum marketable features at level 5 and user stories at level 6. I also found it useful not to dive in too deep early. Going up to the level of stakeholder needs at the start proved to be quite enough for anything not immediately important. This approach allowed the teams I worked with to focus on the things that were important immediately, whilst still keeping a high-level overview of the entire scope for later.

Not all stakeholder needs will be equally important or risky and not all software features will be equally complex. Visualizing those aspects of needs, activities and features can help to identify low-hanging fruit and to avoid death marches. Having this information on the map supports effective planning and prioritization,
so I extended the maps with simple visual symbols to represent importance and technical complexity of stakeholder needs and features. I use stars to represent importance and numbers to represent complexity. All these are rough, relative estimates, so don’t worry too much about getting them right. An example: Facebook games project For an example, see Figure 2. This is a simplified Effect Map from one of my previous projects, an online games platform. The business stakeholders originally asked for levels and achievements in games as the next milestone. Instead of implementing that straight away, the team and the business stakeholders drew the map together. It turned out that the reason why the business users asked for levels and achievements was to significantly increase the number of active players (the target was set measurably to one million). Once we had understood the goal, we started thinking about who could contribute to it. One obvious group were advertisers, who could send bulk invitations or publish our ads. Another group were the existing players, who could invite their friends, write about our games or recommend them to other people online. A third group was the development company itself, who could organize PR events or send out invitations. These high level examples directly translate into the second and the third levels of the map.
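One way to capture such a map as plain data is sketched below. The goal, stakeholders and needs are taken from this games example; the class layout and the specific importance and complexity numbers are only illustrative assumptions, not part of the Effect Mapping technique.

```python
# Hypothetical sketch only: capturing an Effect Map as nested data.
# Goal, stakeholders and needs come from the games example in this article;
# the class, field names and numeric values are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    name: str
    importance: int = 0               # stars: rough, relative business importance
    complexity: Optional[int] = None  # rough implementation complexity (features only)
    children: List["Node"] = field(default_factory=list)


effect_map = Node("Why: one million active players", children=[
    Node("Who: existing players", children=[
        Node("How: invite their friends", importance=3, children=[
            Node("What: incentives on the web site for sending invitations", complexity=2),
        ]),
        Node("How: post about the games", importance=2, children=[
            Node("What: more content to post about (levels, achievements)", complexity=5),
        ]),
        Node("How: recommend the games to other people online", importance=1),
    ]),
    Node("Who: advertisers", children=[
        Node("How: send bulk invitations or publish our ads", importance=2),
    ]),
    Node("Who: the development company", children=[
        Node("How: organize PR events, send out invitations", importance=1),
    ]),
])


def print_map(node: Node, depth: int = 0) -> None:
    """Print the map as an indented outline with stars and complexity numbers."""
    stars = "*" * node.importance
    cost = f" [{node.complexity}]" if node.complexity is not None else ""
    print("  " * depth + f"{node.name} {stars}".rstrip() + cost)
    for child in node.children:
        print_map(child, depth + 1)


if __name__ == "__main__":
    print_map(effect_map)
```

Printed as an outline, this gives the same Why/Who/How/What hierarchy as Figure 1, with the stars and numbers carrying the rough importance and complexity estimates described above.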

Figure 2: An example Effect Map


We then added stars to mind-map elements to visualize the expected importance. Inviting friends works virally, because invited people can also invite others and it is a direct call to action. For that reason, it is likely to provide an exponential return on investment. Posting about games is not going to be that effective because it does not have a direct call to action. Recommending games is going to be even less effective because it has a more limited reach. We then came up with ideas for how our software could support players in inviting their friends, posting about the games or recommending them. We mostly focused on inviting and posting, as they seemed the most effective. With some branches we went to the fifth level, breaking down scope into smaller tasks and assigning numbers to those tasks to show a very rough estimate of implementation complexity. The original requirements (levels and achievements) showed up on the map as giving existing players more content to post about. We decided together that inviting friends is more likely to bring large numbers because it is viral, so we focused first on redesigning the web site to provide incentives for invitations. Levels and achievements didn't come into the development pipeline for the next six months.

Product management using Effect Maps

Effect Maps are useful for much more than just an initial scoping exercise. I found them very useful as a catalyst in product management. They facilitate the implementation of several very good ideas for iterative product management, crucial for successful delivery of agile and lean projects:

• setting clear goals
• providing a shared understanding of quality
• prioritizing based on business value
• iterative long-term product release planning
• deriving scope from goals
• focusing deliverables on business value increments
• preventing scope creep
• supporting scope change and reprioritization

Setting clear goals

If a project succeeds in delivering expected business value, it is a success from a business perspective. This is true even if the delivered scope ends up being different to what was originally envisaged. On the other hand, if the project delivers exactly the requested scope to the letter but misses the business target, it is a failure. This is true regardless of the fact that delivery teams can blame customers for not knowing what they want. Unfortunately, I've seen far too many projects where this business value is not clearly communicated to everyone. Very often it is not even defined, and in the cases when it is defined it is too vague to be useful. Such definitions make it hard to objectively measure whether the project actually delivered what was wanted. As a consequence, teams focus on ticking boxes by delivering scope as a measure of success.


Effect Mapping helps to define goals because it requires us to identify the expected goal as the first activity. But we can go further. Clear goals help teams design appropriate solutions. A solution to increase the number of players by 5% over 12 months is completely different to one that would increase the number of players by 100% over 6 months. To support the team in delivering the right system, we need to clearly define the business target. A great trick to ensure a shared understanding of the expected business effect is to decide how to measure it. After we have answered the question “Why”, we should go further and answer the following question: How will we measure how much any future delivery matches the expectations? Defining the goal in a measurable way requires sponsors to think really hard and define precisely what they really expect out of a software project. That will help to align the expectations of the sponsors, the stakeholders and the delivery team. It will also allow the delivery team to design the appropriate solution and invest effort proportionally to the expected return. Identifying measurable goals (such as “one million players” instead of just “more players”) is key to ensuring a shared understanding of what a project is supposed to deliver. Tom Gilb argues that measuring such effects and deciding how to measure them significantly improves the chances of success of a project [Gilb05]. He also presents several techniques for measuring things that do not seem easily measurable. For further examples of measuring seemingly intangible things, see [Hubbard10]. Providing a shared understanding of quality Similar to the way that project goals are often vague and not universally communicated, software quality is rarely defined precisely. This causes misunderstanding, misinterpretation and confusion about what a project needs to deliver. The interpretation of quality and the argumentation around that is delegated to a quality assurance role without much involvement from any business sponsors. With an arbitrary interpretation of quality, there is nothing to help teams decide how much to invest in certain aspects of their product. Teams over-invest in less important parts of a system and under-invest in more important parts. To prevent this, we have to first clearly define what “appropriate quality” means and communicate that to everyone. Although quality is often perceived as intangible, it isn’t that hard to define. Gerald Weinberg defined quality as value delivered to some person [Weinberg91]. To specify quality, we have to identify the following two concepts: •

• Who is that person? Or alternatively, who are the people affected by our work?
• What kind of value are they looking for from the system?


User Stories [Cohn04] apply a similar technique to ensure that each story delivers business value by requesting the writer to identify who is a stakeholder for a story and why they want it. In order to ensure successful delivery of milestones or entire projects, we need to define these aspects of our system not just on the low scope level (user stories), but also holistically for the entire project or a milestone of a project. In fact, I find that high level definition of quality much more important. Effect Mapping facilitates this process because it requires us to clearly define the two aspects of quality – who the person is and what they expect – while drawing the map. They are effectively the second and the third levels of the map, the stakeholders and the stakeholder needs. Prioritizing based on business value The hierarchical nature of the map clearly shows who benefits from a feature, why, and how that contributes to the end goal. This clear visualization allows us to decide which activities best contribute to the end goal and where the risks are, which immensely helps with prioritization. Once we have identified a clear goal, stakeholders and their needs, we can estimate how much we expect that supporting each one of them will contribute to the end goal. In the gaming system example, supporting invitations was clearly more important than supporting posting. Effect Maps help us prioritize and invest appropriately in supporting activities depending on their expected value. In addition, this discussion provides a way to start thinking about how to measure whether a software feature has really delivered what we expect. Similar to the way that discussing how we can measure deliverables against a business goal, discussing how much a deliverable addresses a stakeholder need helps the team nail down what quality is and share the understanding. Iterative product release planning User stories are de facto standard today for managing long-term release planning. This often includes an “iteration zero”, a scoping exercise or a user story writing workshop at the start of a milestone. During “iteration zero”, key project sponsors and delivery team together come up with an initial list of user stories that will be delivered. A major problem with the “iteration zero” approach is the long stack of stories that have to be managed as a result of it. Navigating through hundreds of stories isn’t easy. When priorities change, it is hard to understand which of the hundreds of items on the backlog are affected. Jim Shore called this situation “user story hell” during his talk at Oredev 2010, citing a case of a client with 300 stories in an Excel spreadsheet. I’ve seen horror stories like that, perhaps far too often. From my experience, project sponsors think about prioritization in mid and long term in terms of the order of stakeholder needs they want to satisfy, not necessarily in terms of the order of system features. User stories try to address that, but having too many stories up-front clutters the visibility; on the other hand having too few stories up-front doesn’t give sponsors the confidence they need to support a project. Managing and delivering

projects with dynamic scope scares people who are responsible for successful delivery. Many iterative planning and delivery practices might sound like black magic at a higher level and they do not allow managers to see the forest for the trees, so they push for huge lists of stories and ask for more control. “People asking for control really want visibility”, said Elisabeth Hendrickson during her keynote at the Agile Testing Days 2010. She supported that statement with data from more than 100 workshops on team transitions she ran, which is as close to a scientific experiment as you can get with software development. The statement is certainly true from my experience. Clients and project sponsors want commitment and sign-off because they are afraid that their goals will not be met and because they have no visibility over software deliverables. User stories, as great as they are for short-term planning, are useless for high-level visibility. Effect Maps address this problem better by clearly visualizing stakeholder needs and allowing us to prioritize at that level. Drawing an Effect Map up to the third level and sharing it ensures that stakeholders and their needs will be addressed. This also means that we only need the first three levels of the map for mid term and long term prioritization and release planning, which enables us to postpone the discussion on detailed scope for everything but the highest priority stakeholder needs. Defining the first three levels of a map in a measurable way ensures that there is a clear target for iterative delivery, without actually defining how we’ll get there. Such definition of quality frees us to come up with the best possible scope, just at the time when we need it, and not waste time on defining or managing scope too much up front. It also enables us to clearly provide visibility to project sponsors of how much we have delivered up to any point in time. Effect Maps allow us to provide good visibility instead of control. As the project progresses, we can mark areas of the map that have been delivered, identify stakeholders whose needs are satisfied and plan whose needs will be fulfilled next. This high-level visibility provides the delivery team and the project sponsors enough information to track progress and effectively plan further scope iteratively. In addition, this high-level visibility enables people to see the big picture so that they can prioritize and analyze further scope for immediate delivery. Deriving scope from goals Techniques for collaboratively deriving scope from goals, such as Feature Injection [Matts09], are becoming increasingly popular in software development. Although still in an early adoption phase, such techniques are used by teams to build in quality from the start and further enhance the effects of techniques such as specification by example [Adzic11]. Effect Maps facilitate this process by helping a team to clearly focus on a business goal while planning scope. Drawing an Effect Map requires us to define implementation scope to satisfy a busi-
ness goal. Effect Mapping enables us to derive scope from goals just in time. This fits in nicely with flow-based software delivery methods, such as Kanban. Focusing deliverables on business value increments Focusing delivery on increments of business value instead of increments of technical functionality provides fast feedback to project sponsors about their assumptions in iterative deliveries. It also supports the delivery team in learning more about the business domain and technology. This is a key element of planning with User Stories (see [Cohn04]). Focusing delivery on the most valuable Minimum Marketable Features (MMFs) allows the management to plan releases to achieve the best level of return on investment from the project [Denne03]. By focusing our planning and prioritizing on addressing stakeholder needs, we ensure that deliverables actually provide business value. Effect Maps facilitate structuring a project around stakeholder needs. They help us break down how to satisfy stakeholder needs into MMFs and User Stories hierarchically. In the “iteration zero” approach, teams need to define a large number of stories quickly. They focus on deliverables (“I want ...” part), and invent stakeholders (“as a ...”) and benefits (“in order to...”). Stories become generic such as “as a trader I want to trade so that I can trade”, with dozens or hundreds of stakeholders and benefits. This completely defeats the point of user stories that are supposed to communicate intent. The need to fit stories into something easily deliverable in isolation disconnects the deliverable piece from the value expected by the customer, in particular when the benefits and stakeholders are invented on the fly. Note that this is not an issue with user stories as a technique, because they were designed to deal with this very problem, but an issue with how teams commonly misuse them. Effect Maps facilitate the correct application of User Stories as they help us to directly answer who is a stakeholder for a particular feature and how that feature helps the stakeholder achieve something useful. Preventing scope creep With a clear map from features to the end goal, it is easy to spot if a particular suggested feature is not relevant for the ultimate goal of a project. With large flat backlogs of ideas it is less easy to spot such things. Effect Maps achieve the same effect as the Goals-Features-Requirements hierarchies described in [Berkun05], ensuring that everything in scope really contributes to the goals. This helps to avoid scope creep by providing an argument against unnecessary pet features. Supporting scope change and reprioritisation Business priorities really change at the level of stakeholder needs. If a product backlog is described with an Effect Map, the team can effectively react to such changes.
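As a small illustration of that kind of visibility, the data-structure sketch shown earlier could be extended with a delivered flag, so that the open stakeholder needs can be listed by importance whenever priorities move. The field and function names below are again illustrative assumptions, not part of the technique.

```python
# Hypothetical continuation of the earlier sketch: tracking which branches of
# the map have been delivered and listing the remaining needs by importance.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class Node:
    name: str
    importance: int = 0
    complexity: Optional[int] = None
    delivered: bool = False
    children: List["Node"] = field(default_factory=list)


def open_needs(goal: Node) -> List[Tuple[int, str]]:
    """Return (importance, 'stakeholder: need') pairs for third-level items
    that are not yet delivered, most important first."""
    pending = []
    for stakeholder in goal.children:        # second level: Who
        for need in stakeholder.children:    # third level: How
            if not need.delivered:
                pending.append((need.importance, f"{stakeholder.name}: {need.name}"))
    return sorted(pending, key=lambda pair: pair[0], reverse=True)
```

Marking a need as delivered and re-running such a query is the code-level equivalent of crossing a branch off the map and deciding, together with the sponsors, which need to invest in next.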

Effect Maps allow us to derive scope incrementally and just in time. We do not have to make many decisions on scope for individual stakeholder needs before we really need to start investing in them. Even if we have already defined several levels of features for the reprioritized stakeholder needs, there is a clear hierarchical map that shows us what gets moved. We can stop working on any features related to needs that do not need to be satisfied now and start working on more important business deliverables. With product backlogs that are just a linear stack of stories, we don’t have anywhere near this kind of visibility, which is one of the reasons for “User Story Hell”. Shifting the mindset from cost of IT to investment in IT When we discuss priorities and plan without an assumed software implementation scope, my assumption is that we can change the tone from how much is it going to cost/take to have something delivered to how much the sponsors want to invest in supporting a particular stakeholder’s need. (This is the only assumption in this paper; I’ve tried all the other ideas in practice successfully). By applying Gilb’s measurability ideas to how much satisfying stakeholder needs will contribute to a goal, we can get an idea of what is the expected outcome of the related set of features, which will guide us in deciding how much to invest. With defined expected outcomes and investment, a team can come up with the suitable set of software features and enhancements to support the investment and achieve the goal, or decide that such a delivery is unrealistic early on. Instead of estimating how long a set of features will take to develop, we can invest in building features up to a certain level of quality. This is where User Stories fit in perfectly and where a clear definition of quality ensures a shared understanding. By delivering the features - fourth and fifth level deliverables - we should get parts of the expected results for parts of the investment. We can then measure the outcomes of increments and validate our business assumptions early. For example, in order to increase the number of users (“why”), a company might want to stimulate existing customers (“who”) to invite their Facebook friends to sign up for a product (“how”). The business can decide to invest 100,000 GBP and one month of development time in improving the way fans can invite their friends to sign up for a product through Facebook, expecting that this will give them 50,000 new users over 6 months. The team can then suggest ways for the software to support such an investment, for example offering some kind of personalization (“what”), which can lead to 5-6 suggestions of user stories that improve the software in that area. Each user story should ideally deliver a slice of the value, so it should create a visible effect on the number of new users. Although the expected total improvement is specified on the level of six months, the sponsors and the delivery team can come up with a scaled down short term indicator of what they expected from a particular story. After the first few stories go live, the sponsors can check their assumptions about the value of Facebook invitations against those

indicators much before the entire investment is spent. This can either validate the assumptions and confirm that further investment in invitations or personalization is justified, or lead to a reprioritization from a business level because the indicators are not there. In addition, if the expected outcome is achieved sooner, for example if the few early stories already deliver much more than the expected indicators, the business might decide to stop investing in invitations and shift the focus somewhere else. Conclusion Effect Mapping, as a process, helps to drive the project implementation scope towards delivering the right business effects. The resulting mind maps provide a good visualization of software project scope for the sponsors to effectively track progress, plan and prioritize without micro-managing and getting involved in minute delivery tasks. Effect Maps facilitate the implementation of User Stories for short term planning, measurable well defined outcomes for business prioritization, focusing on minimum marketable features for highest return on investment. They help teams to implement other good iterative product management practices. In addition, my assumption (which I plan to verify the first time I get the chance) is that it will help to switch the mentality from cost to investment for IT projects, and facilitate the business delivery for the return on that investment. References •

• [Balic07] Mijo Balic, Ingrid Ottersten, Effect Managing IT, 2007, Copenhagen Business School, ISBN 978-8763001762

• [Cohn04] Mike Cohn, User Stories Applied: For Agile Software Development, 2004, Addison-Wesley Professional, ISBN 978-0321205681

• [Weinberg91] Gerald Weinberg, Quality Software Management, 1991, Dorset House Publishing, ISBN 978-0932633224

• [Adzic11] Gojko Adzic, Specification by Example, 2011, Manning, ISBN 978-1617290084, http://specificationbyexample.com

• [Berkun05] Scott Berkun, The Art of Project Management, 2005, O'Reilly, ISBN 978-0596007867

• [Denne03] Mark Denne, Jane Cleland-Huang, Software by Numbers, 2003, Prentice Hall, ISBN 978-0131407282

• [Gilb05] Tom Gilb, Competitive Engineering, 2005, Butterworth-Heinemann, ISBN 978-0750665070

• [Hubbard10] Douglas W. Hubbard, How to Measure Anything: Finding the Value of "Intangibles" in Business, 2010, Wiley, ISBN 978-0-470-62568-2

• [Matts09] Chris Matts, Real Options, http://www.lulu.com/product/file-download/real-options-at-agile-2009/5949486


> About the author Gojko Adzic is a frequent speaker at leading software development and testing conferences and runs the UK agile testing user group. Over the last twelve years, he has worked as a developer, architect, technical director and consultant on projects delivering financial and energy trading, mobile positioning, e-commerce and online gaming. Gojko runs Neuri Ltd, a UK-based strategic consultancy that helps ambitious teams from web startups to large financial institutions improve software delivery. To get in touch, write to [email protected] or visit http://gojko.net. For more information on Gojko's latest book, see http://specificationbyexample.com

November 14–17, 2011 Potsdam (near Berlin), Germany www.agiletestingdays.com

November 14–17, 2011 in Potsdam (near Berlin), Germany

Agile Testing Days 2011 – A Díaz & Hilterscheid Conference

The Agile Testing Days is an annual European conference for and by international professionals involved in the agile world. This year’s central theme is “Interactive Contribution”.

Díaz & Hilterscheid Unternehmensberatung GmbH Kurfürstendamm 179 10707 Berlin Germany

Please visit our website for the current program.

Phone: +49 (0)30 74 76 28-0 Fax: +49 (0)30 74 76 28-99 [email protected] www.agiletestingdays.com

Tutorials – November 14, 2011
• Lisa Crispin: “Hooray! We’re Agile Testers! What’s Next? Advanced Topics in Agile Testing”
• Janet Gregory: “Transitioning to Agile Testing”
• Johanna Rothman: “Making Geographically Distributed Projects Work”
• Esther Derby: “Dealing with Differences: From Conflict to Complementary Action”
• Linda Rising: “Influence Strategies for Practitioners” / “Patterns for Improved Customer Interaction”
• Michael Bolton: “Critical Thinking Skills for Testers”
• Gojko Adzic: “Winning big with Specification by Example: Lessons learned from 50 successful projects”
• Elizabeth Keogh: “Introduction to BDD”
• Lasse Koskela: “Acceptance Testing: From Brains to Paper and Paper to Computer”
• Jurgen Appelo: “Agile Management: Leading Software Professionals”

Conference (Day 1) – November 15, 2011 Time

Track 1

Track 2

Track 3

Track 4

08:00–09:25

Registration

09:25–09:30

Opening

09:30–10:30

Keynote: “Agile Testing and Test Management” – Johanna Rothman

10:30–11:30

“What Testers and Developers Can Learn From Each Other” David Evans

“Specification by Example using GUI tests – how could that work?” Geoff Bache & Emily Bache

11:30–11:50 11:50–12:50

“Agile Performance Testing” Alexander Podelko

“SQL PL Mock – A Mock Framework for SQL PL” Keith McDonald & Scott Walkty

“The roles of an agile Tester” Sergej Lassahn (T-Systems Multimedia Solutions GmbH)

“Using agile tools for system-level regression testing in agile projects” Silvio Glöckner

Talk 5.2

Break “Do agile teams have wider awareness fields?” Rob Lambert

“Experiences with Semiscripted Exploratory Testing” Simon Morley

“Design For Testablity is a Fraud” Lior Friedman

12:50–14:20

Lunch

14:20–15:20

Keynote: “Who do You Trust? Beware of Your Brain” – Linda Rising

15:20–16:20

“Top Testing Challenges We Face Today” Lloyd Roden

“Session Based Testing to Meet Agile Deadlines” Mason Womack

16:20–16:40 16:40–17:40

Vendor Track

“Unit testing asynchronous JavaScript code” Damjan Vujnovic

“Automated Functional Testing with Jubula: Introduction, Questions and Answers” Alexandra Imrie

Talk 5.3

“Automated testing of complex service oriented architectures” Alexander Grosse

Talk 5.4

Break “I don’t want to be called QA any more! – Agile Quality Assistance” Markus Gaertner

“TDD with Mock Objects: Design Principles and Emerging Properties” Luca Minudel

“Agile on huge banking mainframe legacy systems. Is it possible?” Christian Bendix Kjær Hansen

17:40–18:40

Keynote: “Appendix A: Lessons Learned since Agile Testing Was Published“ – Lisa Crispin & Janet Gregory

18:40–18:45

Closing Session

Exhibitors & Supporters 2011

Conference (Day 2) – November 16, 2011 Time

Track 1

Track 2

Track 3

Track 4

Vendor Track

“Micro-Benchmark Framework: An advanced solution for Continuous Performance Testing” Sven Breyvogel & Eric Windisch

Talk 5.5

“Beyond Page Objects – Building a robust framework to automate testing of a multi-client, multilingual web site” Mike Scott

Talk 5.6

08:00–09:25

Registration

09:25–09:30

Opening

09:30–10:30

Keynote: “People and Patterns” – Esther Derby

10:30–11:30

“About testers and garbage men” Stefaan Luckermans

“ATDD and SCRUM Integration from a traditional Project methodology” Raquel Jimenez-Garrido

“Test automation beyond GUI testing” H. Schwier & P. Jacobs

11:30–11:50 11:50–12:50

Break “Do we just Manage or do we Lead?” Stevan Zivanovic

“Agile ATDD Dojo” Aki Salmi

“Make your automated regression tests scalable, maintainable, and fun by using the right abstractions” Alexander Tarnowski

12:50–14:20

Lunch

14:20–15:20

Keynote: “Haiku, Hypnosis and Discovery: How the mind makes models” – Elizabeth Keogh

15:20–16:20

“A Balanced Test Strategy Strengthens the Team” Anko Tijman

“Effective Agile Test Management” Fran O’Hara

“Sustainable quality insurance: how automated integration tests have saved our quality insurance team.” Gabriel Le Van

16:20–16:40 16:40–17:40

“Automate Testing Web of Services” Thomas Sundberg

Talk 5.7

“Real loadtesting: WebDriver + Grinder” Vegard Hartmann & Øyvind Kvangardsnes

Talk 5.8

Break “Testing your Organization” Andreas Schliep

“Get your agile test process in control!” Cecile Davis

“Measuring Technical Debt Using Load Testing in an Agile Environment” Peter Varhol

17:40–18:40

Keynote: “Five key challenges for agile testers tomorrow” – Gojko Adzic

19:00–23:00

Chill Out/Award Event

Collaboration Day – November 17, 2011 Time

Track 1

Track 2

Track 3

Track 4

08:00–09:25

Registration

09:25–09:30

Opening

09:30–10:30

Keynote: “No More Fooling Around: Skills and Dynamics of Exploratory Testing”  – Michael Bolton

10:30–11:30

Open Space – Brett L. Schuchert

11:30–11:50 11:50–12:50

Testing Dojos – Markus Gaertner

Coding Dojos – Markus Gaertner

Break Open Space – Brett L. Schuchert

Testing Dojos – Markus Gaertner

Coding Dojos – Markus Gaertner

12:50–13:50

Lunch

13:50–14:50

Keynote: “Stepping Outside” – Lasse Koskela

14:50–16:50

TestLab – B. Knaack & J. Lyndsay

Open Space – Brett L. Schuchert

Testing Dojos – Markus Gaertner

Coding Dojos – Markus Gaertner

16:50–17:50

Keynote: “The 7 Duties of Great Software Professionals” – Jurgen Appelo

17:50–18:00

Closing Session

TestLab – B. Knaack & J. Lyndsay

TestLab – B. Knaack & J. Lyndsay

Column

Agile Testing in Real Life: March 2011 – Practice, Practice, Practice! by Lisa Crispin

There's an old joke New Yorkers tell. A tourist asks a New Yorker, "How do I get to Carnegie Hall?" Answer: "Practice, practice, practice!" In Malcolm Gladwell's book Outliers, he discusses the 10,000-hour rule: the key to success in any field is to practice a specific activity for 10,000 hours, an idea based on a study by Anders Ericsson (http://en.wikipedia.org/wiki/Anders_Ericsson). When and how do we, as software developers, practice?

As a tester, there are many skills I need to continuously improve, from "thinking skills" such as the ability to collaborate effectively to technical skills such as the ability to design maintainable automated tests. In our daily jobs, most of us don't even think about practicing; we have real work to do. But if we never get better at our work, we'll become frustrated, we may deliver software that does not meet our quality goals, and our work will be less rewarding. We must look for opportunities to practice.

In last month's column, I mentioned a few different ways testers can hone our skills, including Testing Dojos and Weekend Testing. In the past few weeks, I've had the opportunity to participate in both of these activities, as well as a Code Retreat. Here are my experiences with these learning opportunities, and the benefits I reaped from joining in. I hope these might inspire you to try one or more of these events yourself!

Weekend Testing (Actually, Weeknight Testing)

The European Weeknight Testing (http://weekendtesting.com) sessions are scheduled during my own lunchtime, which makes it convenient for me to join in. Each session has a facilitator and a mission. We spend an hour working on the mission, individually or in pairs, and then debrief to find out what approaches and techniques each person tried, what they found out about the mission, and what they learned.

In my first session, I paired with Darren McMillan (who's in the UK), and we decided to plan and track our testing using a mind map. Our mission was to test TinyURL, and we tried all kinds of test cases: happy path, negative, boundary conditions, edge cases such as URLs containing accented letters, and various browsers. In the debrief, other participants told of their ideas, such as trying URLs in other character sets, turning Javascript on and off, and various heuristics that were new to me. This session gave me ideas for testing our own team's web application, which is especially helpful given that we are embarking on some major UI redesign and will need to do similar types of testing.

Testing Dojos

At Belgium Testing Days in February, I spent some time at the Testing Dojo, facilitated by Markus Gaertner. The first mission was to explore Google Refine. The first time I "test drove", we explored the idea of using Google Refine to help refactor FitNesse tests. I had trouble understanding how to use the application (which to me indicated problems with it), but we got far enough in the time period to see there might be possibilities and it could be worth pursuing later. In the second session I attended, Markus and I paired up to plan a house using an application called "Planning Wiz". We found it awkward that the walls and objects did not seem to 'snap together', and searched the online help for clues whether there was such a feature, to no avail. (You can read more about the Testing Dojo at http://www.shino.de/2011/02/19/belgium-testingdays-testing-dojos-report/).

I don't get to pair a lot at my regular job, so I appreciated the chance to practice pairing at the Testing Dojo. It reminded me of the power of having two sets of eyes and two brains. Since then, I've been quicker to ask someone to come sit with me to help solve a problem or review a test design or potential bug. I'm inspired to organize a Testing Dojo for our local testing user group.


Code Retreat, Boulder One night I was checking Twitter and noticed a tweet about a Code Retreat scheduled for Boulder, Colorado in February. I live about an hour (in good traffic) from Boulder, and I’ve heard so much about code retreats, I’ve always wanted to participate in one. I was rather scared to sign up, because these are clearly for practicing writing code, and the only code I write is test automation code. Still, I want to get better at designing maintainable automated tests, and the organizers, including Corey Haines, encouraged me to attend.

The Code Retreat consisted of six 45-minute pairing sessions in which each pair had to do TDD in the language of their choice, trying to solve the problem of Conway's Game of Life. At the end of each session, we had to delete all of our code. This kept us focused on practicing. Between each session, people shared things they had tried and what they learned. The facilitators (Corey Haines and Chad Fowler) circulated around the room, looking at everyone's code and making observations and suggestions. (You can learn more about Code Retreat at http://www.coderetreat.com/how-it-works.html).

I definitely embodied the "Be the Worst" pattern at this event. Out of sixty participants, I was the only tester. My best programming language is Ruby, but I'm not that fluent in it. However, I've worked with teams who have practiced TDD and pair programming for eleven years, so at least I've seen it in action; these practices were new to many of the participants. Fortunately, highly experienced programmers were willing to pair with me, and it was fun to brainstorm new ideas for solving the code design problem in each session. I learned more about good code design in 45 minutes of pairing at Code Retreat than I could from days of reading books. For example, though I've heard the concept that tests and code must reveal intent, I had not practiced the art of naming things well and seeing the resulting benefits. I actually got to practice these things over and over, without any pressure of needing to complete some task for work. The following week at work, I needed to update some FitNesse tests. I noticed they were full of duplication and difficult to understand, so I invested time to refactor them and leave them in better shape than I found them.

Find Opportunities for Practice

No matter where you live, you can get some testing practice with an organization such as Weekend Testing. Check your local user groups to see if there are any Testing Dojos or Code Retreats scheduled. If you don't find anything, organize one yourself; we have such a wonderful Agile community that you can find experienced facilitators willing to help and even companies willing to sponsor. If you have trouble acting on your desire to practice your skills, find a partner with whom you can exchange learning goals. Encourage each other at least once per week through emails, questions, and offers of support.

10,000 hours of practice is a lot – 20 hours a week for 10 years. So practice as much as you can at your job, too. Don't always go with the first solution you think of. Experiment with different solutions. Get someone to pair with you and focus on doing the best work you can do. This investment will pay off by enabling you to help deliver a better product, and to enjoy your work more in the process.


> About the author Lisa Crispin is an agile testing coach and practitioner. She is the co-author, with Janet Gregory, of Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009). She specializes in showing testers and agile teams how testers can add value and how to guide development with business-facing tests. Her mission is to bring agile joy to the software testing world and testing joy to the agile development world. Lisa joined her first agile team in 2000, having enjoyed many years working as a programmer, analyst, tester, and QA director. Since 2003, she's been a tester on a Scrum/XP team at ePlan Services, Inc. in Denver, Colorado. She frequently leads tutorials and workshops on agile testing at conferences in North America and Europe. Lisa regularly contributes articles about agile testing to publications such as Better Software Magazine, IEEE Software, and Methods and Tools. Lisa also co-authored Testing Extreme Programming (Boston: Addison-Wesley, 2002) with Tip House.

Knowledge Transfer – The Trainer Excellence Guild

Agile Requirements: Collaborating to Define and Confirm Needs by Ellen Gottesdiener
• Jun 21–23, 2011 – Berlin

Rapid Software Testing by Michael Bolton
• May 23–25, 2011 – Helsinki

Risk-Based Testing by Hans Schaefer
• Jun 10, 2011 – Kopenhagen

Foundations of Agile Software Development by Jennitta Andrea
• Jul 7–8, 2011 – Berlin
• Jul 11–12, 2011 – Amsterdam

An Agile Approach to Program Management by Johanna Rothman
• Sep 12–13, 2011 – Berlin
• Sep 15–16, 2011 – Amsterdam

Testing on Agile Projects: A RoadMap for Success by Janet Gregory
• Sep 26–28, 2011 – Brussels

From User Stories to Acceptance Tests by Gojko Adzic
• Oct 10–12, 2011 – Amsterdam

Website: www.testingexperience.com/knowledge_transfer.php


Agile Management Matters by Jurgen Appelo

Maybe I should stop talking to people about their managers. Too often it makes me sad. I heard several stories this week about management behavior that completely destroyed the motivation people have to do a good job. In one story a manager refused to let his team members organize meetings with customers, because he considered it his job to handle all important communication. In another story a committee of three managers took care of evaluating job candidates for a new position on a team, but the team members themselves were never asked for their opinion. And today I was told the story of top management imposing a 10% cost reduction target on all teams, resulting in destruction of trust and productivity, because people found ways of cheating the system to make their targets, in the most creative ways possible. After all, nobody wants to get fired.

Why do managers manage their organizations so badly? Why do they make everyone feel negative about management? Best practices for management have existed for decades. Wonderful books for managers have been written by Drucker, Deming, Prahalad and hundreds of other experts. Management and leadership gurus like Buckingham, Mintzberg, and Hamel teach people how to manage organizations well. They know that managers have to do all they can to keep employees motivated and energized. They know that self-organizing teams work better than command-and-control hierarchies. The experts already know that teams need inspiring goals and clear responsibilities. And they know that competence in teams must be developed and nurtured. The gurus already know that the best businesses have scalable organizational structures, and that the whole organization must be involved in continuous improvement efforts.

Ten years ago, with the introduction of the Agile Manifesto, software developers received wonderful guidelines on how to deliver successful software projects. With various principles and practices, software teams were told to focus on individuals and interactions, working software, customer collaboration, and embracing change. And it worked! Reports such as VersionOne's State of Agile Survey prove that Agile organizations have experienced an increased capability of handling changing priorities, better project visibility, improved team morale, accelerated time to market, and much more. [VersionOne 2010]

Except... there are still significant barriers to Agile adoption. Not all organizations are able to transform and be Agile. The major obstacles appear to be management responsibilities. Some of the main issues are organizational culture, resistance to change, availability of capable employees, and lack of management support. These issues are all in the management domain. Managers should be able to solve them. Except, they often don't.

I don't believe this is intentional. When I started out as a manager, I made the same stupid mistakes as most managers before me. My desk was better looking than the desks of "normal" employees. My salary was higher. And my room was bigger, to accommodate my growing ego. As a new manager I simply didn't know any better. I didn't read books about good management practices. I didn't know of any courses for Agile managers. And I certainly didn't know that my management style was even worse than my programming style. However, when business performed badly, I decided to learn. And slowly but surely I learned how to do things well.

However, it seems that many managers don’t. At least they don’t act like they know.

Now that Agile software development has existed for 10 years, people in the software business are reflecting on the results of 10 years of Agile. One result that has come to light is that the
role of the manager in software organizations has largely been ignored. Agile developers know what to do. Agile testers know what to do. And Agile project managers know what to do. But what about Agile line managers? There are at least a hundred books for Agile developers, testers and project managers, but very few for Agile managers and leaders. However, when organizations adopt Agile software development, not only developers and project managers need to learn new practices. Development managers and team leaders must also learn a different approach to leading and managing organizations. Fortunately, times are changing. Agile gurus are realizing that Agile software development can only be a success if it is about more than just software development. In order for Agile to succeed, business as a whole needs to change. Including HR departments, and marketing departments, and financial departments. And management. Managers need to learn what their new role is in software development organizations in the 21st century, and how to get the best out of Agile. In fact, nowadays Agile often starts with management. 80% of Agile transformations are initiated by managers. When management doesn’t change, along with the rest of the organization, any Agile transformation is doomed to fail. That’s why, next time you are discussing Agile with some managers, you might want to ask them a couple of questions. Such as, what are you doing to energize your people? How do you facilitate selforganizing teams? How do you define purpose and constraints? How are you developing competent teams? How do you grow a useful organizational structure? And what approaches do you use to improve the whole business? (These views on management are depicted in Figure 1, showing Martie the Management Model, a 6-eyed good-natured Agile management monster.)

Figure 1: Martie the Management Model

If your managers are not able to answer such questions, consider giving them a book or two, or convince them to attend a course or conference. And if none of that seems to help, leave! Every employee has the right to work in an Agile organization. Help your manager to become Agile, or find one who already is. Because Agile management matters.

[VersionOne 2010] http://www.versionone.com/state_of_agile_development_survey/10/

> About the author Jurgen Appelo is a writer (http://www.management30.com/book/), speaker (http://www.jurgenappelo.com/speaking/), trainer (http://www.jurgenappelo.com/training/), entrepreneur, illustrator, developer, manager, blogger, reader, dreamer, leader, freethinker, and… Dutch guy. Since 2008 Jurgen has written a popular blog at www.noop.nl, which deals with development management, software engineering, business improvement, personal development, and complexity theory. He is the author of the book Management 3.0: Leading Agile Developers, Developing Agile Leaders (http://www.management30.com/), which describes the role of the manager in Agile organizations. He is also a speaker, being regularly invited to talk at business seminars and conferences around the world. After studying Software Engineering at the Delft University of Technology, and earning his Master's degree in 1994, Jurgen Appelo has busied himself starting up and leading a variety of Dutch businesses, always in the position of team leader, manager, or executive. Jurgen has experience in leading a horde of 100 software developers, development managers, project managers, business consultants, quality managers, service managers, and kangaroos, some of which he hired accidentally. Nowadays he works full-time developing innovative courseware, books, and other types of original content. Sometimes, however, Jurgen puts it all aside to do some programming himself, or to spend time on his ever-growing collection of science fiction and fantasy literature, which he stacks in a self-designed book case. It is 4 meters high. Jurgen lives in Rotterdam (The Netherlands), and sometimes in Brussels (Belgium), with his partner Raoul. He has two kids, and an imaginary hamster called George.


Agile (Unified Process)

– context for Scrum and framework approach makes it a viable business case
by Hannes Van Baelen

Agile Development is becoming an increasingly popular trend. The current vision of the IT department as a service provider for business units makes the satisfaction of business needs a key performance indicator in the measurement of IT service effectiveness. At the same time emphasis is put on IT-Business alignment to allow adaptation to organizational and market changes. This is reflected in current requirements for applications and IT infrastructure to ensure that the IT needs of the organization are met with the required level of flexibility and support. These points stress the need for a client oriented approach. Traditional development methods operate around fixing requirements, mitigating risk through a formal procedural approach, and validating deliverables that are supposed to fully capture fixed specifications. The cornerstone of this approach is to understand and elaborate the problem beforehand and to firmly define the delivered system’s service offering from the start. The problem with this approach is that it makes simplified assumptions about the collaboration between the multiple roles involved in the software development lifecycle. Not only is the involvement of the business minimized, it is also assumed that business experts participating in requirements’ definition are able to fully grasp the meaning of different validation phases, and able to firmly commit to project goals at an early stage. Written documentation becomes the main medium of communication, and it is assumed that everything is read, understood and validated within foreseen timeframes. Finally, architecture and infrastructure considerations are typically excluded from business review and as such, tend to be viewed exclusively as IT department responsibilities. Adaptive software development recognizes the importance of constant collaboration and adaptation by involving stakeholders and future users in validating working software and allowing


requirements’ definition to be an iterative process. What Agile brings to this picture is the understanding that in the end it is the software that matters, and that those who implement it need to grasp what the project is about. In an environment of trust carefully selected roles turn an IT project into a truly collaborative and iterative effort where business commitment can be achieved and fostered. Working software is far more effective in communicating the requirements of future applications compared to abstract documentation, and allows the people building it to better visualize and grasp the realities of the system. By involving the actual system users, better quality software is delivered and user acceptance is ensured with business units starting to see actual services emerge from the system’s functionality and accept changes as a part of life. This collaborative and adaptive environment provides motivation to understand what is needed and to ensure that the product evolves towards a usable and acceptable result. A very distinctive feature of Agile and Scrum in particular is that project management and release management activities are conceived as collaboration between IT and business. The product backlog is defined together and prioritization of issues into the next sprint backlog comes from a mutual agreement between the business stakeholders and the development team. New and potentially shippable software is delivered within weeks, and the future users of the system actively participate in the testing and definition of change requests. The following figure illustrates the general principles of the Agile development lifecycle: Regardless of the Agile approach used, there is one point commonly shared but sometimes forgotten: you need a context to start from. Even the theory of Scrum imposes a context at the outset of a project. This context consists of architectural and design decisions established during a preliminary analysis activity. Where this principle is not respected, it is all left to the development team to define the context in an incremental and iterative way.

The Agile Unified Process (AUP) is a method that tries to establish this context by augmenting incremental development with the phases of the Unified Process. It keeps all elements of Agile development while imposing architecture and requirements envisioning at the outset of the software delivery lifecycle. Self-organizing teams that collaborate, commit and adapt remain key, and scope is defined through business-driven prioritization based on the added value that each feature is expected to bring to end users. AUP tries to wrap these activities within a framework where Agile development serves to implement this framework and define new applications as customizations. When a non-Agile organization considers switching to an Agile approach, this comes down to a shift in paradigm since it might imply -not to say require- a change in corporate and working culture and specifications for staffing. A culture of trust can’t be established immediately and is not in itself a guarantee for success. There needs to be an investment in the team, environment and infrastructure with development teams formed to establish a good mix of senior and junior members in which the senior members undertake important coaching responsibilities. The collaborative nature of Agile requires that business representatives are committed and available throughout the project lifecycle allowing the Business Units impacted by the project to take an active part in shaping it according to their needs. User Acceptance Testing at regular intervals gives a realistic flavor of the future application and makes user acceptance more likely. On the other hand, documentation overhead is significantly reduced, but a lot of effort goes into communication, review and rework. In order to make the shift towards collaborative, iterative and adaptive development viable, we need to define a framework for future needs. The inherent learning curve that the development team needs to go through in understanding features makes this

a long term return on investment that would be unsuitable for a single application. In order to enhance knowledge sharing and transfer in the long run, it is also important to consider which roles should be assigned to which people. Typically the first action in such an adoption is to assess whether the requirements of switching to an Agile approach in terms of organizational culture, personnel, communication and project needs can be met. In addition, it needs to be determined if the IT needs can be defined in terms of framework features for the whole organization. As previously mentioned, AUP is a variation of the Unified Process. As such it defines the software delivery cycle as composed of four major phases: Inception, Elaboration, Construction and Transition. Inception is a non-recurring phase that serves to define the overall context for the project and a high-level view of the required features for a generic application framework. Elaboration is about going into the details of each feature that will be developed in the iterative and incremental sprints that make up the Construction phase(s). Depending on the project, a number of sprints form a new release (iteration) for testing. Generally a number of iterations result in a new version to roll out. The Transition corresponds to that roll-out. As such, Construction and Transition are iteratively traversed during the development cycle. An overview of these phases and their outputs is provided below:


INCEPTION
Non-recurring phase. Context and environment are defined for a first prototype containing all generic features for future application needs. The challenge lies in identifying architectural and infrastructure components that cover future needs and are apt for Agile development. The implications for existing components and backend systems must also be taken into account. Often a unified data model will be introduced. The output of this phase should be carefully documented, and this documentation of the context must be maintained. This can happen in a pragmatic way, but it is very important for it to be available to technical team members.
Output – Context: high-level conceptual data model, GUI prototypes for generic features, technology stack for the framework, backlog of generic features.
Output – Environment: development infrastructure, test environments, build and deployment mechanism, version management, DBA procedures, tools for follow-up of development (sprint backlog, status of stories, velocity, etc.).

ELABORATION (context for Scrum)
First Elaboration (one time): proof of concept. A selection is made of features that are components of the generic framework defined before. The look-and-feel is described in more detail, and all data needs are addressed with a more precise conceptual data model. The first time this phase occurs the features are more generic (login, GIS, ...). The development infrastructure is deployed and evaluated. By now the team is actively involved in the implementation choices for the context to be built.
Output: generic prototype of the framework, proof of concept for the architecture, evaluation of the environment and way of working, functional acceptance testing.
Each new project: from then on, each new application starts with such a phase in which the specific needs are defined as customizations within the generic framework. Data modelling requires stakeholder participation and must be detailed before implementation can start.
Output: preliminary analysis and design for the specific application, project backlog features prioritized into sprint backlogs, GUI design, data modelling, navigation map.

CONSTRUCTION (iterations of x sprints)
Several sprints make up iterations. One iteration corresponds to a new release to be fully tested. Sprints result in customization of the features by means of a maintainable programming model.

TRANSITION
After demonstrations and User Acceptance Testing, a new version is released in the (pre-)production environment.

Agile Development Cycle: the Agile Software Factory is up and running. All modifications to the overall context must result in updating the documentation thereof.

The Inception phase allows the architecture and infrastructure for future development to be captured in 'architectural envisioning' and 'requirements envisioning' activities that involve architects, analysts and representatives of all business units. The outcome is the definition of the features that would make up a framework within which all application needs of the organization are covered. This phase begins with the analyst and architect profiles and ends with the definition of the full future development team. Initially these four phases take place to implement the generic framework that will put in place the building blocks for future applications. From this point on, applications are developed through projects that begin with an Elaboration phase and continue with sprints to complete the Construction and Transition phases. This highlights the importance of the initial architectural requirement that allows the framework to evolve to a point where a new application can be mapped onto existing features. In fact it comes down to setting up an Agile software factory after aligning the whole organization around a framework covering future needs. Once this generic framework is developed, each new application begins with some in-depth analysis and design around the defined project backlog features to be translated into stories for the sprints. By formalizing preliminary analysis and design activities, AUP maintains the enabling features of Agile while putting boundaries within which IT building blocks are being created to ensure a good fit with the overall business needs of the organization.



> About the author Hannes Van Baelen is a senior consultant at TRASYS where he performs business, functional and technical analysis for implementation projects. Hannes is a Master in History and holds a degree in Applied Computer Science. He has been involved in the development of custom-built web applications and large integration and migration projects in analyst roles. Having a great interest in organizations and delivery methodologies, his first mission in an Agile environment made him a true believer. For TRASYS he presents Agile (Unified Process) to clients as part of IT Governance missions.


September 9–11, 2011 in London, UK The Conference for Product and Requirements Management in Agile

Call for Papers The call for papers ends by April 15, 2011. The theme of this year’s conference is “Building software that matters”. Go to www.agilereq.com and submit your paper!

Topics of interest: Emerging practices for managing requirements and specifications in Agile/Lean processes · Release planning in Lean and Agile · Evolutionary product management · Agile business analysis · Providing big-picture visibility and feedback to sponsors · Engaging business users and stakeholders · The roles of business analysts and product owners in lean/agile team · Product management/requirements challenges for distributed teams · Experience reports of applying user stories · Transitioning from traditional product management practices to agile ideas · Value-streams · Applying agile in regulated environments · Requirements traceability · Estimation and planning in Agile and Lean · Applying Minimum Marketable Features · Focusing development and delivery on things that really matter · Core domain distillation

www.agilereq.com

Test Planning and Execution in a Mobile Game Development Project using SCRUM
by José Carréra Alvares Neto

The information technology industry increasingly realizes the importance of conducting, in a careful and efficient manner, verification and validation activities, which include software testing. Here the testing individuals, along with the rest of the team, work to assure that the developed software meets all clients' needs and is of a high quality standard. To achieve this goal, an effective test plan is indispensable. From the beginning of the project, a test engineer should be present, because this allows us to plan ahead and to find and fix defects as soon as possible in the development life cycle. After all, as mentioned in the testing literature, the sooner defects are found, the lower the costs to fix them will be and the higher the probability that their correction won't cause new bugs (Glenford J. Myers, "The Art of Software Testing"). This article describes how testing activities were performed in a mobile game development project using SCRUM as the management process. It will describe in detail the testing strategies used, along with the best practices identified and the lessons learned. The main goal of the article is to assist other test engineers who are starting in game development projects, so that they can easily and rapidly adapt to the differences compared to standard software development projects. This will also allow them to contribute to the creation of new and even more effective testing techniques.


The project
All the techniques and lessons learned described in this article were experienced during a project developed at C.E.S.A.R. (Recife's Center of Advanced Studies and Systems) from December 2007 to March 2008.

Testing strategy
Planning – First sprint
Testing activities started before the end of the project's first sprint with the arrival of a test engineer. As a first task, a complete analysis of the Basic Game Design Specification (BGDS) was made. This document summarizes all basic game features. After evaluating the BGDS and all the client's requests and needs, a simple test strategy was defined: manual test cases were documented as soon as the sprint started, using a simple priority scheme based on the complexity of the selected story and the implementation order. At this initial stage we also identified and solved all test environment needs, like available hardware, SIM cards, bug tracking software, etc. Finally, a set of general test cases made available by the client was also evaluated prior to starting test execution.

Since the project included, amongst others, elements like short duration, a small team, frequent client involvement, and constant requirement changes, it was decided to apply SCRUM, an agile development methodology, to help manage all activities during the project. In accordance with SCRUM, all tasks to be developed were listed as backlog items (BLIs). These were elected to be developed in short development cycles ("sprints"), whereby at the end of each sprint a new version of the product was released to the client. In our project, each sprint lasted 10 work days, and the items to be developed were chosen by the team during sprint planning meetings. During the sprint planning meeting, all team members had to prioritize all BLIs in order to help decide which tasks had to be developed during the following sprint. From a test engineer's point of view, the approach was to always try to anticipate the features that appeared to be critical for system behavior and for meeting the client's needs. Priority could be assigned based on complexity or importance; the intention was to prevent bugs relating to these features from being found late in the project life cycle.

Test case design
One of the initially defined constraints of the project was that all documented BLIs should be covered by test cases, and the BGDS was considered as the test oracle. Based on project characteristics like short duration, scarce BLI documentation, and frequent changes, we decided to design test cases in a more general way, with a focus on testing the game's basic functionalities for each BLI.


This resulted in a set of test cases similar to those used for "sanity" tests. This way we expected to maximize the time spent on test execution and to avoid spending excessive time on documentation.

Test execution
The test specification consisted of a spreadsheet with a set of test cases sent by the customer and a group of scenarios designed specifically for the game by the test engineers. The MANTIS bug tracker (http://www.mantisbt.org/) was used as the defect management tool. During the execution phase, the following activities were planned to be executed in each sprint: the main focus was the execution of the exploratory tests as features were released throughout the sprint, along with the execution of the client's set of test cases and the game's specific group of test cases. The use of exploratory testing is generally encouraged for projects with characteristics similar to ours, and when executed by experienced test engineers, such "exploratory tests can be much more efficient than the tests performed following scripted test cases" (James Bach). During the course of each sprint, an important task performed by the test engineer was to effectively monitor the progress of all developers' activities in the current sprint. This was done in order to define the best time to request the release of intermediate versions of the game for component testing and also to avoid defects or change requests being raised for features that had not been fully released by the development team. This monitoring of activities was made easier by the SCRUM methodology where, during the daily meetings, we could easily follow project activities through the burndown graph. As previously mentioned, the decision to prioritize the exploratory tests was made due to the project's main characteristics, such as a lack of extensive documentation at the beginning of the project and frequent changes of the client's needs and requirements.

Formal tests
The complete set of test cases consisted of the client's standard test cases together with game-specific test cases, which came to a total of around 150 tests. However, not all specific tests were executed in the initial sprints, since most of the features were not yet developed. A complete test execution cycle would take an average of three days. During the rest of the sprint other testing activities like test design and maintenance were performed, along with exploratory testing and change request validation. The client's set of test cases mainly focused on assuring that the company's standards were being followed by our team. This concerned features like key mapping, performance, interaction and user interface. Taking this into consideration, it was mandatory to run these tests for each sprint in order to assure that the developed game suited all the client's demands and, above all, that it wouldn't interfere with the mobile phone's basic features.

At the end of each sprint an intermediate version of the game was released to the client, who could analyze the delivery and provide feedback to our team. This usually involved aspects like game play, game design and defects. Through an analysis of this feedback we could figure out which areas were more relevant to our client, adjust our test strategy accordingly, and then focus on the missed defects in the next sprint.

Exploratory tests
Exploratory tests, which were chosen as our main test execution strategy due to the project profile, began early in the initial sprints. In a traditional approach, informal test charters were prepared focusing on a specific area or BLI to be tested. During the course of the project, as the Game Design Specification became more mature, we started using two new approaches for running the exploratory tests, which are described below.

Test case based
In this approach the actual test cases were used as the focus area for each exploratory test, whereby the steps of the test case were run in an unusual way. The tester is encouraged to diverge as much as possible from the specified test steps, and to try and think of alternative paths that could be taken instead of the one suggested by the test case. The main idea is to use the existing test cases just as a reference in order to cover all the features of the application and to leave it to the test engineer to evaluate the relevance of the features to the system. By doing this, we encourage the test engineer to think creatively to find new ways, or ways not previously foreseen, to break the software. This approach achieved very good results and certainly increased the total number of relevant defects found. If this approach is to achieve a high degree of success, it is very important for the test engineer to know all existing features of the system, the market and the client's expectations. He needs to fully concentrate on his work in order to notice details that may have escaped before. It is also important that the engineer can work in a comfortable environment during test execution without being constantly interrupted, thus allowing an efficient analysis of the existing test cases and unexplored possibilities.

GDS scanning
A different approach, which we used in the later sprints, was based on the execution of the exploratory tests through a complete scan of the GDS. This approach couldn't be applied fully in the initial sprints, because only the BGDS was available, which didn't contain enough information to allow a more detailed execution. For this technique to be applied successfully, the test engineer needs to have already read and understood the document, and there should be no open questions. The main idea of this approach is to make sure that every description included in the GDS is correctly coded in the game. By simultaneously analyzing the document and exploring the game, it becomes visible if any important scenario isn't well described in the text. With this approach we can combine the benefits of static testing of documentation with the advantages of exploratory testing of the application.


Any gaps between game and documentation can easily be found, and the game designer can clarify the type of any bugs found.

Registered defects
During the course of the project, 122 change requests (CRs) were registered, which were classified for this article into four categories, according to the type of defect: functional, user interface, art, and sound effects. Later in this article we describe how each of these areas presented unique and important characteristics. In addition to these descriptions, two graphs indicate the number of CRs per type and the severity of the registered defects. The severity of each bug was classified as follows: (i) minor (for defects that do not block the game's correct behavior, e.g., the phone doesn't vibrate when a new level is unlocked); (ii) normal (for defects that affect important elements of the game, but do not block the game's execution, e.g., a special effect lasts shorter than the value specified in the GDS); (iii) major (for defects that directly affect gameplay or user satisfaction, that have a direct impact on the game's level design, and that prohibit the player from proceeding, e.g., scenarios where the game freezes); (iv) critical (for defects similar to major defects, but with an even higher impact on the game's correct operation, e.g., a level is not unlocked after completing tasks, or the phone doesn't receive calls while the game is started).

Functional
The CRs from this group are related to inconsistencies regarding the game's designed rules and logic. This type of defect, even with lower severity, has a direct impact on the game's success, because such defects usually get in the way of a smooth understanding of the game's objectives. They may even block the player from overcoming the challenges presented, turning the game into an impossible mission. Scenarios like score limits, the lousy player and excellent player approach, and other possible features that involve testing the game's features and limits, were tests that usually detected this kind of defect. These scenarios were not always well described in the GDS, and developers didn't take them into consideration or unit test them properly.

Art
All change requests of the "art" category are related to features like image rendering, lighting, and any other aspects related to the elements produced by the art team (although some of them may also be performed by the development team). This type of defect varies widely from huge perceptible failures, which can be easily noticed, to specific scenarios that are caused only by a defined sequence of actions. This kind of defect can be found not only by focusing on this aspect during testing, but also while running any other type of test. All that is needed is a highly alert tester who pays attention to details. It is highly important for the test engineer, especially if he is not experienced with this kind of defect, to interact with the art team to clarify the questions related to possible defects, and in doing so begin to understand the features, their limits and their solutions.

Sound effects
We also assigned defects to a sound effects category, because it turned out to be a key area where initially we did not expect to find a relevant amount of bugs. However, testing showed that this assumption was wrong. Several defects were found which demanded considerable work from the development team to get them fixed. One aspect observed, concurrent event execution, caused complications in the game's general behavior. This concerns scenarios such as executing a sound while another one is already playing, user-initiated pauses of the game, and disabling and enabling the sound. In addition, severe performance problems could result from some of the game's sound events. Throughout the course of the project, this kind of defect, which initially was underestimated, gained higher priority and attention. We found that in this area we had a higher probability of causing other defects while fixing one. At first, test execution for this aspect was impacted by the quality of the available hardware, which presented a bad sound quality. Later on, with the arrival of new hardware, tests could be easily executed and showed better results. Therefore it is important for the testing team to ensure the availability of the correct hardware at the beginning of the project.

User interface
Interface defects were connected to failures during the display of texts, opening of pop-up windows, screen limits, etc. These issues could be easily noticed by any player, and would definitely give the impression of a poorly developed game, without care for details. These defects, although easily detected, frequently escape the development team and even the initial test cycles. This can happen because developers usually run tests using a simulator and not a real device. It is part of the test engineer's job to assure that all game screens and texts are checked on the supported devices.


Figure 1 – Amount of defects registered by type


Figure 2 – Severity of reported defects

Best practices
Below we describe some of the practices that were applied in our project and that presented good results. These could therefore also be applied to projects in different areas.

BLI defect tracking
Every time a new change request was submitted during the course of the project, along with its short description, a tag was added to identify which BLI it was related to. At the start, this was only used to help the Configuration Control Board (CCB) with the CR assignment, but later on it aided the team in evaluating which BLIs presented more defects of higher severity. On the basis of this information we could plan test execution focusing on two aspects: (i) validate whether BLIs with few or no defects were sufficiently tested; (ii) analyze BLIs that showed a greater amount of defects, their characteristics, and which test scenarios could be re-tested or added to allow the discovery of new bugs (a small sketch of such a per-BLI tally is shown after this section). Greater emphasis was applied to the second aspect, as described by Myers: "The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section". This approach proved to be efficient, as new errors were found. Some adjustments were made to make this task faster. For example, instead of adding the BLI identification in the description header, we added a new text field to the defect management tool so that we could identify the BLI and later on easily identify the defects found.

Greater level of detail in defect description
One of the most important activities of the test engineer is recording the defects found in a defect management tool. Nevertheless, some people didn't perform this task as expected. Issues were sometimes not completely described, making it harder for managers and developers to understand the error. Later on this generated several interruptions for the test engineer in order to clarify the issue description or, even worse, to discard the defect. Therefore, it is very important for the defects to be reported in a detailed and didactic manner, making everyone's job easier, including other testers that might be involved in retesting after the issue gets fixed. To assist in this task, one real benefit is to add a recorded video of the issue or, if that is not possible, at least a screenshot. If the issue can't be reproduced on the simulator, use a regular camera to capture it. Just remember that this is not a rule. It's up to you to evaluate whether a CR should be improved by adding some extra resource (after all, not all issues have visual feedback).
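As a rough illustration of the per-BLI analysis described above, the following minimal sketch (not taken from the original project) tallies change requests per BLI and severity from a CSV export of the defect tracker. The file name and the column names "bli" and "severity" are assumptions; they would have to match whatever your own tracker exports.

```python
import csv
from collections import Counter, defaultdict

def tally_defects(csv_path):
    """Count change requests per BLI, with a severity breakdown per BLI."""
    per_bli = Counter()                      # total CRs per backlog item
    per_bli_severity = defaultdict(Counter)  # severity counts per backlog item
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            bli = row["bli"].strip()                     # hypothetical column
            severity = row["severity"].strip().lower()   # hypothetical column
            per_bli[bli] += 1
            per_bli_severity[bli][severity] += 1
    return per_bli, per_bli_severity

if __name__ == "__main__":
    totals, breakdown = tally_defects("crs_export.csv")
    # BLIs with the most (or most severe) defects are candidates for extra
    # test scenarios; BLIs with few defects deserve a coverage check.
    for bli, count in totals.most_common():
        print(bli, count, dict(breakdown[bli]))
```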

Defect management tool
Another highly important asset to assist the test engineer's work is the defect management tool. In our project the open source tool MANTIS was used. It provides the engineer with effective control of the registered defects, allows trend analysis and, most of all, enhances team communication.

Taking notes
Keeping an electronic notepad always open, or even a piece of paper and pen on your desk, can really help during test activities. With the time pressure and tight deadlines always present, it is not always possible to test all the scenarios that we foresee during testing. This may happen because of the need to keep focused on the current test cycle or on other tasks being performed. If not written down somewhere, these other activities or even hints observed during team meetings may get lost and never be tested. Making a habit of note-taking for any relevant information that might help in future activities (for example, a new test scenario or system characteristic, some user feedback, etc.) will help the test engineer to avoid forgetting interesting investigations that could be performed and assist in finding new bugs.

LESSONS LEARNED
Throughout the project, many new experiences were faced and a lot was learned by working in a project with very different characteristics to those found in regular software development projects.

Test case design
After gaining some experience in test case design for games, we observed that tests related to functionalities didn't need to be repeated on different game scenarios, because the error found applied to the whole application (e.g., pressing a specific key doesn't produce the expected behavior). On the other hand, when considering the user interface and art, this type of test needed to be repeated for each scenario of the application, since some errors can be found only in specific scenarios.

Using cheat codes
As the game evolved, a couple of cheat codes (code that provided special game advantages that would not be available in the final version, such as "invincibility") were created to make some tests easier to execute for both developers and testers. However, we need to be extremely careful when deciding to use this type of assistance. If misused, they can lead to hidden bugs or, conversely, bugs that only appear due to the existence of the cheat code. An example that happened to us was a cheat code that unlocked a game level, which blocked developers from reproducing a bug reported by testers. If correctly used, cheat codes can increase the team's performance and even help anticipate bugs.
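One way to reduce the risk described above is to make cheats impossible to trigger outside dedicated debug builds and to log every use, so that a CR can state whether a cheat was active when the defect was observed. The sketch below is purely illustrative and is not the project's actual game code; the flag, class and function names are invented.

```python
CHEATS_ENABLED = False  # flipped to True only in dedicated debug/test builds


class Game:
    """Minimal stand-in for the real game object (illustrative only)."""
    def __init__(self):
        self.unlocked_levels = set()

    def log(self, message):
        print(message)

    def unlock(self, level):
        self.unlocked_levels.add(level)


def cheat_unlock_level(game, level):
    """Refuse the cheat unless the build explicitly enables it, and log the
    use so bug reports can say whether a cheat influenced the scenario."""
    if not CHEATS_ENABLED:
        raise RuntimeError("cheat codes are disabled in this build")
    game.log(f"CHEAT used: unlock_level({level})")
    game.unlock(level)
```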


Team communication
It is also important for the testing team to define specific time intervals throughout the project when reports from test execution cycles will be sent to the rest of the team. This helps to make our work visible and understandable for the entire team. Although SCRUM already makes task communication easier among team members by applying the daily meetings and burndown graphs, we still need a more formal type of communication. A recommended moment for these reports is at the end of each sprint.

Activities followed up using the burndown graph
As test engineers we commonly need to assure that we are using the latest available versions for testing. During the course of a sprint, the time for requesting new partial versions can be determined through the daily meetings and the burndown graph, where we can be aware of the stage of each BLI and agree with the developers on the number of features available for testing (a simple sketch of this tracking appears after this section). The test engineer needs to keep constantly in touch with the team leader to assure that these intermediate versions get released for component and exploratory testing.

BLI changes closely followed
Generally on Agile projects, but especially on ours, a great amount of change happened to the BLIs, which directly impacted the previously designed test cases. Therefore, it is highly relevant to stay attuned to changes caused by the client's feedback, usability tests and reports, meetings and also technical limitations. This follow-up is made easier when the Game Designer works closely with the rest of the team and keeps everyone informed when changes occur. This way, any questions related to any feature of the game can be easily discussed and clarified with the Game Designer.

Informally reported defects don't get fixed
Just like a tester forgets to test scenarios that he doesn't document, informally reporting a defect to a team member (developer, GD or artist) gives no guarantee of a fix. No matter what the severity of the report is, the informal communication creates a high risk that it doesn't get fixed or, if fixed, that it may not be validated.

Retest with different devices
Throughout the project several errors were found which were related to the game's interaction with specific devices or external applications such as phone calls or SMS. This kind of error causes considerable effort for the development team to investigate the issue and to find out the cause. However, this kind of error can't be solved by our team, since it is not related to our game. Therefore, it is a good approach to retest with different devices or with different games with similar characteristics. This extra information won't eliminate the CR, but developers can start analyzing with this aspect in mind.
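The following is the rough sketch of the burndown bookkeeping referenced above. The numbers are invented for the example; the point is simply that a flat stretch in the remaining effort is the cue to raise the topic in the daily meeting and to ask for an intermediate build to test.

```python
def burndown(total_points, completed_per_day):
    """Remaining effort after each working day of the sprint."""
    remaining = total_points
    series = [remaining]
    for done in completed_per_day:
        remaining = max(remaining - done, 0)
        series.append(remaining)
    return series

sprint_backlog_points = 40
daily_completed = [0, 3, 5, 2, 6, 4, 5, 6, 4, 5]  # invented figures, 10 work days
print(burndown(sprint_backlog_points, daily_completed))
# e.g. [40, 40, 37, 32, 30, 24, 20, 15, 9, 5, 0]
```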


This approach can also be applied to different types of scenarios, such as performance, sound effects, rendering and other features which can be compared with similar games.

CONCLUSION
No matter what stage we are at in the development life cycle or the experience level of the development team, there are many causes of software bugs. Most of them are not related to the code itself, but to other problems, such as incomplete or unclear requirements, hardware issues and integration. Therefore, considering the practices and lessons learned from this project, we are convinced that software quality is a high priority for modern software products, like mobile games, and that achieving it must be a common goal for the entire team, with a clear division of responsibilities. To make sure that there are no conflicts between developers, testers and other team members regarding quality, everyone must work together to deliver a high quality product. Only by placing priority on quality will we be able to deliver products that fully meet our clients' needs and expectations.

> About the author José Carréra, MSc, has been a test engineer at C.E.S.A.R. (Recife's Center for Advanced Studies and Systems) since 2006 and Professor of Computer Science at FATEC (Faculdade de Tecnologia de Pernambuco), Brazil, since 2010. He obtained his master's degree in software engineering (2009), graduated in computer science (2007), and is a Certified Tester – Foundation Level (CTFL) by the ISTQB (2009). His main research interests include quality assurance, agile methodologies, software engineering and performance testing.

A Díaz & Hilterscheid Conference


May 9–10, 2011 in Bad Homburg v. d. H., Germany The Conference for Testing & Finance Professionals www.testingfinance.com

Conference program (overview)

Day 1, May 9, 2011 – Registration from 08:00; opening address by José Díaz; keynotes: "Social Banking 2.0" by Lothar Lochmaier (freelance business journalist), "Misunderstandings and Illusions about software testing" by Hans Schaefer, and "Industrial Espionage (at its best)" by Peter Kleissner. In between, four parallel tracks plus a vendor track offer English- and German-language sessions on software testing, banking regulation, and identity and security management. The day closes with an evening event (dinner and theatre).

Day 2, May 10, 2011 – Keynote: "Basel III und CRD IV: Die komplexen Neuerungen zum Eigenkapital und deren Auswirkungen auf die Risikotragfähigkeit der Banken" (Basel III and CRD IV: the complex new capital rules and their impact on banks' risk-bearing capacity) by Prof. Dr. Hermann Schulte-Mattler, followed by a keynote by Achim Wagner (Dekabank) and further track and vendor sessions. José Díaz closes the conference.

The detailed track program, with all sessions, speakers and times, is published at www.testingfinance.com.

May 9–10, 2011 in Bad Homburg v. d. H. (near Frankfurt), Germany The two-day conference Testing & Finance, which is held once a year, brings together quality assurance specialists from the financial world from both home and abroad. Please visit our website for the current program.


Testing & Finance Europe 2011 – A Díaz & Hilterscheid Conference Díaz & Hilterscheid Unternehmensberatung GmbH Kurfürstendamm 179 10707 Berlin Germany Phone: +49 (0)30 74 76 28-0 Fax: +49 (0)30 74 76 28-99 [email protected] www.testingfinance.com www.xing.com/net/testingfinance


Agility in an outsourced context by Jerry E. Durant

Information Technology Outsourcing (ITO) has been, and continues to be, a high-end solution set for companies. Reaching beyond basic web development and legacy application development, buyers look to further their internal service delivery through both re-engineering and new application development. At the core of these sourced relationships is the question of how much a buyer should mandate as it relates to the methods used by the sourcing company. In general, many organizations have for some time been in favor of long-standing traditional methods, such as waterfall and V-model, due largely to their rich use of artifacts and heavy oversight elements. However, the question arises as to whether these elements provide real value and at what cost, or whether companies have been comfortable believing that by using the traditional methods risk is being reduced, when in fact it isn't.

Agile Challenges
The application of agile methods in an outsource context faces some of the same obstacles as can be found when introducing agile methods internally. Questions about the loss of control and insight, the invariable amount of informality that is exchanged for purpose-driven tasks, and the shift from individual to group accountability continue to raise suspicion about result delivery. However, as we have learned from the first few projects, the things we thought were reliable in traditional models are not all that reliable. In fact, most of them seldom delivered the guarantees that were expected. In each and every case it was the teams operating in a cohesive and delivery-focused way that produced results. When we now consider using a distant team that is operating outside of our control, there is a temptation to return to those traditional frameworks. To some extent it may be a question of contractual conditions, and very often it is encased in a question of insight into the project process. First and foremost, a company that has decided on utilizing outsourced resources must be comfortable with the relationship and the reliability of the supplier to deliver. This trust is maybe comparable to that which we would have placed in a newly formed team.


However, before a buyer starts to dictate methods, they must be fully convinced of the methods they are using before prescribing them to their service partner. Therefore, buyers who have not internalized agile methods should refrain not only from prescribing such methods but should also try to avoid suppliers who utilize these methods. The main reason is that the ideology behind traditional and agile methods is dramatically different. To the misinformed buyer, agile would appear too casual. As we know, the paradigm shift is anything but informal and in fact has an underlying formality that capitalizes on the informality dynamics in order to amplify the delivery of results. In cases where sourcing companies utilize agility, success has been delivered through the use of established progress insight (dynamic story boards, sprint progress insight, cumulative working delivery iterations, and formally participative demos and retrospective exercises). These were not done as a surprise, but as part of a highly effective and purposeful project and sprint planning exercise.

Both Ends Working Together
If we assume that your company has software engineering methods that include agility, fully operational and culturally embedded, it would be appropriate to look for suppliers that have a similar maturity. Ideally, the experience level should be close to equivalent, and this is certainly not the case if it is the supplier organization's first or second attempt at using the methods. Ideally, at least two or three projects of varying sizes should have been run under this model and a base of consistent experience established. This will be the basis for the buyer and outsource supplier to effectively communicate with each other, to understand the underlying model-operating dynamics (and ground rules), and to have an appropriate understanding of who serves what role. Without this, constant reliance will have to be placed on some of the formal elements found in traditional process models (e.g. schedules, metrics, documents…). The apportionment of who does what is an important aspect to understand. Reflected below is a fairly common agile project framework.

Agile Project Roadmap

Story Card Development
• Story Workshop – Buyer
• Effort Estimation – Outsourcer
• Business Value Estimation – Buyer
• Risk Estimation – Joint
• True-Up (matching to project support commitments) – Buyer

Sprint Planning Cycle
• Iteration 0 – Outsourcer

Sprint Delivery (1 … n)
• Sprint Detail Planning – Outsourcer/Buyer participation
• Build-Test-Refactor-Deliver – Outsourcer/Buyer participation
• Demo/Test – Joint
• Retrospective – Outsourcer
• ... next sprint

Use of Hardening Sprints
• Cleanup, test, close and stabilization of preceding work – Outsourcer/Buyer participation

Project Finalization – Joint

It is quite customary for companies to enter into outsourcing engagements because they wish to take a passive role. This is often the result of the sourcing relationship being viewed as a service that does not require heavy participation. As history has shown, such viewpoints will in most cases result in disappointment. The use of agility forces buyers and outsourcers to behave in a tight and collaborative fashion in order to attain mutual benefit (and avoid similar disappointments). A failure to do so will result in project discord.

Agile Sourcing Challenges
A cornerstone of agility is co-habitation. Confusion occurs when this is interpreted to mean being in the same physical location. I'm sure that you have seen cases where being in the same room has not always worked as a means of facilitating interchange and awareness. Think back to the last time that you had a dispute with an office mate. The awkward co-habitation wasn't put to rest until a resolution was reached. In the case of outsourced agile projects, one can take some comfort in being not only isolated from this human condition, but also afforded a bit of added value in overcoming cultural differences that can infiltrate projects. What we are looking to establish is a communicative co-habitation where our workspace, dialogs and differences are transparent. Because of time zone variations, this suggests the use of real-time technologies. The effective use of these technologies demands their utilization as a part of our work efforts, and not as a separate additional task to be maintained. Examples include the use of common workspace tools, messaging and email communication services, access into work space arenas (within the outsource supplier), and meeting forum devices. The effective deployment of these technologies strongly suggests a seamless formation in order to reduce duplication and efficiency losses in use.

Testing takes on a higher level of importance in the agile outsourced engagement. Basic agility promotes keeping issues lists small. However, one must give consideration to deploying a zero-bug-backlog mindset, in which no element is considered complete until it is test clean, has been re-tested in concert with other completed stories, and is tested on a repeated basis. Given that some are apt to have problems, these stories should be closed but reintroduced as a story to be addressed in future sprints (or during periodic hardening exercises). This allows the buyer to better understand potential slippage issues while allowing for flexible scheduling to foster a high-quality, defect-reduced delivery. No one is perfect, but anticipating what is likely to happen, and having a means to efficiently manage these situations, will keep things on track and with a high degree of delivery probability.

Finally, a common concern is the handling of changing requirements. These may be the result of discoveries made or modifications that resulted from business redirections. The usual concern is how these should be handled, since they are not within the original context of the bid proposal. It is important to think a bit differently and shift the focus away from contract cost to the work effort on which the bid is based. In essence, we are placing the stories in play as items that can be exchanged. The context of a traditional project would hold us to the agreement as portrayed, whether good or bad. In this situation we develop a joint agreement that stories of similar effort and importance can be freely interchanged, with consideration for the stage in the sprint delivery continuum. Most agile efforts attempt to front load the cycles with mission critical story delivery in order to flush out issues and to serve as the backbone of the application. We would expect that this exercise would reveal additions and adjustments and that interchange would therefore take place to a moderate degree. As the sprint cycles continue, more ideas may be generated, but they are more apt to be forgiven for this project and deferred to the next. In the latter case, these will be handled by future project bid proposals, but will enjoy the added benefit of strengthened contextual understanding.

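To make the story-exchange idea above concrete, here is a small illustrative sketch, not part of the author's framework, of a rule that buyer and outsourcer might agree on: a story may be swapped in if its effort stays within an agreed tolerance of the story being swapped out and it is at least as important. The field names, the tolerance value and the priority convention (lower number = more critical) are assumptions.

```python
def can_exchange(story_out, story_in, effort_tolerance=0.2):
    """Allow a swap if the incoming story's effort is within the agreed
    tolerance of the outgoing one and it is at least as important."""
    effort_ok = abs(story_in["effort"] - story_out["effort"]) \
        <= effort_tolerance * story_out["effort"]
    priority_ok = story_in["priority"] <= story_out["priority"]
    return effort_ok and priority_ok

# Invented examples: swapping an 8-point story for a comparable 9-point story
# is acceptable; swapping it for a 13-point story is not.
print(can_exchange({"effort": 8, "priority": 2}, {"effort": 9, "priority": 2}))   # True
print(can_exchange({"effort": 8, "priority": 2}, {"effort": 13, "priority": 1}))  # False
```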

Doing the Right…
Entrusting your business to others is not something to be taken too lightly. While well-intentioned buyers often rely on their procurement and legal departments to secure suppliers, the intricate details of service delivery must be governed by software engineering directives. In this context the mastery of applied agility on the part of the outsourcer cannot be overlooked. This is where the buyer must fully understand and have experienced the agile framework and the various ways that it can be applied for determinable results. There are many outsourcers who have strategically decided to use agility for its delivery-based value, lower risk potential and reduced cost. These compelling reasons have created temptations to talk-the-talk, but fall short on walking-the-walk. For the same reason, as some believe that agile means no documentation (which really means only documentation with discernible value contribution), the handful of misguided suppliers may employ bad practices that will fall short of being appropriate. Take the time, be involved, employ your agile expertise, and get ready for an exciting and productive outsourced agile experience.


REAL LIFE – Case Study
To illustrate the points presented, let me share with you an example of how agile outsourcing can be practically applied. Exilesoft is a software outsourcing company in Sri Lanka that exemplifies the use of agility as a regular part of the business delivery model. At the core is a strategic commitment to provide a risk-reduced, flexibly transparent, participative mechanism for buyers (users) that will result in lower delivery costs. One example is a project in which Exilesoft provided a service for a leading Norwegian producer of automated warehousing solutions. The project involved transforming their legacy application, produced using multiple suppliers and methods, into a newly cast application solution. As is the case with most projects of this type:

• It lacked definitive and reliable documentation
• Domain knowledge was limited to a few very busy individuals
• Development and redeployment could not interrupt attention to current customers
• Complexity was high and design was fragmented
• It focused heavily on investment in current product and customer support.

These limitations, along with the lack of client understanding of agile methods, strongly suggested the use of a method that was adaptive in nature and not heavily vested in large inflexible legacy elements. Exilesoft commenced the engagement with two pivotal elements: client awareness (agile orientation) and a roadmap of committed involvement. To lay credibility to their words, it was backed up with proven result delivery in the very early stages. It allowed for flexible adaption and the creation of an atmosphere that fostered client interest. What might have been mandated in a traditional development setting created an atmosphere of intense interest 36


> About the author
Jerry Durant is the Chairman Emeritus and founder of the International Institute for Outsource Management, a trade organisation dedicated to assessing, developing, and guiding outsource service providers in the ITO, BPO, call center and KPO industries. He authored the Outsourcing Management Body of Knowledge and implemented the only outsource provider viability assessment model, Global Star Certification (GSC). GSC examinations have been conducted since 1988 in over 70 countries and for over 200 companies. He also established the first International Outsource Management Research Center in Wuxi, China. With over 30 years of IT experience, Mr. Durant, a consummate innovator in outsourcing, is widely regarded for his talent not only in IT, but also in business.


Agile Requirements: Not an Oxymoron by Ellen Gottesdiener

Adult children. Jumbo shrimp. Seriously funny. I’m sure you recognize these expressions as oxymorons—self-contradictory phrases, often with an ironic meaning. Should we add “Agile requirements” to the list? Does Agile development fit in with traditional requirements practices? And if so, how?

Once More into the Breach
Traditionally, defining requirements involves careful analysis and documentation and checking and rechecking for understanding. It's a disciplined approach backed by documentation, including models and specifications. For many organizations, this means weeks or months of analysis, minimal cross-team collaboration, and reams of documentation. In contrast, Agile practices—Lean (http://www.leanprimer.com/downloads/lean_primer.pdf), Scrum (http://www.scrumalliance.org/), XP (http://www.extremeprogramming.org/), FDD (http://www.featuredrivendevelopment.com/), Crystal (http://www.amazon.com/exec/obidos/ASIN/0201699478), and so on—involve understanding small slices of requirements and developing them with an eye toward using tests as truth. You confirm customers' needs by showing them delivered snippets of software.

However, Agile projects still produce requirements and documentation, and they involve plenty of analysis. On the best Agile projects, requirements practices combine discipline, rigor, and analysis with speed, adaptation, and collaboration. Because software development is a knotty "wicked problem" with evolving requirements, using iterative and Agile practices is not only common sense but also economically desirable. Indeed, Agile requirements drive identifying and delivering value during Agile planning, development, and delivery.

Planning
Agile teams base product requirements on their business value—for example, boosting revenue, cutting costs, improving services, complying with regulatory constraints, and meeting market goals. If you're agile, it means that you focus on value and jettison anything in the product or process that's not valuable.


Planning covers not only the “now-view” (the current iteration) but also the “pre-view” (the release) and the “big-view” (the vision and product roadmap), with close attention to nonfunctional (http://www.agile2010.org/express.html#5210) as well as functional requirements. The product roadmap is crucial for keeping your eyes on the prize, especially in large, complex products. You don’t have to know each specific route, but the overall way must be clear. It’s driven by the product vision and marked by industry events, dates, or key features that must be achieved along the route. Customers (or “product owners,” in Scrum terminology) drive Agile planning, constantly reprioritizing requirements and evaluating risks and dependencies. Close customer collaboration is essential. One of the original Agile methods, DSDM (http://www. dsdm.org/), has customer involvement as the first principle. Your Agile backlog, or catalog, of product needs changes constantly—whenever you do planning (e.g., for a release or iteration)

or, if you’re using a kanban/flow model, every time you’re ready to pull in another requirement. Plans are based on deciding what to build, and when. An Agile delivery team works ahead, preparing requirements for development and testing. This preparation is vital to deliver the value as soon as possible, with smooth flow and no thrashing or interruptions in delivery and testing.
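As a rough illustration of this constant reprioritization and pull-based planning, here is a minimal sketch. The story names, the value and risk fields, and the scoring rule are my own assumptions for the example, not anything prescribed by Scrum, kanban, or the author:

```python
from dataclasses import dataclass

@dataclass
class Story:
    name: str
    business_value: int   # relative revenue/cost/compliance impact, 1-10
    risk: int             # delivery or dependency risk, 1-10

def reprioritize(backlog):
    """Re-order the backlog so high-value, high-risk items surface first.

    This mirrors revisiting priorities at every planning event (or before
    every kanban pull); the value-plus-risk score is only one possible policy.
    """
    return sorted(backlog, key=lambda s: s.business_value + s.risk, reverse=True)

def pull_next(backlog):
    """Kanban-style pull: take the top item whenever capacity frees up."""
    ordered = reprioritize(backlog)
    return ordered[0], ordered[1:]

backlog = [
    Story("export monthly report", business_value=5, risk=2),
    Story("comply with new tax rule", business_value=9, risk=7),
    Story("tweak login page styling", business_value=2, risk=1),
]

next_story, remaining = pull_next(backlog)
print(next_story.name)   # -> "comply with new tax rule"
```

The design point is simply that the ordering function runs again every time you plan or pull, so the backlog never hardens into a fixed specification.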

It's All Good
"Agile requirements" isn't an oxymoron, although it may be a bit of a paradox—in the same way that the concise enables the complex, the small gives rise to the large, incompleteness facilitates the finish, and you must slow down to speed up. Indeed, Agile requirements are central to Agile planning, development, and delivery.

Developing
An Agile team's work is based on building concise, fine-grained requirements (typically captured as user stories). Developers need small, tamped-down requirements to work from. Small requirements that have clear conditions of satisfaction (doneness) minimize risk. The team may also sketch organic data models, state diagrams, and interface mockups. These are like micro-specifications: "ready" requirements for pulling into delivery. The team knows enough to estimate, develop, test, and demonstrate the requirements.
Doneness is a key aspect of requirements. I wrote about "done" requirements in my first book (2002, http://www.ebgconsulting.com/Pubs/reqtcoll.php): the team and customer need to know when they understand the requirements enough to build and test. This concept is used often in Agile development and refers not only to requirements but also to the build, test, and release process.

Delivering
Requirements are built and released based on the team's clear understanding of requirements dependencies, which also drive architecture trade-off decisions. Requirements are dependent on each other when each relies on (and thus constrains) the other.
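Returning to the "ready" and "done" ideas above, here is a small sketch of a user story carrying explicit conditions of satisfaction. The story text, the conditions, and the readiness rule are invented for illustration; they are one way to make doneness checkable, not the article's prescribed format:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UserStory:
    text: str
    conditions_of_satisfaction: List[str] = field(default_factory=list)
    estimate_points: Optional[int] = None

    def is_ready(self) -> bool:
        # "Ready" here means: estimated, small, and with testable doneness criteria.
        return (
            bool(self.conditions_of_satisfaction)
            and self.estimate_points is not None
            and self.estimate_points <= 5
        )

story = UserStory(
    text="As a registered user, I can reset my password by e-mail.",
    conditions_of_satisfaction=[
        "Reset link expires after 60 minutes",
        "Old password no longer works after the reset",
        "The reset event is written to the audit log",
    ],
    estimate_points=3,
)

print(story.is_ready())   # True: small, estimated, and checkable
```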

> About the author
Ellen Gottesdiener
EBG Consulting, Inc., Principal Consultant and Founder
Ellen Gottesdiener helps business and technical teams collaborate to define and deliver products customers value and need. Ellen is an internationally recognized facilitator, trainer, speaker, and expert on requirements development and management, Agile business analysis, product chartering and roadmapping, retrospectives, and collaborative workshops. Author of two acclaimed books, Requirements by Collaboration and The Software Requirements Memory Jogger, Ellen works with global clients and speaks at industry conferences. She is co-authoring a book on agile practices for discovering and exploring product needs. View her articles, tweets, blog, and free eNewsletter on EBG's web site.

Smart Agile teams analyze development and delivery dependencies (http://www.agile2010.org/express.html#5364) to optimize value. Traditional requirements models are useful for dependency analysis and to supplement Agile’s lightweight requirements (such as user stories).


Process performance indicators in a lean software enterprise by Kristian Hamström

Case study: Measuring effectiveness and repeatability of an agile software delivery framework. Applying non-software industry methodologies in a release and test process improvement project.

Background and current framework setup
Modern agile software development has its origin in lean manufacturing principles, which are already well established in most other industry sectors. Despite this fact, there is not much reported in articles and literature on how basic lean improvement techniques actually have been, and could be, utilized in agile software process improvement work. This article describes a practical project case where lean thinking and principles, in particular some basic Six Sigma tools, were applied in an effort to make major changes to an existing software release process, i.e. a Service Transition Process in Information Technology Infrastructure Library (ITIL) terminology. The company's entire software delivery framework consists of a development process (Scrum) with an integrated agile test process, a Company Sprint approach which embeds the release process and its related test processes, and a high-level Company Backlog process for project prioritization and portfolio management.
The release process can be compared to a combination of a traditional industry batch process and a manufacturing production line. In a Suppliers-Inputs-Process-Outputs-Customers (SIPOC) context this means that the input, "the raw material batch", is the content produced by the development teams (the suppliers), and the batch-processed output is a good-quality release package (value for the consumers), which is ready to be deployed, e.g. on a web site. This process is repeated with a constant lead cycle time, or takt rate, which mainly depends on what the organization is capable of smoothly delivering and also partly on the business needs. The optimal takt rate and batch size are thus largely factors of the company's overall maturity, the capability of the processes as well as the initial quality of the inputs for the release process. If business requirements and/or customer demand set


some specific takt rate, then the problem is narrowed down to an issue of optimal batch size, i.e. maximum process capacity. How much content can be included/descoped, or how many teams should be allowed to participate in one release, while still retaining an acceptable quality risk and cost level?

Delivered quality level is not good enough
Our current release process is built around a concept of enterprise-level Company Sprints, which are a five-calendar-week timeboxed process layer on top of the normal team Scrum sprints. Company Sprints and team sprints are normally not synchronized. Briefly, what happens during the Company Sprint is that:
1. teams plan the work to be done based on specific Company Stories and team release goals,
2. execute the plan and do system integration work of all content to be released, and only after this
3. actual release integration testing and system testing are carried out, and the release is deployed.
This process has several major known problems and shortcomings. First of all, since the test coverage and quality of the inputs (team content) are not well known, many test phases with slack periods are built in, which basically becomes a huge "safety net" for the development teams. System integration happens late and in practice development work is done during the release, so late critical findings are typical even close to production deployment. Secondly, most of the release testing is manual work, which is significant time (several hundred man hours) taken away from testing efforts in the simultaneously ongoing team sprints. Release testing work is managed by a specifically appointed release test lead, resulting in teams not taking full responsibility and accountability for their own quality, since they are being guided throughout the release process. As the batch size is not restricted, all teams that have content ready on the shelf waiting to be released can participate in a release. This, however, introduces big variations in the complexity of each consecutive release,

many times exceeding the capability and capacity of the process, making it difficult to achieve repeatability between releases. As we know, uncontrolled process variation is quality's worst enemy. Furthermore, work priorities for teams are not always crystal clear, since there are so many concurrent competing process layers. We have team sprints, the Company Sprint, release and production issues to be solved, and on top of it all there is the enterprise-level project portfolio bundling it all together. All these tasks have a different priority, so it becomes difficult to focus on one particular assignment at a time, and work flow is easily "thrashed" (negative frequent task switching, as opposed to the

similar Single-Minute-Exchange-of-Die (SMED) concept in lean manufacturing). When all these above mentioned challenges are added up, it is clear that there is a big problem also from the customer perspective. Releases get delayed or their quality is not good enough, and in the end business will suffer. Another negative outcome is additional high quality costs caused by frequent patch/fix releases. Obviously, there are historical reasons why the framework has evolved to this state, and many design decisions had to be made out of necessity based on the particular situation at hand at that time. Despite this, the current framework still provides a well understood and relatively stable foundation on which to start building something new.

Figure 1: Release Process Value Stream Map, 25 days takt rate, 4% efficiency


Lean Six Sigma improvement project
In order to proceed in a structured and systematic way, a light Define-Measure-Analyze-Improve-Control (DMAIC) approach was chosen for the long-term process improvement efforts, with smaller Kaizen quick-fix improvement steps defined and implemented along the way. These projects were initiated at the end of 2009, and were estimated to be finished by the end of 2010. The work done in all project phases is explained in more detail in the following sections.

Define phase
Since the main problem scope was already well known, general "technical" targets of the project could easily be derived from agile software development principles: more effective test automation, more customer-facing testing, more frequent deliveries, more capability to handle late changes, etc. Successively, the business and financial benefits of the project were identified as:

• a better overall customer experience, also known as FURPS+ (Functionality, Usability, Reliability, Performance, Supportability, Security) quality attributes,
• an opportunity for decreasing the cycle time in a controlled manner so that more value can be delivered without compromising quality, and
• lowering of total product lifecycle quality costs as well as gaining visibility into all quality cost areas.

Figure 2: Balanced Scorecard perspectives with related metrics (target numbers removed)

A detailed release process flow in the shape of a Value Stream Map (VSM) was documented, which is depicted in fig. 1 (non-standard symbols used). As can easily be seen, the whole process is very inefficient, contains significant waste time and non-value-adding work as well as transportation of content/builds between several different environments. This "as-is" process map was used as the baseline in all phases of the project. It helped in defining the problem scope, in creating the measurements, in the detailed analysis of process bottlenecks and measurements, in the evaluation of better solutions and in


developing the "to-be" process map. Note that content/release package building procedures are not included in the VSM since they do not affect the timeline, because all building is executed automatically overnight. This is the case also for the automated test runs in the Continuous Integration (CI) and Release Candidate (RC) environments, which are started immediately after a build. Another detail worth mentioning about the VSM is that there are also several defined milestones and checkpoints at specific stages of the release process. For example, before integration testing is started, fulfillment of the company Definition of Done (DoD) by all teams is assessed, and at the end e.g. a Company Sprint Review is held where final system quality is validated and a GO or NO-GO for production deployment is formally stated.

Measure phase
One of the biggest obstacles in using Six Sigma or other statistical methods in the software development domain is the lack of valid numerical data. Lead times in software processes are normally magnitudes greater than in any manufacturing production line, and very few organizations are mature enough to have systematic measurement systems in place. Individual agile development teams might follow up their "productivity" in velocity or burn-down charts, but these metrics are rarely utilized at company level, e.g. in performance improvement efforts. The agile software business is not yet as professional in this area as other traditional industries. So, if a team sprint has a length of two weeks, during one year a data set of 25 measurements will be available. Similarly, since our Company Sprint is a five-week cycle, there will be 10 measurement points accumulated per year. This limited amount of data is not really enough for making any statistically sound analysis, and error margins in the end results are bound to be large.
The well-known software Goal-Question-Metric (GQM) method was used as an aiding tool in distilling a set of metrics from the targets and goals of the improvement work, and an effort was also made to achieve a suitable balance between leading (feed-forward) and lagging measurements. The metrics were further refined into an overview Balanced Scorecard (BSC) chart enabling some strategic business goals to be loosely connected to the metrics (see fig. 2). At the same time the four original BSC perspectives were also somewhat tailored:
1. Quality Costs (Financial)
2. Consumer Experience (Customer)
3. Framework Processes (Internal Processes)
4. Competence (Learning and Growth)
This change suited our organization and purposes better, and for each modified perspective one corresponding Key Performance Indicator (KPI) was then determined. By following only these four Key Performance Indicators, a picture of how the whole delivery framework is improving towards the set targets (agile principles) and business goals (cost vs. quality) could ideally be observed. The full metrics set should also reveal the long-term capability and capacity of the release process. A good example of a leading metric is Release Complexity, which can be calculated already at Company Sprint joint planning, as it gives a rough indication of the overall risk and degree of difficulty of the upcoming release. It is composed using information on the teams' performance variables in the previous Company Sprint, new and changed components in the current Company Sprint content, the number of story points done, etc.
The definitions of the four Key Performance Indicators are as follows (a small computational sketch of these definitions follows the list of quick improvements below):
• Production Quality Debt (bugs): Total number of all defects introduced to live production after a release, minus the total number of all defects fixed in the same Company Sprint
• Release Downtime (minutes): Total amount of time (in minutes) for which the production site was unavailable during release deployment
• Manual Testing Effort (hours): Total amount of working time (in hours) used for release and deployment test execution work
• Release Test Effectiveness (%): Ratio between all defects found in release testing vs. the total number of defects found in the released production content (by Incident Management) until the next release
As mentioned earlier, data collection is a time-consuming task in this context. For this purpose, one year of measurements had to be retrieved before it was possible to start making any detailed analysis of the available numeric information. There were also other reasons why it was even possible to have such a long data collection period; in normal circumstances a one-year DMAIC project would naturally be completely unacceptable. During this period, a number of Kaizen-style quick improvements were made to the delivery framework, while remembering that they might have an impact on the stability of the measured processes. The effect of some of them could be seen already in the short term during the data collection period, particularly in the Production Quality Debt and Release Downtime KPIs, but most were actually discovered to be long-term improvements even though they were easy to implement (or the changes made were ineffective, or their effect simply could not be seen in the available measurements). Some of the implemented changes were, for example:
• improving process descriptions, instructions and guidelines and making them easily available
• improving visibility of the collected measurements, creating some sense of commitment in the whole development organization
• improved test automation coverage, both in development as well as in common integration environments
• more effective communication means, carrying out a company-wide process training program
• more strict and better defined release process entry criteria and exit quality criteria
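To make the four Key Performance Indicator definitions above concrete, the sketch below shows one possible way to compute them from raw counts. The function names, fields and sample figures are mine, not the project's real data, and the effectiveness formula takes the "defects caught over all defects in the release" reading of the definition:

```python
def production_quality_debt(defects_introduced: int, defects_fixed: int) -> int:
    """Production Quality Debt (bugs): defects introduced to live production by a
    release, minus defects fixed in the same Company Sprint."""
    return defects_introduced - defects_fixed

def release_downtime(outage_minutes: list) -> int:
    """Release Downtime (minutes): total production unavailability during deployment."""
    return sum(outage_minutes)

def manual_testing_effort(hours_per_person: list) -> float:
    """Manual Testing Effort (hours): time spent on release and deployment test execution."""
    return sum(hours_per_person)

def release_test_effectiveness(found_in_release_testing: int,
                               found_in_production_until_next_release: int) -> float:
    """Release Test Effectiveness (%): share of the defects in the released content
    caught by release testing, the rest being found later by Incident Management."""
    total = found_in_release_testing + found_in_production_until_next_release
    return 100.0 * found_in_release_testing / total if total else 100.0

# Made-up figures for one Company Sprint, purely to show the mechanics:
print(production_quality_debt(defects_introduced=14, defects_fixed=9))   # 5
print(release_downtime([35, 10]))                                        # 45 minutes
print(manual_testing_effort([40, 32.5, 28]))                             # 100.5 hours
print(f"{release_test_effectiveness(18, 6):.0f}%")                       # 75%
```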

It also became clear that certain supporting tools used in the organization needed harmonization so that some of the identified improvements could be achieved. This was the case with our test management tools in particular (and partly also with our test automation solutions). Most teams use their own tools (TestLink, Excel, Wiki etc.) in varying ways, so that it is practically impossible to have reliable overall visibility into e.g. test case execution status. Therefore a formal test management tool evaluation project was started as a separate track from the process improvement project, with the goal of selecting a suitable, commonly accepted solution by the time the final major improvement implementation was to begin.

Analyze phase
In defining whether particular steps in the Value Stream Map (fig. 1) are Value Adding (VA), a few simple rules were applied:


1. customers ask for "it" and are willing to pay for "it"
2. the "it" must change in the operation
3. the work for "it" must be done right the first time
The obvious conclusion was that only the final step in the release process, i.e. the actual deployment of new content to live production, is a value-adding activity which fulfills the rules listed above. The deployment procedures take roughly one working day of the five-week cycle time, so calculating Process Cycle Efficiency (PCE) is simple in this case: 1 / 25 = 4%. All other steps are consequently either Necessary Non-Value-Adding (NNVA) or plain waste, i.e. Non-Value-Adding (NVA), with the difference that all quality control (= testing) tasks were defined

as necessary, because of the current quality issues that we experience, as discussed earlier. NNVA work consequently adds up to 48% of the work, meaning that roughly half of the time in a Company Sprint is spent in different testing and fixing activities. All other remaining time used in the Company Sprint is mainly built-in slack/buffer as a risk management tool in case the already reserved testing time will be consumed, e.g. because of technical surprises, or because of even worse quality in the release content than expected. Also note that the eight days of system integration work done by development teams is defined as NVA, which it clearly is from a release process perspective, but of course it is necessary work that must be done anyway by the development teams in order for them to integrate their components into the system platform.
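The classification and the Process Cycle Efficiency figure can be reproduced with a few lines of code. The exact day counts below are partly assumed: one value-adding deployment day and the eight integration days come from the text, the rest is rounded so that the 48% NNVA share works out, so treat it as an illustration rather than the project's exact VSM data:

```python
# Rough day counts for a 25-working-day (five-week) Company Sprint.
# Only the deployment day is value adding; testing/fixing is counted as
# necessary non-value-adding (NNVA); the rest is slack plus the teams'
# integration work, which from the release-process view is non-value-adding (NVA).
steps = {
    "deployment to production": ("VA", 1),
    "integration/system/release testing and fixing": ("NNVA", 12),
    "system integration work by teams": ("NVA", 8),
    "built-in slack and buffers": ("NVA", 4),
}

total_days = sum(days for _, days in steps.values())
va_days = sum(days for kind, days in steps.values() if kind == "VA")
nnva_days = sum(days for kind, days in steps.values() if kind == "NNVA")

pce = va_days / total_days
print(f"cycle: {total_days} days, PCE = {pce:.0%}")   # cycle: 25 days, PCE = 4%
print(f"NNVA share = {nnva_days / total_days:.0%}")   # NNVA share = 48%
```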

Figure 3: Release process KPI one year trends (Manual Testing Effort KPI excluded)

Fig. 3 is a chart showing one-year data plots of the defined Key Performance Indicators, excluding Manual Testing Effort. Data for Release Test Effectiveness and Release Downtime is "normalized", simply meaning that the average has been removed. It can be seen that Release Test Effectiveness has been quite stable in all releases except the last one, staying within one standard deviation (the horizontal dotted red lines). Release Downtime has decreased by more than 150 minutes (= 2.5 hours) in total, and over 100 defects have been removed from live production, as the Production Quality Debt numbers indicate. It is also interesting to observe that the Release Downtime and Production Quality Debt linear trends have practically the same slope (the dotted blue and green lines). It should also be emphasized again that these data are not an absolute fact; there is naturally some inaccuracy because of the small sample population. The way the measurements have been done also inherently affects the error margins. Anyway, most interesting for practical analysis work are the general trends and not so much any specific absolute values.
A chart of our tailored Overall Capability Index (OCI) is shown in fig. 4. The purpose of this index is to illustrate the long-term progress of overall delivery framework "maturity" improvement. Though named a capability index, it should not be confused with the traditional Six Sigma process capability definition (= allowed process variation / normal process variation). The index has during the studied one-year period increased from a poor value of 2 to an acceptable level of 6 in Company Sprint number 10, where the ultimate end goal is a capability value of 9 (entailing that all metrics are within set limits). Since one goal is to also have measurable process repeatability, the Overall Capability Index should stay at the same level for at least three Company Sprints in a row before it can be assumed stable.

Figure 4: Overall release process Capability Index, one year progress

Improve phase
During the Measure and Analyze phases it became evident that even though many implemented changes were successful and improvements can be seen in the Key Performance Indicator and Overall Capability Index trends, the long-term rate of change is not good enough and feedback loops are long. During the one-year period it was not possible to reach the originally set technical targets and business objectives with the current delivery framework, although much had been improved in processes and practices along the way. A major process re-engineering effort was hence required for real change to happen and for all end goals to be reached much faster. It was concluded that the biggest bottleneck points and constraints in the current release process stem from system integration and development work being done during the Company Sprint, the related long integration test phase, as well as the fact that all release testing work is managed by a separately appointed resource. After careful evaluation of different alternatives, we decided to choose the following future solution:

• the system integration test rounds are removed altogether from the release process, but a (smaller) amount of time is still left for the teams' system integration work
• the separate release test lead role is removed; instead the Release Manager oversees all testing activities
• the progress of testing is supervised more closely with a strict stage-gate model approach
• the staging environment is removed; Release Candidates go directly to the production system
• the Company Sprint concept and the release process are combined to create a new Release Train
• the amount of content coming into the Release Train is possibly restricted (= fixed-size train carts)

This streamlining approach will hopefully, at least in the long term, have the benefit that development teams are forced to perform early system integration work and will proactively by themselves begin to coordinate all integration testing, thereby taking more ownership and real responsibility for product quality. These new tasks will be made easier for them since there is now one process layer less (the Company Sprint), and productivity can be increased due to less task switching and less simultaneous work with conflicting business priorities. In practice, slicing off one calendar week will improve the lead time of the new Release Train by 20%, while the impact on Process Cycle Efficiency will not be that big, increasing from 4% to just 5%. By far the biggest and most important revolution comes with a changed mindset and empowerment; development teams are now fully responsible for overall quality of the software system during its entire lifecycle. They are also given the means (support, tools and processes) to accomplish this.
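The projected lead time and PCE numbers above are simple arithmetic; here is a minimal check, assuming five working days per calendar week and the single value-adding deployment day used earlier:

```python
DAYS_PER_WEEK = 5          # working days
VALUE_ADDING_DAYS = 1      # the production deployment itself

old_lead_time = 5 * DAYS_PER_WEEK   # five-week Company Sprint = 25 working days
new_lead_time = 4 * DAYS_PER_WEEK   # four-week Release Train  = 20 working days

lead_time_gain = (old_lead_time - new_lead_time) / old_lead_time
print(f"lead time improvement: {lead_time_gain:.0%}")           # 20%
print(f"PCE before: {VALUE_ADDING_DAYS / old_lead_time:.0%}")   # 4%
print(f"PCE after:  {VALUE_ADDING_DAYS / new_lead_time:.0%}")   # 5%
```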


The Release Train is reduced to being merely a "conveyor belt" pulling and deploying ready content to the production site at regular intervals. The remaining system testing phase should ideally be fully automated with regression test suites and a customer point-of-view (consumer-facing) system acceptance test strategy. Also, more important for process throughput than the takt rate is a consistent batch size. As stated earlier, the biggest source of process variation is introduced through accepting an unlimited amount of content into a release. By restricting the batch to an optimal size and by having a relatively fast-paced Release Train, a predictable and smooth flow of produced value can be delivered to customers. However, the optimal batch size is currently not known and has to be found out through experimentation and follow-up of Release Train performance metrics. It should also be remembered that typically an even bigger and more critical bottleneck in software development organizations is project lead time. Massive numbers of simultaneous projects grind development capacity to a halt, and projects fail for any number of other reasons. In this context, the additional time required because of a four-week release cycle is negligible.

Starting the new streamlined Release Train

Control phase
There are naturally considerable business risks in making such big structural modifications to a company core process such as the release process. If the change is not managed carefully and with buy-in from the whole organization, the Release Train might easily fail repeatedly while the organization is struggling and learning how the new practice works (or doesn't work). In the worst case, business opportunities might be lost; all produced and ready value remains on the shelf and will not reach the customers. Several Strengths-Weaknesses-Opportunities-Threats (SWOT) workshops were arranged, where technical and business risks were identified and analyzed and, where possible, feasible mitigation actions were planned. In summary, the conclusion of this analysis work was that there are more valuable long-term business benefits overall to be achieved than there will be short-term issues with probable initial Release Train cancellations. Most of the likely short-term problems can in any case be mitigated, for example, by:

• strictly following the entry quality criteria, making sure the company DoD is followed
• reverting back to the current process for a few release cycles, reinstating the release test lead
• arranging Release Train retrospectives and performing root cause analysis of failures that occurred
• clear communication and training schemes for the new way of working, with status information being provided in daily meetings
• visibility and follow-up on build progress and the condition of the CI/RC environments
• effectively implementing identified necessary prerequisites first


A few entry criteria or prerequisites that will help in enabling an easier transition to the Release Train had also been identified during the risk mitigation workshops. One is the previously mentioned new test management tool, for which an evaluation project had been specifically initiated. A suitable professional tool (TestRail by Gurock Software) was selected and piloted, and after its roll-out, release work will benefit from greatly increased visibility into the teams' test coverage and quality level. Another prerequisite is good enough test automation coverage. The current level must be assessed and actions taken if it is concluded that higher coverage is required, since the new Release Train approach relies heavily on working regression test automation in all test phases. Since the process takt rate is now four weeks, it will also be beneficial to synchronize the cycle with regular calendar months. This makes it much easier also for distant stakeholders (e.g. sales and marketing) to plan and align their related activities accordingly.
The current plan is to launch the new streamlined Release Train at the beginning of 2011, and some additional improvement work will be done once the process is operative. The Balanced Scorecard with all Key Performance Indicators and other metrics must be further developed to properly represent effectiveness and repeatability measurements of the new delivery framework. At the same time, all metrics in the Quality Costs area are to be reassessed to better reflect and visualize actual (monetary) quality costs such as prevention, appraisal, and internal failure costs.
Theoretically, this framework also makes it possible to achieve single-piece flow if the takt rate is shortened considerably and the batch size is equal to one development team's component. This would mean that any individual development team could make a production release at any point in time, e.g. daily. In practice, this can be achieved only if the number of teams in the organization is small enough (i.e. 1–3 teams), and if the software system is trivial or of low business risk. In large software companies, where there might be tens of development teams working together on a common system, communication and coordination complexity will increase exponentially, meaning that single-piece flow is not possible without extreme process management overhead or total quality costs. Therefore there is a practical lower limit on how lean or streamlined the release process in a cost-aware software company can be. So how do we find this "sweet spot"?

> About the author Kristian Hamström After over 15 years of experience in software and system development, I’m still at the beginning of the (long and winding) quality journey. As QA Manager for online gaming at paf.com, I have been facilitating a small cross-functional team with the mission to provide “Quality Assistance” to all system engineering efforts and business projects since 2009. The team provides training services and guidance in quality enabling methods and tools for the whole company. We are responsible for development and continuous improvement of organization-wide QA, testing, release, and operational processes with a lean approach and strong focus on pragmatic capability. My earlier positions included consulting in software QA and testing, management of diagnostics device development projects in the health-care industry, as well as real work as a system engineer and software developer in the paper machine business. I am an ISEB Foundation, IPMA D-level, Six Sigma Yellow Belt and Scrum Product Owner certified professional. I’m 41 years old, live in Helsinki, Finland, and can be contacted via e-mail at [email protected].



Smoothing Out Lumpy Sprints by Catherine Powell

Agile teams generally work in sprints or iterations. Using the shorter time period of a sprint, rather than a whole release cycle, before confirming that the software works and is stable helps make development more efficient and easier. It turns development into many small tasks, instead of several large tasks and a long (and messy!) integration cycle. Within an iteration, though, many teams do a lot of things partially, and then finish the sprint with a rush to check in. These teams still have tasks and then a messy integration cycle; it's just shorter! It's a smaller mess, but it's still a lumpy mess of a sprint.

A Lumpy Sprint

This is a fairly typical lumpy sprint. The sprint starts and each member of the team picks up a task or two. Everything's going well, but then something takes a little longer than expected, or a bug comes in from the field and needs immediate attention. The task in progress stays in progress. Another developer decides he's just "got a few little tweaks left" on his first task, so he puts it aside and starts a second task "just to make sure there aren't any surprises." Repeat this across a team and the "mostly done" tasks proliferate. Getting from 80% done to 100% done all happens late in the sprint, and there's a rash of check-ins, finishing and testing at the end of the sprint.

The Problem With Lumps
As long as all tasks are getting done within the sprint, does it really matter whether they all get finished close to the end? In a word, yes. A flurry of check-ins at the end of a sprint increases the likelihood of integration problems and forces the final testing to be rushed. In addition, starting many tasks without finishing them makes it more difficult to drop items from the sprint if that's needed. It's a lot more difficult to remove something that's partly done from a code base than to remove something that hasn't been started yet.

A Smooth Sprint
This is a smoother sprint. Each team member working on a task finishes that task before starting the next one. If the task is interrupted (for example, by an urgent bug at a customer that needs to be diagnosed), another developer finishes the task instead of starting something else. Tasks are finished throughout the sprint and integration occurs in smaller chunks as the sprint progresses rather than all happening at the end. If something needs to be removed from the sprint, there are more choices: anything that hasn't started can be removed.

Tips for a Smoother Sprint
Making a sprint smooth is about making sure tasks are getting finished throughout the sprint. Spreading out the points where tasks are completed and checked in makes the sprint smoother, easier to test, and more stable. To make sprints smoother:
• Finish something all the way. When you finish a task, do all of it, including the tests, documentation and fixing any issues or integration problems that arise. Don't give in to the temptation to move on to something else while there are still "tweaks", since those tweaks may be larger than they seem.
• Work together on a task. Grab someone else on the team and start a task together. That way, if one of you gets pulled off, the other can still finish the task. It doesn't work for really tiny tasks, but it makes sense for any task more than a few hours long. Working together also helps to make sure that there aren't too many tasks going on simultaneously when the team is fairly large.
• Don't start more than one task at a time. Any person can really only work on one thing at a time (multi-tasking is a myth!), so don't try. Pick up a task and work on it until it's done or you're blocked completely. Then don't start a new task; help resolve the blockage instead. Finish a task, then start a second one.

Conclusion
The two most stable states for a sprint task are: (1) not started; and (2) completed. A task that hasn't started yet has had no effect on the system at all. It can be removed with no consequences for the rest of the system (if it doesn't exist, removing it won't destabilize the system). A task that is completed can be shipped at any point and won't change; it provides a stable base for other work. To make a sprint smoother, keep as many tasks as possible in the stable states; don't start them until you can complete them as quickly as possible. A smooth sprint is a stable sprint, and that's a lot easier for everyone involved to work with.
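One rough way to see whether your own sprints are lumpy is to look at when tasks actually get finished. The sketch below flags a sprint where a large share of completions lands in the final days; the threshold and the sample data are arbitrary choices for illustration, not a rule from the article:

```python
def lumpiness(completions_per_day, tail_days=2):
    """Fraction of all task completions that happened in the last `tail_days` of the sprint."""
    total = sum(completions_per_day)
    if total == 0:
        return 0.0
    return sum(completions_per_day[-tail_days:]) / total

# Tasks completed on each day of two ten-day sprints:
lumpy_sprint = [0, 0, 1, 0, 0, 1, 0, 1, 4, 5]
smooth_sprint = [1, 1, 2, 1, 1, 2, 1, 1, 1, 1]

for name, sprint in [("lumpy", lumpy_sprint), ("smooth", smooth_sprint)]:
    share = lumpiness(sprint)
    flag = " -> worth a retrospective look" if share > 0.5 else ""
    print(f"{name}: {share:.0%} of completions in the last two days{flag}")
```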

> About the author
Catherine Powell has been testing and managing for about ten years. She is a manager, a tester, an author and a formal mentor to testers and test managers. Catherine has worked with a broad range of software, including an enterprise storage system, a web-based healthcare system, data synchronization applications on mobile devices, and webapps of various flavors. With a focus on the human side of software development, Catherine builds strong teams spanning testers, developers, support, product management, and all the people involved in turning an idea into reality. She emphasizes the generation of information and pragmatic decision making using a myriad of approaches. With thoughtful techniques that access the strengths of the human team members combined with the needs of the system and its users, Catherine guides both developers and testers to be a valuable part of the decision making required to create rock-solid software. Catherine focuses primarily on the realities of shipping software in small and mid-size companies. Specifically, she highlights and works to explicate the "on-the-ground" pragmatism necessary for an engineering team to work effectively with both software and humans from product definition through release, and in the field.


Gilb’s Mythodology Column

User Stories: A Skeptical View by Tom and Kai Gilb

The Skeptical View
We agree with the ideals of user stories in the 'Myths' [1, Denning & Cohn] discussed below, but we do not agree at all with the Myth arguments given, namely that user stories are a good, sufficient or even the best way to achieve those ideals. We are going to argue that we need to improve user stories for serious and large projects. For trivial projects it is possible that user stories are sufficient tools.

Myth 1: User stories and the conversations provoked by them comprise verbal communication, which is clearer than written communication.
There may be occasions where good, conversational communication can help clear up bad written communication. In fact we see a lot of really bad written 'user needs' communication, where we have measured the density of unintelligible words at 30% to 90% and more. [3] It should be possible to reduce written requirements defects by two orders of magnitude, as our clients have done. [4] A good written specification of any requirement type should be so clear and comprehensive that it is not necessary, as is assumed with user stories, to have an oral conversation to clarify it. The useful power of a well-written specification increases with the frequency of referring to it, and with the number of people that need to interpret it.
Try to have a 'conversation' about the following example of a story:
"We want the most intuitive system possible"
Now compare your conversation with a specification like [5]:

Intuitiveness:
Type: Quality Requirement
Stakeholders: Product Marketing, end users, trainers
Ambition Level: To make the intuitive and immediate application of our product clearly superior to all competitive products at all times.
Scale: average seconds needed for defined [Users] to Correctly Complete defined [Tasks] with defined [Help]
Goal [Deadline = 1st Release, Users = Novice, Tasks = Most Complex, Help = {No Training, No Written References}]: 10 seconds ± 5 seconds

About the authors
Tom Gilb and Kai Gilb have, together with many professional friends and clients, personally developed the methods they teach. The methods have been developed over decades of practice all over the world, in both small companies and projects as well as in the largest companies and projects.
Tom Gilb
Tom is the author of nine books, and hundreds of papers on these and related subjects. His latest book, 'Competitive Engineering', is a substantial definition of requirements ideas. His ideas on requirements are the acknowledged basis for CMMI level 4 (quantification, as initially developed at IBM from 1980). Tom has guest lectured at universities all over the UK, Europe, China, India, the USA and Korea, and has been a keynote speaker at dozens of technical conferences internationally.
Kai Gilb has partnered with Tom in developing these ideas, holding courses and practicing them with clients since 1992. He coaches managers and product owners, writes papers, develops the courses, and is writing his own book, 'Evo – Evolutionary Project Management & Product Development.'
Tom & Kai work well as a team; they approach the art of teaching the common methods somewhat differently. Consequently the students benefit from two different styles. There are very many organizations and individuals who use some or all of their methods. IBM and HP were two early corporate adopters. Recently over 6,000 (and growing) engineers at Intel have adopted the Planguage requirements methods. Ericsson, Nokia and lately Symbian and a Major Multinational Finance Group use parts of their methods extensively. Many smaller companies also use the methods.
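Referring back to the Intuitiveness specification above: because the requirement is quantified on a defined Scale, it can be checked mechanically against a measurement. The sketch below is my own rough rendering of that idea, not part of Planguage itself, and the usability-test figure is invented:

```python
from dataclasses import dataclass

@dataclass
class QuantifiedRequirement:
    name: str
    scale: str                 # what is measured
    goal_seconds: float        # target level on the Scale
    tolerance_seconds: float   # stated uncertainty around the target
    qualifiers: dict           # conditions under which the Goal applies

    def is_met(self, measured_seconds: float) -> bool:
        # Lower is better on this Scale, so meeting the Goal means staying at or
        # below the upper end of the 10 ± 5 second band.
        return measured_seconds <= self.goal_seconds + self.tolerance_seconds

intuitiveness = QuantifiedRequirement(
    name="Intuitiveness",
    scale="average seconds for defined Users to correctly complete defined Tasks",
    goal_seconds=10.0,
    tolerance_seconds=5.0,
    qualifiers={"Deadline": "1st Release", "Users": "Novice",
                "Tasks": "Most Complex", "Help": ["No Training", "No Written References"]},
)

# Invented usability-test result: novice users averaged 12.5 seconds.
print(intuitiveness.is_met(12.5))   # True: within the 10 ± 5 second Goal
```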



How 100% Utilization Got Started by Johanna Rothman

We have a problem in our industry. Managers seem to want to utilize people at 100%. If people are not fully allocated at 100% (or more!), there is something wrong. Well, that doesn't work. But first, let me explain how that fallacy started.

A Little History…
Back in the early days of computing, machines were orders of magnitude more expensive than programmers. In the '70s, when I started, companies could pay highly experienced programmers about $50,000 per year. You could pay those of us just out of school less than $15,000 per year, and we thought we were making huge sums of money. (We were.) In contrast, companies either rented machines for many multiples of tens of thousands of dollars per year or bought them for millions. You can see that the scales of salaries to machine cost are not even close to equivalent. When computers were that expensive, we utilized every second of machine time. We signed up for computer time. We desk-checked our work. We held design reviews and code reviews. We received minutes—yes, our jobs were often restricted to a minute of CPU time—of computer time. If you wanted more time, you signed up for after-hours time, such as 2am-4am. Realize that computer time was not the only expensive part of computing. Memory was expensive. Back in these old days, we had 256 bytes of memory and programmed in assembly language code. We had one page of code. If you had a routine that was longer than one page, you branched at the end of a page to another page that had room that you had to swap in. (Yes, often by hand. And, no, I am not nostalgic for the old days at all!) Minicomputers helped bring the money scales of pay and computer price closer in the late '70s and the '80s. But it wasn't until minicomputers really came down in price and PCs started


to dominate the market that the price of a developer became so much more expensive than the price of a computer. By then, many people thought it was cheaper for a developer to spend time one-on-one with the computer, not in design reviews or in code reviews, or discussing the architecture with others. In the ‘90s, even as the prices of computers, disks, and memory fell, and as programmers and testers became more expensive, it was clear to some of us that developing product was more collaborative than just a developer with his or her computer. That’s why the SEI gained such traction during the ‘90s. Not because people liked heavy-weight processes, but because especially with a serial lifecycle, you had to do something to make system development more successful. And, many managers were stuck in 100% utilization thinking. Remember, it hadn’t been that long before when 100% utilization meant something significant. Now, let’s go back to what it means when a computer is fully utilized and it’s a single-process machine. It’s only a problem if the program is either I/O bound, memory-bound, or CPU bound. That is, if the program can’t get data in or out fast enough, if the program has to swap data or programs in or out, or if the CPU can’t respond to other interrupts, such as to read the next card from the card reader. If it’s a single-user machine, running one program, maybe you can make allowances for this. However, if it’s a multi-process machine, if a computer is fully utilized, you have excessive memory swapping (“thrashing”), and a potential of gridlock. Think of a highway at rush hour, with no one moving. That’s a highway at 100% utilization. We don’t want highways at 100% utilization. We don’t want computers at 100% utilization either. If your computer gets to about 50-75% utilization, it feels slow. And, computers utilized at higher than 85% have unpredictable performance. Their throughput is unpredictable. You can’t tell what’s going to happen. Unfortunately, that’s precisely the same problem for people.

Why 100% Utilization Doesn't Work for People
Now, think of a human being. When we are at 100% utilization, we have no slack time at all. We run from one task or interrupt to another, not thinking. There are at least two things wrong with this picture: the inevitable multitasking and the not thinking.

Plan on people working about 6 technical hours a day, maximum, on one project at a time. Use your project portfolio to sequence the projects. Now, people will finish work and innovate, and create an environment of success. You’re doing something right.

We don’t actually multitask at all. We fast-switch. And, we are not like computers, that, when they switch, write a perfect copy of what’s in memory to disk, and are able to read that back in again when it’s time to swap that back in. No, because we are human, we are unable to perfectly write out what’s in our memory, and we imperfectly swap back in. So, there is a context switch cost in the swapping, because we have to remember what we were thinking of when we swapped out. And that takes time. So, there is a context switch in the time it takes us to swap out and swap back in. All of that time and imperfection adds up. And, because we are human, we do not perfectly allocate our time first to one task and then to another. If we have three tasks we don’t allocate 33% to each; we spend as much time as we please on each—assuming we are spending 33% on each. Now, let me address the not-thinking part of 100% utilization. What if you want people to consider working in a new way? If you have them working at 100% utilization, will they? Not on your life. They can’t consider it. They have no time.
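The cost of fast-switching can be illustrated with a toy model. The per-switch penalty and the switching frequency below are assumed numbers, not measurements, so the point is the shape of the trend rather than the exact figures:

```python
def productive_hours(hours_available, concurrent_tasks,
                     switch_cost_hours=0.25, switches_per_task_per_day=2):
    """Toy model: every extra concurrent task adds context switches, and each
    switch burns time re-loading state that humans, unlike computers, cannot
    swap back in perfectly."""
    switches = concurrent_tasks * switches_per_task_per_day
    lost = switches * switch_cost_hours
    return max(hours_available - lost, 0.0)

for n in range(1, 6):
    hours = productive_hours(6.0, n)
    print(f"{n} concurrent task(s): ~{hours:.1f} productive hours out of 6")
```

Even with these made-up parameters, the trend matches the argument: the more things a person has "in progress", the less of their day survives the swapping in and out.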

> About the author
Johanna Rothman works with companies to improve how they manage their product development – to maximize management and technical staff productivity and to improve product quality. Johanna is the author of several books:

So you get people performing their jobs by servicing their interrupts in the best way they know how, doing as little as possible and doing enough to get by. They are not thinking of ways to improve. They are not thinking of ways to help others. They are not thinking of ways to innovate. They are thinking, “How the heck can I get out from under this mountain of work?” It’s a horrible environment. When you ask people to work at 100% utilization, you get much less work out of them than when you plan for them to work for roughly 6 hours of technical work a day. People need time to read email, go to the occasional meeting, take bio breaks, have spirited discussions about the architecture or the coffee or something else. We seem to need spirited discussions in this industry! So, if you plan for a good chunk of work in the morning and a couple of good chunks of work in the afternoon and keep the meetings to a minimum, technical people have done their fair share of work. If you work in a meeting-happy organization, you can’t plan on 6 hours of technical work. You have to plan on less. You’re wasting people’s time with meetings. If you plan on 100% utilization, no matter what, you get much less done in the organization. You create a terrible environment for work. And you create an environment of no innovation. That doesn’t sound like a recipe for success, does it?



• Manage Your Project Portfolio: Increase Your Capacity and Finish More Projects
• The 2008 Jolt Productivity award-winning Manage It! Your Guide to Modern, Pragmatic Project Management
• Behind Closed Doors: Secrets of Great Management
• Hiring the Best Knowledge Workers, Techies & Nerds: The Secrets and Science of Hiring Technical People
She writes columns for Stickyminds.com and on "extreme project management" for Gantthead.com, and writes two blogs on her web site, jrothman.com. She has just started blogging on http://www.createadaptablelife.com/. She is a host of the Amplifying Your Effectiveness conference.


Consensus decision-making: Better decisions in less time by Linda Rising

Some futurists tell us that effective organizations in the next decade will use consensus as a model for the way in which we will work. This seems very inefficient. Are there good ways to make this work now, for the rest of us? By consensus decision-making, I mean that decisions reflect the ideas and thoughts of all team members. The decisions are acceptable to everyone. It is not unanimity (the outcome may not be everyone's first choice), and it is not a majority vote. [ASU]

Peter Drucker says, "You can work or you can meet – you can't do both." With today's business imperative to get more done with less, making every meeting count is more important than ever. I think we all feel that most meetings are a waste of time. Meeting experts have determined that roughly 53% of all the time spent in meetings is unproductive, worthless, and of little consequence. [Nelson00] A poll of professionals and managers produced 1,305 examples of problems encountered in meetings. Of these, the following sixteen account for over 90% of all meeting problems [Meetings]:

• Getting off the subject
• No goals or agenda
• Disorganized
• Ineffective leadership/lack of control
• Wasted time
• Ineffective decision-making
• No pre-meeting orientation
• Too lengthy
• Poor/inadequate preparation
• Inconclusive
• Irrelevant information discussed
• Starting late
• Interruptions
• Rambling, redundant discussion
• Individuals dominate discussion
• No published results or follow-up action

Let's see if we can attack one of these: ineffective decision-making. It's a time management principle that you should never put more time and energy into making a decision than the decision is worth, so perhaps the first rule of thumb we can practice is: make the decision even if all the facts are not known. You will never know everything there is to know about something that is going to happen in the future. There will always be some risk. Don't waste time procrastinating. In practice, however, we get stuck in "analysis paralysis" and endless discussion. The discussion accomplishes nothing but wasting time, as we spin around and around endlessly.
But, wait! Doesn't discussion alter the course of the decision? Isn't that what it's all about? We want to get all sides of the issue on the table, so that the best possible result can be produced. If we don't have the discussion, then aren't we at even more risk? Is discussion convincing? One researcher who studied decision strategies began with the assumption that decisions were made rationally. He assumed that options were collected and examined and that, on the basis of logical and rational processes, the decision was made. He was wrong. Subjects showed little inclination toward systematic thinking. Instead they would make a gut choice and then use the information that had been gathered to justify the decision they had already made. If this is true, then during discussion we filter information according to our biases and reinforce the decision we have already made. The discussion, in other words, gains us nothing. [Klein98]

In today's pressure cooker environment, many are called upon to make decisions that affect lives. According to one fireground commander, "I don't make decisions. I don't remember when I've ever made a decision." The reason for the lack of decision-making: there was just no time. The building would burn down by the time he considered all the options. [Klein98]

Researchers have studied the way physicians determine diagnoses. Physicians are supposed to suppress any explanations until they have studied all the symptoms, to make sure they do not overlook something. The studies found, however, that physicians form hypotheses and explanations from the very beginning and use these to direct their examinations. [Klein98]

High-pressure situations and uncertainty make it difficult to apply a decision-making process. Uncertainty is and will be inevitable. Because uncertainty is inevitable, decisions can never be perfect. Often we believe that we can improve the decision by collecting more information, but in the process we lose opportunities. Skilled decision makers appear to know when to wait and when to act. Most important, they accept the need to act despite uncertainty. [Klein98]

Astute readers will note that these research reports come from a book by Gary Klein, Sources of Power. It describes an intriguing effort to study how we make decisions. I recommend this book highly. It turns our ideas of how we behave upside down.

You have probably experienced group decision-making as a voting activity in which the majority wins and everyone else loses. Consensus decision-making is quite different. In its purest form, it requires that every member consent to the decision before the group can adopt it. The notion of a group of diverse, strong-minded people coalescing behind decision after decision, and all feeling like winners as a result, may seem like a pipe dream. Perhaps it only works, you may think, when some people are willing simply to go along with a decision they dislike to avoid the pain of conflict. [Shaffer93]

Actually, the opposite is true. Consensus works only when people who feel uncomfortable about a proposed solution are willing to speak up and take the risk of engaging in conflict until a solution emerges that everyone can support. Suppressing feelings and reservations deprives the group of the information it needs to make the wisest decision. If you go along with the majority for the sake of harmony or time efficiency while harboring doubts or resentments, you reduce the consensus to majority rule. This not only weakens the power of the process, but also the long-term vitality of the community. [Shaffer93]

Consensus rests on the belief that every member of the group – however naïve, experienced, confused, or articulate – holds a portion of the truth and that no one holds all of the truth. It assumes that the best decision arises when everyone involved hears each other out about every aspect of the issue while keeping an open mind and heart. [Shaffer93]

Once you have developed full agreement, your group can move forward. No disgruntled minority will drag its feet or otherwise sabotage your success. All of you will own the decision and will support it with your full energy. You will know that you have tapped the wisdom and creativity of every member of your group and developed a solution more effective than any one of you could have developed alone. [Shaffer93]

In organizations, consensus works only when a clear fallback procedure exists; for example, the leader can make the decision when the group seems unable to do so. In most groups, the fallback is the majority vote. One way of implementing this is to hand everyone a set of cards that can be used to display their feelings about any decision:

• Green – I support the proposal
• Orange – I have a question
• Red – I do not support the proposal

You can also use thumbs-up, thumbs-sideways, and thumbs-down to mean the same thing. [Shaffer93]

The real purpose of this article is to introduce you to a process I learned when I was the technical editor for a book by Jim and Michele McCarthy [McCarthy+02]. They are former Microsofties, who used this protocol at Microsoft to make faster decisions. Did I get your attention with the mention of Microsoft? I've seen this in action and it works. It may seem complicated at first, but when a team uses it, everyone quickly understands how decisions are made, and it saves time while still allowing for everyone's input.

The proposer states the proposal ("I propose ...") and then says, "1-2-3." All team members vote simultaneously: "Yes" voters give a thumbs-up. "No" voters give a thumbs-down and may also say, "I refuse to support this," meaning that nothing the proposer can do will convince them to go along with the proposal. "Support-it" voters show a flat hand, which says, "I can live with this proposal. I believe that it is probably the best way for us to proceed now. I support it, even though I have reservations."

The proposal fails if any of the following applies: if the combination of "no" voters (outliers) and "support-it" voters is too great (usually about one-third), the proposal is dead; if any "no" voter says, "I refuse to support this," the proposal is dead.

If there are just a few "no" voters, the proposer resolves the outliers' issues by trying to bring the outliers in at least cost. No one else contributes except the proposer and the current outlier as the proposer asks each outlier to express his requirements for supporting the proposal: "What will it take to bring you in?" The outlier gives a single, short, declarative sentence describing precisely what he requires to be "in." No explanation or discussion should take place. If the outlier is given what he requires, he promises to drop all resistance to the proposal and to support it.

If possible, the proposer makes an offer to the outlier. If the changes to the proposal needed to accommodate the outlier's requirements are minor, the proposer may use a simple "eye-check" of the non-outliers to see if there is general acceptance of the new proposal. If anyone is opposed or requires a formal restatement and a new vote, he must say so at this time. If the required changes are more complex, the proposer must create and submit a new proposal. The team reviews this proposal and conducts a new vote, and the Decider protocol begins anew.

If all outliers change their votes from "no" to "support-it" or "yes," the proposal is adopted. In many cases, outliers are simply requesting a small alteration to the proposal. This process allows requests to be heard in the most efficient way possible. Many times during discussions, lots of words hide what may be a straightforward alteration to the proposal. The result of this process is that unanimous "yes" votes or "yes" votes mixed with some "support-it" votes are the only configurations that cause a proposal to be adopted as a part of the team's strategy. If too many people feel the proposal is not worthwhile, it will be immediately and clearly rejected without endless debate. Failed proposals should only be repeated if relevant circumstances have changed.
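For readers who like to see the mechanics spelled out, the tallying rules can be sketched in a few lines of code. The fragment below is purely illustrative – the function name, the vote labels and the exact one-third threshold are assumptions based on the description above, not material from the McCarthys' book:

```python
# Illustrative only: vote labels and the roughly one-third threshold follow the
# description in this article, not the McCarthys' book.
def decider_outcome(votes, threshold=1.0 / 3):
    """votes: list of 'yes', 'support', 'no' or 'absolute_no' ("I refuse to support this")."""
    if any(v == "absolute_no" for v in votes):
        return "dead"                 # one absolute "no" kills the proposal outright
    reserved = sum(1 for v in votes if v in ("no", "support"))
    if reserved > threshold * len(votes):
        return "dead"                 # too many outliers and "support-it" votes
    if any(v == "no" for v in votes):
        return "resolve outliers"     # "What will it take to bring you in?"
    return "adopted"                  # unanimous "yes", or "yes" mixed with "support-it"

# One outlier in a team of eight: the proposer works to bring him in.
print(decider_outcome(["yes"] * 6 + ["support", "no"]))  # -> resolve outliers
```

The point of the sketch is simply that the outcome is mechanical and fast; the human work happens afterwards, in resolving the outliers.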

If you adopt this procedure, let me know how it works for you: [email protected].

References

[ASU] ASU Continuous Improvement Resources, http://www.west.asu.edu/tqteam/other.htm
[Klein98] Klein, G., Sources of Power, The MIT Press, 1998.
[McCarthy+02] McCarthy, J. and M. McCarthy, Software for Your Head: Core Protocols for Creating and Maintaining Shared Vision, Addison-Wesley, 2002.
[Meetings] http://www.unm.edu/~sac/meetings.html
[Nelson00] Nelson, B., "Don't make team meetings wasted time", bizjournals.com, June 19, 2000, http://seattle.bizjournals.com/extraedge/consultants/return_on_people/2000/06/19/column83.html
[Shaffer93] Shaffer, C.R. and K. Anundsen, Creating Community Anywhere, Penguin Putnam, 1993.

> About the author Linda Rising With a Ph.D. from Arizona State University in the field of object-based design metrics, Linda Rising's background includes university teaching and industry work in telecommunications, avionics, and tactical weapons systems. An internationally known presenter on topics related to patterns, retrospectives, agile development, and the change process, Linda is the author of numerous articles and four books – Design Patterns in Communications, The Pattern Almanac 2000, A Patterns Handbook, and Fearless Change: Patterns for Introducing New Ideas, written with Mary Lynn Manns. Find more information about Linda at www.lindarising.org.


Becoming test-disinfected by Alexander Tarnowski

Developers that get fluent in test-driven development become "test-infected". That's a good thing. They drive their development using tests and produce good quality code that exhibits certain properties linked to the practice of writing the tests first. What happens when you take a test-infected developer, snatch him from his natural habitat, and place him in a team that thinks of itself as "pragmatic" whenever it engages in test-hostile behavior?

Test-oriented teams
To developers that have had the privilege of being part of a well-performing agile team that's adopted the "test-first" philosophy, driving the development using various types of tests becomes second nature. They start expecting that there will be unit tests for the code, some kind of integration tests for components that work together, and high-level automated acceptance tests. To be perfectly honest, even the best teams most likely cut some corners at some point in time. Maybe not all code is developed using unit tests, or maybe a "trivial" story lacks its acceptance test. What they most likely do have is the fundamental capability and infrastructure to do agile development. The build scripts run unit tests, there is a CI server, a couple of databases are dedicated to supporting testing, and so on. Getting this infrastructure and mindset in place is not cheap. It takes time and practice, and is vital to agile development.

Properties of test-driven code
Developers doing test-driven development for a while discover that the code produced will exhibit certain properties that are a direct result of how it came into being.



Code that's written to pass tests will most likely:

• Keep methods short – Nobody wants to test a monster method.
• Keep the number of dependencies down – Mocking and stubbing is fun, but can easily get out of control.
• Honor the Single Responsibility Principle – Naming tests that verify distinctly different things hurts.
• Have concise interfaces – Testing a method that accepts too many parameters is tedious and error-prone.
• Avoid indirect inputs – You know that something's wrong when you set up indirect inputs in your test fixture.
• Not be duplicated – If you're sure about the quality of your piece of code, you don't want to introduce inconsistencies by copying it around.
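As a small illustration of the first few properties, here is a minimal, hypothetical sketch (the class, the hand-rolled stub and the use of Python are illustrative choices only, not taken from any particular project):

```python
import unittest

class PriceConverter:
    """One small responsibility, one explicit dependency, a concise interface."""

    def __init__(self, rates):
        self._rates = rates                     # dependency is injected, easy to stub

    def in_euros(self, amount, currency):
        return round(amount * self._rates.rate_to_eur(currency), 2)

class StubRates:
    """Hand-rolled stub: no indirect inputs, no mocking framework needed."""

    def rate_to_eur(self, currency):
        return 0.5 if currency == "USD" else 1.0

class PriceConverterTest(unittest.TestCase):
    def test_converts_using_injected_rates(self):
        converter = PriceConverter(StubRates())
        self.assertEqual(converter.in_euros(10.0, "USD"), 5.0)

if __name__ == "__main__":
    unittest.main()
```

Because the rate source is passed in, the test needs no elaborate fixture – exactly the kind of shape that writing the test first tends to force.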

This list could go on, but its main point is that test-first code has properties that are desirable for both developers and testers. Let's leave it at that and visit another team, the "pragmatic" one.

Pragmatic teams (based on a true story anno 2011)
The pragmatic team is not as agile and test-focused as it would like to be. It has neither the infrastructure nor the practices to drive its development using a test-first approach (or any other approach that involves testing, for that matter). However, they are pragmatic. Why bother with this anyway? It's the amount of delivered code that counts. Their "pragmatism" lies in their ability to produce code that will make it to the customer without all the rituals and additional work related to keeping the automated test suite in shape. Regression? The industry has been able to deliver software for decades without regression testing, and besides, what's the purpose of customer acceptance testing if not to discover regression? After all, "pragmatic" is a positive word, right?

Needless to say, code produced by the "pragmatic" team will not exhibit the properties discussed previously, quite the opposite in fact.

Enter the test-infected developer
Moving from the test-oriented team to the pragmatic team is quite a shock for the average test-infected developer. Even if he's cut a corner or two while writing tests in his former team, he soon discovers that the new codebase only has ten unit tests, half of which are testing the unit test framework. Test automation? There was no time. It doesn't provide value (pixels on the screen) for the customer, hence it's not pragmatic. Integration tests to secure the service layer down to persistence? Meaningless! The data model changes too frequently. The infrastructure would take too much time to set up. Too late!

What's the test-infected developer to do in this situation, apart from leaving the team? From my own experience, the pragmatic team will have a hard time changing. Without external help it will be too caught up in its momentum. A good agile coach can help, but what can the test-infected developer do in the meantime? Ideally, he should spend a part of his time improving the test infrastructure or refactoring code to become more testable. However, this might not be acceptable in his new team. After all, he isn't writing new code. Also, his efforts may at worst be nullified by other developers who introduce test-unfriendly code or break the few existing tests without caring about it. The effort of getting continuous integration up and running may also be overpowering in such an environment.

What's the test-infected developer's defense against these test-disinfecting practices? Here are some coping strategies that all boil down to staying with the good practices:

• Regardless of the surrounding code, keep your code test-ready: short, concise, without duplication. If people on the team don't have problems with long methods and duplication, they certainly won't mind short methods and properly factored code. It should pass unnoticed.
• Keep your own private test suite of unit tests if you must. These tests will get broken by other people, but fixing them in secrecy shouldn't be a problem. After all, lots of effort is spent on maintaining monster methods and duplicated sections of code anyway.
• If you can't add tests, pretend that there's test code out there. That's sort of desperate, but conjuring up a mental image of a test that would exercise your code should keep it testable.
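To show how little the second strategy needs from the surrounding project, here is a deliberately tiny, self-contained sketch (the file name, function and figures are invented for illustration); it runs with nothing but the standard library, so no CI server or build script has to cooperate:

```python
# my_private_tests.py - a self-contained suite that needs nothing from the build;
# run it locally with "python my_private_tests.py".
import unittest

def total_with_tax(net, rate):
    """Stands in for the team's real code under test (invented for illustration)."""
    return round(net * (1 + rate), 2)

class TotalWithTaxTest(unittest.TestCase):
    def test_adds_tax(self):
        self.assertAlmostEqual(total_with_tax(100.0, 0.19), 119.0)

    def test_zero_rate_changes_nothing(self):
        self.assertAlmostEqual(total_with_tax(50.0, 0.0), 50.0)

if __name__ == "__main__":
    unittest.main()
```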

A test-infected developer in a test-disinfected environment faces many challenges. Let’s hope that good habits are hard to kill.


> About the author Alexander Tarnowski is a software developer and architect with a decade of experience in the field, and a broad definition of the term "software development". He has a master's degree in computer science and a bachelor's degree in business administration, and has spent his professional career working as a contractor for both small and large companies. Having worked in all phases of the development process, he believes in craftsmanship and technical excellence at every stage. He's currently interested in how the introduction of agile methodologies on a larger scale will redefine the tester and developer roles, and how they can both benefit from it.


Top 10 indications that you moved up from offshore staff augmentation into Agile software development by Raja Bavani

Offshore staff augmentation is all about engineers staffed at offshore locations and treated as augmented team members reporting into an onsite lead or manager. While offshore staffing serves short-term tactical goals, it requires extensive one-to-one interactions and consumes significant communication and coordination overheads. As a result the team members continue to remain order takers and do not find opportunities to understand the big picture in order to create value for stakeholders. Even though there may be good reasons to start an engagement in an offshore staffing model, there are compelling benefits when stakeholders evolve it to the next level so that the offshore team becomes agile in executing projects. In software product engineering as well as in outsourced product development, Agile has been the mantra of success for several years.

Moving up from offshore staff augmentation is a collective decision of project stakeholders. At MindTree we have collaborated with our partners in moving up the value chain. This requires a lot of trust, support, collaboration and meticulous governance. Here are the top 10 indications that you moved up from offshore staff augmentation into Agile software development.

1. Shared Vision: The executive sponsor and governance team members have collaboratively drafted and established a shared vision among distributed team members. This has helped the senior leadership at each location to sensitize team members at regular intervals with the right context and shared vision. Without this, team members tend to restrict themselves to transactional engineering activities without relating their work to the overall business needs of project sponsors.

2. Base Camp & Planned Visits: Some of the offshore team members have been through a base camp and there are planned visits by key team members across shores. In case of distributed Agile projects, setting up the base camp is absolutely essential in order to put the right foot forward. Setting up the base camp involves forming a seed team (with at least 1 project manager, 1 or 2 technical leads and a handful of engineers). Typically, members of this seed team are selected from distributed locations. They come together and spend 4 to 8 weeks, depending on the size of the project, at a central location where the project initiates from. During this time they decide on project infrastructure and guidelines and execute the first iteration. They also establish a clear understanding of what to expect during the first few iterations or Sprints. Setting up the base camp and facilitating planned visits provide several benefits. On the project execution front, a base camp provides an opportunity to have adequate clarity on the technical environment, tools and key engineering processes. On the people front, a base camp and planned visits provide opportunities for rapport building. This can ensure better efficiency in the resolution of issues and conflicts during the project. Overall, this provides an opportunity to start distributed Agile projects on the right foot.

3. Collaborative Governance: There exists a collaborative governance mechanism and it facilitates efficient reviews, issue resolution and decision making. Besides, there is perpetual support and encouragement from senior leaders across sites in conducting governance reviews at regular intervals. The word governance originated from the Greek verb κυβερνάω (kubernáo), which means 'to steer'. It was used for the first time in a metaphorical sense by Plato. In general, governance means a mechanism that includes a group of people (or committees or departments etc.) who make up a body for the purpose of administering something and hence making the best decisions in a timely manner. In case of software projects executed at a single location it has been a general practice to implement a governance mechanism at three levels, namely project level, program level and organizational level. In case of projects executed across multiple geographies and time zones with employees of the project
sponsor organization, external vendors and independent contractors, the complexity of governance increases multifold. Hence it is absolutely essential to form a governance team that comprises representatives from onsite as well as offshore and works together as a single body at global level in order to run distributed projects successfully. Governance has been one of the key success factors in distributed projects and it is going to provide the necessary foundation and support in future as well.

4. Structured Team: The offshore team is a structured team with a lead or manager. Team members have clarity regarding their roles and responsibilities. Everyone in the offshore team knows what everyone else does in the team. Key team members visit other locations a couple of times a year for knowledge sharing and team bonding. Besides, team members are empowered to make decisions. Many times, practitioners tend to embrace Agile principles and recommend a self-directed team of offshore engineers that can work with an onsite manager. For very small teams of 1 or 2 engineers that do monotonous work, such as bug fixing or maintenance of end-of-life non-critical products, it may be possible to manage with a remote project manager. However, in all other cases, you will need to structure that team in such a way that it gets adequate local leadership and managerial support to deliver the best. If you follow Scrum, you will need a local Scrum Master for every project. Else, you may need a lead or a manager to support your local team to deliver the desired behavior.

5. Infrastructure & Collaboration Tools: The offshore team has adequate infrastructure support for day-to-day communication with remote teams. Team members have access to Internet Messenger (IM), phone, and video conferencing, in addition to email communication, and are aware of efficient ways to utilize Internet Messenger or phone or video conferencing. Both onsite and offshore teams have access to a centralized version control system and adhere to the same set of build processes and corresponding build scripts. Continuous integration is practiced consistently. Besides, both onsite and offshore teams use the same web-based tool for bug tracking, issue resolution and query resolution. Having the right infrastructure and collaboration tools is essential to improve efficiency in distributed projects. When efficiency is a cause of concern, distributed projects undergo a lot of turbulence and escalations.

6. Agile Expertise: Offshore team members do not perceive 'Agile' as a niche paradigm known to the remote customer only, nor do they tend to survive with inadequate preliminary awareness of Agile. They are well aware of Agile practices and have practiced Agile for more than six months. Offshore team members organize knowledge sharing sessions to discuss Agile best practices and share suggestions with customer teams. Also, the offshore team has the support
of Agile experts who can provide real-time coaching when needed. When you transition from offshore staffing to Agile, it is necessary to facilitate training programs and team building exercises for all team members and to budget for additional team members with Agile expertise. Also, it is a best practice to form a team of Agile experts who can provide real-time coaching to your team when needed.

7. Collaborative Planning, Execution and Retrospectives: The offshore team members attend iteration planning meetings. Also, they follow a similar set of Agile practices as followed by the onsite teams. Iterations are short (2 to 4 weeks), and offshore teams participate in estimation and prioritization of tasks. Working software is delivered at the end of each iteration and the team experiences continuous learning and improvement. While there is no change in iteration size, the overall execution model supports variations in scope in order to accommodate the priorities of stakeholders. Offshore and onsite teams participate in retrospectives and come up with collective best practices and improvement areas. Retrospectives are no longer an onsite affair. Also, offshore teams participate in Root Cause Analysis (RCA) meetings and become a part of the root cause identification process.

8. Change Management: Team members do not get daily task delegation emails from onsite leads or managers. There are no drastic changes to assigned tasks and the offshore team members do not have to do frequent context switching every day. Instead, team members embrace change swiftly with constructive discussions. Task assignment happens over a web-based tool that supports Agile software development. The scope of any iteration does not change during that iteration. Any exceptions are handled collaboratively by leaders across shores.

9. Unit Testing & Test Automation: Team members practice unit testing and pursue test automation in order to improve productivity. They adhere seamlessly to the processes and guidelines set for unit testing, test automation and any such engineering activities.

10. Collective Code Ownership: Everyone in the team understands the value of collective code ownership and more than 50% of the developers own the code during the first two or three iterations. There has been an increasing trend in the percentage of developers who know the entire code base and exhibit code ownership. Team members are willing to move across various categories of tasks in the project in order to meet project goals. It is very rare to find someone in the team who attempts to claim: "This is not my job."

Predominantly in the software product engineering arena, several instances of global software engineering initiatives start in the form of staff augmentation at a remote captive center or with an offshore service provider. Transitioning from offshore staffing into Agile software development improves the overall efficiency and throughput of the team. Also, it provides adequate ownership and satisfaction to project teams.

> About the author Raja Bavani is Technical Director of MindTree’s Software Product Engineering (SPE) group in Pune and plays the role of SPE evangelist. He has more than 20 years of experience in the IT industry and has published papers at international conferences on topics related to code quality, distributed Agile, customer value management and software estimation. His SPE experience started during the early 90s, when he was involved in porting a leading ERP product across various UNIX platforms. Later he moved onto products that involved data mining and master data management. During early 2000, he worked with some of the niche independent software vendors in the hospitality and finance domains. At MindTree, he worked with project teams that executed SPE services for some of the top vendors of virtualization platforms, business service management solutions and health care products. His other areas of interests include global delivery model, requirement engineering, software architecture, software reuse, customer value management, knowledge management, and IT outsourcing. He regularly interfaces with educational institutions to offer guest lectures and writes for technical conferences. His SPE blog is available at http://www.mindtree.com/blogs/category/software-product-engineering. He can be reached at [email protected].



Introducing Agile – how to deal with middle management by Armin Grau

When SCRUM or other Agile methods are introduced, middle management often seems to be one of the major obstacles. Skilled engineers quickly realize the power of Agile methodology and so do most top managers - at least they can be convinced through the promise of more output, flexibility and a highly motivated workforce. Middle management, on the other hand, often seems blind to the advantages of Agile methods, thus slowing down or even actively hindering the necessary changes. So what are the reasons behind this? How can you deal with an old(er) fashioned head of department? And what are the keys to introducing agility in a waterfall-oriented process landscape? This article attempts to give answers to those questions, starting with a line management view of things and ending with a team-oriented approach to changing organizations.

The perspective of middle management
Before we get on to more Agile topics, let's take a look at what the role of middle management usually looks like and what the main challenges are. The position of a middle manager is a sandwich position and requires dealing with all levels of hierarchy in the company. Communication, adapting to the situation and translating between different hierarchy levels are core parts of a middle manager's daily work. Middle managers are responsible for many things they only partially understand. They have to work with ambiguous information to reach and carry out decisions. Many middle managers started their career as technical experts, gaining more and more responsibility as they climbed up the company hierarchy. During that process they had to detach themselves step by step from the technical field they once excelled in – only very few can cope with the fast evolution of technology and deal with people, organizations and processes at the same time. It is more likely that a middle manager does not really have the management or people skills required for the job
[The Peter Principle, Raymond Hull]. The middle manager's job is not only to manage but also to shape the organization. This leads to the paradoxical situation that they work on a system which they are a part of and which is also shaping them. So it is not surprising that many middle managers are convinced they are experts when it comes to organization development. For more information on the role of middle management, see the excellent article by [Esther Derby].

Why is my boss so stubborn?
Now that we have some insight into the middle manager role, let us introduce SCRUM in their organization. Maybe you already have a team of software developers that are doing SCRUM, have gained the attention of the CTO and have convinced him that development will be faster, risk will be lower and developers will be happier. So the CTO asks the middle manager to introduce SCRUM to his department. This scenario likely leads to an approach where team leads or senior developers become SCRUM masters and SCRUM techniques are implemented on a team level only. After some time, conflicts will emerge about controlling the work of the team and about estimates and commitments. Some implementations of SCRUM get stuck or rolled back here.

In our scenario, let's introduce a Scrum coach who really understands SCRUM, clarifies the roles in the team and promotes Agile principles. He also introduces the middle manager to some books about SCRUM and Agile. Unfortunately, SCRUM coaches sometimes forget that in order to introduce SCRUM successfully the role of the middle manager has to change. If they mention middle management, the primary focus is "What managers no longer need to do..." [Ken Schwaber] – which from a middle manager point of view translates to: stay out of our way! There are a lot of books on how to introduce SCRUM, but they do not describe how the role of middle management changes and what can be done to support that change.

The manager role in SCRUM
In order to work effectively in an Agile organization, the role of the middle manager has to change from taking direct control to guiding, enabling and coaching teams. The guiding part is also present in the classic management role, but gets more important when direct control is not an option. So one important job for the middle manager is to provide and communicate the mission, vision, strategy and goals for the organization. This is not necessarily the product vision (which is part of the product owner role), but should be aligned with it. Another aspect is to enable the team to take action and make decisions regarding their work. This sounds easy: just give them control over their stuff and be done?! In real life, it involves much more work: reassuring the team, supporting them in conflicts with other departments and upper management, but also finding the right boundary conditions for them. For instance, it might be a condition for the team to have features load tested before deploying them to the productive system. How this can be achieved is up to them – if they need test systems, expert advice, etc., they can turn to the manager, who will help them, sometimes with money, sometimes with advice, sometimes by telling them to try harder to find a solution on their own.

There are also some parts of the manager's role that more or less stay the same – but they need to be performed in a different manner. One such task is to provide transparency and to advertise the performance of their organization. Basically this should be even easier with SCRUM, as transparency is one of the key values. Recruiting of employees, as well as training and talent management, is another recurring part of the manager's role. In the interest of SCRUM, however, it might be more important to hire employees with the right mindset than the ones with the best skills. And a good team does a lot of knowledge transfer, so training moves on to a higher level and becomes more about providing new ideas. And finally there still will be a lot of escalations and conflicts to deal with, as well as budget responsibility, administrative work, and more.

It's all about change - so let's do some change management!
In the end our middle manager tasked with introducing SCRUM not only has to change the organization but also their own attitude and style of working. A good SCRUM coach would know about that and support this change by applying some change management rules and techniques. Perhaps the most difficult, but also the most important technique, is to build up trust, as "trust determines the bandwidth of communication" [Tom de Marco, OOP Conference, Munich 2011]. Here are some ideas on how to build trust:

1. Initiate dialog and keep the middle manager informed
2. Deliver results; be reliable and true to your commitments
3. Communicate in a way that is easily understandable, but also respectful
4. Self-esteem is good (you're capable!), arrogance is fatal; sometimes it helps to show vulnerability
5. Be aware that you – like everybody else – have blind spots; even a smart comment can contain important information

The vision is to introduce SCRUM to your organization, to develop faster, with less risk and happier developers (insert your own vision here). What is the middle manager's place in that? Having their own incentive will certainly help to get the buy-in for your vision. Some of the benefits of introducing SCRUM for middle managers could be:

1. Less operational, more strategic work. As the role of the middle manager changes, the focus changes too, from managing to leading teams. And probably every manager will tell you that they want to have more time for working on strategy. Good examples can help, so you might want to set up some knowledge exchange meetings with other managers/companies.

2. More transparency on the status and state of the team. SCRUM boards, burndown charts, product backlogs and publicly held daily/estimation/review meetings will help to provide that kind of transparency. However, it may be important to give guidance on the interpretation and to ask for feedback after the meetings – see trust and dialog.

3. Success. The goal is to develop faster, with less risk, and happier developers. Introducing SCRUM will achieve this aim, and making the middle manager a part of the guiding team makes it his success, too. Generating some quick wins will help here, so try to have some visible improvements fast. And set up some statistics that show the success on a management level.

Team work with managers - is this even possible and how can it work?
Most middle managers are not used to team work anymore, even if they were team players in the past. Climbing up the hierarchy often goes along with competitive behavior, and competition does not go together well with team play. On the other hand, it will benefit the change process to have a guiding coalition - a group with enough power to lead the change effort, working together as a team [John P. Kotter, Leading Change]. If you manage to have a common vision and common goals that everybody in the team accepts, team play can work on a management level, too. As you make progress in introducing SCRUM, you probably will have a separate team of SCRUM masters who no longer answer to the middle management (at least that would be my recommendation, as such a setup provides a much stronger
urge to change and enhance the organization). At this point it becomes vitally important to set up a team where SCRUM masters and line managers work together. And it's definitely worth a try to introduce SCRUM techniques for the management team and let them experience the difference first-hand.

Conclusion
When introducing SCRUM in an organization, middle management is often considered an obstacle. This is partly due to the role middle management plays in a non-Agile organization, but even more to their reluctance to adapt to the new situation. SCRUM coaches and books are often not helpful when it comes to dealing with that. Middle managers are too involved to stay out of the way, but they can be important players in the introduction of SCRUM, by guiding, enabling and coaching their teams. Using change management techniques you can help them to find their new role. To achieve this, building trust is vitally important. And finally, team play can work on a management level, too – at least it's worth a try.

Links and Literature
Eleven things to remember about people in middle management roles by Esther Derby, http://www.estherderby.com/2011/02/11-things-abt-managers.html

The Peter Principle: Why Things Always Go Wrong by Laurence J. Peter and Raymond Hull (1971)
Leading Change by John P. Kotter (1996)
Scrum: Produkte zuverlässig und schnell entwickeln by Boris Gloger (2009)

> About the author Armin Grau is head of the IT product development department at ImmobilienScout24, where he is responsible for the work of 15 SCRUM teams. In addition to a diploma in computer science he is also a certified SCRUM Master and Product Owner and was project manager for numerous software development projects. The introduction of SCRUM at ImmobilienScout24 was a tremendous success: within two years productivity was more than doubled, the number of bugs was halved and so was the fluctuation in the team.



How to Succeed with Scrum by Martin Bauer

I've been having an ongoing argument with a colleague who is critical of Scrum. He believes people choose Scrum because they wrongly believe it's easier and cheaper than Waterfall, or because they have been sucked in by the current "Agile" fad. He says that people choose Scrum over Waterfall because they want to avoid the discipline of doing the hard work upfront, that they believe they won't have to make difficult decisions and that they will save a buck or two along the way. I can't say that he's wrong in what motivates people to choose Scrum, but I do believe he's mistaken that Scrum is easier and less disciplined than Waterfall. Succeeding with Scrum takes effort, discipline and many tough decisions along the way. It takes the right attitude and level of commitment, especially for product owners, to make it work. Choosing Scrum is not an easier path, but if you have the right elements in place, it can produce great results. The key is knowing what the right elements are.

Common Understanding
First and foremost, there needs to be a common understanding of how things are going to work. It's not enough to say we are going to do "Scrum" without being more specific about the details of how Scrum is going to be implemented for the particular project at hand. This means taking time and effort upfront to be clear on a whole variety of things. Here's an example of some of the aspects that need to be addressed before the project kicks off:

• How much analysis, if any, will we do before our first sprint?
• Does the team have to be sitting together?
• At what time are we going to have our daily stand-ups?
• Do we need to take meeting notes for each stand-up?
• What happens if the product owner can't make all of the stand-ups?
• How long are our sprints going to be?
• What's our definition of complete?
• Will we have sprint retrospectives?
• Will we have pre-sprint planning?
• What tools will we use to capture user stories?
• What tools will we use to monitor progress?
• What level of detail do we need for our conditions of satisfaction?

There’s no right or wrong answer for any of these questions; they depend on the specific project. It might be that it’s not possible to have the team sitting together as they aren’t in the same country. It might be that sprint retrospectives are not needed as the team has worked together before and it’s unlikely that they need a formal method of capturing the positives and negatives of a particular sprint, and that this will naturally come up as part of the daily stand-up and sprint planning. The important point to note here is not what the answers are, but that the questions have been raised, considered and the team has come to a group decision. Nor is it vital that all of the questions are answered before the project starts; sometimes it’s fine for some questions to be answered at a later date. What’s important, however, is that the team agree that it can be left until later and that there’s a consensus on the way forward. In this respect, Scrum is much like Waterfall in needing to address elements upfront. It requires discipline to make important decisions. Where Scrum has the advantage over Waterfall is that the decision of how this project is going to be run is a collaborative effort. The key members of the team all have a say; it’s not the project manager dictating to both the client and the team how things are going to be run. It’s a vital first step to ensure the entire team works together in a way that they all agree to. This way they build consensus before the project starts.



Discipline
When the project starts, the reality of what Scrum means comes into play. For people that are new to Scrum, it can be confronting and uncomfortable. It means putting aside preconceived notions of how things should work. This is where it becomes challenging for Product Owners that thought their lives were going to be easier being "Agile". The reality is very different. Product Owners quickly find out how much work they have to put in for the project to become a success. In contrast to Waterfall, the Product Owner should be a part of the project every day; they need to be actively involved in the analysis of features, daily scrums and sprint planning. This isn't easy and requires discipline from the Product Owner.

Clarity
There are a number of qualities that a Product Owner needs for a Scrum project to be successful. One of the most important is the ability to be clear about what they want. This is no different from Waterfall, but it's far more obvious in Scrum than in Waterfall, where the Product Owner can be disconnected from the project. In Waterfall, the Product Owner can wait until the specification is drafted, answer any business questions raised and have it handed over to the developers to build. Further questions are likely to arise but, unlike in Scrum, it's not likely to be on a daily basis. In Scrum, there's the chance that every day a developer will raise a question or blocker about a particular user story that hadn't been considered before. This is when the Product Owner is accountable. Everyone on the team will be aware that the question has been raised and it needs to be answered for the project to continue. It puts pressure on the Product Owner to come up with a quick answer. Hence the importance of them needing to be clear about what they want. If they don't, then that user story will have to be put on hold until it is resolved. The Product Owner has no choice but to accept responsibility for providing clarity.

Priorities
Another important quality for Product Owners is being able to prioritize. Very few projects have unlimited budgets and Scrum projects are no different. Although the scope may have flexibility, it's unlikely there will be an unlimited budget and endless resources. As those who have read Fred Brooks' "The Mythical Man-Month" will know, having more resources on a project doesn't necessarily make it go quicker and can, especially near the end of a project, have the opposite effect. What this means for the Product Owner is that they need to be able to prioritize the backlog and be able to identify what's most important. When it's clear that not everything in the backlog is achievable, the Product Owner has to make tough decisions about what is included and what gets dropped. This leads into the next quality: acceptance.

Acceptance
What is probably the hardest part of a Scrum project for a Product Owner is accepting the reality of what is achievable on a number of levels. Where this is most difficult is when the true velocity of a project becomes apparent. For example, at a sprint planning meeting, the developers estimate they should be able to complete 8 user stories in the next sprint.



The Product Owner has to accept what the developers tell him about how long it's going to take, even if they believe that it shouldn't take that long. The Product Owner has to trust that the developers are telling him the truth about how long it will really take, despite what the Product Owner thinks. That's not to say that estimates can't be challenged and adjusted, but ultimately the Product Owner has to show trust in the team. However, it doesn't stop there. By the end of the sprint, unforeseen circumstances may have surfaced that meant only 5 user stories were actually completed. The Product Owner is unlikely to be happy about getting less than expected (who would?), but, once again, they have to trust that the team has done everything they can and that the reasons for the delay aren't their fault. Even more challenging, the error could be one of estimation: the team underestimated the complexity and simply got it wrong. The Product Owner has little choice but to accept the reality and adjust the expectations of what can be delivered in the next sprint and possibly for the entire project, especially if the budget doesn't stretch far enough for the entire backlog to be completed.

In contrast to Waterfall, it's not known in detail upfront how much can actually be achieved in the number of sprints that the budget allows. That means the Product Owner has to accept that they may not get everything they want, and after each sprint, as the reality of how much effort is truly required for each user story becomes clear, expectations may need to be adjusted yet again. Ironically, in a Waterfall project, the scope is clear upfront and the Product Owner can insist that it all be delivered, the risk being on the supplier and team to deliver, even if it takes more effort than originally estimated. In Scrum, the Product Owner has to accept both upfront and during the project that the scope of what will be delivered can and, more often than not, will change, and they will get less than they hoped for initially. This is not an easy position for a Product Owner to be in, but the benefit they get is being able to choose what's most important. Ultimately, for a Scrum project to be successful, the Product Owner has to trust the team and be pragmatic about the end result.

Commitment
Last but not least, the Product Owner has to show commitment. Unlike Waterfall, they must be there every day for the entire project. They need to be at daily stand-up meetings so they can answer questions and resolve blockers; they need to be at sprint retrospectives to understand how the team feels about how things are going, what's working, what needs to be changed. They need to be available to participate in the analysis required for user stories to be tackled in the next sprint. They need to be at sprint planning meetings to prioritize the backlog. They need to be at showcases to see what's been done and comment on whether it's acceptable or if further refinement is required. It's a huge commitment and vital for the project. Product Owners can't simply tell a business analyst what they want, read the specification and then wait for the project to be delivered. They have to be committed for it to succeed.

Accountability
Clearly much rests on the Product Owner's shoulders to make a Scrum project successful, but it's not just the Product Owner that has challenges to face. For a developer used to Waterfall, it can be confronting to be so exposed, to not be hidden behind the project manager, to be accountable for estimates and progress on a day-to-day basis, and to deal with the change that Scrum allows. The first thing that a developer will notice about Scrum is the direct contact with the client, or in this case the Product Owner. I've worked on projects where the key technical person has had little or no contact with the client. The majority of the communication has been via a business analyst, sometimes from the client's business analyst to the supplier's business analyst and then the lead developer. In Scrum, not just the lead developer, but all developers get to talk with the Product Owner every day. For some developers, that's an uncomfortable situation. They may struggle to communicate technical challenges in layman's terms and succinctly convey blockers, issues, concerns or solutions. In Scrum, developers can't hide behind technical terms and sit in their cubicles or offices; they have to step outside of their comfort zone and face business people that will ask simple questions that may not have simple answers.

Transparency
Along with having to deal with the Product Owner on a daily basis, the developers' progress will be monitored on a daily basis. Developers aren't left to their own devices for days or weeks at a time; they need to break down the user story into individual tasks that are assigned story points, and they are measured, daily, against their estimates. If they have forgotten a task, they add it in and everyone will know they forgot. If they take longer on a task than they thought, then everyone knows. That's not to say developers will be blamed for missing tasks or taking longer than expected; we are all human and don't always get it right. The difference in Scrum is that everyone will know; the message is not delivered by a Project Manager where the developer has never even met the client. In Scrum, the developers' estimates and progress are totally exposed. The fact that it might be accepted and understood by the team still doesn't make it easy to admit that you were wrong, and for some developers that's a very uncomfortable situation to be in.

Adaptability
Although developers are exposed in the project, the challenge of Scrum is that they are not exposed to the details of the entire project at the start. And that's because the details don't necessarily exist. There is usually a backlog of user stories, but that doesn't mean the backlog is what will be developed. Some user stories can be dropped, some can be added, some can change during the project. It means there's a lot of flexibility to ensure the end result is what the Product Owner both wants and needs. The challenge for developers is learning how to be flexible, how to adapt to change during the project, how to technically design a solution when you don't know all the moving parts upfront. This is one of the drawbacks of Agile. Without knowing all the details
upfront, it's very difficult to design a holistic solution. Sometimes the developer has to implement a solution and then later – when a new user story is introduced or existing functionality is changed because the Product Owner has changed their mind – refactor code already written and tested, or introduce work-arounds due to other design decisions made before the change was identified. This can be incredibly frustrating for intelligent developers who take great care to design efficient, robust and maintainable code.

The irony of the challenges facing both Product Owners and developers is that it is exactly those aspects of Scrum that make it so successful. So many projects suffer from lack of involvement from the client. In Scrum, the Product Owner has to be committed and a part of the project on a daily basis; they have to be there to resolve blockers, make decisions on priorities, clarify details and determine the conditions for success. For Scrum to work, and for any project to have a chance of success, the Product Owner has to be involved, has to be committed and has to know what they want. Similarly, many projects suffer from the developer being too far removed from the project, not being accountable or not having to explain and justify their actions. In Scrum, developers can't hide; they are held accountable on a daily basis and have to be clear about what they are doing and when it's going to get done, and justify why it might take longer than expected. Scrum ensures that there's transparency, which leads to a greater chance of success.

In summary, what's required for Scrum to work is what's required for any project to work. There needs to be a team where everyone is clear on the goal, is accountable and works together to deliver the end result.
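The "accept and adjust" arithmetic described under Acceptance above is simple enough to sketch. The function and the numbers below are illustrative only, not a prescribed Scrum calculation:

```python
import math

def sprints_still_needed(stories_remaining, completed_per_sprint):
    """Forecast from what the team actually finished, not from the original estimate."""
    velocity = sum(completed_per_sprint) / len(completed_per_sprint)
    return math.ceil(stories_remaining / velocity)

# Estimated 8 stories per sprint, but the last sprints delivered 5, 6 and 5:
print(sprints_still_needed(stories_remaining=40, completed_per_sprint=[5, 6, 5]))  # -> 8
```

The point is not the formula but the attitude: the forecast is driven by what the team actually completed, and expectations are adjusted sprint by sprint.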

> About the author Martin Bauer is the Programme Manager for Vision With Technology, an award-winning digital agency based in London. He has over 15 years' experience in Web development and content management. Mr. Bauer is the first certified Feature-Driven Development Project Manager, an advocate of Agile development, and also a qualified lawyer. His experience covers managing several businesses as well as teams of developers, business analysts, and project managers. Mr. Bauer can be reached at [email protected]; Web site: www.martinbauer.com.



What Does Agile Mean To Us? A Survey Report by Prasad Prabhakaran

We did a survey among all our Agile projects. The purpose of the survey was to assess how Agile teams have operated so far in 2010, what practices they have used and what the critical ones are.

Respondent Spread: The number of respondents was 162, of which 32% have 5-plus years of experience in Agile. Of all the respondents, 59% were team members (developers), 32% were managers (Scrum masters, project managers, coaches) and 19% were sponsors (senior management).

Survey Questions: The survey had 3 sections – the first section on experience and roles of the respondents in the project, the second section on the type of Agile methods/framework the respondents used, and the third section on practices/approaches/artifacts that were used in the current project and their criticality for success. Under section 3, respondents were asked to rate the criticality of each practice in their current context on a scale from 1 to 5 (1 & 2 meaning low, 3 & 4 average and 5 high).

The respondents had to provide their rating on the following practices/approaches/artifacts:

• Time boxing (30 days or less)
• Prioritized product backlog
• Configuration management
• Automated unit testing
• Automated tests are run with each build
• 'Complete' feature testing done during iteration
• Team velocity
• Design inspections
• Stabilization sprints
• Pair programming
• Collective code ownership
• Emergent design
• Test-driven development
• Iteration reviews/demos
• Continuous integration
• Code inspections
• Task planning

Percentage of Respondents 70 60 50 40 30 Percentage of respondents

20 10 0 Team Members (developers and testers)

72

www.agilerecord.com

Managers ( SM, PM, Coach etc.)

Sponsors

Methodology

Team Members (developers and testers)

Managers ( SM, PM, Coach etc.)

Survey Outcomes: Types of Agile Methods As expected, 73% of the respondents are working in the Agile Scrum framework, 18% hybrid and combination of other methods, and 9% in XP.

Sponsors

Methodology

Practices / Approaches / Artifacts Quite surprisingly, of all these practices “Task planning” emerged as the most essential practice with 86% rating it 5, followed by “Done Criteria” with 79%, and “Iteration review /demo” and “Prioritized backlog” with 72% rating it as essential.

It is interesting to note the least critical practices/approaches/artifacts. The following are the bottom five:
• Pair programming – only 15% think it is essential
• Refactoring – 19% think it is essential
• Emergent design – 20% think it is essential
• Design inspection – 21% think it is essential
• "Complete" feature testing done during iteration – 24% see this as essential

[Figure: Percentage of top 5 "essential" practices – Task planning 86%, "Done" criteria 79%, Iteration reviews/demos 72%, Prioritized backlog 72%, Automated unit testing 68%]

[Figure: Least essential practices – Pair programming 15%, Refactoring 19%, Emergent design 20%, Design inspection 21%, "Complete" feature testing done during iteration 24%]

Table showing practices in order of essentiality (percentage of "essential/critical" responses):

Task planning – 86
Done criteria – 79
Iteration reviews and demos – 72
Prioritized backlog – 72
Automated unit testing – 68
Time boxing – 67
Automated tests with each build – 67
Configuration management – 66
Release planning – 65
Continuous integration – 64
Test-driven development – 59
Burndown charts – 58
Team velocity – 53
Stabilization iteration – 48
Collective code ownership – 32
Complete feature test done during iteration – 24
Design inspections – 21
Emergent design – 20
Refactoring – 19
Pair programming – 15

Conclusion
We had the unique opportunity of working in both traditional and hybrid Agile ecosystems. The target group for this survey was only those projects which passed the basic Nokia test. Being an outsourced product development services provider, most of the processes and practices used in Symphony need to be agreed with the customer; hence, the survey revealed a variation in the practices used in different projects, depending on their project context. Today Symphony Services offers a unique value proposition to customers based on all these lessons, best practices and value accelerators.

> About the author Prasad Prabhakaran has over 11 years of experience in the IT services industry. His first exposure to Agile was at Microsoft in 2005. Since then he has provided solutioning, coaching, consulting and training on Agile and its flavors for companies such as GE, Cisco and Coke. Currently, he works as Program Manager at Symphony Services (http://www.symphonysv.com/). Almost 40% of projects at Symphony Services use some form of Agile, and Symphony Services has provided business-critical value to its customers through Agile since 2004. Prasad can be reached at [email protected].


SCRUM in the FDA context by Antonio Robres

Nowadays one of the most popular software development methodologies is SCRUM. It is an iterative, incremental methodology used in Agile software development.

The SCRUM methodology is based on the Agile Manifesto, developed in 2001, which follows four value statements. Currently, most companies in the regulated medical sector follow traditional waterfall development for their software process. In these companies there is a traditional point of view, influenced by FDA regulations. But the FDA does not mandate the methodology or the techniques to be used. In fact, the FDA remains neutral in this regard and only states its "General Principles of Software Validation; Final Guidance for Industry and FDA Staff". Is it possible to apply SCRUM or other Agile methodologies in FDA regulated environments? Are all of the Agile Manifesto points compatible with software development in an FDA context?

The aim of the regulatory requirements is "to provide a level of confidence that the quality of the software is appropriate to support Public Health". In order to provide this level of confidence, the product needs software requirements, product validation and verification with documentation of the test process, and risk analysis. Every software development methodology is valid as long as it can deliver the product with confidence and safety.

Matching the Agile Manifesto with the FDA context
The Agile Manifesto is based on four value statements:
• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left.

This last statement is essential: both sides (right and left) are important. Agile methodologies do not imply a lack of control or a lack of documentation; they only give priority to the values on the left. How does each of these values match the FDA context?

1. Individuals and interactions over processes and tools. For every software development process, the most important element is the group of people within the team and their interactions. This is no different in the FDA context; trained and skilled people are needed there too. However, the FDA context requires a defined process to assure quality across the whole software life cycle. Nevertheless, the process can change during product development.

2. Working software over comprehensive documentation. Working software is mandatory to sell a medical instrument on the market or to offer a service in connection with public health. If there is no software, there is no product. Medical device software development requires a certain amount of documentation as evidence of process compliance. For an Agile development of a medical device, the idea is to produce only the documentation that is necessary.

3. Customer collaboration over contract negotiation. This includes collaboration and continuous feedback, which is a basic principle for medical device development. Functional requirements are outside of the programming domain, but constant conversation and negotiation between stakeholders, developers and the QA team is essential. Requirements and specifications of the software for medical devices must be developed by the whole team, including the stakeholders.

4. Responding to change over following a plan. In the FDA context, a plan is required and it must be followed during the development process. However, the plan can include changes in the development process. Changes are not forbidden by the FDA; in fact, they are welcomed. There is only one restriction: all changes made during the development process must be justified and traceable.

The main condition in the FDA context is that the functionality of the system must be verified as well as validated. In this context, objective evidence for the verification and validation of all the requirements of the medical device, including the software requirements, is required.

Guidelines and tips to adapt Agile methodologies to the FDA regulated context:

• Implement a Risk-Based Testing (RBT) approach. In the FDA context, risk analysis is mandatory. The FDA's General Principles of Software Validation stipulates: "The level of confidence, and therefore the level of software validation, verification, and testing effort needed, will vary depending on the safety risk (hazard) posed by the automated functions of the device". Medical devices involve large, complex developments, and an RBT approach can guide the development by implementing the components with higher risk or greater impact first. Using an RBT approach ensures that the riskiest parts of the system are tested throughout the development process. Build a risk matrix to score each requirement or functionality, covering severity for the business, severity for the patient, impact, frequency and likelihood (see the risk-matrix sketch after this list).

• Only the documentation required. In the FDA regulated context, a certain amount of documentation is required because all verifications and validations must have objective evidence. The required documentation is (CFR reference – design control element):
1. 820.30 (b) – Design and development planning
2. 820.30 (c) – Design input
3. 820.30 (d) – Design output
4. 820.30 (e) – Design review
5. 820.30 (f) – Design verification
6. 820.30 (g) – Design validation
7. 820.30 (h) – Design transfer
8. 820.30 (i) – Design changes
9. 820.30 (j) – Design history file
In SCRUM methodologies, the amount of documentation needs to be reduced and priority given to working software. If some documentation does not add any value, it is not required; the documentation that is produced should be the minimum needed to provide evidence of working software. Moreover, documentation tasks are needed within each iteration, with a minimum level set for every iteration and involving the whole team. There must be a final iteration or final step to complete the documentation required for the FDA submission.

• Continuous integration. One of the keys to implementing SCRUM in any context is the use of continuous integration. As described by Martin Fowler: "Continuous integration is a software development practice where members of a team integrate their work frequently; usually each person integrates at least daily, leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly." With continuous integration, the team obtains an integrated, working version of the software to verify every day. Problems are detected in early phases, so they can be fixed and the effort needed in the later phases is minimized (a minimal build-and-test gate sketch follows this list).

• Testing at all levels, involving the whole team in testing activities. Involving the whole team in testing tasks can reduce the testing team's effort in every iteration and increase software quality. Within the development team, unit testing is very important to minimize the bugs found in the testing phase. Testing must be present at all levels, including business features, and the development team can help the testing team to find the best way to test at the business level.

• Automate the testing phase. Manual testing can mean a lot of effort and time for the testing team. In SCRUM methodologies, it is advisable to automate part of the test cases to allow the testing team to use other approaches like exploratory testing. The book "Agile Testing" (by Lisa Crispin and Janet Gregory) presents a good layered approach to test automation, the "automation pyramid". There are tools that can help you automate your test cases at all levels, such as Ruby or FitNesse at the business level and Selenium at the GUI level (a minimal Selenium sketch follows this list). Searching with the whole team for the best solution is necessary to produce your automated tests.

• Exploratory testing. Applying exploratory testing in every iteration can be very effective, because it helps you to find problems not detected by scripted or automated tests. As mentioned above, objective evidence is needed in the FDA regulated context, so an exploratory testing approach must include sufficient logs and provide coverage of your application. A video recording of the exploratory testing session can also be useful to provide the objective evidence required by the FDA.

• Involve the stakeholders in every iteration. Stakeholders are one of the best sources of information. A fluent conversation between the team and the stakeholders is basic to assure the quality criteria. It is important to request information from the customers at the beginning and to discuss the backlog items of every iteration with them, involving them in the user stories. Finally, a demonstration after every iteration is essential to get their feedback.
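For the Risk-Based Testing item above, here is a minimal sketch of the kind of risk matrix described. The factor names follow the article (business severity, patient severity, impact, frequency, likelihood), but the 1-5 scales, the scoring rule and the example requirements are assumptions for illustration, not anything prescribed by the FDA or the author:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    business_severity: int  # 1 (low) .. 5 (high)
    patient_severity: int   # 1 .. 5
    impact: int             # 1 .. 5
    frequency: int          # 1 .. 5
    likelihood: int         # 1 .. 5

    def risk_score(self) -> int:
        # Assumed rule: worst-case severity times probability of occurrence.
        severity = max(self.business_severity, self.patient_severity, self.impact)
        probability = self.frequency * self.likelihood
        return severity * probability

# Hypothetical example requirements.
requirements = [
    Requirement("Sample barcode reading", 4, 5, 4, 5, 3),
    Requirement("Report PDF export",      3, 1, 2, 4, 2),
]

# Highest-risk requirements are implemented and tested first.
for req in sorted(requirements, key=Requirement.risk_score, reverse=True):
    print(f"{req.name}: risk score {req.risk_score()}")
```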
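For the continuous integration item, a minimal sketch of a build-and-test gate that a CI server could run on every integration; the concrete steps (pip install, pytest) are assumptions and would differ from project to project:

```python
#!/usr/bin/env python3
"""Hypothetical CI gate: run on every integration so the mainline stays verifiable."""
import subprocess
import sys

STEPS = [
    ["python", "-m", "pip", "install", "-e", "."],    # build/install the package
    ["python", "-m", "pytest", "--maxfail=1", "-q"],  # automated tests, fail fast
]

for step in STEPS:
    print("running:", " ".join(step))
    if subprocess.run(step).returncode != 0:
        sys.exit("Integration failed: fix it before integrating further changes.")

print("Integration verified.")
```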
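And for the test automation item, a minimal GUI-level check using Selenium's Python bindings (Selenium 4 style API); the URL, element locators and expected title are hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()  # requires geckodriver; webdriver.Chrome() also works
try:
    driver.get("https://example.org/login")               # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # The assertion result and screenshot can be archived as objective evidence.
    assert "Dashboard" in driver.title
    driver.save_screenshot("login_ok.png")
finally:
    driver.quit()
```

Archiving the results and screenshots of such automated runs is one way to produce the objective evidence that the FDA context requires.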

Conclusion
The conclusion is that Agile methodologies, including SCRUM, can be adapted to the FDA context. All the points of the Agile Manifesto are compatible with FDA stipulations, but some must be adapted to obtain the greatest benefit. Implementing a Risk-Based Testing approach and introducing documentation tasks into the iteration can be useful when performing the transition from waterfall to Agile methodologies. Using continuous integration and automating part of the testing at all levels (unit testing, business testing and GUI testing) allows the testing team to apply other approaches, such as exploratory testing, to find problems not detected in other phases. The goal of the FDA regulation is to obtain a product with quality and safety, and this is also the goal of Agile methodologies.

References
1. Agile Manifesto: http://agilemanifesto.org/principles.html
2. Agile Alliance: http://www.agilealliance.org
3. Food and Drug Administration, Center for Devices and Radiological Health, "General Principles of Software Validation; Final Guidance for Industry and FDA Staff" (Rockville, MD: FDA, CDRH, January 2002); available online at www.fda.gov/cdrh/comp/guidance/938.html
4. CFR – Code of Federal Regulations Title 21: http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/cfrsearch.cfm?cfrpart=820
5. Martin Fowler, "Continuous Integration": http://martinfowler.com/articles/continuousIntegration.html
6. Lisa Crispin and Janet Gregory, "Agile Testing: A Practical Guide for Testers and Agile Teams", 2009.

[Figure: Test automation pyramid – Unit Tests / Component Tests, Acceptance Tests (API Layer), GUI Tests, Manual Tests]

> About the author Antonio Robres is a Test Engineer at Diagnostic Grifols in Barcelona, Spain. He studied Telecommunications Science at the Polytechnic University of Catalonia in Spain and holds a master's degree in telecommunication administration. He is also an ISTQB® Certified Tester, Foundation Level. During the past five years, he has been working in the fields of software testing and quality engineering for various companies such as Telefonica, Gas Natural and Grifols. His work focuses on the design and execution of testing projects, mainly for embedded systems and web applications. He is also involved in the design and development of test automation projects with open-source tools. He was a speaker at the last QA&TEST conference, where he presented the testing and QA structure of Diagnostic Grifols in detail. He writes on the testing blog www.softqatest.com.


Masthead

EDITOR
Díaz & Hilterscheid Unternehmensberatung GmbH
Kurfürstendamm 179
10707 Berlin, Germany
Phone: +49 (0)30 74 76 28-0
Fax: +49 (0)30 74 76 28-99
E-Mail: [email protected]
Díaz & Hilterscheid is a member of "Verband der Zeitschriftenverleger Berlin-Brandenburg e.V."

EDITORIAL
José Díaz

LAYOUT & DESIGN
Díaz & Hilterscheid

WEBSITE
www.agilerecord.com

ARTICLES & AUTHORS
[email protected]

ADVERTISEMENTS
[email protected]

PRICE
online version: free of charge
print version: 8,00 € (plus shipping)
-> www.agilerecord.com
-> www.testingexperience-shop.com

ISSN 2191-1320

In all publications Díaz & Hilterscheid Unternehmensberatung GmbH makes every effort to respect the copyright of graphics and texts used, to make use of its own graphics and texts and to utilise public domain graphics and texts. All brands and trademarks mentioned, where applicable, registered by third parties are subject without restriction to the provisions of ruling labelling legislation and the rights of ownership of the registered owners. The mere mention of a trademark in no way allows the conclusion to be drawn that it is not protected by the rights of third parties. The copyright for published material created by Díaz & Hilterscheid Unternehmensberatung GmbH remains the author's property. The duplication or use of such graphics or texts in other electronic or printed media is not permitted without the express consent of Díaz & Hilterscheid Unternehmensberatung GmbH. The opinions expressed within the articles and contents herein do not necessarily express those of the publisher. Only the authors are responsible for the content of their articles. No material in this publication may be reproduced in any form without permission. Reprints of individual articles available.

Index Of Advertisers
Agile Req – 25
Agile Testing Days – 13
Díaz & Hilterscheid – 80
iSQI – 48
Kanzlei Hilterscheid – 37
Knowledge Transfer – 19
Testen in der Finanzwelt – 79
Testing & Finance – 31
UNICOM – 9

HANDBUCH TESTEN IN DER FINANZWELT

Quality management and software quality assurance play a very important role in financial-sector projects, especially given the complexity of the products and markets, the regulatory requirements, and the demanding, interconnected processes and systems that result from them. This QA handbook on testing in the financial world is intended to offer a basic insight into software quality assurance (methods and techniques) together with references to the relevant literature, but also to serve as practical guidance for concrete implementation in the financial world, regardless of whether the reader comes from the business side or the IT department. This is achieved above all through the practical focus of the text, which is based on the author team's many years of experience in the financial industry. The QA handbook pursues the following goals in particular:
1. raising awareness of a holistic approach to software quality assurance;
2. conveying the fundamentals and methods of testing and their sources, with due regard to the special requirements of financial institutions, for self-study;
3. providing preparatory information for the training "Testing for Finance!";
4. deepening knowledge by means of case studies;
5. giving insight into special test methods and neighbouring quality management topics.

Intended for test managers, test analysts and testers as well as project managers, quality managers and IT managers.

Herausgegeben von (edited by) Norbert Bochynek und José M. Díaz Delgado
Die Autoren (the authors): Björn Lemke, Heiko Köppen, Jenny Siotka, Jobst Regul, Lisa Crispin, Lucia Garrido, Manu Cohen-Yashar, Mieke Gevers, Oliver Rupnow, Vipul Kocher
Lektorat: Annette Schwarz
Satz/Layout/Design: Daniel Grötzsch

Gebundene Ausgabe (hardcover): 431 Seiten, 24 x 16,5 x 2,3 cm
ISBN 978-3-00-028082-5
1. Auflage 2010, Printed in Germany, © Díaz & Hilterscheid
48,00 € (inkl. MwSt.)
www.diazhilterscheid.de

Training with a View

18.04.11–19.04.11 | Testen für Entwickler | German | Berlin
03.05.11–05.05.11 | Certified Tester Foundation Level - Kompaktkurs | German | Mödling/Austria
09.05.11–13.05.11 | Certified Tester Advanced Level - TESTMANAGER | German | Mödling/Austria
09.05.11–11.05.11 | Certified Tester Foundation Level - Kompaktkurs | German | Berlin
12.05.11–13.05.11 | HP Quality Center | German | Berlin
16.05.11–20.05.11 | CAT - Certified Agile Tester | English | Berlin
23.05.11–27.05.11 | Certified Tester Advanced Level - TESTMANAGER | German | Berlin
23.05.11–25.05.11 | Certified Professional for Requirements Engineering - Foundation Level | German | Mödling/Austria
30.05.11–01.06.11 | Certified Tester Foundation Level - Kompaktkurs | German | Mödling/Austria
06.06.11–10.06.11 | CAT - Certified Agile Tester | English | Berlin
06.06.11–10.06.11 | Certified Tester Advanced Level - TEST ANALYST | German | Mödling/Austria
06.06.11–09.06.11 | Certified Tester Foundation Level | German | Frankfurt
06.06.11–10.06.11 | CAT - Certified Agile Tester | English | Düsseldorf/Cologne
08.06.11–09.06.11 | HP QuickTest Professional | German | Berlin
14.06.11–16.06.11 | Certified Professional for Requirements Engineering - Foundation Level | German | Berlin
14.06.11–16.06.11 | Certified Tester Foundation Level - Kompaktkurs | German | Mödling/Austria
20.06.11–22.06.11 | Certified Tester Foundation Level - Kompaktkurs | German | Berlin
27.06.11–01.07.11 | Certified Tester Advanced Level - TEST ANALYST | German | Berlin
11.07.11–14.07.11 | Certified Tester Foundation Level | German | Frankfurt
13.07.11–15.07.11 | Certified Professional for Requirements Engineering - Foundation Level | German | Berlin
18.07.11–22.07.11 | Certified Tester Advanced Level - TEST ANALYST | German | Düsseldorf/Köln
25.07.11–27.07.11 | Certified Tester Foundation Level - Kompaktkurs | German | Berlin
01.08.11–05.08.11 | Certified Tester Advanced Level - TECHNICAL TEST ANALYST | German | Berlin
01.08.11–04.08.11 | Certified Tester Foundation Level | German | München
08.08.11–10.08.11 | ISEB Intermediate Certificate in Software Testing | German | Berlin
11.08.11–11.08.11 | Anforderungsmanagement | German | Berlin

- subject to modifications -

More dates and onsite training worldwide in German, English, Spanish and French at http://training.diazhilterscheid.com/